2 Comments

This post comes as a bit of a surprise because it brings up a few non-existent issues. Indeed, the addition of immersive and spatial sound requires that new criteria be met, along with learning about object-based audio...or one could choose to refuse to mix for that end use and stick to two channels. However, saying that it requires more time and money to do that in addition to stereo suggests a lack of understanding of the principles of object-based audio. Mixing for object-based sound generates a superset that is perfectly valid for stereo because *that is how it works*: the renderer deals with the number of channels, not the producer. And it is NOT questionable whether most people can tell the difference when they have a setup that is capable of playing back Immersive Sound.

As for the loudness and dynamics bit...why are we talking about this as if it were still a problem? There is no "shuffle between tracks" dilemma on streaming platforms, or with server software streaming local files at home, these days. Tracks are levelled by the streaming platforms and by the server software (Roon, JRiver, Foobar, etc.). This is a solved problem.

You ought to be aware that more and more streaming services now normalize tracks -- Apple Music at -16 LUFS; Tidal, Amazon, and YouTube at -14; and the big daddy Spotify at -14 (or -23 or -11, user selectable). The AES (of which I am assuming you are a member) recommends music at -16 LUFS, with the loudest album track at -14 LUFS. From this, it would seem that one can safely set things at -14 LUFS and it will track well on all (or most) streaming services. Anything hotter will simply be turned down a little.
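A rough sketch of what "turned down a little" means in practice, assuming the simplified model the services describe: the platform measures each track's integrated loudness (LUFS) and applies one static gain so it plays back at the service's target. (Details vary by service, e.g. whether quiet tracks are turned up at all; the function name and numbers here are illustrative.)

```python
def playback_gain_db(track_lufs: float, target_lufs: float = -14.0) -> float:
    """Static gain (in dB) a player applies so the track hits the target loudness."""
    return target_lufs - track_lufs

# A hot -8 LUFS master is simply turned down 6 dB on a -14 LUFS service:
print(playback_gain_db(-8.0))   # -6.0
# A -16 LUFS master would need a 2 dB lift (some services skip the boost):
print(playback_gain_db(-16.0))  # 2.0
```

The point of the exercise: the hot master gains nothing on a normalized platform; it just arrives quieter, with whatever dynamics were sacrificed to get it hot.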

As for dynamics...well, that is an artistic decision that affects the relationships within the music itself. Messing about with dynamics to "make it sound louder" could be considered a fool's errand, especially if it sacrifices the emotion and intent of the music simply because you want it "loud". One is better off using judicious compression to elevate the soft parts of the track as required, keeping them above background noise. Mixing specifically for people listening over earbuds on a noisy subway is weird...particularly when more and more playback software/hardware can compress the audio on the fly for those environments.
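As a minimal sketch of what that on-the-fly compression amounts to (a hypothetical static downward compressor applied per sample, not any particular player's implementation -- real ones add attack/release smoothing and makeup gain):

```python
import math

def compress_sample(x: float, threshold_db: float = -30.0, ratio: float = 3.0) -> float:
    """Scale down any sample whose level exceeds the threshold by the given ratio."""
    eps = 1e-12  # avoid log10(0) on silent samples
    level_db = 20.0 * math.log10(abs(x) + eps)
    if level_db > threshold_db:
        over = level_db - threshold_db
        # The excess above threshold is reduced to over/ratio, so apply the difference:
        gain_db = over / ratio - over
        return x * (10.0 ** (gain_db / 20.0))
    return x

# Full-scale sample (0 dBFS) gets pulled down to -20 dBFS at a 3:1 ratio:
print(round(compress_sample(1.0), 3))   # 0.1
# A quiet sample below the threshold passes through untouched:
print(compress_sample(0.01))            # 0.01
```

Since this can be done at playback time when the listener is actually in a noisy environment, there is no need to bake it into the master for everyone else.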

I'm at a bit of a loss as to why you appear to be making the above an issue. There are clear standards that one may adhere to, and one is free to make artistic decisions within those constraints. It would behoove one to make those artistic decisions wisely and keep in mind that while it is very easy to compress audio on the fly for a noisy environment when needed, it is impossible to faithfully do the opposite if the audio was produced with a sub-optimal environment in mind.

Anything other than mastering for good audio in a good environment is merely a solution looking for a problem.


I remember the days when digital was sold as offering a huge increase in dynamic range. Which it does, pretty well. It's ironic that the trend has been the reverse.

Part of the problem, I think, is that most people listen to music on pretty marginal speakers - phones, laptops, or cheap earbuds. Real audiophile systems in an ideal environment are much rarer. And that means heavy compression and limiting are almost a requirement for music to cut through those limitations.

It does make sense to have different masters for different platforms (and audiences).

For streaming platforms, I'm typically mastering to between -11 and -8 LUFS (integrated) for heavier rock or EDM style tracks, and between -14 and -11 for more ambient/acoustic tracks. Even that produces noticeable distortion. I personally find heavy compression painful to my ears so it's always a challenge to balance loudness with quality.
