Back in mid-2021, Apple made “news” by upgrading all subscribers of their Apple Music service to lossless audio. They, of course, used their own ALAC format. Several services, among them Amazon Music HD, welcomed them to the club (though Amazon uses the open-source FLAC format).
Apple's going lossless at no extra charge to its subscribers was heralded as a major sea change. It did have one immediate benefit for me: Amazon made their HD (lossless) tier "free" with their basic subscription, thus saving me a couple of dollars a month. Thanks, Apple! Beyond that, and in the months that followed, it stirred an ongoing debate about the audio quality that we've all come to expect and, more to the point, accept. That debate is bound to grow now that Spotify has delayed the rollout of their lossless service, leaving their premium subscribers "stuck" with lossy streams.
Because of the Bluetooth codecs Apple uses, its existing wireless ecosystem doesn't support lossless audio. Apple tends to play the long game, and since they need their users to regularly update their gear, this is bound to change as the years roll by. For now, actually hearing lossless audio from an Apple iDevice involves using non-Apple hardware, namely an external DAC (digital-to-analog converter) along with wired speakers, headphones, IEMs (in-ear monitors), and so on.
Android is a tad better. The common Bluetooth codecs carry the same limitations you'd hit on Apple's hardware, but you have access to a wider variety of wireless earbuds. And while Apple caps everything at its baked-in AAC codec, Android supports more advanced codecs such as Sony's LDAC or Qualcomm's aptX. Buy Sony's wireless earbuds and you get LDAC, and voila, something close to CD-quality Bluetooth audio.
What does that all mean? Let me quote SynAudCon from 2014:
“CD quality” audio resolution uses a 16 bit word for each sample. The sample rate is 44.1 kHz. This is often described as simply “16/44.1k.” This translates into an analog dynamic range of approximately 96 dB, and an analog bandwidth of approximately 22 kHz. Technology broke through these limits long ago, and today the most common bit depth is 24 bits, and sample rates of 192 kHz and beyond are possible (24/192k). Those who deal with professional sound systems rarely encounter 44.1 kHz as a sample rate option. It has been increased to a more logical 48 kHz rate, yielding a bit more high frequency extension. Most DSPs use a 48 kHz sample rate and 24 bit words (24/48k).
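If you're curious where those figures come from, the back-of-the-envelope math is simple: roughly 6 dB of dynamic range per bit, and a usable bandwidth of half the sample rate (the Nyquist limit). Here's a quick sketch of that arithmetic (my own illustration, not something from the article):

```python
# Rough math behind "16/44.1 gives ~96 dB of dynamic range and ~22 kHz of bandwidth".

def dynamic_range_db(bits: int) -> float:
    # Quantization SNR of an ideal full-scale sine: 6.02 * bits + 1.76 dB,
    # usually rounded down to "about 6 dB per bit".
    return 6.02 * bits + 1.76

def bandwidth_hz(sample_rate_hz: float) -> float:
    # Nyquist limit: a given sample rate can only capture frequencies up to half that rate.
    return sample_rate_hz / 2

print(dynamic_range_db(16), bandwidth_hz(44_100))   # ~98 dB, 22050 Hz -- the "~96 dB / ~22 kHz" above
print(dynamic_range_db(24), bandwidth_hz(192_000))  # ~146 dB, 96000 Hz -- far beyond any playback chain (or ear)
```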
This same article also gives an example of how futile the quest for “ultimate audio” can be, so I recommend reading the entire thing.
"Audiophile quality" is the next step up and is generally accepted as starting at 24/96. It is also often defined in terms of bitrate, i.e., anything over 1,000kbps. I find that dividing line interesting, in that several of the CDs I've ripped come out with bitrates over 1,000kbps. For example, my personal rip of The Killers' CD Hot Fuss gives Mr. Brightside a bitrate of 1,119kbps. Does that make this 16/44.1 recording “high def”?
There are several online places where you can test your own ears with your own gear, which is really the only test that matters. Here's a link to NPR's online test.
Pursuing "the ultimate" based on someone else's opinion, their ears and their gear, is beyond pointless. You can easily pay hundreds of thousands of dollars to build the "ultimate" system, but why? When it comes to audio, you have to decide for yourself when enough is enough.
With my ears and my gear, when comparing the best MP3 with lossless audio, I can pick out the lossless version a third of the time, maybe a bit more. I've seen stories about people who have perfect pitch and much better ears and gear than mine, and they get it right about two-thirds of the time. (See Rick Beato's video about this very thing.)
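A bit of napkin probability puts those hit rates in perspective. NPR's quiz offers three versions of each track and asks you to pick the uncompressed one, so getting it right a third of the time is roughly what blind guessing produces, while two-thirds is genuinely above chance. A quick sketch of that math (the 12-trial count below is just a number I picked for illustration):

```python
from math import comb

def p_at_least(k: int, n: int, p: float = 1 / 3) -> float:
    # Chance of getting at least k of n three-way picks right by pure guessing.
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(p_at_least(4, 12))  # ~0.61: a third correct (4 of 12) is what guessing looks like
print(p_at_least(8, 12))  # ~0.02: two-thirds correct (8 of 12) is rarely just luck
```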
So why bother with anything above 320kbps MP3s?
For myself, my music seems to have more presence when it's at least 16/44.1. My current "standard" for acquiring new music is 24-bit. Since there is no additional cost for, say, a 24-bit lossless FLAC as compared to a 16/44.1 version, or even an MP3, I buy the highest quality available. And if there's nothing above 16/44.1, which happens quite a bit, I can often purchase a physical CD for less and just rip the files myself (hello, dBpoweramp CD Ripper).
My Sonos speakers can't handle anything beyond 24/48, so that's what I'll "downgrade" higher rates to. And yes, I'll confess, often I can't tell a bit of difference between a 320 MP3, a 16/44.1 CD, and a 24/48 (or higher) version of the same song.
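When I do that conversion myself rather than letting an app handle it, the recipe is just "resample to 48 kHz, keep the 24 bits." Here's a minimal sketch of that step, assuming Python with the soundfile and scipy libraries and placeholder file names (not the tools Sonos or dBpoweramp actually use):

```python
import soundfile as sf
from scipy.signal import resample_poly

# Downsample a 24/192 FLAC to 24/48 for gear that tops out at 24/48.
data, rate = sf.read("track_24_192.flac", dtype="float64")

if rate == 192_000:
    # 192 kHz -> 48 kHz is an exact 4:1 ratio, so a polyphase resampler handles it cleanly.
    data = resample_poly(data, up=1, down=4, axis=0)
    rate = 48_000

# Keep the 24-bit depth; only the sample rate drops.
sf.write("track_24_48.flac", data, rate, subtype="PCM_24")
```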
So if Apple Music streams CD quality or higher but their Bluetooth gear (e.g., AirPods) only supports lower-bitrate AAC, does it really matter? And you may ask the same thing of an Android user streaming Amazon Music HD to their Bluetooth earbuds.
For the most part, no, not at all. But there are exceptional moments. If I'm listening via my Aria IEMs and my Astell&Kern DAC on my Android phone, especially if I'm using USB Audio Player PRO and playing the 24/192 file I purchased, there's an indefinable something going on. (And for the record, this setup also makes my MP3s sound fantastic.) Is it worth the hassle of plugging all that in?
Yes. Every now and again you have to sit back and just absorb the music. The rest of the time, especially if I'm moving about, I'm more than happy with the Sony WF-1000XM4 earbuds on my Samsung or even the AirPods Pro with my iPad Mini.
If all this rambling has added up to anything, it's this: once you hit 320kbps with MP3s (or their Apple equivalent), the numbers become less and less relevant. The measure is made with your ears on your gear. Anything else is rubbish and only relevant to statisticians.