Reviews of Attainable Hi-Fi & Home-Theater Equipment

According to my notes, it’s only been about eight months since I last posted a rant about the ongoing war between objectivists and subjectivists in audio, and I thought that would be enough said for at least a year. But you know what they say about the best laid schemes o’ mice an’ Wookiees, right? Here we are again. And yet again, what’s driving my desire to address this topic is that, as an objectivist, I feel like my position is constantly being misrepresented at worst or misunderstood at best.

What do I mean by that? Well, many of you who’ve been reading my stuff for any amount of time know how important I think objective bench-testing and measurements are to the overall review process. Given that, what would be your best guess if I asked you to predict what I think is more important: listening or measuring?

The answer is obvious. It’s listening. And if that response didn’t surprise you in the slightest, congratulations. You can stop reading now. There’s nothing in this rant that’s going to illuminate such a self-evident truth any more clearly.

But if that answer is surprising to you—if you assume every measurement freak is guided by graphs and charts and tables alone and believes every aspect of the listening experience can be gleaned from cold, hard specifications—you’re not actually arguing against any of the objectivists I know and converse with. You’re arguing with a straw man.

If you’re not intimately familiar with the term, the straw-man fallacy is an informal logical fallacy with roots dating back as far as Aristotle, although he didn’t articulate it in quite the same way we do these days.

Perhaps the most succinct description comes from the popular website Your Logical Fallacy Is, which describes it as follows:

You misrepresented someone’s argument to make it easier to attack. By exaggerating, misrepresenting, or just completely fabricating someone’s argument, it’s much easier to present your own position as being reasonable, but this kind of dishonesty serves to undermine honest, rational debate.

Of course, that’s intentional straw-manning, and a lot of audio scam artists are quick to undermine the arguments of objectivists because they’ve got some B.S. to sell and don’t want to be called out on it. But I think even many well-intentioned subjectivists unintentionally straw-man the position of us objectivists because they just don’t understand us.

My friend Steve Guttenberg is one such person. Steve is one of the nicest human beings on the planet and a genuine asset to our hobby. He just wants everyone to get along and enjoy great sound and great music. But in defending his subjectivist opinion and his dismissive attitude toward measurements as part of the review process, he inadvertently makes straw men of folks like me.

Take one of his popular videos on the subject, with the straightforward title, “Is it possible to measure sound quality? I don’t think so!” Ever the picture of humility, Steve describes his gear-review process as simply this: “I give my opinion, that’s it. It’s all I have to give.” So far, so good. How could anyone argue with that? It works for him, it works for his viewers, and he’s built a heck of a reputation out of doing that and nothing more.

But in attempting to describe the viewpoint of people like me, Steve immediately—and I think completely unintentionally—goes off the rails. He describes my process of backing up subjective impressions with objective data as follows: “The thing about measurements is, they’re satisfying in a way that they’re, well . . . objective. They’re not somebody’s opinion; they’re cold, hard facts. A measures better than B—done. That’s all there is. That’s all she wrote. A measures better than B and is therefore better.”

The question I would ask to dismantle that argument without even trying is this: what does that even mean? What does a better measurement even look like in the absence of controlled listening tests? What are these measurements supposed to describe, if not a product that either sounds good or doesn’t? And how could anyone know that without controlled listening tests?

Did some middle-aged dudes sit around in a smoky back room and just arbitrarily decide what a good measurement looks like and what a bad one looks like? Not in any sense that really matters (and we’ll get to the outlier exceptions in a bit, I promise).

The real answer is that, by and large, what we objectivists mean when we say that one component or speaker measures better than another is simply that its measurements more closely resemble those of components that sound better to most people in controlled conditions.

As I was typing the above, I found myself getting more and more frustrated by this disconnect, so I sent an email to the patron saint of audio objectivism, Dr. Floyd Toole, and asked if he had any words of wisdom for me. Unfortunately, he was submitting the manuscript for the fourth edition of Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms and didn’t have much time to chat. He did send me some pages of the final manuscript that touch on the subject, none of which I can quote here. But his email did, as expected, include some mighty wise words.

“It is a never-ending and fundamental problem in the industry,” he said. “BTW, I don’t know anyone who is so attached to measurements that they don’t listen. It was listening that gave us entry to understanding measurements. Good luck, have fun, teach reality—there is so little of it in evidence these days.”

So there you have it. A fundamental truth bomb from the man who used speaker measurements to revolutionize our understanding of the things. When you talk to the people who are doing or have done the real work of correlating controlled subjective impressions with objective data, they’re always working to understand why people preferred the sound of one thing over another, and measurements are one (in my opinion indispensable) way of doing that.

But, as much as it pains me to admit this, there’s one thing I disagree with Dr. Toole about. Or at least I have one experience that’s different from his. When he says, “I don’t know anyone who is so attached to measurements that they don’t listen,” I want to nod my head and agree, but sadly, there are data junkies out there on the internet who think you can look at the measurements of headphones or speakers and know everything about them, to the point of calling beloved headphones that measure idiosyncratically “failed products.”

Also, there are groups of so-called objectivists who think the performance of a piece of electronics can be so effectively reduced to a measurement of SINAD (signal-to-noise and distortion) that they create lists ranking electronics by that single metric, without ever showing or referring to controlled listening tests that reveal the precise correlation between listener preference and SINAD, or whether there’s a point of diminishing returns. In other words, can anyone really hear the difference between the noise and distortion of two components, one with SINAD of 105dB and the other with SINAD of 110dB?
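For readers who haven’t run into the metric before, SINAD is simply the ratio, expressed in decibels, of a device’s total output power (signal plus noise plus distortion) to the power of everything that isn’t the signal (noise plus distortion). Here’s a minimal sketch in Python of how one might estimate it from a captured test tone; the sample rate, tone frequency, and noise level are all illustrative assumptions, not anyone’s actual test conditions:

```python
import numpy as np

def sinad_db(captured, fs, f0):
    """Estimate SINAD: total signal power vs. noise + distortion power.

    Simplified sketch: least-squares-fit the fundamental at f0, subtract
    it, and treat whatever remains as noise + distortion.
    """
    t = np.arange(len(captured)) / fs
    # Fit sine and cosine components of the fundamental
    basis = np.column_stack([np.sin(2 * np.pi * f0 * t),
                             np.cos(2 * np.pi * f0 * t)])
    coeffs, *_ = np.linalg.lstsq(basis, captured, rcond=None)
    fundamental = basis @ coeffs
    residual = captured - fundamental  # noise + distortion
    return 10 * np.log10(np.sum(captured**2) / np.sum(residual**2))

# Hypothetical capture: a 1kHz tone with a tiny amount of added noise
fs = 48_000
t = np.arange(fs) / fs
noisy = (np.sin(2 * np.pi * 1000 * t)
         + np.random.default_rng(0).normal(0, 1e-5, fs))
print(f"SINAD: {sinad_db(noisy, fs, 1000):.0f}dB")
```

The point of the sketch is the ratio itself: a 5dB difference in SINAD means the residual junk is about three times quieter in power terms, but when that junk is already 100dB or more below the signal, both versions are far below the noise floor of any real listening room.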

I mean, if there is, I want to read the white paper, evaluate the methodology, and learn something I didn’t know before. But until someone gives me evidence to the contrary, I’m gonna side with folks like Gene DellaSala, who deconstructed this SINAD obsession pretty thoroughly. Hell, even Amir Majidimehr of AudioScienceReview.com, the originator of the famous SINAD ranking of electronics, has tried to explain to his readers that it’s not the be-all and end-all metric that some of them seem to think it is.


Just to be clear, though, I’m not saying we should stop measuring SINAD. I’m not saying we should stop reporting SINAD measurements. Such measurements are, in one sense, a way of keeping unscrupulous manufacturers in check and making sure we call out an amp or component with truly audible noise/distortion issues (<80dB SINAD, for example).

Steve says that the product engineers know more than reviewers, so no measurement is going to catch a product developer making a mistake. I strongly disagree with that. Or at least I have a take that’s incompatible with that. Just recently, we saw measurements that revealed a manufacturer’s claims about its $17,000 phono preamp to be inconsistent with reality, and someone at the manufacturer in question got their feelings so hurt that they issued a copyright strike and had the video taken down.

But the original video still lives on the Internet Archive. And the Streisand Effect has kicked in so hard that now the leading discussion about that product is how the manufacturer tried to bury a bad review. And people are reuploading the original video in solidarity. So, yes, we absolutely must do measurements, if only to confirm or deny the claims of manufacturers operating in a post-fact world. Sorry, Steve, but some manufacturers are indeed charlatans.

Measurements are also a useful tool in helping us understand what we’re hearing—or not hearing—as I’ve said. In my recent review of the Dynaudio Emit 30 loudspeaker, I talked at length about some recessed frequencies that caused the speaker’s tonal balance to rub me the wrong way. I described time and again a loss of energy between 1kHz and 5kHz that made the speaker sound dull and lifeless when rendering percussion and flutes specifically. What I didn’t necessarily hear—not consciously—was an excess of energy at around 1kHz. The dips registered to my ears and brain far more than this spike did, despite the fact that the spike is low-Q enough that, in isolation, I think I would have heard it.

I’ve learned something from that. And I don’t know if that knowledge will change the way I perceive the next idiosyncratic speaker that crosses my threshold. But I’m still armed with that understanding of what I was hearing, when listening alone wouldn’t have given me that understanding.

And to be clear, we’ve known for ages that too much bass will also register in the brain as too little treble, and vice versa. We know that once the balance of frequencies gets off, we perceive it in unintuitive ways. But I’m not sure I’ve ever seen any generalizations about the correlation between subjective preference and objective measurements that fully explain why I heard everything but that massive excess of energy at 1kHz.

If you want to understand the brain of the objectivist better—assuming you’re not one yourself, of course—it boils down to this: What we fundamentally believe is that if you can hear it, we can measure it. And if we can measure it, we might be able to make better predictions about what people will and won’t find pleasing when they hear it.

The converse isn’t necessarily true, though: just because we can measure it doesn’t mean we can hear it. Maybe we can; maybe we can’t. But we measure it anyway because we might gain some understanding down the road that retroactively explains things we heard.

As I always say, a speaker whose measurements reveal even on-axis response and good dispersion characteristics will sound good to most people. On the other hand, a speaker with wobbly on-axis response and uneven dispersion may or may not sound good to you, depending on what’s going on across the entire listening window. It’s kind of a coin toss. And if it’s purely a coin toss, what on earth is my opinion worth?

Picking

So with all that said, let’s go back to my buddy Steve’s inadvertent straw-man summary of the objectivist’s opinion. He claims that we measurement true-believers effectively say, “Measurements [are] cold, hard facts. A measures better than B—done. That’s all there is. That’s all she wrote. A measures better than B, and is therefore better.”

I don’t recognize myself in that statement at all. In fact, I don’t recognize the viewpoint of anyone I know who understands what any of these measurements actually tell us. Instead, I would phrase it as follows: “Measurements provide some confirmation of what we’re hearing, and also serve as a check against spurious claims from the manufacturer. And ourselves. Measurements give my readers more confidence in my subjective impressions. Or at least I hope they do. They also help us explain why we like the sound of one speaker more than another. If A measures similarly to another speaker that consistently wins in blind listening tests, and B measures more similarly to a speaker that’s hit-or-miss in blind listening tests, the measurements may help us understand why, and likely serve as a predictor of which speaker most listeners would prefer in a blind listening panel.”

That’s a bit of a mouthful, isn’t it? Nuance is hard.

Listening

But the fact of the matter is that it’s hard to get this point across when subjectivists seem to think that we objectivists don’t consider listening to be the most important thing. How could it not be? But without measurements, it’s difficult to fully explain what we’re hearing—and furthermore, to demonstrate what we’re hearing. I spent years as a “just trust me” audio journalist. And yes, I’ll admit it: it’s scary to have these objective data that could easily reveal my listening impressions to be the ravings of a lunatic.

But I can’t imagine going back to reviewing without measurements. Because—and I know this is repetitive, but apparently it needs to be repeated ad nauseam—without them, it’s more difficult to explain how all this gear works and what makes one component sound different from another, except through hand-wavy descriptions so vague as to be simultaneously uninformative and irrefutable.

. . . Dennis Burger
dennisb@soundstagenetwork.com