
Damned audiophiles

At the end user's D/A converter

Past that I don't think anyone cares about accuracy; they choose speakers and such based on how they like it, and accuracy be damned.

As long as the signal at the output of their D/A converters is measurably similar (and we certainly can measure this) to our consoles' mix buss, our industry has done its job.
 
At the end user's D/A converter

Past that I don't think anyone cares about accuracy; they choose speakers and such based on how they like it, and accuracy be damned.

As long as the signal at the output of their D/A converters is measurably similar (and we certainly can measure this) to our consoles' mix buss, our industry has done its job.

But, that does not address the issue of acoustics!
 
They're not chasing "accuracy", unless we define that term as meaning "what I think the performance should have sounded like had I been there sitting in the 10th row back centre stage" without actually having been there.

Would the 10th row back be accurate? Seems to me that some of the sound would have been absorbed by the audience (AKA "water-based acoustic absorbers") by that point. Perhaps they should imagine themselves in an empty room in front of the band. But then the audience won't absorb the reverb of the room, and the sound engineer will probably have done the sound check bearing in mind that the room will be filled and so guessing how much sound the audience will absorb...

In other words, even picking a point from which to objectively declare something "accurate" is purely subjective.
 
But, that does not address the issue of acoustics!

Nor a host of other things, including the ubiquitous smiley face EQ in the hands of every single user in the 80's

I get your point that I'm missing the consumer's end of the chain, but I think we can all agree about where it needs to be accurate before joe numbnuts gets his hand on it
 
Except that when you do a live performance in the listening room, you're not in the Concertgebouw. So you've lost your reference.

Only if the Concertgebouw is your reference. I think most would be satisfied with accurate reproduction of live instruments in their listening/living room. But if you really want the Concertgebouw, my second method can be used.

"Which sounds more realistic" is testing preference and only preference. No testable, verifiable equivelence there.

Sure there is - just repeat the experiment many times, with many different listeners. Assuming there is a clear preference on average between A and B (where those are two different recording methods, or two different stereo systems, or whatever), that's a win. If there isn't a clear preference, change something else and A/B that instead. The end result of many iterations of that is likely to be pretty decent.
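To put a number on "clear preference on average", here's a minimal sketch of the idea - made-up figures and plain sign-test logic, not anybody's actual test protocol:

[CODE]
# Hypothetical result: 41 of 60 listeners preferred method A over method B.
from math import comb

n, k = 60, 41
# Exact two-sided p-value against the "no real preference" (fair-coin) null.
upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
p_value = min(1.0, 2 * upper_tail)
print(f"{k}/{n} preferred A, p = {p_value:.4f}")
# A small p-value (as here) says the averaged preference is probably real; if it
# isn't small, change one variable and run the next A/B round instead.
[/CODE]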

Your third method has some hope of working, but now, how do you record this information? You have two ears, and your head moves around at a concert. You have to capture a lot more than the mere pressure (which is 1/4 of the information at a single point in the atmosphere) at one point in order to do this.

Yes, I'd do it with two microphones on a dummy head, and take care of head movements somehow (either actually move the head, or average over phases, or something I'd have to think more carefully about).

But I actually think this third method is probably the weakest of the three, because certain aspects of the sound field cannot possibly be reproduced accurately in some random listening room (like reflected sound with all its crazily complex phase and delay patterns, which is most of the sound in a concert hall). So knowing what to focus on and what to let slide requires knowing how humans perceive sound and how to fool them - which is really hard. The first two methods take that out of the equation by using a human to tell you what they perceive/prefer, and use averaging over many people to beat down the BS level.

I'm not saying measurements like that are useless - far from it, they are crucial to set a baseline and just get close. And the more we learn about human audio perception the more useful they will be. But I think in the end real live humans are the ideal "microphone" for that sort of test, because what we care about is what real live humans perceive, not what mics record.
 
"A music lover will stop what he's doing and stay glued to a favorite piece of music even if it's coming over a 3" speaker or a public-address system..." - Ken Rockwell

Though as an audiophile, if the sound is really bad, I will get annoyed and then when I get home I will listen to the tune again on my hifi :D


...on the other hand, the best sound reproduction in the world doesn't mean a thing when you're dealing with source material that's inherently noisy.

Noise, distortion, and shallow dynamics can even enhance the listening experience of some music, adding color and ambiance. I absolutely love the lo-fi sound of recordings like this, this, this, and this.
 
As I say, it's been a while and I can't remember the specifics, but IIRC, it wasn't just using the CAT5 as is, but rather taking the cables apart and re-plaiting them so that many strands connected to a single terminal of one connector - essentially turning the numerous small wires into one large wire...


Yeah, that makes absolutely no difference in the function of an audio signal cable, and even less difference in that of a power cable.

The only plausible reason to do that is for aesthetic reasons, or to make it look different for the purpose of selling it at a higher price to people who don't know the difference.
 
This thread reminds me of some sample headphone systems we got from Studer some years ago (back when I was at the BBC). The idea was to use infrared reflectors on the headphones to monitor the listener's position, and use DSP to make the audio move with the listener so it felt as if the soundfield was static and you weren't wearing headphones at all. OK, people had been talking about doing this for 20 years or more, and I can't remember what year this was, but it was back when actually achieving it was some pretty hot ****. I think it spun off from their development work on virtual surround panning on their Vista consoles.

Anyway, the effect was very convincing, if a little granular (turn your head slowly and you could hear the steps), but the really fun thing was you could choose your imaginary listening environment. They had sampled a selection of concert halls, studios, etc., and by adding in early reflections from virtual walls they reproduced a very impressive effect. The 'Volkswagen Passat' setting was fun.
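For anyone curious what the head-tracking part boils down to, here's a toy sketch of the core idea - emphatically not Studer's DSP, just a crude delay/level model standing in for real binaural filters: re-render each source at its room position minus the tracked head yaw, so the image stays put when you turn your head.

[CODE]
# Toy head-tracked binaural renderer (illustrative only, not Studer's system).
import numpy as np

FS = 48_000           # sample rate, Hz
HEAD_RADIUS = 0.0875  # metres, average-ish head
C = 343.0             # speed of sound, m/s

def render(mono, source_az_deg, head_yaw_deg):
    """Pan a mono signal to stereo for a source fixed in the room while the
    listener's head is turned by head_yaw_deg (positive = turned right)."""
    rel = np.deg2rad(source_az_deg - head_yaw_deg)   # azimuth relative to the nose
    itd = (HEAD_RADIUS / C) * (rel + np.sin(rel))    # Woodworth's delay approximation
    delay = int(round(abs(itd) * FS))                # interaural delay in samples
    g_l = np.sqrt(0.5 * (1.0 - np.sin(rel)))         # constant-power level difference
    g_r = np.sqrt(0.5 * (1.0 + np.sin(rel)))
    pad = np.zeros(delay)
    if itd > 0:   # source to the listener's right: left ear hears it later
        left, right = np.concatenate([pad, mono]), np.concatenate([mono, pad])
    else:         # source to the left (or dead ahead): right ear hears it later
        left, right = np.concatenate([mono, pad]), np.concatenate([pad, mono])
    return g_l * left, g_r * right

# Head turned 30 degrees right, source straight ahead in the room: it should
# now render 30 degrees to the listener's left, so the image appears static.
tone = np.sin(2 * np.pi * 440 * np.arange(FS) / FS)
L, R = render(tone, source_az_deg=0.0, head_yaw_deg=30.0)
[/CODE]

The real thing presumably used proper binaural processing plus those sampled early reflections; the sketch only shows the head-relative re-rendering that makes the soundfield feel nailed to the room.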

We almost bought into this system for mixing live concerts, but both sides had irreconcilable expectations on price. :)
 
In my experience, the sound quality at many live venues isn't as good as I can get from my hifi at home; but the overall experience of being there usually more than compensates, so the imperfections in live sound quality matter less than they would if I were listening at home.

When I hear 'golden ears' saying 'veils are lifted', and there is a clearly obvious 'night & day' difference when a particular piece of kit is in use, yet they can't distinguish them in a blind test, I get suspicious.

Blind tests may be a pain, and unreliable at times, but they are some kind of objective benchmark. For me personally, a 'significant' difference means it should be audible under blind test. So if I'm going to spend a significant sum on an item that should make a significant improvement to my listening pleasure, I'd better be able to pick it out under blind test as the one I prefer.
 
Yeah, that makes absolutely no difference in the function of an audio signal cable, and even less difference in that of a power cable.

I know. I was responding to your assertion that the wires inside a CAT5 were too thin to carry an audio signal by explaining how this method got around that.

The only plausible reason to do that is for aesthetic reasons, or to make it look different for the purpose of selling it at a higher price to people who don't know the difference.

Or, as I've already said twice, if you have lots of CAT5 cable that you don't need lying around and not enough audio cable, which is the situation I was in when I considered it briefly.
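For the record, the "too thin" worry is easy to put numbers on. Ballpark wire-table values and a made-up 3 m run, just to show the scale:

[CODE]
# Rough resistance comparison, illustrative only: eight 24 AWG CAT5 conductors
# bonded in parallel vs. ordinary 16 AWG speaker wire.
R_24AWG_PER_M = 0.0842   # ohms per metre, solid 24 AWG copper
R_16AWG_PER_M = 0.0132   # ohms per metre, 16 AWG copper
LENGTH_M = 3.0           # assumed run length
CONDUCTORS = 8           # parallel conductors share the current

r_cat5_bundle = R_24AWG_PER_M * LENGTH_M / CONDUCTORS
r_speaker_wire = R_16AWG_PER_M * LENGTH_M
print(f"CAT5 bundle: {r_cat5_bundle * 1000:.0f} mOhm, 16 AWG: {r_speaker_wire * 1000:.0f} mOhm")
# Both are tens of milliohms against an 8 ohm load, which is why it makes no
# audible difference either way - the only real argument is using cable you
# already have lying around.
[/CODE]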
 
Nor a host of other things, including the ubiquitous smiley face EQ in the hands of every single user in the 80's

I get your point that I'm missing the consumer's end of the chain, but I think we can all agree about where it needs to be accurate before joe numbnuts gets his hand on it

Form follows function -- the smiley face EQ arose from playing 80's pop on 80's boom boxes. The small speakers in lightweight plastic boxes with no bass ports really needed the midrange cut and bass boost, and pushing the treble kept the vocals from getting muddied by the bass. The "loudness" button on some stereos basically does the same thing -- it increases clarity in high-noise environments (like cars) by cutting the mids and boosting both ends, without just increasing overall volume.
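If anyone wants to see the shape rather than imagine it, here's a trivial way to generate a "smiley" across the usual ten graphic-EQ bands (the 700 Hz dip and 6 dB depth are arbitrary, purely illustrative):

[CODE]
# Illustrative "smiley" curve: mid cut, bass and treble boost, modelled as a
# parabola in log-frequency across ten standard graphic-EQ bands.
import numpy as np

bands_hz = np.array([31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000])
mid_dip_hz = 700.0     # where the scoop bottoms out (arbitrary choice)
depth_db = 6.0         # how deep the midrange cut is (arbitrary choice)

gain_db = depth_db * (np.log10(bands_hz) - np.log10(mid_dip_hz)) ** 2 - depth_db

for f, g in zip(bands_hz, gain_db):
    print(f"{f:>6} Hz: {g:+5.1f} dB")   # roughly +5 at the extremes, -6 in the mids
[/CODE]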
 
In my experience, the sound quality at many live venues isn't as good as I can get from my hifi at home; but the overall experience of being there usually more than compensates, so the imperfections in live sound quality matter less than they would if I were listening at home.

When I hear 'golden ears' saying 'veils are lifted', and there is a clearly obvious 'night & day' difference when a particular piece of kit is in use, yet they can't distinguish them in a blind test, I get suspicious.

Blind tests may be a pain, and unreliable at times, but they are some kind of objective benchmark. For me personally, a 'significant' difference means it should be audible under blind test. So if I'm going to spend a significant sum on an item that should make a significant improvement to my listening pleasure, I'd better be able to pick it out under blind test as the one I prefer.

I don't go to any concert for the SQ; it varies from not bad (Ars Nova's version of Terry Riley's 'In C') to the truly awful (Iggy Pop).

Blind testing of cables shows that people who claim golden ears in reality have worse ears than those who listen to cable swaps and cannot hear any difference.
 
I had somebody telling me once that I must have at least 1 kW for each speaker in a 5-channel listening setup, with 89 dB (1 W/1 m) speakers. Assuming incoherent summing, that's a maximum output (if nothing breaks, cough hack) of thereabouts of 126 dB SPL, give or take, in a moderately dead room (i.e. one that doesn't store too much energy).
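The arithmetic on that checks out, for what it's worth - a quick sketch, assuming the 89 dB sensitivity holds at full power and ignoring power compression and room gain:

[CODE]
# Back-of-the-envelope check of the ~126 dB SPL figure quoted above.
import math

SENSITIVITY_DB = 89.0   # dB SPL for 1 W at 1 m
POWER_W = 1000.0        # per channel
CHANNELS = 5

per_speaker = SENSITIVITY_DB + 10 * math.log10(POWER_W)   # 89 + 30 = 119 dB
total = per_speaker + 10 * math.log10(CHANNELS)           # +7 dB, incoherent summing
print(f"{total:.0f} dB SPL")                              # ~126 dB at 1 m, if nothing melts
[/CODE]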

Let's not forget that's a level that causes near-instant harm to the cochlea with normal sound pressure levels.


So 5000 W for a 5-channel setup? What is the size and room capacity of your theater or discotheque?

My 400W bass rig (through a 2x15 speaker cabinet rated for 800W) rattles plates and vibrates stuff off of shelves and coffee tables when I turn the volume knob up past the 2:00 position.

At any rate, your speakers should be rated to handle the RMS output of your amp, plus plenty of overhead to handle the transient peaks. Overpowering your drivers is a good way to damage your speakers and the power stage of your amplifier.
 
Nor a host of other things, including the ubiquitous smiley face EQ in the hands of every single user in the 80's

I get your point that I'm missing the consumer's end of the chain, but I think we can all agree about where it needs to be accurate before joe numbnuts gets his hand on it

Agreed, but now what you need to be accurate becomes the question.
 
Only if the Concertgebouw is your reference. I think most would be satisfied with accurate reproduction of live instruments in their listening/living room. But if you really want the Concertgebouw, my second method can be used.
Well, the point is that one wants, in the home, to be immersed in the original venue, not in an instrument played in the home.
Sure there is - just repeat the experiment many times, with many different listeners. Assuming there is a clear preference on average between A and B (where those are two different recording methods, or two different stereo systems, or whatever), that's a win. If there isn't a clear preference, change something else and A/B that instead. The end result of many iterations of that is likely to be pretty decent.
You used the word "preference" yourself. Don't forget that the details of auditory memory last 200 milliseconds or less, and that the first level of abstracted detail lasts seconds to minutes. You're giving the subject an impossible task.
Yes, I'd do it with two microphones on a dummy head, and take care of head movements somehow (either actually move the head, or average over phases, or something I'd have to think more carefully about).
Next time you're at a concert (live, not reinforced, is the best choice for this) watch how people move their heads when listening. They are picking out the details (by reflex, effectively) in the soundfield that satisfy them. You need to be able to do the same thing in a playback room, MODULO the details that can be perceived. Yeah. Hard. Yes. I know.

http://www.linkwitzlab.com/Recording/acoustics-hearing.htm has the slide deck for a talk I've given all over the world on this very issue. :) I am pointing you to Siegfried's site because of the way he presents the slides; you can also find an older version of the deck at www.aes.org/sections/pnw/ppt.htm if you like.
But I actually think this third method is probably the weakest of the three, because certain aspects of the sound field cannot possibly be reproduced accurately in some random listening room (like reflected sound with all its crazily complex phase and delay patterns, which is most of the sound in a concert hall). So knowing what to focus on and what to let slide requires knowing how humans perceive sound and how to fool them - which is really hard. The first two methods take that out of the equation by using a human to tell you what they perceive/prefer, and use averaging over many people to beat down the BS level.

I'm not saying measurements like that are useless - far from it, they are crucial to set a baseline and just get close. And the more we learn about human audio perception the more useful they will be. But I think in the end real live humans are the ideal "microphone" for that sort of test, because what we care about is what real live humans perceive, not what mics record.

Indeed, your last sentence is my point.

It is possible to reproduce a soundfield to arbitrary accuracy in a small space (don't ask what it costs), but it's not necessary. The slide deck talks about what you DO need to introduce to sound accurate, but that's still not easy.

And we have to remember that diffuse radiation ability is important, and that a soundfield has 4 variables at any one point. A stereo miking captures 2 of the 8 soundfield variables at 2 points in a room, thereby missing nearly all of the actual soundfield information (but it will get most of the direct sound, which is kind of important... to say the least).
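Spelling out the bookkeeping in that last count, as I read it (in LaTeX shorthand):

[CODE]
% Acoustic state at a single point: pressure plus the three particle-velocity
% components -- four quantities.
\[
  s(\mathbf{r}, t) = \bigl(\, p,\; v_x,\; v_y,\; v_z \,\bigr)
\]
% Two pressure microphones therefore capture
\[
  \frac{2\ \text{pressures}}{2\ \text{points} \times 4\ \text{quantities per point}} = \frac{2}{8}
\]
% of the field variables at those two points -- and nothing anywhere else in the hall.
[/CODE]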
 
So 5000 W for a 5-channel setup? What is the size and room capacity of your theater or discotheque?

My 400W bass rig (through a 2x15 speaker cabinet rated for 800W) rattles plates and vibrates stuff off of shelves and coffee tables when I turn the volume knob up past the 2:00 position.

At any rate, your speakers should be rated to handle the RMS output of your amp, plus plenty of overhead to handle the transient peaks. Overpowering your drivers is a good way to damage your speakers and the power stage of your amplifier.

You'll note I didn't suggest it was good advice :)
 
