Acoustics Question for the experts

Hellbound

I've been having a discussion elsewhere about digital signals versus analog, covering things such as quality and sampling rate.

Well, we know that a digital signal cannot be an exact replica of an analog source, because digital by definition cuts the amount of information down by sampling at some rate. It would require an infinite (or nearly so, down to one sample per quantum of whatever unit) sampling rate for this. The discussion turned to digital sounding better, error control, and whatnot.

Anyway, to make a long story short, the question arose as to the "sampling rate" of the human ear. Basically, what is the highest sample rate that the human ear can distinguish? In other words, can a human ear tell the difference between an original, analog source and the same source sampled at, say, 128kHz? 256kHz? Any of our audiophiles know the answer to this?
 
Huntsman said:
Well, we know that a digital signal cannot be an exact replica of an analog source, because digital by definition cuts the amount of information down by sampling at some rate.


That's only part of the issue. An analog source also has a limited resolution, due to physically required distortion and noise mechanisms. (Yes, I mean "required", as in you can't make them go away, ever.)

Furthermore, all real-world analog signals have finite bandwidth and finite energy, as long as we stay out of cosmology and particle physics, and the like.

So, you can't argue for any infinities for analog signals. They (literally) cannot exist.



It would require an infinite (or nearly so, down to one sample per quantum of whatever unit) sampling rate for this.


No.

There is something called the "Shannon Sampling Theorem", which grew out of the Nyquist criterion, and it shows that the sampling rate need not exceed twice the bandwidth of the source.

This is not conjecture, nor is there any controverting evidence. It's a mathematical theorem.
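
A minimal Python sketch of the theorem in action (illustrative numbers of my choosing: a 3 kHz tone sampled at 8 kHz): reconstruct the signal between the sample points with the Shannon interpolation formula, and the only error left comes from truncating the infinite sum, not from the sampling itself.

import numpy as np

fs = 8000.0                        # sampling rate, Hz (> 2 x the 3 kHz tone)
f0 = 3000.0                        # tone frequency, Hz
n = np.arange(256)                 # sample indices
samples = np.sin(2 * np.pi * f0 * n / fs)

def reconstruct(t):
    # Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n)
    return np.sinc(fs * np.asarray(t)[:, None] - n) @ samples

t = np.linspace(0.002, 0.030, 400)             # instants between the samples
err = reconstruct(t) - np.sin(2 * np.pi * f0 * t)
print(np.max(np.abs(err)))         # small, and shrinks away from the edges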

Now, in practice, one must use filters to limit the bandwidth of a signal, and those filters can introduce problems, but that is another story for another day, and one that is playing out right now in the pro audio arena.


The discussion turned to digital sounding better, error control, and whatnot.


As long as the bits are captured right, digital adds no more distortion than it had when it was originally digitized, coded, or whatever.

Every time an analog signal is amplified, re-recorded, transferred, transmitted, repeated, etc., there MUST, and I mean MUST, be a finite degradation in the analog signal.

That is the fundamental difference between digital and analog signals.

The two things that make this possible in a digital signal are sampling and quantization. Sampling creates the frequency limiting, and quantization sets the noise floor.
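
As a minimal sketch of that noise floor (illustrative, not from any particular system): an N-bit uniform quantizer leaves a full-scale sine with a signal-to-noise ratio of roughly 6.02*N + 1.76 dB, which is easy to verify numerically.

import numpy as np

def quantizer_snr_db(bits):
    n = np.arange(1_000_000)
    x = np.sin(n / 100.0)                   # full-scale test tone
    step = 2.0 / 2 ** bits                  # quantizer step across [-1, 1]
    noise = np.round(x / step) * step - x   # quantization error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

for b in (8, 16, 24):
    print(b, round(quantizer_snr_db(b), 1), round(6.02 * b + 1.76, 1))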

Drat, I just gave a long talk about this in Chicago, but they haven't put up my slides yet. I'll bug them to do so.


Anyway, to make a long story short, the question arose as to the "sampling rate" of the human ear.


As the ear isn't linear, that question is more or less meaningless.


Basically, what is the highest sample rate that the human ear can distinguish?


THAT question is meaningful, but you must include the filtering issues as well as the bandwidth issues, i.e. state it as "what combination(s) of filtering and sampling is/are not audible".

Most adults can't hear beyond 20kHz; most, in fact, can't hear above 16kHz. Kids can get up to 25kHz or so, until growth and the noise of the modern world ruin that.

So, it's probably reasonable to say that 20kHz as a frequency maximum is workable for adults. However, if we make the sampling rate very close to twice that, the filters that do the filtering may have a main lobe (i.e. the energetic part of the filter's impulse response) that is as long as or longer than that of the ear, and may "smear" the time components, or even, by virtue of exceeding the ear's time resolution, create frequency smearing (remember: ear, nonlinear; ear, nonlinear).

My own take is that with good, slow transition filters, 64kHz ought to do for adults, and 88.2 kHz for most any human.

That is, however, open to argument and testing.
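
To put rough numbers on that filter trade-off, here is a minimal sketch (the pass/stop frequencies and the 100 dB stopband are my assumptions, using the standard Kaiser-window length estimate): a brick-wall filter squeezed against Nyquist needs a far longer impulse response, and hence a longer main lobe, than a relaxed filter at the higher rate.

from scipy.signal import kaiserord

def fir_length(fs, pass_hz, stop_hz, atten_db=100.0):
    # Kaiser-window estimate of the FIR length needed for a given
    # transition band (width is normalized to the Nyquist frequency)
    width = (stop_hz - pass_hz) / (fs / 2.0)
    numtaps, _ = kaiserord(atten_db, width)
    return numtaps

print(fir_length(44100, 20000, 22050))   # brick wall at 44.1kHz: ~140 taps
print(fir_length(88200, 20000, 40000))   # relaxed filter at 88.2kHz: ~30 taps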

To date, despite the proliferation of higher than 44.1 sampling rates, it's been )(*& hard to actually run a test, because it's ()*&*( hard to get material that can be used in a fair fashion to run the test. It's also hard to find a system that can REPRODUCE the stuff well enough to be sure.

Is 16/44 enough? Not clear, and maybe not.


In other words, can a human ear tell the difference between an original, analog source and the same source sampled at, say, 128kHz? 256kHz? Any of our audiophiles know the answer to this?

Well, I hope a psychoacoustician qualifies for giving some answers here.
 
I don't think anyone, except maybe a uniquely gifted individual, can make any distinction. I know I can't.

Also, digital can produce much better quality than analog, even if it's 'cut up' as you say, because many overlook the 'noise' and artifacts that occur with analog. Though the wave is continuous, it is difficult to record and reproduce it as cleanly.

And there is also the subjective side to the argument.

Some feel that the artifacts are actually desirable, as they add 'warmth' to the recording, and I think there is some validity to that. I play keyboards that produce a very clean sound, but if I play through a digital amp (or rather, solid-state would be more accurate), it's a little too pristine for my liking. I prefer the cleanly produced sound through an older tube-type amp, as it does seem to make it sound 'warmer' and less clinical.

The same applies to the vinyl versus CD debate. Generally, digital is cleaner but analog is warmer; if accuracy is the question (as in a non-musical situation), I'm pretty sure that digital would be the way to go, especially if you were willing to bump up the sampling rate.
 
Wow! Thank you jj. For an ordinary music lover who has a sort of decent hi-fi and both digital and analogue sources, that was absolutely fascinating and extremely informative.

I'm grateful for your patience in explaining it.

Rolfe.
 
Yeah, jj, I'll have to keep you in mind the next time I have a related question.

You da man.
 
jj,

Thanks a million. That's exactly the info I was looking for. The discussion is in an online classroom for a college computer course, and I was talking in ideals rather than realities to keep it somewhat simple (i.e. I ignored noise and such, and had no clue to think about filtering). Anyway, your comments in general are a big help, and I posted a link to here in the classroom. Hopefully, it'll get me a boost to my class participation grade ;)

So, tell me, what's a good, inexpensive surround sound system for my new DVD player? ;)
 
Most audiophiles regard analog as "warmer" and digital as "harsh". The cause, though, is directly opposed to the accepted wisdom of most hi-fi types. The warmth and harshness stem not from an artifact in the digital A/D-D/A process but from the LACK of the built-in noise that is the reality of analog sound. Which sounds more pleasant: a traffic jam, complete with horns, in the winter with snow on the ground, or the same traffic jam in summer with no damping and all hard reflective surfaces exposed?

Analog aficionados (of which I am one, at least in guitar amplifiers) will throw around terms like stair-casing, aliasing and Nyquist limits, but the fact is that audio information, with regard to the human ear, is more faithfully reproduced by digital devices than analog ones. If you enjoy a KT-88 powered McIntosh, that's fine; just don't try to attribute it only to analog being better than digital.
 
Cracking response jj, maybe you can answer a more trivial digital-only conundrum we were playing with the other day.

We'd been converting CDs to MP3 for our various iPods and associated expensive toys.

The debate was around sampling rates from CD to MP3 ... 128kbps is popularly described as "CD quality". Is that actually the sampling rate used in preparing a CD or is it just that that is considered indistinguishable?

We tried researching it but couldn't find enough data on what is printed on Audio CDs.
 
Benguin said:
Cracking response jj, maybe you can answer a more trivial digital-only conundrum we were playing with the other day.

We'd been converting CDs to MP3 for our various iPods and associated expensive toys.

The debate was around sampling rates from CD to MP3 ... 128kbps is popularly described as "CD quality". Is that actually the sampling rate used in preparing a CD or is it just that that is considered indistinguishable?

We tried researching it but couldn't find enough data on what is printed on Audio CDs.

CDs use 16-bit PCM at a 44,100 Hz sampling rate, for a bit rate of roughly 1.4112 megabits/second.
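
The arithmetic, as a quick check:

# CD bit rate = sampling rate x bits per sample x channels
print(44100 * 16 * 2)   # 1411200 bits/s, i.e. about 1.41 Mbit/s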

128kb/s MP3s are, for difficult signals at least, fairly easily distinguished from the original CD signal. A test by Grusec and Soulodre from the CRC (hm, there ought to be a web citation for that somewhere) has much information on this issue, as do the MPEG Audio verification tests themselves.

"CD Quality" is perhaps a marketing term. I would, personally (yes, I've been in this argument before, and been roundly reviled for being way too picky) insist that "CD Quality" mean that subjects (properly trained, selected, etc) subjects, in a properly run, sensitive, controlled ABC/HR or ABX test, can not distinguish between the coded (be it MP3 or other) signal and the original.

It is surprisingly difficult to get a null result in such a test. I know. I've written lots of perceptual coders in my lifetime, including big parts of MP3 and MPEG-2 AAC.
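
As a minimal sketch of the ABX bookkeeping (illustrative only, not the full test protocol): the listener repeatedly guesses which of A or B matches the hidden X, and a one-sided binomial test says whether the hit rate beats coin-flipping.

from math import comb

def abx_p_value(correct, trials):
    # P(at least this many correct) under the null hypothesis of guessing
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # ~0.038: probably a real audible difference
print(abx_p_value(9, 16))    # ~0.40: consistent with guessing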

Oh, and MP3 is a "perceptual coder". Let me find where Sharma has my ppt on file...

http://www.ece.rochester.edu/~gsharma/SPS_Rochester/presentations/JohnstonPerceptualAudioCoding.pdf

There, that one is up. I'll bug people about the ADC tutorial I did two weeks ago.
 
Yes, I thought it was an attempt to play on subjectivity, to be optimistic about how many 'tracks' one can get on a memory card of x megabytes.

Any decent studies about MP3 quality at different bitrates?

I agree with your definition of "CD Quality", and your postulation of how the term is used.

I assume the digitisation for MP3 uses more advanced algorithms than that on conventional CD audio, resulting in a better (excuse the clumsy term) bit-to-quality ratio. Is that correct?
 
Benguin said:
I assume the digitisation for MP3 uses more advanced algorithms than that on conventional CD audio, resulting in a better (excuse the clumsy term) bit-to-quality ratio. Is that correct?

Click on the URL in the post above the one quoted. It tells you more than you ever EVER wanted to know about MP3, probably, from the veriest basics. (Click on the www button in my posts for why I'm giving that kind of talk.)

http://www.ece.rochester.edu/~gsharma/SPS_Rochester/presentations/JohnstonPerceptualAudioCoding.pdf

For your reference, included again directly above.

The various standards tests and the CRC tests are quite good tests of the quality of MP3 and other perceptual coders at various rates. Google "Soulodre + crc".

http://www.telos-systems.com/news/reprints/rw_092601_aac.pdf

is an advertisement including data from one of Gilbert's papers. MP3 corresponds to the "Layer 3" data points in the graph there.
 
I was wondering about sampling rates for my mp3 conversions, so I set up a blind test: a short sample encoded at 128, and the same one at 320. I put them both into my media player, with "shuffle" on. As it switched back and forth (I couldn't see the playlist), I could not tell a difference, so I encode 'em all at 128 now. Actually, I didn't hear much difference between that and the wav file either, and I have in the past spent many hundreds of dollars on audio gear, including Bose speakers. I still keep the wav files (and the original vinyl) though.
 
alfaniner said:
I was wondering about sampling rates for my mp3 conversions, so I set up a blind test: a short sample encoded at 128, and the same one at 320.

Sampling rate and bit rate don't have to track each other. Down to 128kb/s you are still most likely using the same sampling rate.

For very low rates, there is some advantage to reducing the sampling rate via a process called (for no good reason) decimation.

Crochiere and Rabiner's "Multirate Digital Signal Processing" is a place to look for that.
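
A minimal sketch of decimation itself (scipy.signal.decimate bundles the required low-pass filter and the downsampling into one call; the tone and the factor of 4 are my illustrative choices):

import numpy as np
from scipy.signal import decimate

fs = 44100
t = np.arange(fs) / fs                   # one second of signal
x = np.sin(2 * np.pi * 1000 * t)         # 1 kHz tone

y = decimate(x, 4)                       # low-pass filter, keep every 4th sample
print(len(x), len(y))                    # 44100 -> 11025 (new rate 11025 Hz)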
 
jj said:
128kb/s MP3s are, for difficult signals at least, fairly easily distinguished from the original CD signal. A test by Grusec and Soulodre from the CRC (hm, there ought to be a web citation for that somewhere) has much information on this issue, as do the MPEG Audio verification tests themselves.

"CD Quality" is perhaps a marketing term.
jj

If you decided to plonk all your CDs onto a hard drive for playback, what bit rate would you use for encoding to WMA format? I'm talking about music files for everyday listening on low end (say $1000 speakers and $800 amplifier) equipment, not for archiving.

The reason I ask is that I'm currently doing just that, and I'm not sure whether or not I can hear any difference between the original CD and the 128kb/s WMA version; the power of suggestion is, as you know, very compelling when it comes to audio. Now, MS Media Player 9 can encode losslessly to WMA format, but the compression is only of the order of 2 to 1; I would have thought music CD data would have a lot more redundancy than that.
 
Iconoclast said:
jj

Now, MS Media Player 9 can encode losslessly to WMA format, but the compression is only of the order of 2 to 1; I would have thought music CD data would have a lot more redundancy than that.

Well, the APT-X format, which I gather isn't a perceptual encoder, but a form of ADPCM, claims lossless compression of 4:1.

It's used in ISDN codecs for doing remote recording; APT-X and MPEG layer 2 are mostly used by the broadcast industry, while the de facto standard for the music industry seems to be Dolby AC-2 at 256 kbps.
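
As a toy sketch of the differential idea behind ADPCM (heavily simplified, and my illustration only; real codecs like APT-X adapt the step size and the predictor): each sample is coded as a quantized difference from a running prediction.

import numpy as np

def dpcm_encode(x, step=0.01):
    codes, pred = [], 0.0
    for s in x:
        c = int(round((s - pred) / step))   # quantized prediction error
        codes.append(c)
        pred += c * step                    # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.01):
    out, pred = [], 0.0
    for c in codes:
        pred += c * step
        out.append(pred)
    return np.array(out)

x = 0.5 * np.sin(2 * np.pi * np.arange(200) / 50)
y = dpcm_decode(dpcm_encode(x))
print(np.max(np.abs(x - y)))   # error bounded by step / 2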
 
ktesibios said:


Well, the APT-X format, which I gather isn't a perceptual encoder, but a form of ADPCM, claims lossless compression of 4:1.

It's used in ISDN codecs for doing remote recording; APT-X and MPEG layer 2 are mostly used by the broadcast industry, while the de facto standard for the music industry seems to be Dolby AC-2 at 256 kbps.

I really don't want to comment on various things I've invented, and that I work on, and that others use to compete, thank you, so I won't.

I will say that regarding APTX you might look back to a 1979 JAES paper by some Johnston guy...

You also ought to check up on what broadcasters and theatres are using these days, but that's another story.
 
