Well, we know that a digital signal cannot be an exact replica of an analog source, because digital by definition cuts the amount of information down by sampling at some rate.
That's only part of the issue. An analog source also has a limited resolution, due to physically required distortion and noise mechanisms. (Yes, I mean "required", as in you can't make them go away, ever.)
Furthermore, all real-world analog signals have finite bandwidth and finite energy, as long as we stay out of cosmology and particle physics, and the like.
So, you can't argue for any infinities in analog signals. They (literally) cannot exist.
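To put a rough number on the "required" part: even a bare resistor sets a noise floor. Here's a back-of-the-envelope sketch in Python, with purely illustrative values I picked myself (1 kOhm source resistance, room temperature, 20 kHz audio bandwidth):

```python
import math

# Johnson-Nyquist thermal noise: v_rms = sqrt(4 * k * T * R * B)
# Assumed, purely illustrative values: 1 kOhm source, 300 K, 20 kHz bandwidth.
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K
R = 1e3            # source resistance, ohms
B = 20e3           # audio bandwidth, Hz

v_rms = math.sqrt(4 * k * T * R * B)
print(f"thermal noise floor: {v_rms * 1e6:.2f} uV RMS")                            # ~0.58 uV
print(f"SNR ceiling vs a 1 V RMS signal: {20 * math.log10(1.0 / v_rms):.0f} dB")   # ~125 dB
```

That roughly 125 dB is a ceiling under those assumed conditions, before any electronics, tape, or vinyl ever get involved.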
It would require an infinite (or nearly so, down to one sample per quantum of whatever unit) sampling rate for this.
No.
There is something called the "Shannon Sampling Theorem", which grew out of the Nyquist conjecture, that shows the sampling rate need not exceed twice the bandwidth of the source.
This is not conjecture, nor is there any controverting evidence. It's a mathematical theorem.
Now, in practice, one must use filters to limit the bandwidth of a signal, and those filters can introduce problems, but that is another story for another day, and one that is playing out right now in the pro audio arena.
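Just to illustrate the flip side of the theorem (and why the band-limiting matters), here's a tiny NumPy sketch of my own: a tone above fs/2 produces exactly the same samples as an in-band tone, so once you've sampled, the two are indistinguishable.

```python
import numpy as np

fs = 44100.0                 # sampling rate, Hz (Nyquist frequency fs/2 = 22050 Hz)
n = np.arange(64)            # sample indices
f_in_band = 19000.0          # below fs/2: representable
f_alias = fs - f_in_band     # 25100 Hz: above fs/2, folds back onto 19 kHz

x1 = np.cos(2 * np.pi * f_in_band * n / fs)
x2 = np.cos(2 * np.pi * f_alias * n / fs)

# The two sampled sequences agree to floating-point rounding; the samples
# simply cannot tell them apart, which is why the bandwidth has to be
# limited to fs/2 *before* sampling.
print(np.max(np.abs(x1 - x2)))   # on the order of 1e-13
```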
The discussion turned to digital sounding better, error control, and whatnot.
As long as the bits are captured right, a digital signal adds no distortion beyond what it had when it was originally digitized, coded, or whatever.
Every time an analog signal is amplified, re-recorded, transferred, transmitted, repeated, etc., there MUST, and I mean MUST, be a finite degradation in the analog signal.
That is the fundamental difference between digital and analog signals.
The two things that make this possible in a digital signal are sampling and quantization. Sampling creates the frequency limiting, and quantization sets the noise floor.
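A quick numerical check of the quantization side (Python/NumPy, my own toy setup with a full-scale 997 Hz sine): uniform quantization to N bits lands you very close to the textbook noise floor of about 6.02*N + 1.76 dB below full scale.

```python
import numpy as np

# Quantize a full-scale sine to N bits and measure the resulting noise floor.
fs = 48000.0
t = np.arange(1 << 16) / fs
x = np.sin(2 * np.pi * 997.0 * t)              # full-scale test tone

for bits in (8, 16, 24):
    step = 2.0 / (2 ** bits)                   # quantizer step for a +/-1.0 range
    q = np.round(x / step) * step              # uniform (mid-tread) quantization
    err = q - x
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits:2d} bits: measured SNR {snr:6.1f} dB, "
          f"textbook 6.02*N + 1.76 = {6.02 * bits + 1.76:6.1f} dB")
```

So 16 bits puts the noise floor around 98 dB down. The sampling rate, meanwhile, says nothing about that floor; it only sets the bandwidth.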
Drat, I just gave a long talk about this in Chicago, but they haven't put up my slides yet. I'll bug them to do so.
Anyway, to make a long story short, the question arose as to the "sampling rate" of the human ear.
As the ear isn't linear, that question is more or less meaningless.
Basically, what is the highest sample rate that the human ear can distinguish?
THAT question is meaningful, but you must include the filtering issues as well as the bandwidth issues, i.e., state it as "what combination(s) of filtering and sampling is/are not audible".
Most adults can't hear beyond 20kHz; most, in fact, can't hear above 16kHz. Kids can get up to 25kHz or so, until growth and the noise of the modern world ruin that.
So, it's probably reasonable to say that 20kHz as a frequency maximum is workable for adults. However, if we make the sampling rate very close to twice that, the band-limiting filters may have a main lobe (i.e. the energetic part of the filter's impulse response) that is as long as or longer than that of the ear, and may "smear" the time components, or even, by virtue of exceeding the ear's time resolution, create frequency smearing (remember, ear. nonlinear. ear. nonlinear.)
My own take is that with good, slow transition filters, 64kHz ought to do for adults, and 88.2 kHz for most any human.
That is, however, open to argument and testing.
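For a rough sense of what the higher rates buy you, here's a back-of-the-envelope sketch (mine, with assumed targets: a 20 kHz passband edge, 100 dB of stopband attenuation, and the standard Kaiser-window length estimate) of how long the anti-alias/reconstruction filter has to be at each rate:

```python
import math

# Kaiser-window FIR length estimate: numtaps ~= (A - 7.95) / (2.285 * delta_omega),
# where A is the stopband attenuation in dB and delta_omega the transition
# width in radians per sample.  The targets below are assumptions, not gospel.
f_pass = 20000.0      # passband edge, Hz (taken as the audibility limit)
atten_db = 100.0      # assumed stopband attenuation target

for fs in (44100.0, 64000.0, 88200.0):
    delta_f = fs / 2 - f_pass                  # available transition band, Hz
    delta_w = 2 * math.pi * delta_f / fs       # ... in radians per sample
    numtaps = (atten_db - 7.95) / (2.285 * delta_w)
    print(f"fs = {fs / 1000:5.1f} kHz: transition {delta_f / 1000:5.2f} kHz, "
          f"~{numtaps:4.0f} taps, impulse response ~{numtaps / fs * 1e3:.2f} ms")
```

Roughly 3 ms of impulse response at 44.1kHz versus a few tenths of a millisecond at 64 or 88.2kHz, for the same passband and attenuation. Whether that difference is audible is exactly the open question.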
To date, despite the proliferation of sampling rates higher than 44.1kHz, it's been )(*& hard to actually run a test, because it's ()*&*( hard to get material that can be used in a fair fashion to run the test. It's also hard to find a system that can REPRODUCE the stuff well enough to be sure.
Is 16/44 enough? Not clear, and maybe not.
In other words, can a human ear tell the difference between an original, analog source and the same source sampled at, say, 128kHz? 256kHz? Any of our audiophiles know the answer to this?