
How does radio work?

julius

Hi,

I am wondering how radio works. I have read that radio is an electromagnetic wave and that electromagnetic waves behave a bit like mechanical waves, like the ones you see in water, but that they are also quite different.

I would like to know how an electromagnetic wave that is broadcast by a radio station can "contain" a song. I know that a radio wave has an amplitude and a frequency, but isn't a song a mixture of a lot of tones with different frequencies? So how is a song packaged in a radio wave? Or do they use multiple waves to carry a song?
 

Wow!

It is difficult to explain just how electromagnetic waves work without covering a few other things first.

Also, there are two different methods of commercial radio: FM and AM. Again, it is difficult to explain these terms without first having a good grasp on electromagnetic waves and some basic electronics.

However, I may be able to point you to some books and such which discuss these topics. Would that help?
 
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information, and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary datastream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Oh, and book suggestions are always welcome of course.
 
I would like to know how an electromagnetic wave that is broadcast by a radio station can "contain" a song. I know that a radio wave has an amplitude and a frequency, but isn't a song a mixture of a lot of tones with different frequencies? So how is a song packaged in a radio wave? Or do they use multiple waves to carry a song?

The "carrier" wave is at a very high frequency (compared to the waves in the audio portion of the song). And since it's a wave, it's very repetitive. You can predict what the wave "should" look like.

The carrier wave is then deformed in a way corresponding to the signal. So the receiver can see the deformation and turn that back into the information from the song.

This Wikipedia image shows an information signal (top) that is used to deform two carrier waves. The first one is deformed in amplitude and the second in frequency.

http://en.wikipedia.org/wiki/File:Amfm3-en-de.gif
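If code makes it clearer, here is a minimal Python/NumPy sketch of the same idea: a slow "message" wave deforming a much faster carrier, once in amplitude (AM) and once in frequency (FM). All the frequencies and amounts here are made up for illustration.

```python
import numpy as np

fs = 100_000                     # sample rate for the simulation
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of time

message = np.sin(2 * np.pi * 440 * t)   # the "song": a 440 Hz tone
carrier_freq = 10_000                   # carrier (kept low so it's plottable)

# AM: the message moves the carrier's amplitude around a resting level.
am = (1.0 + 0.5 * message) * np.cos(2 * np.pi * carrier_freq * t)

# FM: the message moves the carrier's instantaneous frequency around
# carrier_freq; integrating frequency over time gives the phase.
deviation = 2_000                # maximum frequency swing, +/- 2 kHz
phase = 2 * np.pi * np.cumsum(carrier_freq + deviation * message) / fs
fm = np.cos(phase)
```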
 
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information, and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

Traditionally, the encoding is not binary (or digital). If your reference signal is low, you deform the carrier wave a little. If your reference signal is high, you deform the carrier wave a lot. So this is all analog.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary datastream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Unless you're on HD radio or satellite, the song is not digitally encoded. Terrestrial AM/FM are analog signals.
 
The "carrier" wave is at a very high frequency (compared to the waves in the audio portion of the song). And since it's a wave, it's very repetitive. You can predict what the wave "should" look like.

The carrier wave is then deformed in a way corresponding to the signal. So the receiver can see the deformation and turn that back into the information from the song.

This Wikipedia image shows an information signal (top) that is used to deform two carrier waves. The first one is deformed in amplitude and the second in frequency.

Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz

These changes can hold information. This sequence of _electromagnetic_ wave frequencies might describe the changing _sound_ frequency of a single tone over time that is broadcast by some sender. But what if we wanted to send an entire song, which is, at each moment it plays, a combination of many tones or frequencies?

The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

Wow, I hope my English is good enough to really convey what I mean. I could be all wrong in my basic understanding of how this works. I am a programmer and I tend to look at things in terms of collections, bits, etc.
 
Hello Crossbow,

I know, it's a tough subject. I have also read about modulation (AM, FM, OFDM) and I understand how with AM the amplitude of the wave is used to encode information, and with FM the frequency. So I understand that with either modulation type you can encode a binary data stream.

But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary datastream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

Oh, and book suggestions are always welcome of course.

Actually, at least for AM radio it's really not complicated at all. All sounds (including a song in mono) correspond to a single waveform - the position of a speaker cone as a function of time as it plays, for example, or the voltage at (one channel of) the output of a CD player. So all the radio transmission needs to do is transmit that waveform. That waveform contains lots of frequencies, but forget that - just think of it as a single waveform as a function of time.

AM radio transmits that in a very simple manner, best illustrated by a picture: here for instance. The high frequency of the carrier wave (that's the rapidly oscillating wave in the lower image) ends up being irrelevant. All you hear is the "envelope" - the waveform of the song being transmitted. If you fed that AM signal into a stereo, or even into the metal braces you might have in your teeth, you'd hear the song, because your ears can't hear the megahertz carrier frequency, but they can hear the audio-frequency envelope.
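As a rough sketch of why the envelope is all that survives, here is the crystal-radio recipe in Python/NumPy: rectify the AM signal (the diode) and smooth it (the filter, or your ears). The frequencies are placeholders, scaled down so the arrays stay small.

```python
import numpy as np

fs = 200_000
t = np.arange(0, 0.02, 1 / fs)
audio = np.sin(2 * np.pi * 300 * t)                 # the "song"
am = (1.0 + 0.5 * audio) * np.cos(2 * np.pi * 20_000 * t)

rectified = np.clip(am, 0, None)                    # diode: keep positive half
window = int(fs / 5_000)                            # average over ~0.2 ms
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
# 'envelope' now tracks a scaled copy of 1 + 0.5*audio: the 20 kHz
# carrier has been averaged away and only the waveform of the song is left.
```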

FM is more complex, because it uses frequency modulation and it's stereo.
 
Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz

No, that's not a good way of thinking of it.

A microphone is just a little bladder reading the pressure of the air. As a tone impacts it, it moves in and out rhythmically. If it's not a pure tone, then it moves in and out in a more complex manner. But it's just moving in and out, and you can track how far it has moved at any point in time.

Now you take that motion and you can do something with it. On a record, you can shove a needle to move the sides of a groove up and down. On a carrier wave, you can shove the frequency around.

Yes, you can represent a waveform as the addition of various pure frequencies. But that's not necessary to simply capture the image of the waveform and transmit it.

The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

I'm suggesting you don't look at it that way. :-) Instead, think of a microphone/speaker position over time. t=0, x=0. t=0.0001, x=1. t=0.0002, x=4, ....
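To make that concrete, a small Python/NumPy sketch with made-up "instrument" frequencies: three tones sounding at once still add up to one microphone position per time step.

```python
import numpy as np

fs = 8_000                         # samples per second
t = np.arange(0, 0.005, 1 / fs)   # a few milliseconds

drum = 0.8 * np.sin(2 * np.pi * 100 * t)     # made-up "drum" tone
voice = 0.5 * np.sin(2 * np.pi * 440 * t)    # made-up "voice" tone
guitar = 0.3 * np.sin(2 * np.pi * 1200 * t)  # made-up "guitar" tone

x = drum + voice + guitar          # one combined waveform
for time, pos in zip(t[:5], x[:5]):
    print(f"t={time:.4f}s  x={pos:+.3f}")    # one position per time step
```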
 
Ok, I understand how it is the deformation of the signal that carries the information. But how is the actual information content of a song 'encoded' in the wave? In my understanding the receiver of an FM signal sees something like this over time:

3 kHz
5 kHz
3 kHz
100 kHz
1 GHz
20 kHz

These changes can hold information. This sequence of _electromagnetic_ wave frequencies might describe the changing _sound_ frequency of a single tone over time that is broadcast by some sender. But what if we wanted to send an entire song, which is, at each moment it plays, a combination of many tones or frequencies?

The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

Wow, I hope my English is good enough to really convey what I mean. I could be all wrong in my basic understanding of how this works. I am a programmer and I tend to look at things in terms of collections, bits, etc.

You might be confusing yourself by thinking in the frequency domain. In the time domain, a song is just a function of time. And because radio is a real-time or nearly real-time process, it might be easier to understand that way.

If you do want to think in the frequency domain, start with AM. Imagine taking the Fourier transform of (i.e. decompose into frequencies) the lower waveform in the link in my last post. You'd get a big spike at the carrier frequency - which is irrelevant for what you hear - plus a bunch of other frequencies and phases that (when played simultaneously) reproduce the song. So all you need to do at the receiver is filter out the carrier frequency - but that's essentially automatic (the listener's ears will do it on their own, for one thing).
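Here is that frequency-domain view as a small Python/NumPy sketch, with illustrative numbers: the spectrum of an AM signal is a big carrier spike plus sidebands offset by the audio frequency.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)               # a 1 kHz "song"
am = (1.0 + 0.5 * audio) * np.cos(2 * np.pi * 20_000 * t)

spectrum = np.abs(np.fft.rfft(am))
freqs = np.fft.rfftfreq(len(am), 1 / fs)
for f in (19_000, 20_000, 21_000):                  # sidebands and carrier
    i = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spectrum[i]:.0f}")             # carrier dwarfs sidebands
```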
 
Ah, things are starting to become clearer now.

I came up with the frequency range because it seems so... impossible that something as complex as music could be carried by something as simple as a single wave, but that is what actually happens. If I think about it, I still find it hard to understand how a single speaker cone or a microphone can capture the simultaneous drums, bass, guitar and singing of a rock song. All these tones happening at the same time.... hm. But of course, this is what is happening, because in the end my eardrums are no more than an oscillating membrane too.
 
But if that is how the song is transported, then I am still wondering how the song itself is encoded. How are all the tones that happen simultaneously encoded in a binary datastream that is sent/received/read/decoded sequentially? Or am I all wrong and is this not how it works?

It's interesting that people nowadays may first think of sound in digital terms.

The digital part is basically just a stream of numbers representing the position of the diaphragm in a microphone (say) as a function of time. The rest is pure analog: different tones are added on top of each other, at their source or in the air. It's the same idea as when you see waves of different sizes added together on an ocean, the lower frequencies being the longer waves. The measurement of the position of the wave at a single point will have just one value at any given instant.

The stream usually consists of two simultaneous measurements, representing the instantaneous position of your two eardrums.
 
I don't think I disagree with anything said above, but some of it seems like it might be skipping past the simple view of what amplitude modulation (AM) is.

The amplitude of the sound wave to be transmitted is used to control the amplitude of the transmitted wave. When the sound level is high, the output of the transmitter is correspondingly high; when the sound level is low, the output of the transmitter is correspondingly low.

Frequency Modulation (FM) is a slightly more complicated concept. With FM, the amplitude of the sound wave to be transmitted is used to control the frequency of the transmitter. When the sound level is high, the output frequency of the transmitter is raised, and when the sound level is low, the output frequency of the transmitter is reduced. The FM receiver converts the variation in frequency of the transmitted signal into variations in amplitude, which can then be amplified and used to drive speakers.
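For the receive side, here is a minimal Python/NumPy sketch of that idea, using a complex-valued signal for convenience (a real receiver does the equivalent in hardware, and all parameters here are illustrative): the rate of change of the signal's phase is its instantaneous frequency, so differentiating the phase recovers the audio.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)                 # the sound to send
inst_freq = 10_000 + 2_000 * audio                  # carrier +/- deviation
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
fm = np.exp(1j * phase)                             # the transmitted FM signal

# Receiver: phase change per sample, converted back to Hz.
recovered = np.diff(np.unwrap(np.angle(fm))) * fs / (2 * np.pi)
# 'recovered' tracks inst_freq; subtracting the 10 kHz carrier leaves
# a scaled copy of 'audio', ready to amplify and send to a speaker.
```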

Radio can also be used to transmit sounds digitally. In these kinds of schemes, the sound is encoded as a series of 1s and 0s, and the encoded data that represents the sound is transmitted. This is the technique used for cell phones, satellite radio and modern digital TV transmission.
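A toy Python/NumPy version of that sample-to-bits step (real systems add compression and error correction on top; this only shows the quantize-and-serialize idea):

```python
import numpy as np

fs = 8_000
t = np.arange(0, 0.001, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)

samples = np.round(audio * 32767).astype(np.int16)  # quantize each measurement
bits = np.unpackbits(samples.view(np.uint8))        # the 1s and 0s that go on air
decoded = np.packbits(bits).view(np.int16)          # receiver side
assert np.array_equal(samples, decoded)             # perfect round trip
```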
 
The way I look at a song from a data point of view is that it is a large collection of outputs in a frequency range, like what you see on the equalizer on your stereo. The total frequency range might be the entire spectrum of audible sound frequencies and the intensity of each subpart of the entire range would have to be transported, right? So, how is this done?

All the frequency components that make up the original sound are modulated onto the carrier and they are all demodulated from the carrier at the receiver.

If you are doing this with a roughly linear analog device you normally don’t even need to consider the fact that the sound itself consists of many different frequencies, you modulate the whole range onto the carrier at the transmitter and extract the whole range at the receiver.
 
The rest is pure analog: different tones are added on top of each other, at their source or in the air. It's the same idea as when you see waves of different sizes added together on an ocean, the lower frequencies being the longer waves. The measurement of the position of the wave at a single point will have just one value at any given instant.


That is what seems so strange to me: if there is only a single 'value' at every point in time, where is the complexity you hear when you listen to a rock song? I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young were to simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves :) would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might or might not be transported by radio. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.
 
That is what seems so strange to me: if there is only a single 'value' at every point in time, where is the complexity you hear when you listen to a rock song?

It's not present in any single "instant". If you take a song or sound source, pure note or complex, and just play a fraction of a second, you won't be able to tell what it is. It will sound like a "beat" with no tone. As you get a smaller and smaller time slice, it really stops having normal "frequencies". So the complexity comes from pulling the frequencies out as you hear it over time.

If you have a sound editor like Audacity, go and grab a 1 second sample and play it back. Then play back shorter sections like a tenth of a second. It starts sounding very odd.
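The same experiment can be done numerically; here is a Python/NumPy sketch with arbitrary slice lengths. The frequency resolution of a slice is 1/duration, so very short slices cannot even contain fine frequency information:

```python
import numpy as np

fs = 44_100
for duration in (1.0, 0.1, 0.01, 0.001):            # slice lengths in seconds
    n = int(fs * duration)
    t = np.arange(n) / fs
    chunk = np.sin(2 * np.pi * 440 * t)             # a pure 440 Hz tone
    freqs = np.fft.rfftfreq(n, 1 / fs)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(chunk)))]
    print(f"{duration}s slice: resolution {1 / duration:.0f} Hz, "
          f"peak seen at {peak:.0f} Hz")
```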

I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young were to simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves :) would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might or might not be transported by radio. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments?

Because they're acting over time. Over a few milliseconds, the interference changes and the individual frequencies can be teased out by your ear and brain.
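A Python/NumPy sketch of that "teasing out", with made-up instrument tones: mix them into one waveform, then let a Fourier transform over a long-enough stretch pull them back apart, which is roughly what ear and brain manage to do.

```python
import numpy as np

fs = 8_000
t = np.arange(0, 0.5, 1 / fs)                       # half a second of "music"
mix = (0.8 * np.sin(2 * np.pi * 100 * t)            # "drum"
       + 0.5 * np.sin(2 * np.pi * 440 * t)          # "voice"
       + 0.3 * np.sin(2 * np.pi * 1200 * t))        # "guitar"

spectrum = np.abs(np.fft.rfft(mix))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
peaks = freqs[spectrum > 0.2 * spectrum.max()]
print(peaks)                                        # [ 100.  440. 1200.]
```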
 
This is great. Thanks for all your answers. I have been wondering for the past couple of days how this really works and now I understand.
 
But how is the actual information content of a song 'encoded' in the wave?
Lots of good explanations in this thread. But the simplest answer to this question is that the "encoding" method is simple addition. The signal that you care about (the song) is added to the carrier wave. One minor complication arises in that there are multiple ways two signals can be added together. In AM broadcasting, the amplitudes of the two signals are added together. FM is a bit less direct: in FM broadcasting, the amplitude of the song is added to the frequency of the carrier.

In both cases you wind up with a broadcast signal that is not a simple single frequency when you're done. It's a multitude of frequencies constantly varying around the original carrier frequency.
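That last point is easy to check numerically; a Python/NumPy sketch with illustrative numbers: Fourier-transform an FM signal and count how many frequency components around the carrier carry significant energy.

```python
import numpy as np

fs = 100_000
t = np.arange(0, 0.1, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)               # 1 kHz "song"
phase = 2 * np.pi * np.cumsum(20_000 + 3_000 * audio) / fs
fm = np.cos(phase)                                  # FM around a 20 kHz carrier

spectrum = np.abs(np.fft.rfft(fm))
freqs = np.fft.rfftfreq(len(fm), 1 / fs)
band = (freqs > 15_000) & (freqs < 25_000)
strong = (spectrum[band] > 0.05 * spectrum.max()).sum()
print(f"{strong} significant components between 15 and 25 kHz")  # many, not one
```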
 
That is what seems so strange to me: if there is only a single 'value' at every point in time, where is the complexity you hear when you listen to a rock song? I am going to describe a situation. Please tell me if I am right or wrong and why.

If, for example, Neil Young were to simultaneously hit the snare drum, sing, strike a chord on his guitar and fart, then a lot of pressure waves :) would bump into each other, creating a single resultant wave. That wave could hit the membrane in my ear or a microphone, and it might or might not be transported by radio. Once the pressure waves are combined into a resultant pressure wave, how can my ear (or my brain) decompose this resultant wave into different instruments? Is the oscillation of my ear's membrane and its movement in time so complex and diverse that this oscillation can carry all the richness of sounds I hear when I play a song or when I am simply on the street? It must be very sensitive to minute differences in oscillation to attain this rich... ehm, understanding or sensing of sound.

You can display the resulting sound as separate sine waves, each with its own frequency and amplitude, or add them together to get a more complex signal that has a single discrete value at any given time. Either is just a different way of displaying the same information.

As to why a single sample from that complex waveform doesn't seem like sound: as explained above, it isn't. The music/sound comes from both what it is and how it's changing, so you can't look at just one point.

When you digitize sounds, you actually do something like this and look at the value of the complex wave at different points in time, but you need multiple samples. This allows you to understand not just its value at one time but how it's changing over time. In fact, you need more than two samples per cycle for the highest frequency you wish to capture, and it's best to have more than that (AKA oversampling). IOW, if you want to capture voice up to 15 kHz, you need to sample more than 30,000 times per second.
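A quick Python/NumPy check of that more-than-two-samples-per-cycle rule, with example rates: sample a 15 kHz tone fast enough and the data says 15 kHz; sample too slowly and it shows up as a lower frequency.

```python
import numpy as np

tone = 15_000                                       # Hz, the sound to capture
for fs in (40_000, 20_000):                         # good rate vs too-slow rate
    n = fs // 100                                   # 10 ms of samples
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * tone * t)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    peak = freqs[np.argmax(np.abs(np.fft.rfft(x)))]
    print(f"fs={fs} Hz -> apparent tone at {peak:.0f} Hz")
# 40 kHz reports ~15000 Hz; 20 kHz aliases the tone down to ~5000 Hz.
```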
 
