Art Vandelay said:
I'm familiar with the basics, but it's the reason for the nomenclature that I'm wondering about. Is there some clear reason why "memory" is not an appropriate term for storage, or did it just develop that the term was reserved for RAM?
It makes it easier for computer people to communicate if they segment terms out. Memory is really a black box term that refers to a large, flat space where you can store things. It could be implemented via an SPI interface, it could be SRAM, DRAM, mercury tubes, etc.
Is storage slower because it's indirect, or is it indirect because it's slower?
It's slower for physical reasons. Compared to the processing speed of the CPU and the speed at which RAM can be accessed, a hard disk takes a very, very, very long time to make one revolution.
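To put rough numbers on it (illustrative figures, not from the thread: a typical 7200 RPM drive and a 3 GHz CPU):

```python
# How many CPU cycles fit into one platter revolution?
# (7200 RPM and 3 GHz are assumed, typical figures.)
rpm = 7200
revolution_s = 60 / rpm              # time for one full revolution
cpu_hz = 3_000_000_000
cycle_s = 1 / cpu_hz                 # duration of one CPU clock cycle

cycles_per_revolution = revolution_s / cycle_s
print(f"one revolution: {revolution_s * 1000:.2f} ms")        # ~8.33 ms
print(f"cycles spent waiting: {cycles_per_revolution:,.0f}")  # 25,000,000
```

So a CPU that sat and waited for one disk revolution would burn tens of millions of cycles doing nothing.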
If one wanted to build a computer where the CPU directly queries the hard drive, wouldn't that be possible?
Yes, but it'd be slower. Because the CPU doesn't operate on the time scale of the hard drive, it's much better just to put a request into a request queue, have the hardware carry out the requests, and be notified when they're complete. That way, the CPU can go about its business doing other tasks.
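A minimal sketch of that request-queue model, with a worker thread standing in for the disk controller (the block numbers and the "busy work" are made up for illustration):

```python
# The "CPU" enqueues disk requests, keeps working, and collects
# the completions later instead of stalling on each request.
import queue
import threading

request_q = queue.Queue()
completed = []

def disk_controller():
    while True:
        req = request_q.get()
        if req is None:          # shutdown sentinel
            break
        completed.append(f"read block {req}")
        request_q.task_done()

worker = threading.Thread(target=disk_controller)
worker.start()

for block in (7, 12, 3):         # fire off requests...
    request_q.put(block)

busy_work = sum(range(1000))     # ...and do other work meanwhile

request_q.join()                 # later, wait for all completions
request_q.put(None)
worker.join()
print(completed)
```

Real hardware does this with DMA and interrupts rather than threads, but the shape is the same: submit, continue, get notified.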
I thought that the reason this is not done is because it would be so slow, not because it's impossible.
Many older computers functioned in this way, simply because the OS and hardware were not advanced enough to take advantage of CPU offloading.
And is memory really accessed directly? I thought it's copied to the registers, and processed there.
It depends on the CPU architecture. Some architectures (like x86) have instructions that can operate on memory operands directly. Load-store architectures (most RISC designs) do not; they load values into registers first. Also, whenever you access memory, you are usually really accessing a cache line that has been loaded with that memory.
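The cache-line point is just address arithmetic. A sketch, assuming 64-byte lines (a common size, but an assumption here):

```python
# Any byte access pulls in the whole cache line containing it.
LINE_SIZE = 64  # assumed line size in bytes

def cache_line(addr):
    """Return (line number, offset within line) for a byte address."""
    return addr // LINE_SIZE, addr % LINE_SIZE

print(cache_line(0))     # (0, 0)
print(cache_line(100))   # (1, 36)
print(cache_line(128))   # (2, 0)
```

Touching byte 100 loads bytes 64-127, so a later access to byte 101 is already cached.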
With processor speed now making it impossible to put RAM within a clock cycle of the CPU
Actually, no: on older processors the CPU and RAM ran at the same clock speed. On modern processors the CPU runs much, much faster. When any access is made, the processor bursts a larger amount of data from RAM into the cache; then, when nearby data is accessed, it will hopefully already be in the cache.
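A toy model of that burst behaviour (64-byte lines assumed, as before): the first access to a line is a miss that loads the whole line, and nearby accesses then hit.

```python
# Toy cache: track which 64-byte lines have been "burst" in.
LINE = 64
loaded_lines = set()

def access(addr):
    line = addr // LINE
    if line in loaded_lines:
        return "hit"
    loaded_lines.add(line)       # burst the whole line from RAM
    return "miss"

# Walking an array byte by byte: one miss per line, then hits.
pattern = [access(a) for a in range(8)]
print(pattern)                   # ['miss', 'hit', 'hit', ...]
print(access(200), access(201))  # new line: miss, then hit
```

This is why sequential access patterns are so much faster than random ones: most accesses land in an already-loaded line.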
and caches now being larger than the entire RAM of earlier computers, do you suppose that the term "memory" will shift to referring to the cache, and RAM will become "intermediate storage"?
Naw, like I said before, memory is really a programmer's term; they don't care how it's implemented.
I take it that unless a special system is set up, when you network a bunch of computers together, each computer treats memory in every other computer as storage?
afaik, distributed NUMA (non-uniform memory access) systems exist.
Is there "virtual storage", a counterpart to "virtual memory"? For instance, if you tryconnecting 5 GB of memory to a 32 bit processer, you wouldn't be able to address the last GB, right?
Virtual memory is the amount of memory in your memory map. When you load a program or file, it is mapped into virtual memory. If you don't have enough RAM to hold it, that's fine; it just pages from the disk when it needs certain parts of the file. Swap is done in a similar fashion. Thus, the 4 GB limit becomes a problem even if you don't have 4 GB of RAM, because your virtual memory is limited to 4 GB.
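The 4 GB ceiling is just the arithmetic of a 32-bit address:

```python
# A 32-bit virtual address has only 2**32 distinct values,
# regardless of how much RAM is installed.
address_bits = 32
addressable_bytes = 2 ** address_bits
print(addressable_bytes)                  # 4294967296
print(addressable_bytes // 2**30, "GiB")  # 4 GiB

five_gib = 5 * 2**30
print(five_gib > addressable_bytes)       # True: last GiB unaddressable
```

So in the 5 GB scenario above, one GiB simply has no address the processor could form.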
Using more than 4 GB (or 3 GB, depending on the architecture layout) is usually done by giving each individual process its own 4 GB memory map.
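That works because page tables are per-process: the same virtual address in two processes can map to different physical frames. A sketch with made-up addresses:

```python
# Per-process page tables (virtual page -> physical frame).
# All addresses here are invented for illustration.
page_tables = {
    "proc_a": {0x1000: 0x20_000},
    "proc_b": {0x1000: 0x7F_000},  # same virtual page, different frame
}

def translate(proc, vaddr):
    """Look up the physical frame for a virtual page in one process."""
    return page_tables[proc][vaddr]

print(hex(translate("proc_a", 0x1000)))  # 0x20000
print(hex(translate("proc_b", 0x1000)))  # 0x7f000
```

Each process sees its own private 4 GB map, so the machine's total physical RAM in use can exceed what any single map could address.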