Moore's Law still holding on

OK, so it was more of a hypothesis, I believe, than a law, or even a theory. (It seems I read a quote from Moore saying it was more of a guess, but I can't swear to that, and it was only expected to hold true for 10 years or so.)

The term Moore's Law was coined by Carver Mead around 1970.[4] Moore's original statement can be found in his publication "Cramming more components onto integrated circuits", Electronics Magazine 19 April 1965:

“ The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.[1] ”

http://en.wikipedia.org/wiki/Moore's_law



But it looks like it's going to be valid for a few more years anyway -

http://www.technewsworld.com/story/innv/55441.html

"Moore's Law, which postulates that the number of transistors on a chip will double every 18 to 24 months, recently faced a major roadblock: power leakage. However, by using so-called "high-k" materials, IBM and Intel both say they have remedied this efficiency problem, allowing the continued shrinking of computer chips."

I'm just waiting for my HAL on the wrist! ;)
 
Yes, but with gate lengths of only several hundred Å you are getting towards the level where you have to stop treating the device as a "lump" of silicon with an average doping level, and start treating it as a collection of atoms...

It's all going quantum... (misquote of Terry Pratchett)

Power dissipation (and power density) is another issue: expect more asynchronous logic (no clock pulse using up power)

3-D packaging could be interesting (see the problem about heat dissipation), as well as organic logic (larger areas, but printed where silicon wouldn't be)...
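(For anyone wondering why dropping the clock saves power: dynamic CMOS power goes roughly as activity x capacitance x voltage squared x frequency. A minimal sketch with made-up example numbers, nothing measured from a real chip:)

# Minimal sketch of the dynamic-power relation P = alpha * C * V^2 * f.
# All numbers below are illustrative guesses, not real chip figures.

alpha = 0.1      # activity factor (fraction of gates switching per cycle)
C = 1e-9         # total switched capacitance in farads (illustrative)
V = 1.2          # supply voltage in volts
f = 3e9          # clock frequency in hertz

P = alpha * C * V * V * f
print(f"Dynamic power: {P:.1f} W")   # gates that never see a clock edge contribute ~0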

I believe it is slowing down:

Also, one of the main drivers for process research, the Crolles Alliance, is now falling away, with two of the three main players (NXP, ex-Philips, and Freescale, ex-Motorola) pulling out.

That leaves TSMC and IBM probably.

You want how many billion for a new fab???

Jim
 
what's the current number of transistors on a chip we can do?
How many angels are dancing on our pinheads? :)
 
working on 45nm (450Å) gate lengths...

The high-k (dielectric) materials make efficient capacitors (good for DRAM) and can make higher-gain transistors. Short-channel effects (leakage) are the big worry, which is why the "FinFET" is getting to be important.

Jim
 

Was this an answer to Andy's question about the number of transistors on a chip?

I'm so confused.

I actually tried to find something (anything) that stated the number of transistors per inch, or per chip, or anything, and kept coming up with phrases like "increased transistor density" or "40% increase": all nice information, but no actual transistor counts.

Anyone have a link or listing for the number of transistors in, say, the PI, PII, PIII and P4 processors, for comparison?

I'd love to see something like that. But it appears I'm too stupid to find it :crazy:
 
Yes it was, one orang looks like another to me, sorry

Can't be bothered to work out the densities from the reciprocal length...

I'm a lazy git

Jim
 

lol

Naa

I'm just not smart enough to figure it out. I was hoping you were talking about something else, cause I feel even more dumber now ;)
 
Here we go.... Intel has a "fun facts on 45-nm transistors" page :)


Fun facts about 45-nm transistors
Hundreds could fit on the surface of a single red blood cell
2,000 fit across a human hair
30 million fit on the head of a pin
It can switch on and off approximately 300 billion times a second
http://www.intel.com/technology/silicon/45nm_technology.htm


and here's their PDF fun facts sheet http://www.intel.com/pressroom/kits/45nm/Intel 45nm Fun Facts_FINAL.pdf

wow.....30 million on a pin head......:jaw-dropp
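(Quick sanity check on the pin-head figure; the ~1.5 mm pin-head diameter is my own guess, not something Intel states.)

# Back-of-envelope check of "30 million 45-nm transistors on the head of a pin".
# Assumption: a pin head is roughly 1.5 mm in diameter.
import math

pin_diameter_mm = 1.5
pin_area_nm2 = math.pi * (pin_diameter_mm / 2 * 1e6) ** 2   # mm -> nm

transistors = 30e6
area_per_transistor_nm2 = pin_area_nm2 / transistors
side_nm = math.sqrt(area_per_transistor_nm2)

print(f"Implied area per transistor: ~{area_per_transistor_nm2:,.0f} nm^2")
print(f"i.e. a square about {side_nm:.0f} nm on a side (the gate is 45 nm,")
print("so the rest is wiring, isolation and so on)")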
 
"Moore's Law, which postulates that the number of transistors on a chip will double every 18 to 24 months, recently faced a major roadblock: power leakage. However, by using so-called "high-k" materials, IBM and Intel both say they have remedied this efficiency problem, allowing the continued shrinking of computer chips."

I'm just waiting for my HAL on the wrist! ;)

Moore's Law has several variations and their implications are more interesting than most people realize.

First of all, outside of the IT industry nobody seems to have noticed that one of the key forms of Moore's Law has already ended. It used to be that transistor counts and clock speeds both doubled quickly. But clock speeds maxed out a while ago and will not improve substantially in the foreseeable future. Instead, Intel and AMD are moving to putting more cores on one chip. In essence, instead of getting faster CPUs, you'll be getting more CPUs.

There is much talk about our being forced into new programming paradigms to take advantage of this. Of course, for half of what I do (web programming) there is no issue: there are always so many requests running concurrently that we can use any number of CPUs with naive parallelism. And for the other half (developing reports that run on a relational database) it is officially someone else's problem (namely Oracle's). There is no question, though, that more programmers will need to worry about issues of concurrency and parallelism in the future. I'm just not (currently) one of them.
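(A minimal sketch of what I mean by naive parallelism: independent requests just get farmed out to a pool of workers, with no shared state to coordinate. handle_request here is a hypothetical stand-in, not code from anything I actually run.)

# Minimal sketch of "naive parallelism": independent requests, a pool of workers,
# and no shared state between them. handle_request is a hypothetical stand-in.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # pretend this does the actual work for one web request
    return f"response for request {request_id}"

requests = range(100)
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(handle_request, requests))

print(responses[0], "...", responses[-1])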

On another side note, it has been discovered that Moore's Law is not as exceptional as had been thought. It has been documented that in a wide variety of industries with agreed-upon measures of success there is constant improvement, and that improvement is usually approximately exponential in time for an extended period (though admittedly slower than Moore's Law). So even after Moore's Law has failed, there will still be a lot of variations of Moore's Law available.

In fact there are other examples of Moore's Law like phenomena in computers. All of the following show exponential improvement in time:
  • Number of transistors per chip
  • Clock rate (this one stopped improving recently)
  • Density of data in RAM
  • Density of data on hard drives (this one grows faster than other variations of Moore's Law)
  • Rate of data transfer to/from hard drive (this one is slow)
  • Latency of hard drive (i.e. time for a round trip) (this improves very slowly)
These are all important in technology. But in many ways the fact of improvement is not as important as the relative rates of improvement, because those relative rates determine where the bottlenecks are going to show up.
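(One way to see why the relative rates matter: if capacity grows faster than transfer rate, the time to read a whole drive keeps getting longer. The starting values and growth rates below are illustrative guesses, just to show the shape of the problem.)

# Illustrative sketch: hard-drive capacity growing faster than transfer rate
# means the time to read the whole drive keeps getting longer.
# Starting values and growth rates are rough guesses for illustration only.

capacity_gb = 100.0        # starting capacity
transfer_mb_s = 50.0       # starting sustained transfer rate
capacity_growth = 1.60     # capacity: +60% per year (illustrative)
transfer_growth = 1.20     # transfer rate: +20% per year (illustrative)

for year in range(0, 11, 2):
    cap = capacity_gb * capacity_growth ** year
    rate = transfer_mb_s * transfer_growth ** year
    hours = (cap * 1024 / rate) / 3600
    print(f"year {year:2d}: {cap:8.0f} GB at {rate:5.0f} MB/s -> {hours:5.1f} h to read it all")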

For example, it used to be smart to have a certain amount of data in RAM and then have a swap partition with more space for data. This made sense when RAM cost a lot. However, if your machine actually has to go to swap, it has to make a lot of unpredictable seeks to specific items of data. Given the latency of hard drives, this is very slow relative to RAM, and getting relatively slower all the time.

It is still standard to reserve space on a hard drive for swap. But knowledgeable people configuring a server tend not to do that; instead they'll buy extra RAM and not reserve any disk for swap. As the tradeoff gets more dramatically worse, that's going to be a more and more common choice going forward.
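(Rough numbers on why going to swap hurts so much: a RAM access is on the order of 100 ns, a random disk seek on the order of 10 ms. Order-of-magnitude figures only.)

# Order-of-magnitude comparison of a RAM access vs. a swapped-out page on disk.
# The latencies are rough, typical figures, not measurements.

ram_access_s = 100e-9    # ~100 ns for a RAM access
disk_seek_s = 10e-3      # ~10 ms for a random seek on a hard drive

print(f"A disk seek is roughly {disk_seek_s / ram_access_s:,.0f}x slower than RAM")
# ~100,000x, which is why a box that starts swapping feels like it hit a wall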

Cheers,
Ben
 

On three machines in a row now, I have achieved a more profoundly noticeable performance boost by adding RAM rather than the attendant (or in some cases subsequent) processor upgrade.
 

OK, I've got the PC for a while...

Assuming a 45nm-gate transistor is, with its source and drain, ~50nm across, that's 1/(50e-7 cm), or roughly 200,000 per cm, i.e. about 500,000 per inch.

Of course you need to attach the "wires" and the rest of the circuitry, and work out how wide you want your transistors...

You also need to realise that you are making these on a 12" slice of silicon, and need to make an economic number, so you need to keep the number of defects down and have enough of the transistors operating in the right part of the bell curve for them to perform at speed. (The Intel P90 and P75 chips were apparently the same design; the ones that were on the better part of the distribution (as tested) were turned into P90s, and the slower ones into P75s.)

You can sometimes design in some fault tolerance, at the expense of silicon area, so...

(It's complicated, but interesting, is the short answer)
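(Spelling the density arithmetic out, for anyone who wants to play with the numbers; the ~50nm-per-transistor pitch is the same assumption as above, and the squared figure ignores wiring and everything else.)

# Spelled-out version of the estimate above: ~50 nm per transistor
# (gate plus source and drain) in one dimension.

pitch_nm = 50.0
per_cm = 1e7 / pitch_nm          # 1 cm = 1e7 nm
per_inch = per_cm * 2.54
per_cm2 = per_cm ** 2            # idealised 2-D array, ignoring wiring entirely

print(f"~{per_cm:,.0f} per cm, ~{per_inch:,.0f} per inch (linear)")
print(f"~{per_cm2:.1e} per square cm for an idealised array")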

BTW Do I have to wait for 50 posts before I can put up my avatar? I currently get an option to "not display an avatar" but none to display one...

Jim
 
On three machines in a row now, I have achieved a more profoundly noticeable performance boost by adding RAM rather than the attendant (or in some cases subsequent) processor upgrade.

Oh, that has been standard advice for a long time now. When configuring a machine, add extra RAM and hard drive. The difference in performance between a top CPU and a mediocre CPU is not very big. The difference in performance between a machine that has run out of memory and one that has not is infinite. And, as I pointed out, having to go to swap really hurts (and will only hurt more going forward). Therefore having extra RAM and hard drive will result in a machine that lasts better than one where you put that money into the CPU instead. (Unless, of course, the machine breaks.)

Cheers,
Ben
 
The amusing thing is that IBM was recently getting out of hard drives; Google the IBM Millipede: it's back to basics for them, a punched-card machine (on the nanoscale).

Well I find it amusing.

The cache-RAM-swap optimisation is simply a means of getting the most frequently used data as close (in space, and thus in delay time) to the processor as possible. The cache could have been SRAM on the processor, so there's no need to refresh it, unlike DRAM...
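(The usual way to see why the hierarchy pays off is the weighted-average access time: each level contributes its latency times how often you actually fall through to it. The hit rates and latencies below are made-up illustrative numbers.)

# Sketch of effective access time for a cache -> RAM -> swap hierarchy.
# Hit rates and latencies are illustrative, not measured.

cache_hit, cache_t = 0.95, 1e-9      # 95% of accesses, ~1 ns
ram_hit,   ram_t   = 0.049, 100e-9   # most of the rest, ~100 ns
swap_hit,  swap_t  = 0.001, 10e-3    # rare, ~10 ms disk seek

avg = cache_hit * cache_t + ram_hit * ram_t + swap_hit * swap_t
print(f"Average access time: {avg * 1e6:.2f} microseconds")
# Even a 0.1% fall-through to swap completely dominates the average.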


Jim
 
Moore's Law has several variations and their implications are more interesting than most people realize.

have the "new" developments of this thread been included in forecasts of compute power over the next 30 years or so? i have an old DARPA slide (changes in cpu power from 1990 till 2030-ish) but need a new projection (DARPA, IBM, anyone reasonable will do!).

thx
 
It is worth noting for the non-cognoscenti that the speed of light is about a foot per nanosecond. This becomes significant when you're running three or four CPU cycles per nanosecond, as you are at 3 or 4 GHz.
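(Putting numbers on that: at 3 GHz a cycle is a third of a nanosecond, so light only covers about 10 cm per clock, and signals on a real board or die are slower still.)

# How far light travels in one clock cycle at a few GHz.
c = 3.0e8   # speed of light in m/s (in vacuum; slower on a real board)

for ghz in (1, 3, 4):
    cycle_s = 1.0 / (ghz * 1e9)
    print(f"{ghz} GHz: {cycle_s * 1e9:.2f} ns per cycle, "
          f"light travels {c * cycle_s * 100:.1f} cm")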
 
On three machines in a row now, I have achieved a more profoundly noticeable performance boost by adding RAM rather than the attendant (or in some cases subsequent) processor upgrade.
'Twas always the way. We knew this back in the 70's, when we were configuring *gasp!* minicomputers running swapping OS's (notably the first UNIX systems).

Of course, RAM was FRIGHTFULLY expensive then, so the much more cost-effective solution (yet to be discovered by Windows coders, incidentally) was to make your code much smaller and more efficient, so it didn't have to swap at all. It cost way less to pay a programmer (such as me!) for a month to write good small tight code that ran orders of magnitude faster than to buy a chunk of RAM that gave only a small percentage gain.

Funny - that still holds true today... :boxedin:
 
Let's cut the crap: when can I expect a 100 terabyte machine?
 
You can have one right now. Cost a bit, though: a terabyte of disk is going for what, a few hundred simoleons? Call it five hundred. So that would be $50K. Plus the box to store stuff on it and pull it off.
 
'Twas always the way. We knew this back in the 70's, when we were configuring *gasp!* minicomputers running swapping OS's (notably the first UNIX systems).
Indoody.

Of course, RAM was FRIGHTFULLY expensive then, so the much more cost-effective solution (yet to be discovered by Windows coders, incidentally) was to make your code much smaller and more efficient, so it didn't have to swap at all. It cost way less to pay a programmer (such as me!) for a month to write good small tight code that ran orders of magnitude faster than to buy a chunk of RAM that gave only a small percentage gain.
The Russians could only get like 286s and stuff, because of the technology embargoes, while we were playing with Pentiums, so they did stuff like write something in C, then compile it, then DISASSEMBLE it, optimize the assembly language, and reassemble. Tetris was written that way, rumor has it.

Funny - that still holds true today... :boxedin:
Well, I suppose it does some places, for some tasks. Obviously not so much for the Microserfs.

The problem with such code bloat, of which they have been repeatedly accused, is that it means that there is a lot of code that has only been seen a few times. If you go over a piece of code enough to make it tight, you KNOW it. Nobody knows the code in Windoze like that, at least not most of it. Which is why they have so many damn bugs and security holes and crap.
 
