Where should technology research be focused?

Create a program that accurately creates complex models. Test the program on several previous simulations that are known to be successful to see if the same results are achieved within an acceptable variance. Voila.

As JJM suggested, this is much easier said than done. When attempting to model known situations, it is almost impossible to do such a thing without making assumptions or inserting "fudge factors" which lead to the desired result. Once such a model is used for a situation with an unknown result, those assumptions and intentional alterations are likely to lead to incorrect results.

Also, I don't think you grasp just how far we are from being able to model something as complex as the human body from first principles. Medicine, like many fields of science, is still littered with concepts, processes, and interactions that we don't fully understand. Not only do we not have anywhere near the computing power necessary, but we don't have adequate knowledge of the science that such a model would be built off of. Without a complete understanding from first principles, we would still have to do extensive laboratory testing to obtain the inputs that would go into the model.
 
Create a program that accurately creates complex models. Test the program on several previous simulations that are known to be successful to see if the same results are achieved within an acceptable variance. Voila.
Simulations require accurate descriptions of the physical process you are trying to model. Without that, all the simulations in the world won't tell you anything.

Electronics are pretty well understood. There are simulators that can be used to test circuits before building them, and they fail sometimes in surprising ways. All it takes is a tiny parameter that is left out of the model, or is inaccurate in some way.

One of my favorites involves a 7473 JK flip flop and an LM3909 LED flasher.

Connect the oscillator capacitor from pin 1 to pin 8, then connect V+ on the LM3909 to the clock input of the 7473.

Tie the J and K inputs on the flip flop together. Apply power to the 7473, and the outputs (Q and /Q) will sit there and alternate between high and low.

Where's the clock signal coming from? From the 3909. The 7473 clock input leaks enough current to the 3909 for it to operate. It builds up for a flash on the leakage current, then when it fires it tries to draw more current. The 7473 input can't deliver more current, so the 3909 pulls the input low - which clocks the 7473 into changing output states.

Build that circuit in any simulator, and it just sits there looking stupid. The 7473 model doesn't include the leakage current, and the 3909 model doesn't tell the simulator that it (the 3909) will work on that tiny amount of current.
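
To put a rough number on that failure mode, here is a toy sketch in Python (not a real SPICE model of either chip - the capacitance, firing threshold, leakage figure, and time step are all invented for illustration). If the device model leaves the input leakage out, the simulated circuit never fires; include even a rough guess at the leakage and it happily oscillates.

```python
# Toy sketch only - not a real simulation of the 7473/LM3909 pair. The
# capacitor value, firing threshold, and leakage current are invented
# numbers chosen to make the point visible.

def count_flashes(leakage_ua, cap_uf=100.0, fire_v=1.0, sim_ms=5000, dt_ms=1.0):
    """Charge a capacitor from a leakage current; count firings over sim_ms."""
    v = 0.0
    flashes = 0
    for _ in range(int(sim_ms / dt_ms)):
        v += leakage_ua * dt_ms / cap_uf / 1000.0  # uA * ms / uF = mV, then volts
        if v >= fire_v:            # oscillator fires, clocking the flip flop
            flashes += 1
            v = 0.0
    return flashes

print(count_flashes(leakage_ua=0.0))    # model without leakage: sits there doing nothing
print(count_flashes(leakage_ua=100.0))  # model with ~100 uA of leakage: it oscillates
```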

The circuit above has an actual use. Connect a red and green LED anti-parallel across the outputs Q and /Q, and it will blink alternating red and green. Now remove the Q side and simply connect a diode in series. If that extra diode is good, only one of the LEDs will blink. If it still blinks red/green, then the new diode is shorted. If it stops blinking, the diode is open.

This was used to test diodes in alternators. If your test leads are the right colors, it can also be used as a test for the polarity of an unmarked diode.

Unneeded for a trained electronics technician, but very helpful for mechanics in a hurry to check alternator parts. Green means the diode is good, with polarity as shown by the lead colors; red means it is good but with reversed polarity; failures show up as described above.

Another fun one was a PLD that a company I worked for had designed for use in one of our products. Simulations showed it as fully functional. The first prototype worked as planned. Later units would lock up. I was assigned to find out why.

It turned out that the model used in the simulator (from the chip manufacturer) didn't accurately model the temperature sensitivity of the chip. If it got warm, it would take just that tiny bit too long for the address signals to be decoded, and nothing would work. A drop of alcohol on the chip housing, and evaporation would cool it enough that it would go back to work as though nothing had happened.

I ended up correcting the PLD design to lower the load on the address bus and to eliminate some unneeded data latches. With that done, the chip would work up to design temperature without locking up.

The point of all this is that a simulation is only as good as the model it uses. If the model is wrong, your simulation will be wrong.

In order to have an accurate model, you must investigate the real object in extreme detail - which eliminates some of the advantages (savings in time and effort) you would hope to have from the simulation.


With an accurate model, there's a chance that you might discover something you wouldn't have expected - but that's more a comment on personal expectations and the mental "models" we all use to understand the world around us than it is a comment on the usefulness of computer simulations.
 
The point of all this is that a simulation is only as good as the model it uses. If the model is wrong, your simulation will be wrong.

This is a point that can never be over-emphasised. Simulation can only ever do what it is programmed to do. There are some terribly sad threads around this forum on perpetual motion where people have simulated something that either produces energy or just goes forever. The problem is that the simulations are based on the assumption that Newton's laws are correct, so no matter what results you get out, you can only ever show that either the model is flawed or you did something that it couldn't cope with. While it is possible to find things you haven't noticed before, a simulation cannot make real breakthroughs, since it can only ever be based on what we already know.
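
As a minimal sketch of how those sad threads usually come about (this assumes nothing about any particular poster's setup - it is just a frictionless mass on a spring integrated with naive explicit Euler in Python), the integrator, not the physics, manufactures energy out of nothing:

```python
# A hedged toy, not anyone's actual perpetual-motion simulation: a frictionless
# mass on a spring integrated with naive explicit Euler. The real system should
# conserve energy exactly; the integrator makes it "gain" energy instead.

def euler_energy_drift(steps=10_000, dt=0.01, k=1.0, m=1.0):
    x, v = 1.0, 0.0
    e0 = 0.5 * m * v * v + 0.5 * k * x * x      # true (conserved) energy
    for _ in range(steps):
        a = -k / m * x
        x, v = x + v * dt, v + a * dt           # explicit Euler update
    return e0, 0.5 * m * v * v + 0.5 * k * x * x

e_start, e_end = euler_energy_drift()
print(f"energy at start: {e_start:.3f}, after 10,000 steps: {e_end:.3f}")
# The 'extra' energy is an artifact of the model and integrator, not a discovery.
```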
 
This is exactly the oversimplification we are writing about. A program can only tell you what you know, not what you don't know. If a simulation leads to an unexpected result, it must be physically verified.

Your approach has failed dramatically. About ten years ago some economists (including Nobelists) wrote a program that simulated all known international influences on equities, currencies and commodities; they formed a hedge fund called Long-Term Capital Management (LTCM) based on the program. They were remarkably successful, until Russia decided not to pay the interest on its national debt. That had never happened before, and LTCM tanked.

Wow, you're dinging someone for not predicting something that has never happened before. Consider this: betting on a football game. It's been done for over a hundred years, and analysts and bookmakers have made a living being experts on it: who to go for, what the spread is, how to cover it, and so on.

So I look at all this and bet on an NFL team in a particular game, and I feel pretty good. Then a player pulls a gun out of his helmet and kills my team's quarterback. If I'm you, I will say that my approach failed miserably, despite being based on a hundred years of experience and history. But I'm not you.

Now consider the WOPR from the movie WarGames. Sure, it's a fictional computer, but I feel confident that these types of simulations have been run on some government computer somewhere numerous times. None of those simulations involved an alien invasion. Are they doomed to fail dramatically?

I disagree that a program can only tell you what you know. In fact, I think you tell the program what you know, and it tells you something you don't know.

I also believe that whatever it is we don't know can be determined/derived/discovered much much faster with more powerful computers. Databases like the Human Genome Project can be expanded and improved upon.

And the company you mentioned? Its simulation ultimately proved to be correct. The problem is that it hedged its portfolio with investments outside the scope of the models, and was heavily levered. That's what Wikipedia says, anyway.
 
As JJM suggested, this is much easier said than done. When attempting to model known situations, it is almost impossible to do such a thing without making assumptions or inserting "fudge factors" which lead to the desired result. Once such a model is used for a situation with an unknown result, those assumptions and intentional alterations are likely to lead to incorrect results.

Also, I don't think you grasp just how far we are from being able to model something as complex as the human body from first principles. Medicine, like many fields of science, is still littered with concepts, processes, and interactions that we don't fully understand. Not only do we not have anywhere near the computing power necessary, but we don't have adequate knowledge of the science that such a model would be built off of. Without a complete understanding from first principles, we would still have to do extensive laboratory testing to obtain the inputs that would go into the model.

All I'm saying is this: for any given subject X, however long it would otherwise take to understand X, that time will be drastically reduced if computing power is improved. Improving computing power, in other words, will speed up the progress of humanity.
 
All I'm saying is this: for any given subject X, however long it would otherwise take to understand X, that time will be drastically reduced if computing power is improved. Improving computing power, in other words, will speed up the progress of humanity.

The first problem with this generality is that you don't define where to stop. We could improve computing, and then use that to improve computing, and then use that to improve computing. At some point we have to stop concentrating on improving computing and use the technology to do something else. When do you think that should be? How do you know we haven't reached that point now?

Also, I disagree that your principle applies to all (or even most) scientific subjects. There are lots of fields of research which would not benefit as greatly as you think from increased computing power. Many studies are limited by manpower, human expertise (which can't be replaced by computers at the levels we are talking about), and monetary resources, not computers.

Fields that can benefit from improved computing power can only benefit so much from it. If computing power races off ahead of everything else, then we won't have adequate knowledge to fully utilize that power. For example, what would be the use of a supercomputer capable of modeling the human nervous system if we as a people didn't concentrate on solving some of the large mysteries that still exist within it? Like others have pointed out, you need knowledge to build a model. If you don't have that knowledge, then no computer in the world will be able to help you.

Finally, your statement that all fields would benefit from improved computing (if it is true) is misleading because computing does not have an exclusive claim on this. Many fields would benefit from improved computing, yes, but many fields would also benefit from advances in:

-Energy generation, transmission, and storage
-Transportation
-Communication
-Manufacturing
-Chemistry
-Medicine (by improving the availability of scientists and engineers)
-Education

and that's just what I came up with off of the top of my head at 1am.

Most of these fields would benefit from improved computing. Computing, in turn, would benefit from improvements in most of these fields. Few, if any, fields of research stand independent of others. To say that one alone should become the "focus" is poor planning because that field itself will struggle if related fields do not adequately advance at the same time.
 
Wow, you're dinging someone for not predicting something that has never happened before.
Are you being deliberately dense? You suggested the notion that we can simulate some complex system and then get all our results from the program. I cited one example, among many, that shows you are wrong in practice - and you think it is irrelevant.
{snip} I disagree that a program can only tell you what you know.
It is a subtle concept; but I have been writing programs and using computers since 1969, and it is true that a computer can only provide results that the programmer knew (intrinsically, if you wish).
In fact, I think you tell the program what you know, and it tells you something you don't know.
This is trivially true. I can write a program relating the weight-bearing ability of a nylon monofilament to its diameter. Then, if I don't know the capacity of a 1 cm cable, I can calculate it with considerable accuracy since the formula is simple. In this sense, the program can tell me something I didn't strictly know.
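
For what it's worth, that trivial program is only a few lines. A hedged sketch in Python, assuming the breaking strength simply scales with cross-sectional area and using a rough ballpark tensile strength rather than any datasheet value:

```python
import math

# Hedged sketch of the trivial program described above. The strength figure
# is an assumed round number for generic nylon; real grades vary widely.
NYLON_TENSILE_STRENGTH_MPA = 75.0

def breaking_load_kg(diameter_mm):
    """Approximate breaking load (kg) of a nylon filament of the given diameter (mm)."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    load_n = NYLON_TENSILE_STRENGTH_MPA * area_mm2   # MPa * mm^2 = newtons
    return load_n / 9.81

print(f"1 cm (10 mm) filament: roughly {breaking_load_kg(10.0):.0f} kg")
```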
I also believe that whatever it is we don't know can be determined/derived/discovered much much faster with more powerful computers.
No, when you get new (non-trivial) results from calculation, they have to be experimentally verified. Your belief, as with belief in general, is based in the absence of facts; sorry. Reality is what exists despite beliefs.
And the company you mentioned? Its simulation ultimately proved to be correct. The problem is that it hedged its portfolio with investments outside the scope of the models, and was heavily levered. That's what Wikipedia says, anyway.
Your last sentence suggests a much-warranted doubt about Wiki. There are unequivocal examples from chemistry; but they are too hard to explain.
 
We've done that experiment. Computing power has doubled multiple times. The problem is that that increase in computing power has not resulted in increased power delivered to the user; when the power of the CPU doubles, Microsoft simply relaxes its standards for efficiency of code, with the result that the end user sees no significant increase in delivered performance on a typical desktop.

True, but your focus is too narrow: computers as desktop number crunchers.

Consider the drastic decrease in the cost of sequencing DNA over the last 10 years - from over $1000 per base to under $10, with sub $1 costs in sight - driven by technology controlled by embedded computing power. Genetics is becoming a branch of information technology, and we are poised for enormous breakthroughs in our understanding of medicine and our origins. IMO, we may well one day discover the gene for the "Golden Rule" and trace its evolution through the animal kingdom.

Increasing computing power has potential in transforming energy technologies and transportation. Already engines are monitored, controlled, and diagnosed with the help of computer processors.

Analog radio circuits are becoming obsolete as increases in processing power make it possible to digitally synthesize (or decode) virtually any transmission format.

Continuing gains in computer power will benefit us not (just) through simulations, but by incorporating digital technology in all kinds of appliances, machines, and tools.
 
How about directed research (and development) of Very Cheap, lightweight, and re-chargeable batteries for plug-in hybrid cars instead?

The sooner we end our dependence on middle eastern oil - the better.
 
Consider the drastic decrease in the cost of sequencing DNA over the last 10 years - from over $1000 per base to under $10, with sub $1 costs in sight - driven by technology controlled by embedded computing power. Genetics is becoming a branch of information technology, and we are poised for enormous breakthroughs in our understanding of medicine and our origins. IMO, we may well one day discover the gene for the "Golden Rule" and trace its evolution through the animal kingdom.

That's all well and good, but mapping the human genome is neither terribly important at the moment nor a good example of science in general.

Making faster computers and doing things like mapping the human genome make for good news stories and can be very useful down the line, but there are some very important things that need attention now. An aging population is stressing medicine and industry. An energy crisis is looming on the horizon. Pollution is adversely affecting the environment and our health. Transportation systems are bending under the weight of oil prices and overcrowding. Growing populations tax food production, make food distribution more difficult, and increase vulnerability to famine. All of these are problems that need solutions soon, and improved computing isn't going to help much. Even if it would, we might not have time.

Talking about focusing on computing power and making advances is great when having a philosophical discussion about technology, but practical and urgent demands must fall higher on the priority list.

I'm of the opinion that if we must choose a focus for research, it should be energy -- both electricity production and energy for transportation.

Increasing computing power has potential in transforming energy technologies and transportation.
Most energy systems already have adequate computer control. Making computers better isn't going to squeeze much more electricity out of power plants or make internal combustion engines much more efficient. Any benefits computers give in these systems are going to be small compared to the benefits of making advancements in other key components of these technologies.

I work for a company that designs, builds, and supports nuclear power plants. I have never heard anyone say that better designs and efficiencies are being prevented because of computing limitations.

Already engines are monitored, controlled, and diagnosed with the help of computer processors.
And how much more will better computers do? Having the computers we have now is much better than having no computers, but what makes you think that having more powerful computers will make things much better than using current computers?

Analog radio circuits are becoming obsolete as increases in processing power make it possible to digitally synthesize (or decode) virtually any transmission format.

Continuing gains in computer power will benefit us not (just) through simulations, but by incorporating digital technology in all kinds of appliances, machines, and tools.
Again, I think you may be overestimating the benefit. Most homes and businesses will probably only see a marginal benefit from this.
 
Psychology and neuroscience.

We need to understand how to change human minds at least as easily as we manipulate matter. Many future problems will involve consensual action on the part of billions of people with different beliefs and attitudes.
We have to learn how to make people see sense.
 
I'm tempted to agree with Soapy Sam about psychology, and also with research into transportation. After all, it took very little computing power to get us to the moon, and the vast increases in computing power haven't really moved us on a lot from there in 40 or so years. Also, whilst sequencing the human genome is a great achievement, we are a long way off using this knowledge, and I don't know that computing power will help deliver cures. The one thing computers really lack is the ability to innovate.
 
That's all well and good, but mapping the human genome is neither terribly important at the moment nor a good example of science in general.

First, the human genome project stimulated advances in lab equipment and automation, resulting in completion much earlier and less expensively than originally estimated.

Second, those advances have pushed the cost of sequencing so low that it is now possible to conceive of a project to map an entire cancer genome and then compare that to the human genome to identify possible causes, therapies, or detection methods. These, and similar types of projects (comparing the genome of the young to the old, etc.), will require improved computing power both in the lab and in data processing.

However, I did not mean to imply, per the theme of this thread, that research efforts should be directed toward improving the speed or power of computers. As others have mentioned, that is happening already without requiring any additional encouragement.

My point was that many fields will progress by making use of advances in computing power beyond just more powerful desktop PCs. Many of those advances will take forms that neither you nor I even imagine at the moment.

Digital radio technology, for just one example, may ultimately loosen the stranglehold repressive regimes have on the free flow of information and ideas by creating transmission schemes that can't be jammed or blocked, or by allowing web surfing without going through government controlled portals.

For another example, designs for "greener" diesel engines rely heavily on digital technology to monitor and control every aspect of operation from fuel intake to exhaust gas treatment. Some engineers believe a diesel car using this technology could outperform today's hybrid cars.
 
First, the human genome project stimulated advances in lab equipment and automation, resulting in completion much earlier and less expensively than originally estimated.

Second, those advances have pushed the cost of sequencing so low that it is now possible to conceive of a project to map an entire cancer genome and then compare that to the human genome to identify possible causes, therapies, or detection methods. These, and similar types of projects (comparing the genome of the young to the old, etc.), will require improved computing power both in the lab and in data processing.

That doesn't change the fact that mapping the human genome has little impact on most fields of science. Also, while increased computing power had a very significant influence on that particular field of study, there is no reason to believe that this is typical for most fields.

My point was that many fields will progress by making use of advances in computing power beyond just more powerful desktop PCs. Many of those advances will take forms that neither you nor I even imagine at the moment.

Digital radio technology, for just one example, may ultimately loosen the stranglehold repressive regimes have on the free flow of information and ideas by creating transmission schemes that can't be jammed or blocked, or by allowing web surfing without going through government controlled portals.

For another example, designs for "greener" diesel engines rely heavily on digital technology to monitor and control every aspect of operation from fuel intake to exhaust gas treatment. Some engineers believe a diesel car using this technology could outperform today's hybrid cars.
I do not disagree that increased computing power will help many fields. My problem is when people say that in a manner which implies that this claim is exclusively owned by computing. There are a great number of fields of research that generate benefits for many others.

Computing development itself is reliant on advances in other fields. Advances in the fields of battery technology, heat transfer, materials science, and manufacturing (just to name a few) all pave the way for better computing.

My claim is not that improved computing's influence is small, but rather that its influence on other fields is not particularly large when compared to the influence of other fields.
 
The first problem with this generality is that you don't define where to stop. We could improve computing, and then use that to improve computing, and then use that to improve computing. At some point we have to stop concentrating on improving computing and use the technology to do something else. When do you think that should be? How do you know we haven't reached that point now?
Never stop. You know, like we aren't stopping now. Intel will never stop making the next generation of chip, etc.

Also, I disagree that your principle applies to all (or even most) scientific subjects. There are lots of fields of research which would not benefit as greatly as you think from increased computing power. Many studies are limited by manpower, human expertise (which can't be replaced by computers at the levels we are talking about), and monetary resources, not computers.
You forget that improved computing will make independent AI robots possible - there's your manpower. Shifting work from humans to computers will reduce the amount of money needed. A simulation can test a drug rather than spending money on drug trials.

Fields that can benefit from improved computing power can only benefit so much from it. If computing power races off ahead of everything else, then we won't have adequate knowledge to fully utilize that power. For example, what would be the use of a supercomputer capable of modeling the human nervous system if we as a people didn't concentrate on solving some of the large mysteries that still exist within it? Like others have pointed out, you need knowledge to build a model. If you don't have that knowledge, then no computer in the world will be able to help you.
I disagree. Please note, I said focus on improving computing power - I didn't say focus on improving computing power to the utter and absolute exclusion of everything else. But people generally find a way to use up all the computing power.

Finally, your statement that all fields would benefit from improved computing (if it is true) is misleading because computing does not have an exclusive claim on this. Many fields would benefit from improved computing, yes, but many fields would also benefit from advances in:

-Energy generation, transmission, and storage
-Transportation
-Communication
-Manufacturing
-Chemistry
-Medicine (by improving the availability of scientists and engineers)
-Education

and that's just what I came up with off of the top of my head at 1am.

Most of these fields would benefit from improved computing. Computing, in turn, would benefit from improvements in most of these fields. Few, if any, fields of research stand independent of others. To say that one alone should become the "focus" is poor planning because that field itself will struggle if related fields do not adequately advance at the same time.

It's simple. Improve computing, then use the improved computing power to allow people to accomplish more things in the other fields. I mean, improving medicine won't help transportation. Improving computing will help them both.
 
Are you being deliberately dense? You suggested the notion that we can simulate some complex system and then get all our results from the program. I cited one example, among many, that shows you are wrong in practice - and you think it is irrelevant.
You cannot give a single example of how a simulation failed and generalize it to indicate that deriving previously unknown meaningful results is impossible/improbable.

At most, you've shown me that this group's simulation was flawed.

It is a subtle concept; but I have been writing programs and using computers since 1969, and it is true that a computer can only provide results that the programmer knew (intrinsically, if you wish).
That makes no logical sense. I think you're playing fast and loose with the word "knew".

This is trivially true. I can write a program relating the weight-bearing ability of a nylon monofilament to its diameter. Then, if I don't know the capacity of a 1 cm cable, I can calculate it with considerable accuracy since the formula is simple. In this sense, the program can tell me something I didn't strictly know.
And now I'm sure you're playing fast and loose with variants of "know". You're attempting to use qualifiers for a binary state. Either you know something, or you don't.

If you don't know the capacity of a 1 cm cable, you don't know it, period. If you write a program to calculate its capacity and then run the program and find out the capacity, you've run a program on a computer that has informed you of something you didn't know previously. That sounds a lot like the opposite of what you claimed.

I also believe that whatever it is we don't know can be determined/derived/discovered much much faster with more powerful computers.
No, when you get new (non-trivial) results from calculation, they have to be experimentally verified. Your belief, as with belief in general, is based in the absence of facts; sorry. Reality is what exists despite beliefs.
So is it your claim that faster, more powerful computers do not allow us to determine/derive/discover things much faster? Because a cursory glance at the history of computing indicates exactly the opposite, speaking of reality.
 
How about directed research (and development) of Very Cheap, lightweight, and re-chargeable batteries for plug-in hybrid cars instead?

The sooner we end our dependence on middle eastern oil - the better.

While you're absolutely correct, I think it's too narrow a focus.
 
How about directed research (and development) of Very Cheap, lightweight, and re-chargeable batteries for plug-in hybrid cars instead?

Already being done. Problem is there is a limit to how far you can move ahead of the rest of the solid state chemistry field.
 
My belief? Computing power.

If computing power is doubled multiple times, then researchers in practically every field of science, medicine and technology can use massive simulations instead of real-time research. Will a particular medicine have negative effects on people? Take the characteristics of the drug, feed in all the data from all the people who contributed DNA to the human genome project, factor in a few lifestyles, and run a few million simulations. Rather than putting it out there after 10 years of testing and finding out later that it affects a certain segment of the population lethally, the simulations would catch it without risking lives.
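
Something like this toy Monte Carlo sketch in Python is the kind of thing I mean - though every detail in it (the 'cyp_variant' marker, the risk numbers, the population model) is invented for illustration; a real screen would need validated pharmacology behind it:

```python
import random

# Hedged toy: screen a hypothetical drug against many simulated patient
# profiles. All rules and numbers below are invented for illustration.
random.seed(1)

def simulate_patient():
    """Draw a simulated patient with an age and a rare metabolic variant."""
    return {"cyp_variant": random.random() < 0.02,
            "age": random.randint(18, 90)}

def adverse_event(patient):
    """Invented rule: the variant plus old age sharply raises the risk."""
    risk = 0.001
    if patient["cyp_variant"] and patient["age"] > 65:
        risk = 0.30
    return random.random() < risk

events = sum(adverse_event(simulate_patient()) for _ in range(1_000_000))
print(f"adverse events in 1,000,000 simulated patients: {events}")
```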

I disagree. Simulations are only as accurate as our fundamental understanding of the subject being simulated. Faster processors merely exaggerate our lack of knowledge by coming to the wrong conclusion a billion times faster.

The value of computer modelling in biological experimentation today is that since the simulation always differs from the in vitro results, it exposes a gap in our knowledge and shows us we have more to learn about the subject.
 
