
Where should technology research be focused?

Simulations require accurate descriptions of the physical process you are trying to model. Without that, all the simulations in the world won't tell you anything.

The point of all this is that a simulation is only as good as the model it uses. If the model is wrong, your simulation will be wrong.

In order to have an accurate model, you must investigate the real object in extreme detail - which eliminates some of the advantages (savings in time and effort) you would hope to have from the simulation.

all models are wrong, so all simulations are wrong in that sense. but i would agree you need observations to see if the model is of any use. we do not have "accurate descriptions of the physical process" to build weather models, but we do have lots of forecasts and corresponding observations to learn how to use the models.

but in seasonal forecasting, for example, we really only have 20, maybe 40 years of good observations on which to run/test our models. and those tests are all in-sample (we used insights from those years to build the model). our models have 10^7 degrees of freedom and thousands of parameters to be specified.

all with 40-ish data points...

a bigger computer would help a lot; a little more understanding would go a long way too. but even with both, it will take time to verify whether or not we have made real improvements.
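
For a feel of that imbalance, here is a toy sketch (made-up numbers; a random linear model stands in for a real seasonal forecasting system): tune a couple of thousand parameters against 40 data points and the in-sample fit looks perfect, while data the tuning never saw tells a different story.

```python
import numpy as np

rng = np.random.default_rng(0)

n_years, n_params = 40, 2000   # far more tunable parameters than observations

# Each "year" is described by n_params predictors; only the first few actually matter.
X = rng.standard_normal((n_years, n_params))
truth = np.zeros(n_params)
truth[:3] = [1.0, -0.7, 0.5]
y = X @ truth + 0.3 * rng.standard_normal(n_years)

# "Tuning the model": minimum-norm least-squares fit of all parameters to 40 points.
fitted, *_ = np.linalg.lstsq(X, y, rcond=None)

print("in-sample RMS error:    ", np.sqrt(np.mean((X @ fitted - y) ** 2)))

# New years that the tuning never saw.
X_new = rng.standard_normal((200, n_params))
y_new = X_new @ truth + 0.3 * rng.standard_normal(200)
print("out-of-sample RMS error:", np.sqrt(np.mean((X_new @ fitted - y_new) ** 2)))
```

The in-sample error prints at essentially zero, while the out-of-sample error comes out several times larger than the noise in the data itself; with only 40-ish verification points, the in-sample test is the only one available and it flatters the model.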
 
Energy research and applications. The world needs to rapidly develop reasonable sources of energy that have long-term viability. Everything we do takes energy, and the majority of that energy is coming from resources that are being depleted much too quickly. Given the turnaround time in developing energy sources, we had better start soon... otherwise, the numbers aren't going to be too nice in a few decades.

Such as:

Efficient solar cells with reasonable cost.
Reasonably priced fuel cells.
Nuclear power.
Wind power.
Geothermal -- possibly helpful for home heating.
Coal gasification... a stopgap for the oil shortage.
Much more efficient buildings and transportation.
Enhanced oil recovery... a stopgap, as current methods don't work that well.

glenn
 
I disagree. Simulations are only as accurate as our fundamental understanding of the subject being simulated. Faster processors merely exaggerate our lack of knowledge by coming to the wrong conclusion a billion times faster.

The value of computer modelling in biological experimentation today is that since the simulation always differs from the in vitro results, it exposes a gap in our knowledge and shows us we have more to learn about the subject.
My point, exactly. However, you and I seem to suffer from practicality. In the imaginary world, simulations are inerrant.

Since I have suffered the limitations of computation for nearly 40 years, I would like to move to the Starship Enterprise (where the computers are infallible). Scotty, beam me up.
 
My point, exactly. However, you and I seem to suffer from practicality. In the imaginary world, simulations are inerrant.

Since I have suffered the limitations of computation for nearly 40 years, I would like to move to the Starship Enterprise (where the computers are infallible). Scotty, beam me up.

The real problem is that we don't have enough starting data to run a simulation very far before its starting inaccuracy causes discrepancy.

i.e., modelling a quadrillion points in a cow simulation is of little value if you're assuming a spherical cow.

This is one reason that the increase in computing power has had no effect on, say, weather prediction. There is not enough data about the current state to start a simulation. All those unrecorded temp/pressure points are not in the model, but they're here in the real world, interacting with the few pieces of data we did put in the model. Ultimately, the model ends up so different from reality that, for all anybody can tell, it might as well not have been based on it.
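
A minimal sketch of that sensitivity, using the standard Lorenz-63 toy system as a stand-in for a real atmosphere: two runs whose starting states differ by one part in a million in a single variable end up far apart.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude forward-Euler step of the Lorenz-63 system; enough to show
    # divergence, though a real model would use a better integrator.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])          # the state we measured
b = a + np.array([1e-6, 0.0, 0.0])     # the state the atmosphere was actually in

for step in range(1, 2501):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):10.6f}")
```

The unmeasured millionth of a degree grows exponentially until the two trajectories have nothing to do with each other, which is the point about unrecorded data interacting with the data we did record.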
 
The real problem is that we don't have enough starting data to run a simulation very far before its starting inaccuracy causes discrepancy.

i.e., modelling a quadrillion points in a cow simulation is of little value if you're assuming a spherical cow.

these are two very different things: the first one (uncertainty in the initial conditions amplified via "chaos") we have a technical fix for (ensemble simulations), expensive but understood, and attempted daily in weather forecasts.

the second one is model inadequacy (the spherical cow): there is no good initial condition to feed into the model. no improvement in the starting data can help here, other than by guiding us to better models.
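
A rough sketch of that ensemble fix, again with the Lorenz-63 toy system standing in for a forecast model (a real ensemble perturbs a state with ~10^7 variables, not three): nudge the uncertain starting state many times, run every copy forward, and report the spread rather than a single answer.

```python
import numpy as np

def step(s, dt=0.01):
    # Forward-Euler step of the Lorenz-63 toy system, standing in for a forecast model.
    x, y, z = s
    return s + dt * np.array([10.0 * (y - x),
                              x * (28.0 - z) - y,
                              x * y - (8.0 / 3.0) * z])

rng = np.random.default_rng(1)
best_guess = np.array([1.0, 1.0, 1.0])   # our imperfect estimate of the current state

# 50 ensemble members, each nudged within the assumed observation error.
members = best_guess + 1e-3 * rng.standard_normal((50, 3))

for _ in range(1500):                    # run every member 15 time units forward
    members = np.array([step(m) for m in members])

x = members[:, 0]
print(f"forecast of x: ensemble mean = {x.mean():6.2f}, spread (std) = {x.std():5.2f}")
print(f"members range from {x.min():6.2f} to {x.max():6.2f}")
```

When the spread comes out large, the honest forecast is "we don't know much at this range", which is information a single deterministic run cannot provide; none of it helps if the model itself is the spherical cow.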
 
I disagree. Simulations are only as accurate as our fundamental understanding of the subject being simulated. Faster processors merely exaggerate our lack of knowledge by coming to the wrong conclusion a billion times faster.

The value of computer modelling in biological experimentation today is that since the simulation always differs from the in vitro results, it exposes a gap in our knowledge and shows us we have more to learn about the subject.
What do you mean, "the" simulation? With drastically improved computing power, we can run thousands or millions of simulations, and even follow that with a statistical analysis of the whole group.

If it exaggerates our lack of knowledge a billion times faster, won't that lead us to reaching the right conclusion much faster?
 
What do you mean, "the" simulation? With drastically improved computing power, we can run thousands or millions of simulations, and even follow that with a statistical analysis of the whole group.

If you run thousands or millions of simulations without adequate understanding of the inputs and analytical methods, then you will simply end up with thousands or millions of wrong simulations.

You cannot simply use a shotgun approach when conducting research. Running millions of simulations will force you to determine which are right and which are wrong. Without adequate base knowledge of the system and inputs, this task is impossible.

Even if you could compare them to known results, that still doesn't prove that the model is correct. I once ran an FEA model for my graduate research and I got exactly the answer I was expecting on the first try. When I went back and checked my model, however, I realized that I had made several errors. These errors managed to align themselves perfectly to give me the right answer, but the model was still wrong. Unless you have the ability to go through your model and justify every single assumption, input, and calculation, you cannot assume the model was right just because it gave the right answer on some test runs.
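
As a trivial illustration of how errors can line up like that (made-up formulas, not the FEA model in question): two separate mistakes agree with the intended result at the single input used for checking, and almost any other input exposes them.

```python
def intended(x, y):
    # The relationship the model was supposed to encode.
    return x * y + 10

def buggy(x, y):
    # Two separate mistakes: the product became a sum, and the constant 10
    # turned into a factor on y.
    return x + y * 10

# The one validation case that happened to be run: the answers agree.
print(intended(10, 2), buggy(10, 2))   # 30 30  -> "model verified"
# Almost any other case exposes both mistakes at once.
print(intended(3, 4), buggy(3, 4))     # 22 43
```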

All simulations require several things (list may not be exhaustive):
-In-depth understanding of the system being modeled
-Knowledge of the inputs/initial conditions
-Justification of assumptions
-Understanding of the processes/interactions/laws that determine the result
-Ability to verify results
-Data processing power

Computing power only improves the last item on that list. This is a problem because, in many (if not most) fields, data processing power is not the limiting factor that is holding back advances.

If it exaggerates our lack of knowledge a billion times faster, won't that lead us to reaching the right conclusion much faster?
No. We already are aware of our lack of knowledge. Ask engineers or scientists if there is anything in their field that requires further study, and they will probably each be able to immediately list a dozen things. We don't need more powerful computers to remind us that our knowledge is lacking in many regards.
 
