Shrinker
Graduate Poster
That's cool! A fellow pixel pusher.
You're right about the scaling down of the grid; that's more than likely what he means.
Rendering an image takes much the same computation power and time no matter how complex the image is; the computer is just determining what the color of each pixel should be. What adds complexity is any physical computation that determines what that pixel should be, and that adds up if you have to run the simulation over a large sample area.
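To put rough numbers on that, here's a toy cost model (made-up figures, not any real renderer) where frame cost tracks pixel count and per-pixel shading work, and the physical size of the scene never enters into it:

```python
# Toy illustration only: per-frame cost is roughly
# pixels x samples x per-pixel shading work. The real-world size of the
# modelled object never appears in the estimate.
def estimated_render_cost(width, height, samples_per_pixel, shading_ops_per_sample):
    """Rough operation count for one frame of a hypothetical renderer."""
    return width * height * samples_per_pixel * shading_ops_per_sample

# Doubling the resolution quadruples the work; doubling the modelled
# object's real-world size changes nothing here.
print(estimated_render_cost(1920, 1080, 16, 200))  # full HD frame
print(estimated_render_cost(3840, 2160, 16, 200))  # 4x the pixels, 4x the cost
```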
You know that the rendering time of a single frame of an animation doubles or quadruples if you add things like global illumination, caustics, ray tracing, or ambient occlusion. That's not to mention other CPU-intensive, physics-based motion algorithms like motion blur, fluid dynamics, particle simulation, soft- and rigid-body collision detection, and cloth and hair dynamics. The more particles or vertices you have to calculate for, the longer it takes the CPU to crunch through the numbers.
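As a rough illustration of that last point, here's a naive O(N^2) pairwise-force pass (toy code with unit masses and a softening fudge, not any real dynamics engine): doubling the particle count roughly quadruples the run time.

```python
import math
import random
import time

def pairwise_forces(positions):
    """Naive O(N^2) gravitational-style force pass over 2D points.
    Illustrative only: unit masses, small softening term, no integrator."""
    n = len(positions)
    forces = []
    for i in range(n):
        xi, yi = positions[i]
        fx = fy = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r2 = dx * dx + dy * dy + 1e-9  # softening avoids divide-by-zero
            inv_r3 = 1.0 / (r2 * math.sqrt(r2))
            fx += dx * inv_r3
            fy += dy * inv_r3
        forces.append((fx, fy))
    return forces

# Doubling the particle count roughly quadruples the time of this pass.
for n in (500, 1000, 2000):
    pts = [(random.random(), random.random()) for _ in range(n)]
    t0 = time.perf_counter()
    pairwise_forces(pts)
    print(n, "particles:", round(time.perf_counter() - t0, 3), "s")
```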
And of course all this is nothing compared to physically accurate computer simulations. Look here, for example:
http://www.physorg.com/preview4343.html
I know it is a simulation of galactic formation, but it is essentially a particle dynamics sim (that may be oversimplifying it a bit!). It took a supercomputer over a month to run the simulation!
Here are some links concerning the software and algorithms used by the author of the vid.
http://en.wikipedia.org/wiki/ProEngineer
http://en.wikipedia.org/wiki/Finite_element_analysis
http://en.wikipedia.org/wiki/Finite_element_method
Thanks uruk, I think we're in agreement, but you're missing my point a little (or maybe I'm missing yours). Simulating the movement of a million stars isn't much different from simulating the movement of a million dust particles. The scale is massively different, but you're still just calculating the forces acting on 1 million points.
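Just to make that concrete, here's a toy Euler step (nothing to do with any particular package): the arithmetic per point is identical whether the numbers happen to represent dust in metres and seconds or stars in parsecs and megayears; only the values fed in change, not the work done.

```python
def euler_step(positions, velocities, accelerations, dt):
    """One explicit Euler step over 2D points; the code never knows the units."""
    new_vel = [(vx + ax * dt, vy + ay * dt)
               for (vx, vy), (ax, ay) in zip(velocities, accelerations)]
    new_pos = [(x + vx * dt, y + vy * dt)
               for (x, y), (vx, vy) in zip(positions, new_vel)]
    return new_pos, new_vel

# Same function, same amount of work per point, wildly different scales:
# "dust" in metres/seconds vs "stars" in parsecs/megayears (numbers made up).
dust_state = euler_step([(0.0, 0.0)], [(0.01, 0.0)], [(0.0, -9.8)], dt=0.001)
star_state = euler_step([(0.0, 0.0)], [(200.0, 0.0)], [(0.0, -1.5e-4)], dt=1.0)
print(dust_state)
print(star_state)
```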
As for finite element analysis, I assumed the principles were similar to the dynamics engines you and I are familiar with: that you could simulate a structure of any size, so long as the dataset doesn't get out of hand. In a grid-based simulation (like fluid dynamics) that's achieved by scaling the grid along with your model. If the software only supports grid divisions of, say, 1 m x 1 m x 1 m then I can understand the problem, but I would find that quite surprising.
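By "scaling the grid along with your model" I mean something like this rough sketch (hypothetical setup code, not any specific solver): the cell size is derived from the model's bounding box, so the total cell count, and hence the dataset, stays the same regardless of how big the model is.

```python
def build_grid(bounds_min, bounds_max, cells_per_axis=64):
    """Hypothetical sim setup: derive cell size from the model's bounding box
    so the total cell count stays fixed no matter how large the model is."""
    cell_size = [(hi - lo) / cells_per_axis
                 for lo, hi in zip(bounds_min, bounds_max)]
    return cell_size, cells_per_axis ** 3  # per-axis cell size, total cells

# A 1 m cube and a 1 km cube both get 64^3 cells; only the cell size changes.
print(build_grid((0.0, 0.0, 0.0), (1.0, 1.0, 1.0)))
print(build_grid((0.0, 0.0, 0.0), (1000.0, 1000.0, 1000.0)))
```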
No legitimate reason. These things DO NOT SCALE WORTH an ED!