CTist's computer simulations of collapses

That's cool! A fellow pixel pusher.

You're right about the scaling down of the grid; that's more than likely what he means.

Rendering an image, no matter how complex the image is, takes roughly the same amount of computational power and time. The computer is just determining what the color of each pixel should be. What adds complexity is any physical computation that determines what that pixel should show, and this adds up if you have to run the simulation over a large sample area.

You know that the rendering time of a single frame of an animation doubles and quadruples if you add things like global illumination, caustics, ray tracing, or ambient occlusion. That's not to mention other CPU-intensive, physics-based motion algorithms like motion blur, fluid dynamics, particle simulation, soft- and rigid-body collision detection, and cloth and hair dynamics. The more particles or vertices you have to calculate for, the longer it takes the CPU to crunch through the numbers.
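To put toy numbers on that last point, here's a quick Python sketch of my own (not from any real package) of a naive pairwise force pass; doubling the particle count roughly quadruples the time:

```python
import random
import time

def force_pass(positions):
    """Naive O(n^2) pass: every particle interacts with every other one."""
    n = len(positions)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                d = positions[j] - positions[i]
                # Toy 1D inverse-square attraction; epsilon avoids divide-by-zero.
                forces[i] += d / (abs(d) ** 3 + 1e-9)
    return forces

for n in (250, 500, 1000):
    pts = [random.random() for _ in range(n)]
    start = time.perf_counter()
    force_pass(pts)
    print(f"{n:5d} particles: {time.perf_counter() - start:.2f} s")
```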

And of course all this is nothing compared to physically accurate computer simulations. Look here, for example:
http://www.physorg.com/preview4343.html
I know it is a simulation of galactic formation, but it is essentially a particle dynamics sim. (That may be oversimplifying it a bit!) It took a supercomputer over a month to run the simulation!


Here are some links concerning the software and algorithms used by the author of the vid:
http://en.wikipedia.org/wiki/ProEngineer
http://en.wikipedia.org/wiki/Finite_element_analysis
http://en.wikipedia.org/wiki/Finite_element_method

Thanks uruk, I think we're in agreement but you're missing my point a little (or maybe I'm missing yours). Simulating the movement of a million stars isn't much different from simulating the movement of a million dust particles. The scale is massively different but you're still just calculating the forces acting on 1 million points.

As for finite element analysis, I assumed the principles were similar to the dynamics engines you and I are familiar with: that you could simulate a structure of any size, so long as the dataset doesn't get out of hand. In a grid-based simulation (like fluid dynamics) that's achieved by scaling the grid along with your model, as in the sketch below. If the software only supports grid divisions of, say, 1m x 1m x 1m, then I can understand the problem, but I would find that quite surprising.
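For illustration, here's a minimal sketch of the grid-scaling idea (hypothetical solver settings of my own, just to show why physical size alone shouldn't matter):

```python
def fit_grid(model_size_m, divisions=64):
    """Keep the cell COUNT fixed and scale the cell SIZE to the model."""
    cell_m = model_size_m / divisions
    return divisions ** 3, cell_m

# A 420 mm tabletop model and a ~417 m tower cost the same on this grid:
for size_m in (0.42, 417.0):
    cells, cell_m = fit_grid(size_m)
    print(f"model {size_m:7.2f} m -> {cells} cells, cell size {cell_m * 1000:.1f} mm")
```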
 
notes on "too big"
Since this is what I do for a living, I thought I'd put my $.02 in here. He's probably referring to the size of the arrays (matrices) and the number of degrees of freedom...
In a finite element model, there are 6 degrees of freedom (dof) for each node. A length of beam requires 2 nodes. In order to exhibit bending or buckling behavior, you need a minimum of 2 beams, end to end -- 18 dof. For non-linear (post-failure) analysis, it takes 3 nodes per beam.
A shell element requires 4 nodes. Bending and buckling require at least 2 elements -- 10 nodes, 30 dof. Buckling requires 8 nodes per shell element, or 13 nodes, 78 dof.
Put these in an array to generate the stiffness matrix, and a mere 200 nodes is now a 1200x1200 mass array and a 1200x1200 stiffness array. Now invert a 1200x1200 array --
and you cannot describe much with 200 nodes. In order to get a moderately accurate description of the WTC, for 1 floor, you are looking at something on the close order of 100,000 nodes -- for 1 floor! And even then you'd have to make some pretty iffy assumptions.
To model all the joints, welds, and other incidentals, such as studwork, walls, etc. --
I wouldn't tackle it with anything less than a big honkin' CRAY and billions of GB of swap space...
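To put that arithmetic in one place, here's a rough sketch assuming dense, double-precision matrices (real FE solvers exploit sparsity, but the trend is the point):

```python
def dense_matrix_gb(n_nodes, dof_per_node=6, bytes_per_entry=8):
    """Size of one dense square matrix for n_nodes at 6 dof per node."""
    n_dof = n_nodes * dof_per_node
    gb = (n_dof ** 2) * bytes_per_entry / 1e9
    return n_dof, gb

for nodes in (200, 100_000):
    n_dof, gb = dense_matrix_gb(nodes)
    print(f"{nodes:7d} nodes -> {n_dof} x {n_dof} matrix, ~{gb:,.2f} GB dense")
```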
 
Thanks rwguinn. Can you think of any reason why he would say 'the model is too big therefore I have scaled the floors to 420mm per side'? He doesn't say anything about reducing complexity, just reducing the actual size.
 

:shocked: No legitimate reason. These things DO NOT SCALE WORTH an ED!
Believe it or not, in the real world, size matters:
you cannot make a scale model and expect it to behave like the full-scale device/building/whatever.
Gravity is not scalable. Molecules are not scalable (nor are atoms).
420 mm is less than the width of a vertical beam (a.k.a. a column).

My response to the video/whatever (I can't look at it at work) would then be:

:dl:
and MRC_Hans--
He started with a work of fiction, if all the above is true...
 
bending of the floors like in this video is beyond reality


I really didn't get his point...

He seemed to be suggesting there would be no sagging, etc...

He also seemed to be modelling the sagging purely as a result of removed structural supports.

It is my understanding that the floor sagging occurred due to heat, not removed supports.

Indeed, given that the sagging floors pulled the exterior columns inwards to cause collapse, the supports NEED to stay intact for collapse to occur.

In addition, NIST haven't claimed enormous sagging. A structure like that will only sag so far - the exterior columns were reported as having bowed a matter of inches before failing.

Even minor sagging would exceed the structure's tolerances and result in failure, I would think. On a graphic as small as that one was on my screen, I wouldn't be surprised if failure occurred at a level of movement too small for my eyes to detect.

-Gumboot
 
Has anybody seen these?

Who is this "mmmlink"?

And if you look at his/her subscribers at youtube, you get... a certain Truthseeker.

I dare you to look at his myspace page.

"World Can't Wait", skeptigirl?
ok--
I cannot view the video. I can, however, comment on what he says.
I created them using ProE and have been working on them for about nine months now. I used a modeling program to produce the animation. They are to scale down to a mm based on drawings/literature from NIST and FEMA. There were many iterations. The original only had the upper floors. I was going to try and perform a Finite Element Analysis on fires bringing them down but found out quickly how big the model would become and that it would not demonstrate the collapse.
First off, he is claiming accuracy of 1 mm over a length of 416160 mm, or about 2.4 parts per million (0.00024%), which is ridiculous.
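The arithmetic behind that figure:

```python
height_mm = 416_160     # overall length used above
claimed_tol_mm = 1      # claimed modelling accuracy
ppm = claimed_tol_mm / height_mm * 1e6
print(f"{ppm:.1f} ppm = {ppm / 1e4:.5f}%")   # ~2.4 ppm = 0.00024%
```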
The last sentence is key here, however. What that says is, "I couldn't do an actual analysis using science and engineering math, so I used an animation to present what I think should have happened." He states his working hypothesis in:
NIST and FEMA were very careful to only show 2D drawings and illustrations because showing a realistic 3D model would make it even more difficult to explain fires causing the collapses (which after $20 million, is yet to be simulated).
If he were a FEM'er, he would know that Pro-E, Algor, I-Deas, and NASTRAN -- and their ilk -- will only show you where the first failure will occur; to go beyond that requires lots and googols of computer power and scratch space. He admits this in the first paragraph:
it would not demonstrate the collapse.
The obvious "this is what I think should have happened" clue comes from:
I am confident office fires did not weaken the steel causing a sudden global collapse
If he had done any actual analysis, with temperature reduction factored in, that FS of 5 -- FS being (allowable stress)/(actual stress) -- very quickly becomes < 1. That means the actual is higher than the allowable, and things start bending. Once bending starts, with nothing to stop it, things begin to break.
(I actually prefer to use "margin", which is FS - 1. A margin of 0.0 and up lives; a margin < 0.0 dies.)
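A toy illustration of how that FS of 5 dies with temperature (the stresses and strength-retention factors below are rough assumptions of mine for illustration, not NIST's numbers):

```python
actual_stress = 50.0     # MPa, assumed working stress in a member
allowable_cold = 250.0   # MPa, assumed room-temperature allowable -> FS = 5

# Assumed strength-retention factors for hot structural steel, in the
# spirit of published reduction curves (illustrative, not design data).
for temp_c, retention in [(20, 1.00), (400, 0.80), (600, 0.45), (800, 0.10)]:
    fs = (allowable_cold * retention) / actual_stress
    margin = fs - 1.0
    verdict = "lives" if margin >= 0.0 else "dies"
    print(f"{temp_c:4d} C: FS = {fs:4.2f}, margin = {margin:+5.2f} -> {verdict}")
```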

So, in short, what he did was, as we call it, "make pretty pictures." There is no analysis involved that I can discern.
 
So, in short, what he did was, as we call it, "make pretty pictures." There is no analysis involved that I can discern.


That's the impression I got. It seemed a bit of an "I say the building didn't collapse from fires, and I would know because I did this cool model," even though the model doesn't actually seem to show anything at all.

-Gumboot
 
