
CTist's computer simulations of collapses

Pardalis

Has anybody seen these?

Who is this "mmmlink"?

And if you look at his/her subscribers at youtube, you get... a certain Truthseeker.

I dare you to look at his myspace page.

"World Can't Wait", skeptigirl?
 

The simulations are a little bit older and there was already a thread about them.
But the whole video is new to me:

 
The end is classic. He zooms into the basement where you see a curious blue box. Then as you get closer, you see a nuclear symbol on it. THE END. *anticipate*
 
This is a far more legitimate analysis than I've ever seen by a CTer. (I don't mean accurate, mind you.)

The real question here is why he isn't publishing these results.

Did he make any of his model or its parameters public?
 
Why was it lame? Someone mixed the clips together in Hufschmid-Style and accompanied it with cool music. :D

What is it with CTists and videos that take for-freaking-ever to get to the point? That slowpoke 10-minute video could easily have been cut down to three or less.
 
The majority of the video seemed to be devoted to demonstrating explosive charges being placed in his model of the WTC. I really don't understand how there's anything even remotely scientific about that.

Are we meant to automatically believe the explosives theory just because it uses a model to demonstrate it?

-Gumboot
 
A fundamental problem with his model is that he assumes the structural elements are infinitely strong. He only models their elastic deformation. For instance, at one point he has removed all core supports but one, and all outer supports but one side, and then he shows the whole thing flexing. However, that scenario is certain to load not only the supports but also the floor elements far beyond their breaking strength.
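To put a rough number on that (purely invented figures, nothing taken from the video or the real towers): a linear-elastic model has no notion of failure, so a one-line stress check is all it takes to see the problem.

```python
# Minimal sketch with assumed numbers (not taken from the video or the real towers):
# an elastic-only model never asks whether a member has exceeded its strength.
yield_strength = 250e6     # Pa, a typical structural-steel yield stress (assumed)
column_area = 0.05         # m^2, cross-section of the one remaining column (assumed)
load = 5.0e8               # N, weight formerly shared by many columns (assumed)

stress = load / column_area                      # simple axial stress
print(f"stress = {stress / 1e6:.0f} MPa vs. yield = {yield_strength / 1e6:.0f} MPa")
if stress > yield_strength:
    print("member has failed -- a purely elastic model would just keep bending")
```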

As for his analysis of how high the remains should be, he conveniently forgets the many storeys of subterranean structure: the piles of debris were much "higher" than what you could see above the ground.

After that, of course, he goes off in pure fiction.

Hans
 
I'm confused by one thing. He says because the model is large it has to be scaled down? Huh? Was it too big to fit on his monitor? Computers don't care about size. It's complexity they have a problem with.

He says his floors are 420 mm wide. In another thread the good people here were able to confirm for me that a 1:100 scale model would be 100 times stronger than the real thing. In this simulation, has he scaled down the material strengths appropriately?

Admittedly I know nothing about finite element analysis. Does this scale thing strike anybody else as odd?
 
Yes, it does. You are right about strengths. If he simply scaled it down, material strengths and elasticities may have been affected.

I don't know why he had to scale it. Perhaps his analysis tool has some internal finite scale that limits the size of the objects it can handle. I expect a floating (gridless) scale would make the calculations more complex by adding the overhead of scale calculations to each iteration. Since this type of analysis is intrinsically limited by your computing capacity, there are bound to be some built-in simplifications to speed it up. A basic fixed grid could be one such simplification.

Basically, you can never just scale a structure, because areas increase with the square of the scale while volumes (and hence weights) increase with the cube. To illustrate: if you were to take practically any structure and double its size by simply scaling everything, the resulting structure would collapse under its own weight (this may not apply to structures with high strength redundancy, like bunkers, tanks, etc.).
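Here's a quick back-of-the-envelope sketch of that square-cube point (pure illustration, no real tower numbers in it):

```python
# Square-cube law sketch: scale every linear dimension of a structure by k and
# see what happens to the stress in its load-bearing members.
def stress_factor(k):
    weight_factor = k ** 3   # weight follows volume
    area_factor = k ** 2     # load-bearing cross-sections follow area
    return weight_factor / area_factor   # stress = weight / area, so it scales with k

for k in (0.5, 1.0, 2.0, 10.0):
    print(f"scale x{k:g}: stress is {stress_factor(k):g}x the original")

# Doubling the size doubles the stress; a 1:100 model sees only 1/100th of it,
# which is why material strengths have to be rescaled along with the geometry.
```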

Hans
 
I think he scaled it down because, more than likely, he ran the simulation on a PC. The full-scale calculation would probably have taken days or weeks to run.
I do some CG stuff at home on a standard PC. Some images and animations can take a really long time to render (days), depending on what kind of simulation you're running (e.g. particle/fluid sims, global and caustic illumination, etc.).

And from what he mentions in his description, he seems to be assuming the outcome, so he sets up his initial parameters accordingly.
 

I do the same thing at the office everyday :-)

My point is that scale is irrelevant. Extra scale does not necessarily mean extra complexity. If I asked you to render an image of Jupiter or an image of a snooker ball, you'd need about the same level of computation.

I've never used the kind of software they use for this analysis. If they operate in a grid-based system where the grid components are always of a fixed real-world size, then I guess that would explain the scaling.
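If that guess is right, the element count, and so the run time, would track the model's real-world size directly. A crude sketch with invented numbers (it naively meshes the whole bounding volume, which overstates the counts, but the trend is the point):

```python
# Hypothetical fixed-grid meshing: element count grows with the cube of the
# model's real-world size, so shrinking the geometry slashes the workload.
element_size = 0.5  # m, an assumed fixed grid spacing

def element_count(width, depth, height):
    return round((width / element_size) * (depth / element_size) * (height / element_size))

full_scale = element_count(64, 64, 415)      # roughly tower-sized bounding volume
one_tenth = element_count(6.4, 6.4, 41.5)    # the same geometry at 1:10

print(f"full scale: ~{full_scale:,} elements")
print(f"1:10 scale: ~{one_tenth:,} elements")  # about a thousand times fewer to solve
```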
 

That's cool! A fellow pixel pusher.

You're right about the scaling down of the grid; that's more than likely what he means.

Rendering an image, no matter how complex the image is, takes about the same amount of computation power and time; the computer is just determining what the color of each pixel should be. What adds complexity is any physical computation that determines what the pixel should be. This adds up if you have to run the simulation over a large sample area.

You know that the rendering time of a single frame of an animation doubles and quadruples if you add things like global illumination, caustics, ray tracing, or ambient occlusion. That's not to mention other CPU-intensive, physics-based motion algorithms like motion blur, fluid dynamics, particle simulation, soft- and rigid-body collision detection, and cloth and hair dynamics. The more particles or vertices you have to calculate for, the longer it takes the CPU to crunch through the numbers.
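Just to illustrate the point with a toy cost model (all numbers invented): frame time tracks the pixel count and the per-pixel or per-particle work each feature adds, not the real-world size of whatever is being rendered.

```python
# Toy cost model with made-up numbers: total work per frame is pixels times
# shading work, plus whatever the physics simulation adds on top.
def frame_cost(pixels, shading_ops_per_pixel, particles=0, ops_per_particle=0):
    return pixels * shading_ops_per_pixel + particles * ops_per_particle

plain = frame_cost(1920 * 1080, shading_ops_per_pixel=50)
with_gi = frame_cost(1920 * 1080, shading_ops_per_pixel=400)                       # add global illumination
with_fluid = frame_cost(1920 * 1080, 400, particles=2_000_000, ops_per_particle=30)  # add a particle/fluid sim

for name, cost in [("plain", plain), ("+ GI", with_gi), ("+ GI + fluid sim", with_fluid)]:
    print(f"{name}: {cost:,} ops per frame")
```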

And of course all this is nothing compared to physically accurate computer simulations. Look here, for example:
http://www.physorg.com/preview4343.html
I know it is a simulation of galactic formation, but it is essentially a particle dynamics sim. (That may be oversimplifying it a bit!) It took a supercomputer over a month to run the simulation!
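To get a feel for why that kind of run eats a supercomputer for a month, here is a brute-force sketch (not the actual code behind such runs, which use much smarter tree/mesh methods): a naive gravitational N-body step costs on the order of N² pair interactions.

```python
# Brute-force N-body velocity update: every particle interacts with every other,
# so the work per step grows with N squared.
G = 6.674e-11  # gravitational constant

def gravity_step(positions, masses, velocities, dt):
    """Advance velocities one step using pairwise Newtonian gravity."""
    n = len(positions)
    for i in range(n):
        ax = ay = az = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            dz = positions[j][2] - positions[i][2]
            r2 = dx * dx + dy * dy + dz * dz + 1e-9   # softening avoids divide-by-zero
            a = G * masses[j] / r2 ** 1.5
            ax += a * dx; ay += a * dy; az += a * dz
        velocities[i][0] += ax * dt
        velocities[i][1] += ay * dt
        velocities[i][2] += az * dt

# Toy run with three bodies; with billions of particles that inner double loop
# becomes ~10^18+ interactions per step, which is why these runs take weeks.
positions = [[0.0, 0.0, 0.0], [1.0e7, 0.0, 0.0], [0.0, 1.0e7, 0.0]]
masses = [5.0e24, 7.0e22, 7.0e22]
velocities = [[0.0, 0.0, 0.0] for _ in positions]
gravity_step(positions, masses, velocities, dt=1.0)
print(velocities)
```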


Here are some links concerning the software and algorithms used by the author of the video.
http://en.wikipedia.org/wiki/ProEngineer
http://en.wikipedia.org/wiki/Finite_element_analysis
http://en.wikipedia.org/wiki/Finite_element_method
 
I have some experience with finite element calculations, and to me this does not make sense at all. In particular, they don't say what software was used or how accurate it is. Bending of the floors like in this video is beyond reality, and if they scaled the model down, which also makes no sense, then any calculations based on heat would have to be corrected too. Finally, the simulation says nothing at all about the real circumstances on 9/11.
 
