• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

The physics toolkit

Sorry for the erroneous attribution, W.D.C.
No, he did not. He noted that I had "speculated about the possible reasons why faster-than-freefall acceleration of certain building features would be possible". In the following paragraph, femr2 offered his own speculation about a slingshot mechanism. It appears to have been jaydeehess who incorrectly attributed the slingshot idea to W.D.Clinger.


Fine. I hope you will stop blaming me for it, however.

Looking back, I see why I read femr's post as attributing this to W.D.Clinger: the speculation followed femr referencing your name.
My apologies. As I said, I really don't care who comes up with these things; I am interested only in discussion of the topic.

It's been 30 years since I did any statistical analysis in physics, so much of that discussion is only vaguely grasped, and although several of my university buddies were engineers, that makes me about as qualified in structural engineering as I would be had I stayed in a Holiday Inn Express last night.

That said, I do have the background in physics, and I can deduce what these added complexities mean insofar as drawing conclusions about what caused the collapse. Simply put, it removes the ability to label the high acceleration as the result of explosive demolition of the columns.


In posts 216, 231, and 294, I have suggested a fourth choice: an over-simplified model of the roof's motion (or, if you prefer, tfk's unfortunate choice of time origin). If you don't understand how neglecting the roof's initial velocity at t=0 can lead to an incorrect conclusion of faster-than-free-fall acceleration for the roof as a whole, then you have a little homework to do before you rejoin this conversation.

From high school physics:
s = s0 + v0*t + 0.5*a*t^2, an equation which I found at the time of my first exposure to it in 1974 to be elegant in its prediction of what happens, or its derivative, ds/dt = v0 + a*t. I agree that anyone who can look at these simple equations and not see how initial velocity will affect the results (or worse, ignores it or assumes it equals zero) needs to take a course in physics.
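To make that concrete, here is a minimal Python sketch (all numbers invented, nobody's actual measurements) of how wrongly assuming v0 = 0 at the chosen t = 0 inflates a fitted acceleration above g:

```python
import numpy as np

# Hypothetical illustration: exact free fall (g = 32.2 ft/s^2) that already
# has a small downward velocity v0 at the instant chosen as t = 0.
g, v0 = 32.2, 4.0              # ft/s^2 and ft/s, both invented for the example
t = np.linspace(0.0, 1.3, 40)  # seconds
s = v0 * t + 0.5 * g * t**2    # true drop distance, zero measurement noise

# Least-squares fit of s = 0.5 * a * t^2, i.e., wrongly assuming v0 = 0:
a_fit = 2.0 * np.sum(s * t**2) / np.sum(t**4)
print(f"fitted a = {a_fit:.1f} ft/s^2 vs g = {g} ft/s^2")  # comes out above g
```

The fit lands above g even though the underlying motion is exactly free fall; the surplus is entirely the neglected initial velocity.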
 
I can see them. It seems it's not a problem of the graphics format, but rather of the fact that they're attachments. (ETA: Probably an incompatibility with femr2's web browser or settings or ad-block or antivirus.)

If you give me your permission, I can temporarily host them at my site so that femr2 can see them.

PG,

Sure, be my guest.

It's a mystery to me. The JREF site is already hosting them as publicly available.

Here: http://www.internationalskeptics.com/forums/picture.php?albumid=523&pictureid=3406

and Here: http://www.internationalskeptics.com/forums/picture.php?albumid=523&pictureid=3396

Frankly, it's still a work in progress, and I wouldn't waste the bandwidth.

But I've got no problem with your posting them.

Tom
 
Someone correct me if I'm wrong....
......................Wouldn't the reasonable thing to say be that none of this data conclusively proves explosives, or even implies it strongly, and so we must go with the more likely conclusion that even if we are seeing a real acceleration, the cause is probably something much more mundane than explosives?

Am I way off base here?

Accurately put AFAIAC.

Once I learned about Chandler's use of the physics toolkit, and that he actually got an acceleration of >g and then used this to conclude the use of explosives, I had to give my head a shake.

How, I wondered, could one make a leap like that, when an acceleration >g would, IMO, indicate that the difference between that value and g was the absolute lower limit of a margin of error, and Chandler had not bothered himself with such trivia as calculating what his margin of error actually might have been.

Thus my OP.

The response evolved into a more detailed examination/discussion of the video and came to the same result, an acceleration >g.
Still no numbers on what the error margin might be; however, one source, one very big source it now appears, is the determination of when t=0 is.
There is speculation of some storage of energy in the deformation of the structure just before the onset of global collapse, and the release of that energy at t=0.

How any of this can ever be used to show the initial cause of global collapse...?

Chandler is an idiot, glad my high school had better physics teachers.
 
So let me get this straight. The NIST conclusion is still acceptable? Just not what you would have done to reach it?

Even Chandler got similar results to what NIST and femr get.

The ONLY difference is in the detail to which femr is examining the video. It appears that he is getting a more precise measurement, but it only underlines the importance of picking a t=0 point, and introduces the apparent upward travel of his reference locations during the period before everything is doing nothing but coming down.

Since we are speculating..................
I can envision that the structure would be twisting as the eastern core fails first, then twisting back the other way as the core failure progresses westward, and that this would force the NW corner upward, westward and away from the camera by a foot or so; then it twists back, overshoots the original zero position and starts to twist the other way again.

In the experience of many adults, we know that bending a coat hanger back and forth weakens it until it eventually snaps. How much undesigned-for twisting of a structure with major core damage does it take for a similar thing to happen to the remaining steel columns?

In WTC 7 we have the initial impact damage to the south face, which reduced the load-carrying capacity of the south and west portions of the building. There is also a report of an elevator car that was ejected from its shaft, which indicates possible damage as far into the south side as the core of the structure.

Next came the fires, and while NIST's most probable scenario has the expansion of the beams of the 12th floor (or is it the 11th?) pushing the girder off its seat and leaving an already heated col 79 unbraced on one side for several floors, it is also possible that the girder was pulled off its seat by the contraction of deformed beams as they cooled.

Thus col 79 fails and pulls down the upper 35 floors at the col 79 location, including the 'penthouse' which contained, IIRC, the air handling/conditioning units (heavy machinery). All of this debris, in falling, damages adjacent flooring/columns, which results in a horizontal progression of core column failures low down in the structure.
This causes a 'kink' to be visible on a vertical line along the location of col 79. This indicates that there would be a lateral stress on all floors, and thus all columns between core and perimeter, inward towards that kink.

Without the core columns, the transfer trusses above the original Con Ed building are without their northernmost support. Perhaps now those columns are more free to move horizontally, initiating the twisting of the north part of the building. The south side is without its SW corner column and is less likely to contribute any arresting of this twisting.

The building's steel then acts as a very large torsion bar and twists back the other way, overshooting its original position and then starting to twist back again towards the original position. By this time the collapse of the core is complete, the rest of the columns of the structure have just been twisted back and forth, the south side is actually missing some perimeter columns including that of the SW corner. In addition many columns at the 11/12th floor level are still hot and have thus lost some portion of their strength in both axial loading and torsional loading.
The structure fails quite suddenly along a long section of the north perimeter quickly subjecting remaining columns to many times more loading (vertical and lateral) than they were ever designed to take and they fail as well at the lower level.

In this video we do not see what is taking place at the 12th floor, but in this scenario there is basically no support at all at the 12th floor. There is no core left from at least the 12th floor on down (certainly no core columns along the portion where we know the rooftop had fallen in), and with all remaining columns out of vertical, there is nothing at all to stop the top 35 or so floors from falling.

Building go boom!

I need to be told again when during this scenario it would require that columns be severed by the use of explosives...................
 
It's a mystery to me. The JREF site is already hosting them as publicly available.
Anyway, it seems to be a problem on femr2's side. No one else complained.

Frankly, it's still a work in progress, and I wouldn't waste the bandwidth.

But I've got no problem with your posting them.
I don't pay for bandwidth and I will eventually remove them, so it's no problem.

Here we go. I will be quoting the whole messages for femr2 to see the context, but the images are posted as links to not waste other members' bandwidth.

Hope I didn't make any mistake. I double-checked, but it wouldn't be the first time that a mistake slipped in after that. Sorry if so.

Post #202:



Post #203:

The Results:

Graph:

1. Your data, plotted as points.
2. Your data, first 1 second expanded.
3. Your data plus empirical equation #1 (red line, applies from 0 to 1.3 sec.)
Next 3: Your data plus empirical equations #1 (red) & #2 (blue line, applies from 1.3 sec to 4.7 sec).

You can see that the fit & the transition between the lines are good.

http://www.formauri.es/personal/pgimeno/temp/jref/Post203-user12552_pic3309_1278718778.jpg

http://www.formauri.es/personal/pgimeno/temp/jref/Post203-user12552_pic3310_1278718778.jpg

The Empirical Equations for drop, velocity & acceleration vs. Time.

The next graph is the "residual" (=data - model) for drop distance. Not bad. Typical variation < about 1 foot.

But the next graph starts to show the real problem...

Drop velocity vs. time.

The solid curve is the empirical equation.

The dots are from your data points, calculated as a "balanced difference". That is, the velocity is given by (DropPoint[i+1] - DropPoint[i])/(Time[i+1] - Time[i]). This value is set at the midpoint of the sample times (= Time[i] + 0.5 dt, where dt = constant for your data = Time[i+1] - Time[i]).

The empirical equation velocity is also calculated at this midpoint time.
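For anyone who wants to replicate the arithmetic, here is a minimal Python sketch of that balanced difference (the drop data is synthetic stand-in data, not femr2's trace, and the 30 fps rate is an assumption):

```python
import numpy as np

fps = 30.0                             # assumed constant frame rate
time = np.arange(0.0, 4.7, 1.0 / fps)  # uniformly sampled, like the video data
drop = 0.5 * 32.2 * time**2 + np.random.normal(0.0, 0.2, time.size)  # ft, synthetic

dt = time[1] - time[0]        # constant for uniformly sampled data
v_mid = np.diff(drop) / dt    # (drop[i+1] - drop[i]) / (time[i+1] - time[i])
t_mid = time[:-1] + 0.5 * dt  # each velocity is placed at the interval midpoint
```

Note how position noise of only 0.2 ft per sample turns into velocity scatter of several ft/s once divided by the 1/30 s time base, which is exactly the effect described next.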

As mentioned, I used one empirical equation for t< 1.3 sec & a different one for t > 1.3 seconds. The discontinuities at 1.3 seconds are not surprising.

You can immediately see that your velocity data is all over the map. This is a direct result of your very short time base between data points. Even small errors in the measured location will result in huge variations in velocity.

I strongly believe that this scatter is an artifact of the errors in your measurement technique.

I also believe that the only way to get rid of this is to apply a smoothing filter to the data.

Which, of course, gets rid of all the high frequency components that your data shows.

But here's the rub: I do NOT believe that those high frequency changes in velocity are real. I believe that they are artifacts of your camera, compression, analysis technique (i.e., pixelation), etc.

If one accepts them as "real", one has to go to the next step & accept that the building is undergoing completely absurd accelerations.
___

The acceleration is handled exactly the same as the velocity was.

But now, you can see that you're manipulating velocity data that has huge artifacts built in.

This makes the calculated acceleration data absurd: over 2500 ft/sec^2 or ~80G's.

Here are the curves:

http://www.formauri.es/personal/pgimeno/temp/jref/Post203-user12552_pic3311_1278718778.jpg


And, last, here are the results of the Empirical Equation "best fit".



http://www.formauri.es/personal/pgimeno/temp/jref/Post203-user12552_pic3312_1278718800.jpg

The "best fit" to your drop distance vs. time data produces 41 ft/sec^2 (about 1.3Gs of acceleration initially, decreasing to about 33 ft/sec^2 (just above 1 G) over the first 1.3 seconds.

Sorry, I don't believe this for a second.

I've got other things that I've got to do.

I'll talk to you about this over the next couple of days.

I can send you the raw data, but you can just as easily input the empirical equations into an Excel spreadsheet & create the graphs yourself.


Tom



Post #207 (snipped to leave only the part where the images are discussed):

If you look at the displacement vs time data set (the raw data & the overlain empirical curve) in my previous post, you'll see a pretty good agreement. Once the raw data is low-pass filtered, I believe that the agreement will be even better.

If the agreement is this good, then increasing the polynomial's degree amounts to a bit of gilding the lily. And will likely result in poly constants that are close to zero.

Femr, have you already done this (drop -> velocity -> acceleration) analysis yourself?

If so, please post your velocity & acceleration vs. time data (or graphs).
__

Nonetheless, I've already redone the analysis using 5th order (with 6 constants), and the results are not hugely different. I'll be interested to see what happens with smoothed data.
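For reference, a 5th-order fit with 6 constants is something numpy does directly; here is a minimal sketch using synthetic stand-in data (not the real trace):

```python
import numpy as np

t = np.arange(0.0, 4.7, 1.0 / 30.0)                            # assumed ~30 fps samples
drop = 0.5 * 32.2 * t**2 + np.random.normal(0.0, 0.2, t.size)  # synthetic drop, ft

coeffs = np.polyfit(t, drop, deg=5)  # 5th-order polynomial: 6 constants
p = np.poly1d(coeffs)
vel = p.deriv(1)   # analytic first derivative: velocity polynomial
acc = p.deriv(2)   # analytic second derivative: acceleration polynomial
print(acc(1.0))    # e.g., fitted acceleration at t = 1 s, in ft/s^2
```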

Here's the result of using this higher order polynomial. (I used it over the entire time span.) You can see that it doesn't provide as good a fit at the early times as the previous one. But you can also see that it follows the gross (i.e., low freq) shape of the raw data pretty darn well.

http://www.formauri.es/personal/pgimeno/temp/jref/Post207-user12552_pic3315_1278741065.jpg

http://www.formauri.es/personal/pgimeno/temp/jref/Post207-user12552_pic3314_1278741065.jpg

You can see that the fit between the empirical curve & raw data is pretty good. And that the empirical curve is a pretty good "smoothed" version of the raw data.

The acceleration is inversely proportional to the radius of curvature of the red line in the drop curves. I can see that a better fit (as with the lower order poly in the previous graphs) is possible at the earliest times (t < 0.6). But I don't see much leeway for increasing the radius between 0.7 < t < 1.4 seconds. And the results say that this amount of curvature in the drop curve results in >1G accel.

It's possible to construct a "1 G" arc for this chart to see if it can be fit to this raw data. Looking at the data, the curvature of the empirical (red line) equation right around 1.4 seconds corresponds to 1G of acceleration.

In order for femr's data to be correct, one would have to be able to overlay that degree of curvature (or less) on all the data points throughout the data set. I do not see how that is going to happen for t < 1 second. No matter how much low-pass filtering one does.

Here are the resultant velocity & acceleration curves, again for a 5th order poly with 6 constants:

http://www.formauri.es/personal/pgimeno/temp/jref/Post207-user12552_pic3313_1278741065.jpg

Again, for t < 1.4 seconds, accel is > 1G.


Post #300:

Hey guys,

There was nothing sacred about that time=0 selection that I chose.

I needed a starting time reference in femr's data to fit a curve.

I arbitrarily picked the last positive value for y before the y-value went negative and stayed negative, and was therefore unequivocally part of the descent. See below. I shifted all the data points by setting this point's coordinates to {0,0}.

By definition, the real start of the descent begins with a zero vertical velocity, and the slope of the position vs time curve needs to be zero.
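In code terms, the two conventions differ roughly like this (a Python sketch; the function names and the >= 0 thresholds are mine, not from the actual analysis):

```python
import numpy as np

def t0_last_nonnegative(t, y):
    """Original pick: last sample with y >= 0 before y goes and stays negative."""
    return t[np.flatnonzero(y >= 0)[-1]]

def t0_zero_slope(t, y_smooth):
    """Zero-velocity condition: descent starts at the last sample where the
    slope of the (smoothed) position curve is still non-negative."""
    dy = np.gradient(y_smooth, t)
    return t[np.flatnonzero(dy >= 0)[-1]]
```

When the trace starts from a positive offset, the first convention fires only after the descent is already under way, which is exactly why it comes out too late below.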

From looking at the filtered data below, two things are pretty clear:

1. the initial value for the vertical position (y) is not equal to 0, but approximately 0.6.

2. The previous t0 I picked is therefore too late. The slope is not zero at this t0. A reasonable t0 in femr's data should be around the {4.3, 0.56} point that I've indicated.

I'm not at all sure what the prominent dip is before the descent. It looks like the roof line lurched as something broke just before the roof gave way.



http://www.formauri.es/personal/pgimeno/temp/jref/Post300-user12552_pic3396_1279718071.png


Tom


And post #327 (again, snipped):

The data from camera 3 that you provided HERE.

Which matches the graph & time base plotted in your graph HERE.

The graph below (in a different format) shows the raw data & a 13 point averaging filter, to match the value that you said you used.
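(For concreteness, a 13-point centered moving average is presumably something like this minimal Python sketch; the exact kernel and edge handling actually used were not specified:)

```python
import numpy as np

def moving_average(y, n=13):
    """Centered n-point boxcar average; edge samples are only partially averaged."""
    return np.convolve(y, np.ones(n) / n, mode="same")
```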

http://www.formauri.es/personal/pgimeno/temp/jref/Post327-user12552_pic3406_1279823932.tiff.jpeg

As I said, one condition for t0 is that the vertical velocity be zero; that is, that the slope of the red curve be zero. The old initial point doesn't meet that requirement. It has a significant downward velocity.

ETA: Feel free to hotlink if needed.
 
I will be quoting the whole messages for femr2 to see the context, but the images are posted as links to not waste other members' bandwidth.
No idea why I don't see the original images, probably cookies, but I have found that if I go to Tom's profile and look at the albums directly, I can see the images.

Simplest way forward is for me to use that route from now on. I spent quite some time with the quoted messages at the time, as there were lots of problems with the methods used, so I'm familiar with the context. Thanks for your efforts.
 
the importance of picking a t=0 point
It's only critical when attempting to fit a simple curve to derive metrics and, for instance, to determine the time of descent. Looking at the change in rate of increase in velocity between two arbitrary points is not dependent upon a specific t(0). There's no smooth descent curve in this instance, so it'll even be difficult to relate a *good* t(0) to at-G or over-G derivations.
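That point is easy to verify: a shift of the time axis cancels out of any two-sample calculation (toy Python numbers, not real data):

```python
# Average acceleration between two arbitrary velocity samples (invented values):
t1, v1 = 1.0, 10.0
t2, v2 = 2.0, 25.0
a_avg = (v2 - v1) / (t2 - t1)  # 15.0

# Move t = 0 somewhere else entirely and nothing changes:
shift = 0.37
a_shifted = (v2 - v1) / ((t2 + shift) - (t1 + shift))  # still 15.0
```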

I can envision that the structure would be twisting as the eastern core fails first
A longer copy of the video would clarify, but twisting would appear to precede descent of the East penthouse by up to 20s.

This segment presented earlier (if watched until your eyes bleed) may provide insight into internal behaviour...
[attached animation: 920361115.gif]
 
How any of this can ever be used to show the initial cause of global collapse,,,,,,,?
The original data was provided to Tom to extrapolate a full-cycle descent time. Personally I'm very cautious about drawing conclusions from the data until I've gone through it with a fine-tooth comb. It may be that the data is more useful for excluding conclusions than for supporting them.
 
Hell no, in fact you just rounded third, and scored. It's been said many times, I hope it is finally heard. Not gonna hold my...

I wouldn't hold it either.....this seems to just go round and round....
 
Accurately put AFAIAC.

Once I learned about Chandler's use of the physics toolkit, and that he actually got an acceleration of >g and then used this to conclude the use of explosives, I had to give my head a shake.

How, I wondered, could one make a leap like that, when an acceleration >g would, IMO, indicate that the difference between that value and g was the absolute lower limit of a margin of error, and Chandler had not bothered himself with such trivia as calculating what his margin of error actually might have been.

Thus my OP.

The response evolved into a more detailed examination/discussion of the video and came to the same result, an acceleration >g.
Still no numbers on what the error margin might be; however, one source, one very big source it now appears, is the determination of when t=0 is.
There is speculation of some storage of energy in the deformation of the structure just before the onset of global collapse, and the release of that energy at t=0.

How any of this can ever be used to show the initial cause of global collapse...?

Chandler is an idiot, glad my high school had better physics teachers.

Most truthers seem to be idiots in my experience....
 
A longer copy of the video would clarify, but twisting would appear to precede descent of the East penthouse by up to 20s.

A combination of the NIST full timespan and zoomed view of horizontal movement of a low point on the West edge of the building...
[attached graph: 763037413.png]


A longer timespan view of the video would allow for clearer understanding of building flex movements.

The NIST data is not available, nor is the longer video view. Both would be very useful for beginning to draw conclusions. Anyone with access to either, please get in touch.

As has been made clear already, the methods I use allow similar levels of movement accuracy to those used by NIST, and in addition those methods can be used for any building feature, not just a single appropriate pixel on the West edge of the building...
[attached image: 89078455.jpg]


---

I suggest the pointless posts from those who are contributing nothing to this thread should either be removed or split. In particular, the tedious and trolling repetition from Carli should be sorted out. It arises from him claiming a difference between *0.5 and /2, and has been his method of *attack* for several years. Wow. Hilarious. If you want to start a thread on that point, Carli, be my guest, but please keep your nonsense out of this one.
 
You mean the people you can't answer simple direct questions from? Pfft...

It seems my very clear answer was yet again over your head. Not good. It's not difficult.

I suggested that the NIST method for determining T(0) was poor, and have provided reasons why. However, that's not the focus of your questions, which were...
does it mean they could not get close enough, for their needs? If so can you please, show evidence of that?

I then provided exactly the detail requested...

a) Pixels on the (poorly defined) roofline will vary in HSL throughout the entire trace period.
b) Data already provided shows features with t(0) in advance of that stated by NIST.
c) There is no pixel near the center of the roofline adjacent to sky until after at least the East penthouse has descended.
d) NIST do not state the position of the pixel they tested.
e) Their method does not quantify movement, simply brightness change, so there is no way to differentiate between pre-release flexing and vertical drop, or indeed any other cause of pixel data change.
f) NIST state their t(0) as *the exact instant that the north wall began to collapse*.

I also summarised for you, stating that there is no way to check whether their t(0) is correct, as they have not stated its location, and more importantly in the context of your question, that t(0) for other building features PRECEDES that of the NIST t(0), and therefore the NIST t(0) is WRONG. Their *need* is quantified by stating it as per (f) above, so it does indeed mean they could not get close enough for their needs.

You wanted an answer, and evidence to support that. You got both, several times. Please don't ask yet again.
 
It seems my very clear answer was yet again over your head. Not good. It's not difficult.

Sadly it's you whose head is beneath! Try pulling it out!

You wanted an answer, and evidence to support that. You got both, several times. Please don't ask yet again

The question was a YES, or NO! Does it change the conclusions reached by NIST? I'll answer for you, since you aren't capable.

NO!


What a maroon!
 
The question was a YES, or NO! Does it change the conclusions reached by NIST?
I quoted your direct questions...
To you perhaps, but does it mean they could not get close enough, for their needs? If so can you please, show evidence of that?

The NIST T(0) does not mark *the exact instant that the north wall began to collapse*.

They could not get close enough for their needs. Their needs are defined by the context in which they use t(0).

Playing infantile games and increasing font size does not change the outcome. You did not understand the answer initially provided to you, and have subsequently changed the wording of your questions. I have answered your direct questions several times. Please stop wasting my time.
 
I phrased something poorly earlier & I'll clarify now.

femr said:

ETA: There's not going to be drastic difference from results already presented without changes to the base scaling metrics, but by all means look at alternate noise reduction methods.
And I replied:

There are not going to be changes to the position vs time data.

But filtering has huge impacts on the calculated velocities & especially - the principal topic of this whole furball - the accelerations. That is the whole point of this.

I left out several important qualifiers to explain my sentence.

It should have read "a careful analysis can have significant impact on calculated instantaneous accelerations".

Even more specifically, "... a significant impact on unreal, instantaneous accelerations that are artifacts of poor analytical technique."

(I am not claiming that femr's technique is poor here. Anybody's poor technique.)
___

The "big picture" acceleration is NOT going to change.

We know the end points with sufficient accuracy to put hard constraints on the overall accelerations. A careful analysis may decrease the instantaneous acceleration at one point in the observed time interval, but in order to do so, it MUST increase the instantaneous acceleration at some other time.

In this case, you gotta rob Peter to pay Paul.

The overall (i.e., gross scale) integrated acceleration and acceleration profile WILL remain the same.
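The constraint is just kinematics; here is a minimal arithmetic sketch (numbers invented) of why fixed endpoints pin the average:

```python
# With both endpoints fixed and v0 = 0 at the chosen start, the average
# acceleration over the span is locked in, however the instantaneous values
# are redistributed by filtering. (Hypothetical numbers for illustration.)
s_total, T = 64.4, 2.0        # ft fallen, seconds elapsed
a_avg = 2.0 * s_total / T**2  # from s = 0.5 * a_avg * T^2 -> 32.2 ft/s^2
print(a_avg)                  # any smoothing must still average back to this
```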

As NIST defines the stages:

Stage 1: a short period (~ 2.5 seconds) of acceleration gradually increasing from zero to "near g".

Stage 2: a short period (about 1 second) of acceleration "right around g" (possibly even slightly above g)

Stage 3: a rapid drop-off to lower accelerations after

Which, I predicted (based on NIST's info, of course) a long time ago, would look something like the blue curve in the figure below,

[attached graph]


Resting comfortably between the (uncalculated, but pompously predicted) error bands (in dashed - and dashing - black).

And those with a discerning eye for graphs will immediately note that the actual accel profile looks absolutely nothing like a "free fall" acceleration profile: the red line.

BTW, it should be noted that, if at any time during the fall, the wall went into real free fall, then the acceleration curve would immediately (not gradually) jump right up to the "free fall" red line.


Tom
 
I quoted your direct questions....
You must have missed this! I wonder why?
Does your way change the final conclusion assessed by NIST? Yes, or no.

The answer is...NO!

The only infant here appears to be you! No one is holding a gun to your head to answer, you just reply, with scattered thoughts! Afraid of something, perhaps? Have fun dancing with worthless minutia!


Done with you.
 
The NIST T(0) does not mark *the exact instant that the north wall began to collapse*.

They could not get close enough for their needs. Their needs are defined by the context in which they use t(0).


"t0" is a myth. There is no MEANINGFUL t0.

NIST's conclusion & discussion do not depend on t0. Either in theory or in calculation.

They needed to pick a point to set a reference for their time data.

IIRC, they picked a pretty arbitrary one, one defined by their [ETA:] FEA model, not anything that happened on the building.


Tom
 
As NIST defines the stages:

Stage 1: a short period (~ 2.5 seconds) of acceleration gradually increasing from zero to "near g".

Stage 2: a short period (about 1 second) of acceleration "right around g" (possibly even slightly above g)

Stage 3: a rapid drop-off to lower accelerations after
Eh? I'm sure you know that is not what NIST defines...
[attached graph: 155958691.png]


NIST said:
* Stage 1 (0 to 1.75 seconds): acceleration less than that of gravity (i.e., slower than free fall).
* Stage 2 (1.75 to 4.0 seconds): gravitational acceleration (free fall)
* Stage 3 (4.0 to 5.4 seconds): decreased acceleration, again less than that of gravity

The content of the rest of your post does not use any actual data. I'll await the results of your analysis applying a "smooth, noise-robust differentiator" to the data provided to you.
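(For what it's worth, one standard "smooth, noise-robust differentiator" is a Savitzky-Golay filter evaluated at a derivative order; whether that matches the method tfk has in mind is an assumption on my part, and the data below is synthetic:)

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 30.0  # assumed frame rate
t = np.arange(0.0, 4.7, 1.0 / fps)
drop = 0.5 * 32.2 * t**2 + np.random.normal(0.0, 0.2, t.size)  # synthetic stand-in

# Fits a local cubic in a sliding 13-sample window; deriv= returns the
# smoothed derivative directly instead of differencing noisy samples.
vel = savgol_filter(drop, window_length=13, polyorder=3, deriv=1, delta=1.0 / fps)
acc = savgol_filter(drop, window_length=13, polyorder=3, deriv=2, delta=1.0 / fps)
```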

tfk said:
unreal, instantaneous acceleration that are artifacts of poor analytical technique.
Let's hope that your new method does not suffer from the problems of your initial attempt.

Not sure if I've posted any accel graphs as yet, but they are quite smooth. Still looking at scaling metrics, so I'll hold off posting derived value graphs until the metrics are more *final*.

tfk said:
IIRC, they picked a pretty arbitrary one, one defined by their [ETA:] FEA model, not anything that happened on the building.
No. They use a point on the building, using a method based on pixel intensity, as is made perfectly clear in this thread.

There is no MEANINGFUL t0.
I do not agree. There are MANY meaningful t(0) values. Only if an over-simplistic approach of applying low-order curve fitting is taken is the *meaning* of any particular building feature's t(0) diluted.
 
