• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

The physics toolkit

Here are just the numbers from the above xml: (ETA: from post 116)

Code:
0, 169.4358306188925, 272.56938110749184
1, 167.43452768729645, 270.56807817589583
2, 167.45509282655848, 270.57683858594464
3, 167.50888529948486, 270.58898572756107
4, 167.6260052644111, 270.6059304400288
5, 168.0152068460198, 270.58400946297945
6, 168.30673712107017, 270.5898519186658
7, 168.41693310035782, 270.59129072496205
8, 168.5030092185695, 270.6000769987073
9, 168.7825481047422, 270.6213227470911
10, 169.32245229707013, 270.58296360443126
11, 169.4512201146199, 270.5805901057213
12, 169.6785668566863, 270.5864764639815
13, 170.2899951996568, 270.55125937902267
14, 170.46200596681444, 270.5468878690839
15, 170.75586289530588, 270.53615749002313
16, 171.30229382810882, 270.4824555341229
17, 171.487805310643, 270.4637747748925
18, 171.79194300086124, 270.4340238066741
19, 172.2456007015545, 270.2705539706759
20, 172.4382655733458, 270.214724687082
21, 173.1697142880472, 269.8622629360866
22, 173.3849250884292, 269.8027266353713
23, 173.49569386973587, 269.7598245076488
24, 173.57325658072295, 269.72709546936403

Respectfully,
Myriad
 
Like this?
Yes. Thanks.


ETA: Aha. Myriad. Values.

Myriad,

Could you convert the values for post, er, 120 please ? (My post 116 was just a test for Grizzly Bear to know the format)

We'll then have Tony's new data in csv form.
 
This can easily be done by placing demolition devices in a timed fashion to weaken the structure just prior to impact, so that it continues its fall without decelerating. My opinion is that it was done this way, without relying on the impacts, for reliability, to ensure the collapse could not possibly arrest.

There were no demolition charges at WTC.
 
All numerical data (header info and index, x, y table) from the xml in post 120:

Code:
xorigin = 345.97712418300654
yorigin = 426.30392156862746
angle = 1.3016262181085436
xscale = 3.3116604938851464
yscale = 3.3116604938851464

x1 = 574.6902654867257
y1 = 429.7345132743363
x2 = 569.7345132743363
y2 = 199.64601769911502
xArm = 570.9734513274336
yArm = 257.16814159292034


216, 412.40229286086503, 151.82907764460657
222, 412.1521625846795, 151.82907764460657
228, 411.90203230849403, 151.82907764460657
234, 411.6519020323085, 151.82907764460657
240, 411.6519020323085, 152.5794684731631
246, 412.1521625846795, 154.08025013027617
252, 412.90255341323603, 156.0812923397603
258, 412.1521625846795, 158.58259510161542
264, 412.40229286086503, 162.08441896821262
270, 412.65242313705056, 166.33663366336634
276, 411.90203230849403, 171.5893694632621
282, 412.1521625846795, 177.59249609171442
288, 411.90203230849403, 184.8462741010943
294, 411.6519020323085, 192.85044293903073
300, 411.90203230849403, 201.6050026055237
306, 411.6519020323085, 211.36008337675872
312, 411.6519020323085, 221.86555497655027
318, 411.90203230849403, 232.87128712871285
324, 411.15164147993744, 244.8775403856175
330, 411.40177175612297, 257.13392391870764
336, 411.15164147993744, 270.64095883272535
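
For anyone wanting to apply the header calibration to the raw pixel columns, here's a rough sketch. The rotation/scale/axis conventions below, and the assumption that the angle is given in degrees, are guesses about how Tracker applies its origin, angle and scale values, not a verified reproduction of its internals:

```python
import math

# Header values copied from the XML above.
XORIGIN, YORIGIN = 345.97712418300654, 426.30392156862746
ANGLE = math.radians(1.3016262181085436)   # assumed to be degrees in the file
XSCALE = YSCALE = 3.3116604938851464       # assumed pixels per world unit

def image_to_world(px, py):
    """Translate to the origin, rotate, scale, and flip the y axis
    (image y grows downward). Conventions here are assumptions."""
    dx, dy = px - XORIGIN, py - YORIGIN
    wx = (dx * math.cos(ANGLE) + dy * math.sin(ANGLE)) / XSCALE
    wy = (dx * math.sin(ANGLE) - dy * math.cos(ANGLE)) / YSCALE
    return wx, wy

print(image_to_world(XORIGIN, YORIGIN))   # the origin maps to (0.0, 0.0)
```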

Respectfully,
Myriad
 
All numerical data (header info and index, x, y table) from the xml in post 120
Thanks.

Hmm. Immediate observations from the data...

Tony,

1)
Code:
<property name="startframe" type="int">216</property> 
  <property name="stepsize" type="int">6</property> 
  <property name="stepcount" type="int">21</property> 
  <property name="starttime" type="double">0.0</property>

Why did you manually select to use a stepsize of 6 ? (and so skip 5 out of every 6 frames. Default is stepsize of 1.)

2)
Did you run the autotracker, or position the points by hand ?

3)
What processing steps did you apply to the source video ?
 
Tony sent me the file
If he sent you the video, I don't suppose you could put it online ? MegaUpload is fine by me.

(I'll do a high-resolution trace to compare from the same initial location.)
 
If he sent you the video, I don't suppose you could put it online somewhere ?

(I'll do a high-resolution trace to compare)
He also sent me a "quick time movie" file (80 MB). I'm at work right now so I can't do anything with it. I don't have a site to host it, I might be able to find one later.
 
He also sent me a "quick time movie" file (80 MB). I'm at work right now so I can't do anything with it. I don't have a site to host it, I might be able to find one later.
Okay, thanks. MegaUpload is fine. They'll accept anything at all.

That it's a .MOV is a bit odd. Why not go from the mpeg-2 (DVD) *original* eh ?

Sauret mpeg-2

Seeing the .MOV will be quite insightful.
 
I don't know if this is helpful.

Yes, very. Thanks.

Tony,

I suggest you get in touch with me asap.

Am not impressed, either with the quality of the video, the decision to trace the point chosen (which the autotracker could not possibly latch on to), the 6-frame stepping, or the apparent lack of congruity between the point locations and the chosen *feature*.

I'm aware that your baseline premise is looking for large magnitude *jolts*, but, given the amount of input and advice you've been given on the technical tracing process, there's really no excuse.

I've opened the thing in Tracker by the way.

I suggest, if you want your new data to be taken seriously, that you dump your latest set, take all the technical advice I've given you onboard, and do it again.

There's no way high-fidelity feature tracking will latch onto this spot...

[Attached image: 413905189.png - the traced spot]
 
Pedro, I am sure you understand what happens when the legs or necessary vertical support is pulled from under something. The object falls without impacting anything, and there is no deceleration until the object hits something which is more than strong enough to support it, like the ground.
That's not how I was taught the transfer of momentum.

You know:

[latex]$v_1m_1+v_2m_2 = v_f(m_1+m_2)$[/latex]

So each floor's mass should have produced a deceleration, shouldn't it?

What on earth could prevent that from happening?
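
For concreteness, the momentum balance above can be checked numerically. The masses and velocity here are made-up illustration values, not a model of the tower:

```python
# Perfectly inelastic (plastic) collision implied by
# v1*m1 + v2*m2 = vf*(m1 + m2).
def post_impact_velocity(v1, m1, v2, m2):
    """Velocity after a perfectly plastic impact (momentum conserved)."""
    return (v1 * m1 + v2 * m2) / (m1 + m2)

# Hypothetical example: a falling block of 12 floor-masses hits one
# stationary floor-mass at 10 m/s.
v1, m1 = 10.0, 12.0   # falling block: velocity (m/s), mass (floor units)
v2, m2 = 0.0, 1.0     # stationary floor
vf = post_impact_velocity(v1, m1, v2, m2)
print(vf)   # 120/13, i.e. roughly a 7.7% velocity drop
```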
 
So each floor's mass should have produced a deceleration, shouldn't it?
Rather depends upon what the floors' mass was connected to at the time.

As an extreme example...

If floor 50 became detached from core and perimeter (for whatever reason), would you expect its impact with floor 49 to cause a *jolt* at the roof-line ?

I've been asking Tony for his assessment of *what is impacting what* within this thread. What's your take ?

The other factor which does not seem to have been raised is this...

Tony is looking for a *jolt* after a 12ft drop, so what about after a 50ft drop ? Still expecting a *jolt* ? (Can include Verinage in that question by all means, though not, of course, for the first *virtual* impact)
 
Tony,

Your posted subset...
1.500 sec, 17.361 ft.
1.667 sec, 22.055 ft., 4.694 ft.
1.834 sec, 27.395 ft., 5.340 ft.
2.000 sec, 33.487 ft., 6.092 ft.

Where are you calculating the timings from ?

The video framerate is 29.97fps.
You're taking every 6th frame.

1/29.97*6 = 0.200200...

Your interval in the data above is ~0.166

Simple mistake ?

Again, I think a new data-set should be derived, or you're welcome to use mine if you want, as long as it's made clear I'm simply providing you with the raw data, not any conclusions or interpretation.
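
The arithmetic above is easy to check in a couple of lines (frame rate and step size are as stated in the thread; the times are from Tony's posted subset):

```python
# Expected time per step if every 6th frame of 29.97 fps video is used.
FPS = 29.97
STEP = 6
dt_frames = STEP / FPS
print(round(dt_frames, 4))        # 0.2002 s per 6-frame step

# The intervals in the posted subset are ~1/6 s, which suggests the
# data sit on a 6 Hz time grid rather than being timed by frame count.
times = [1.500, 1.667, 1.834, 2.000]
intervals = [b - a for a, b in zip(times, times[1:])]
print([round(i, 3) for i in intervals])   # [0.167, 0.167, 0.166]
```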
 
Thanks.

Hmm. Immediate observations from the data...

Tony,

1)
Code:
<property name="startframe" type="int">216</property> 
  <property name="stepsize" type="int">6</property> 
  <property name="stepcount" type="int">21</property> 
  <property name="starttime" type="double">0.0</property>

Why did you manually select to use a stepsize of 6 ? (and so skip 5 out of every 6 frames. Default is stepsize of 1.)

2)
Did you run the autotracker, or position the points by hand ?

3)
What processing steps did you apply to the source video ?
OK, I have to admit I'm impressed that you could derive anything useful from the gibberish (to me) that I posted. I suppose it's the same thing my customers think when I turn the 2D plans of their new house into something they can live in.

Anyway. Wouldn't the very wide frame spacing turn this supposedly more accurate data into exactly the same thing he had before with the "smoothing"?


If my understanding is correct he's ONLY looking for something the size he figures should be needed.
 
Rather depends upon what the floors' mass was connected to at the time.

As an extreme example...

If floor 50 became detached from core and perimeter (for whatever reason), would you expect its impact with floor 49 to cause a *jolt* at the roof-line ?
First, I admit my own confusion about whether a deceleration or a jolt is expected in a given situation.

Second, if we're talking of an ideal situation (simultaneous impact on all parts of the floor, infinite precision in measurements), then yes, I'd expect either a jolt or a deceleration. In the real world, at that distance, the elasticity of the columns would probably absorb enough of the impact to make it less noticeable, possibly unnoticeable depending on the measurements. If we're talking about a tilted impact, that would surely make it even harder to notice.

And third, I was just trying to follow Tony's reasoning about the momentum transfer. I don't believe that one floor would cause a noticeable deceleration if the impact is not flat.

I've been asking Tony for his assessment of *what is impacting what* within this thread. What's your take ?
My take is that it's rather impossible to determine because of the chaos involved and the lack of images of the collapsing core, but extremely unlikely that columns impacted one to one. I seem to perceive a bit of lateral displacement, which would be more than enough for the columns to miss other columns, so Bazant's most favorable scenario for collapse arrest did not happen in reality.

As for the jolts, I don't have any strong opinion. Because of the large number of unknowns, it's basically impossible to know what size of jolt to expect.
 
derive anything useful from the gibberish
Am fine with XML, just don't have tools handy to extract to csv. Quite handy with the ol' tracing process these days too, which is why it's a tad annoying that Tony is still using such a poor process.
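
Extracting the (index, x, y) triples to CSV only takes a few lines, for what it's worth. The element and attribute names below ("frame_data", "n", "x", "y") are guesses at the file layout, not the actual Tracker schema, so they'd need adjusting to match the real file:

```python
import csv
import xml.etree.ElementTree as ET

def xml_points_to_csv(xml_path, csv_path):
    """Pull point records out of a Tracker-style XML file into CSV.
    Tag/attribute names are assumptions about the file layout."""
    root = ET.parse(xml_path).getroot()
    with open(csv_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["index", "x", "y"])
        for p in root.iter("frame_data"):   # assumed element name
            w.writerow([p.get("n"), p.get("x"), p.get("y")])
```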

Wouldn't the very wide frame rate turn this supposed more accurate data into exactly the same thing he had before with the "smoothing"?
May even be worse. At least last time the washer was used as the feature, which is a lot easier to track manually (though I'd never manually track unless no other option was available)

If my understanding is correct he's ONLY looking for something the size he figures should be needed.
I'd have to agree, though why on earth he'd re-do the tracing using the same low-resolution is beyond me.

If it was me, and given the level of criticism received, I'd take onboard each and every suggestion of how to improve the data validity.

Take this for example...

[Attached image: 959813768.png - raw vertical trace graph]


It's the raw trace data for the period of camera shake in the Sauret footage (vertical component).

The vertical axis is in pixels, and is 4× the original, so 2 pixels on the graph is actually 0.5 pixels of the original DVD footage.

Note the smooth decay as the camera stops shaking.

That is the level of detail it is possible to extract from the Sauret footage.

Not impressed with Tony's efforts.
 
I'd have to agree, though why on earth he'd re-do the tracing using the same low-resolution is beyond me.

Really?

I've said it before. I have no problem with (and I encourage) knowledgeable people to examine all the evidence, as long as they are truthful about their results and don't go in with a preconceived notion. I don't think we're seeing this here.
 
Scroll down for graphs of Tony's new data. First, however...
The real joke is your thinking that any jolt of less than 1/6th of a second would have any effect. In the Missing Jolt paper we show that the effect from any jolt capable of causing collapse continuation would cause the velocity to take nearly a second to recover. The jolt itself would be too short to be seen but its effects would not be.

It is obvious that people like you and W.D. Clinger have nothing else to go on but to try to say "see there could be a little bump they are missing", while not alerting them to the real issue which is the velocity loss which would take time to recover.
On two previous occasions, I have directed Tony to a fallacy in his argument above: His alleged model implies jolts should have been separated by periods of free fall at 1g, but he and MacQueen calculate the recovery time by assuming free fall at 0.7g.

The models I presented in an earlier post contain large jolts, but the velocity of those models recovers in time to match Tony's sampled data perfectly. They are able to recover because free fall is 1g, not 0.7g.
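
To make the 1g-versus-0.7g point concrete: the time to regain a velocity loss dv under constant acceleration a is simply dv/a. The 3 m/s loss below is a made-up example value, not a measured figure:

```python
# Recovery-time arithmetic: a jolt removes dv from the velocity, and
# regaining it at constant acceleration a takes t = dv / a.
G = 9.81               # m/s^2: free fall between impacts (1 g)
A_AVG = 0.7 * G        # the ~0.7 g average descent rate

dv = 3.0               # hypothetical velocity lost in a jolt (m/s)
t_1g = dv / G          # recovery time if the inter-impact fall is at 1 g
t_07g = dv / A_AVG     # recovery time if 0.7 g is (wrongly) assumed
print(round(t_1g, 3), round(t_07g, 3))   # 0.306 0.437
```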

You are either being a dumbo here or are trying to B.S. people into thinking that any little bump matters, when that is far from the case.
Chandler, MacQueen, and Szamboti had claimed, quite falsely, that their data were sufficient to rule out any decelerations. I refuted that claim.

Tony's own data imply that decelerations probably did occur. Without conceding that fact, Tony is now arguing about the size of the decelerations and their duration. Until Tony concedes that decelerations probably did occur, I'll just note that people more qualified than Tony or myself have already demolished the part of Tony's argument that insists upon one large, well-defined jolt.

The rest of the Tracker data had the same pattern, in that there was no single-point reduction in velocity at any time. If you want to see it, I would post it.
As shown below, Tony's new Tracker data are almost identical to Chandler's. I have already demonstrated that most plausible reconstructions of the original signal from Chandler's data involve actual decelerations.

Here are the accelerations for three different models of Tony's new data:
[Attached image: szamboti20100705A.jpg - acceleration graphs]

The velocity and position functions are here:
http://www.cesura17.net/~will/Ephemera/Sept11/Szamboti/szamboti20100705V.jpg
http://www.cesura17.net/~will/Ephemera/Sept11/Szamboti/szamboti20100705Y.jpg

Thanks to Tony Szamboti and DGM for making this new data available.

It's easy to see that Tony's new data are almost exactly the same as the data published by Chandler, except Tony's data are in feet. They used the same software, and I would guess they took their data from the same video and tracked the same roof feature.

I also thank femr2 for making his data available. It's the best raw data I've seen on this, by far, but its higher resolution is (quite naturally) accompanied by greater noise, which will require greater care in analysis. femr2's data apparently end about 1.5 seconds before the end of the Chandler/MacQueen/Szamboti data. There are obvious large jolts near 4.2 and 4.75 seconds (with femr2's time origin; I think those jolts correspond to about 0.5 and 1.1 seconds on the Chandler/MacQueen/Szamboti graphs I have constructed). More later.
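
For readers wanting to reproduce this kind of analysis, acceleration curves like those above are typically obtained by numerically differentiating the sampled positions twice. This is a generic finite-difference sketch on toy free-fall data, not the actual model or trace:

```python
def central_diff(values, dt):
    """Central-difference derivative of a uniformly sampled series."""
    return [(values[i + 1] - values[i - 1]) / (2 * dt)
            for i in range(1, len(values) - 1)]

dt = 0.2                                             # s, e.g. 6 frames at ~30 fps
y = [0.5 * 9.81 * (i * dt) ** 2 for i in range(8)]   # toy free-fall positions
v = central_diff(y, dt)                              # velocities
a = central_diff(v, dt)                              # accelerations
print([round(x, 2) for x in a])                      # each value is ~9.81
```

Note that each differentiation shortens the series by two samples and amplifies noise, which is why higher-resolution (but noisier) traces need more careful smoothing before this step.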
 
Tony,

Your posted subset...


Where are you calculating the timings from ?

The video framerate is 29.97fps.
You're taking every 6th frame.

1/29.97*6 = 0.200200...

Your interval in the data above is ~0.166

Simple mistake ?

Again, I think a new data-set should be derived, or you're welcome to use mine if you want, as long as it's made clear I'm simply providing you with the raw data, not any conclusions or interpretation.

That data was interpolated to match the times in the hand data.

The Tracker data was taken by David Chandler.
 
Am fine with XML, just don't have tools handy to extract to csv. Quite handy with the ol' tracing process these days too, which is why it's a tad annoying that Tony is still using such a poor process.


May even be worse. At least last time the washer was used as the feature, which is a lot easier to track manually (though I'd never manually track unless no other option was available)


I'd have to agree, though why on earth he'd re-do the tracing using the same low-resolution is beyond me.

If it was me, and given the level of criticism received, I'd take onboard each and every suggestion of how to improve the data validity.

Take this for example...

[qimg]http://femr2.ucoz.com/_ph/6/959813768.png[/qimg]

It's the raw trace data for the period of camera shake in the Sauret footage (vertical component).

The vertical axis is in pixels, and is 4× the original, so 2 pixels on the graph is actually 0.5 pixels of the original DVD footage.

Note the smooth decay as the camera stops shaking.

That is the level of detail it is possible to extract from the Sauret footage.

Not impressed with Tony's efforts.

The reality is that if the little bumps in your double interlaced whiz bang data are the best anyone can possibly measure, then it is clear that there was no significant deceleration in the fall of WTC 1 and I have shown on the 911 free forum that a shock would not be attenuated. There should have been a deceleration on the order of what is seen in the Verinage demolitions and the fact that there isn't is proof that something else was removing the strength of the structure.

It is quite impressive that a number of you guys put everything else down and jumped right on this.

If I didn't know better I might think that some of you guys here are hacks with a mission to discredit any finding which might prove controlled demolition was involved in the destruction of the WTC towers.
 
