
The physics toolkit

jaydeehess
I have looked through much of the discussions concerning Chandler's use of the physics toolkit and cannot find the answer to something that is bugging me.

He plots instantaneous velocity vs. time.

What exact sequence of calculations is the physics toolkit doing in order to come up with those values for instantaneous velocity?

Since v = vi + at
or
average v = delta d / delta t, it would seem that he is using the latter and sampling perhaps every frame of video, thus making delta t = 1/29.97 ≈ 0.03337 s.

So is each plot of velocity actually the average velocity during this time period?
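For concreteness, here's a minimal sketch (my own illustration, not Chandler's actual code; the heights are invented, and NTSC video at 29.97 frames per second is assumed) of what a per-frame average-velocity calculation would look like:

```python
# Hypothetical illustration of turning per-frame position samples into
# "velocity" values.  The heights below are made up; NTSC video runs at
# 29.97 frames per second.
FPS = 29.97
DT = 1.0 / FPS          # ~0.0334 s between consecutive frames

def average_velocities(positions, dt=DT):
    """Average velocity over each inter-frame interval: delta d / delta t."""
    return [(positions[i + 1] - positions[i]) / dt
            for i in range(len(positions) - 1)]

# Fabricated heights (metres) of a falling feature in four consecutive frames.
heights = [417.0, 416.9, 416.7, 416.4]
print(average_velocities(heights))
```

Each output value is the average velocity over one inter-frame interval, which is exactly the question being asked above.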
 

It seems to me that that's the only thing he can possibly be doing. There's no conceivable way of measuring an instantaneous velocity from still video frames, so any velocity measured from a video is inevitably an average over the time interval between a given pair of frames.

Dave
 
JD,

The data that he inputs is pixel position vs. frame. IIRC, he picks data from every (3rd? 5th??) frame. (Count the number of data points in his set, look at the real start vs. stop time interval. Compare the actual number of data points to 30 points per second if he used all of them.)

Using scaling factors, this translates into height vs. time.

But he doesn't publish this raw data. I've asked him for it, but he won't reply to me anymore.

Then he has the program produce discrete difference values, and assigns these as velocities. I don't know if they are forward differences, backwards differences or balanced differences. Can't tell without the raw data.
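The three discrete-difference schemes Tom mentions can be sketched as follows (a hypothetical illustration with invented numbers; without the raw data there is no way to know which one the program actually uses):

```python
# The three discrete-difference schemes, with made-up positions x
# sampled every dt seconds.
def forward_diff(x, i, dt):
    return (x[i + 1] - x[i]) / dt

def backward_diff(x, i, dt):
    return (x[i] - x[i - 1]) / dt

def central_diff(x, i, dt):          # "balanced" difference
    return (x[i + 1] - x[i - 1]) / (2 * dt)

x = [0.0, 0.3, 1.2, 2.7]             # fabricated positions
dt = 0.2
print(forward_diff(x, 1, dt), backward_diff(x, 1, dt), central_diff(x, 1, dt))
```

The three schemes give noticeably different values at the same sample point, which is why the choice matters when hunting for short events.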

Then, using the calculated velocity vs. time, he picks a group of points that looks approximately linear, and has the program produce a "best linear fit".

In other words, he forces the data to produce a linearly changing velocity and then (what a surprise!) gets a constant acceleration.
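A minimal sketch of that "best linear fit" step (my illustration, with invented data): fit v = a·t + b to a hand-picked window of velocity points, and the slope comes out as a single constant acceleration by construction.

```python
# Least-squares linear fit to a hand-picked window of velocity points.
# The data below are invented for illustration.
import numpy as np

t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
v = np.array([0.1, 1.4, 2.6, 3.9, 5.2])      # roughly linear, noisy
a, b = np.polyfit(t, v, 1)                   # slope a = the "acceleration"
print(a, b)
```

Whatever structure exists inside the window, the fit reports exactly one slope, i.e. one constant acceleration.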

His data handling technique is 2nd rate.

He doesn't appear to understand what an "error analysis" is.

He won't share his raw data so that others can follow along or point out weaknesses.

He accuses those disagreeing with his conclusions of being biased & politically motivated.

Some example of scientific honesty, objectivity and rigor, no?

Tom
 

The same methods show a severe deceleration in every Verinage demolition and this agrees with the results of everyone else who has measured the fall rate of those collapses. These methods also showed WTC 7 was undergoing full freefall acceleration during the first 2.25 seconds of its fall, which NIST admitted was true.

Given the above what logic says these methods wouldn't show a deceleration in WTC 1 if it had indeed occurred?
 
There's no conceivable way of measuring an instantaneous velocity from still video frames, so any velocity measured from a video is inevitably an average over the time interval between a given pair of frames.
I'd put it differently: For any set of sampled position data, there are infinitely many acceleration functions that are consistent with that position data. The Chandler-MacQueen-Szamboti fallacy consists (in part) of believing that their selected model of the acceleration is the best or only model possible.

On page 12 of their (as-yet-unwithdrawn?) paper, MacQueen and Szamboti claim to be looking for a 90 millisecond event, which cannot possibly be seen directly using their effective sampling rate of 3 Hz. We can, however, model the acceleration within each interval using curves of that expected width.

For example, I wrote a little computer program that models the acceleration within each interval by a polynomial of even degree that's symmetric about its midpoint and constrained to equal 1g at both endpoints of the interval. With those models, Chandler's data imply at least some sort of jolt within all but 4 of his 20 1/5-second intervals. When the polynomial is a parabola, 8 of his 20 intervals contain jolts greater than 1g; in other words, the velocity actually decreases during those 8 intervals. With polynomials of higher degree, as needed to match the 90 millisecond events anticipated by MacQueen and Szamboti, the number of intervals in which the velocity decreases is greater than 8.
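The parabola case can be sketched as follows (my own reconstruction of the method described, not the poster's actual program): within each interval of length T, model the deceleration as a symmetric parabolic bump that is zero at both endpoints, so the net acceleration equals 1g there. Matching the bump's mean to the observed average acceleration ā = Δv/T fixes its peak at J = 1.5·(g − ā), and the interval contains a jolt greater than 1g (an instantaneous velocity decrease) exactly when J > g, i.e. when ā < g/3.

```python
# Reconstruction of the parabola-jolt model described above.
# Downward is positive; g = 9.81 m/s^2.
G = 9.81

def peak_jolt(avg_accel):
    """Peak deceleration J of a symmetric parabolic jolt that vanishes at
    both interval endpoints; its mean over the interval is (2/3)J, which
    must equal g - avg_accel, giving J = 1.5 * (g - avg_accel)."""
    return 1.5 * (G - avg_accel)

def velocity_decreases(avg_accel):
    """True when the modelled jolt exceeds 1 g, i.e. the instantaneous
    downward velocity drops at some point inside the interval."""
    return peak_jolt(avg_accel) > G   # equivalent to avg_accel < G / 3

# Fabricated per-interval average accelerations (m/s^2, downward positive):
for a_bar in [9.0, 6.0, 3.0, 0.0]:
    print(a_bar, round(peak_jolt(a_bar), 2), velocity_decreases(a_bar))
```

Note the key point: even an interval whose *average* acceleration is well below 1g can hide a brief jolt well above 1g once you insist the acceleration returns to 1g at the sample points.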

When the same automated program is applied to the hand-collected data reported by MacQueen and Szamboti, a similar picture emerges. We see decelerations in 8 of their 19 intervals. When the jolts are modelled by parabolas, we see jolts greater than 1g (that is, the instantaneous velocity actually decreases at some point during the interval) in 4 of their 19 intervals. When the jolts are modelled by symmetric polynomials of degree 10, in order to match their anticipated 90 millisecond events, we see the velocity decreasing in 6 of their 19 intervals, including decelerations of approximately 3g at 0.5, 1.75, and 3.1 seconds.

In my opinion, the data strongly suggest that the deceleration events were longer than 90 milliseconds, as you'd expect from a collapse that's rather more chaotic than a controlled Verinage demolition.

As you can see from his post in this thread, Tony Szamboti believes the collapse should look just like a controlled Verinage demolition, presumably because he believes it was a controlled demolition. That's an example of the fallacy known as begging the question.
 
Tony,

Let's take first things first.

These methods also showed WTC 7 was undergoing full freefall acceleration during the first 2.25 seconds of its fall, which NIST admitted was true.

NIST "admitted" nothing of the sort.

Care to try to restate this with the slightest amount of rigor or accuracy?

And, no, this is not just a hair-split. But turns out to be the cornerstone of why your arguments are baseless.

Tom
 
Given the above what logic says these methods wouldn't show a deceleration in WTC 1 if it had indeed occurred?

Given that your data shows one deceleration and femr2's data shows two, who cares?

This is getting bizarre. You're claiming that the absence of a deceleration is suspicious, yet neither you nor anyone else has yet produced a dataset that doesn't show a deceleration. You're getting as irrational as the no-planers; your entire argument is reduced to denying the existence of what's right before your eyes.

Dave
 

Dave, it seems you are now reduced to blowing smoke and telling falsehoods.

You know the point you and a couple others here claim was a deceleration was proven to be a small measurement artifact.

You are wrong that nobody has produced a dataset that doesn't show deceleration, as the data we took with the automated Tracker program showed no deceleration whatsoever. I showed you that here and yet you have the audacity to say nobody has produced anything like it.

If anyone is denying reality it is you.
 
You ignored the first 1.75 seconds.


You are a liar.

No, the building exterior falls in a sudden way as a unit and does not fall at less than freefall acceleration for the first 1.75 seconds.

I have found that comments like this are usually a reflection of the commenter's psychology rather than that of those they are commenting on.
 

If you are actually claiming that NIST does not admit that WTC 7 fell at freefall acceleration for 2.25 seconds then you aren't worth responding to.
 
You are wrong that nobody has produced a dataset that doesn't show deceleration, as the data we took with the automated Tracker program showed no deceleration whatsoever. I showed you that here and yet you have the audacity to say nobody has produced anything like it.

Remind me where you "showed" me that your Tracker data doesn't show any decelerations; I seem to have missed the post where you presented your raw data for inspection.

Dave
 
Vuja-de...

[Image: http://femr2.ucoz.com/_ph/6/378476413.png]


No point arguing about it. There are points of deceleration.

New data-sets, if done properly, should show similar low-magnitude points of deceleration.

SynthEyes is much more useful for performing video feature tracking than something like Physics Toolkit.
 

These points you claim to be deceleration do not seem to correlate with collisions between stories, and even if real, they are quite small and unlikely to be a cause of continuing collapse.

You are also the only one who has measured the descent of WTC 1 who claims to have measured deceleration, and seem to be arguing that you are the only one who is measuring it correctly. The problem with that is that everyone who measures the Verinage collapses sees the deceleration.

All of these reasons cause me to suspect that the little blips you show in your WTC 1 data are not real decelerations.

Additionally, the program we recently measured WTC 1 with, and saw no deceleration, was Tracker, not Physics Toolkit. Tracker's primary purpose is to measure video motion and analyze it.
 
These points you claim to be deceleration do not seem to correlate with collisions between stories and even if real, they are quite small and unlikely to be a cause of continuing collapse.
Yes, they don't correlate directly with a simplified view of *expected* floor collisions separated by ~12 ft, but then I wouldn't really expect them to. The feature tracked was the NW corner, not a floor assembly.

You are also the only one who has measured the descent of WTC 1 who claims to have measured deceleration, and seem to be arguing that you are the only one who is measuring it correctly.
No, Achimspok has also replicated similar results, and I have already presented you with a graph of his work. A full 15 pages of such are included within the *missing jolts found* thread over at the911forum.

The problem with that is that everyone who measures the Verinage collapses sees the deceleration.
Apples and oranges, Tony. But I'm aware you don't agree on this point.

All of these reasons cause me to suspect that the little blips you show in your WTC 1 data are not real decelerations.
The tracing was performed with great care and you have access to the raw data if you should choose to verify it.
I have no reason to try and deceive you or anyone else Tony.

Additionally, the program we recently measured WTC 1 with, and saw no deceleration, was Tracker, not Physics Toolkit. Tracker's primary purpose is to measure video motion and analyze it.
Okay, SynthEyes is much better than any other system I've ever seen for tracing. Yes, I've looked at Tracker.

Which brings me to...

I asked a while back for you to detail the technical procedures you were using when performing the tracing, but received no reply I'm aware of.

The reason was to try and ensure you took heed of all the method developments we've honed over the period, in order to get to the results as included above.

I'll not get into video image data handling right now, but do request you make the steps from base video public (or private back home) asap.

One thing I will point out now though...

From the post you linked above, it appears you are using a 0.1666... second sample interval.

The tracing performed by myself and achimspok uses de-interlaced copies of the Sauret footage, and so a 59.94-samples-per-second rate. A sub-pixel-accurate automated tracing method is an absolute must as well.

1/59.94 = ~0.0167 second sample interval.

Roughly ten times the sample resolution you are using.

The *real* decelerations in my graph are indeed of very low magnitude.

You are NOT going to find them with such a crude sample rate.

If you seriously want to try and find such, you're going to have to be very careful in how you handle the original video data (to retain its quality) and most definitely up your sample rate.

You'll then obviously have to decide how you want to go about noise dampening in your resultant data.

I used a 9 sample symmetric differencing for the graph above if I recall correctly.
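One plausible reading of "9 sample symmetric differencing" (an assumption on my part, not femr2's published procedure) is a central difference spanning 9 samples, i.e. the positions 4 samples either side of the point of interest:

```python
# Assumed form of a 9-sample symmetric difference: velocity at sample i
# from the positions 4 samples either side (a 9-sample span), which
# smooths frame-to-frame noise at the cost of time resolution.
DT = 1.0 / 59.94          # de-interlaced field interval, ~0.0167 s

def sym_diff_9(x, i, dt=DT):
    return (x[i + 4] - x[i - 4]) / (8 * dt)

# Fabricated linear motion: position grows 0.5 px per sample.
x = [0.5 * n for n in range(12)]
print(sym_diff_9(x, 5))   # 0.5 px/sample * 59.94 samples/s = 29.97 px/s
```

On clean linear motion it recovers the true slope exactly; on noisy data it averages the noise over nine samples, which is the point of using it.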

ETA: As another example, here's a recent trace for WTC 1 horizontal movement from here...
[Image: http://femr2.ucoz.com/_ph/6/238393243.png]

You should be able to tell where the major camera shake is (between frames 1150-1250).

Black thick line is horizontal movement of the NW corner.

Grey is raw NW corner.

Blue is static point.

The static point MUST be taken into account when performing such tracing, to take account of low-level camera shake.

Vertical axis is PIXELS btw.
 

This is all missing the point. The reality is that any amplified load capable of continuing the collapse naturally could only have been a result of serious deceleration and velocity loss which would easily be observed. This is shown by the fact that everyone who has measured the Verinage demolitions has observed the serious deceleration and velocity loss required in a natural collapse.

Even if those little blips you claim are decelerations are real, they are nowhere near sufficient to cause collapse, and they do nothing to diminish the case that there is no deceleration and velocity loss capable of causing a natural collapse.

You are actually muddying the waters here, since you have to know that this noise you see could not have caused the continuation of the collapses. You need to get real.
 
This is all missing the point.
I'm fully aware, and have repeatedly made clear, that I know you don't think the low-magnitude decelerations are big enough.

The reality is that any amplified load capable of continuing the collapse naturally could only have been a result of serious deceleration and velocity loss which would easily be observed.
You are assuming a rigid body, and impact between elements which did not collide.

This is shown by the fact that everyone who has measured the Verinage demolitions has observed the serious deceleration and velocity loss required in a natural collapse.
Again, apples and oranges. I do not have any reason to think that the jolts you suggest should be there for WTC 1 are supported by metrics from Verinage demolitions. The structures are entirely different, and the mode of destruction is also different.

Even if those little blips you claim are decelerations are real,
They are.

they are nowhere near sufficient to cause collapse,
Wrong end of the chain Tony.


There's only partial collision between a minority of perimeter columns, it's very unlikely that there will be much collision between core columns and anything but floor assemblies or core cross-bracing, and only around the 70 MJ mark is required to separate an entire floor from its connections to both core and perimeter.

You've seen the FEA of column impacts and *jolt* magnitude clearly diminishes the further you get from the contact point even in ideal conditions.

What is colliding with what that you think should propagate all the way to the NW corner through the non-rigid, flexible and compressible structure of the upper block of WTC 1?


(oh I really shouldn't have asked that. Tony, PLEASE don't go on autopilot and suggest core column impacts. The graph I included above should make you think before you leap there)
 
No, the building exterior falls in a sudden way as a unit and does not fall at less than freefall acceleration for the first 1.75 seconds.

I have found that comments like this are usually a reflection of the commenter's psychology rather than that of those they are commenting on.

The inconvenient truth is that you are in fact a proven liar. We all know it. You know it.
 
