
Care to Comment

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.


Could you share with us the measurements of those two distances?
 
You are somehow skipping several steps when you jump to working in your alleged 1g error with measuring distance in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.

I see you have since re-examined the collapse using a different tool, but let's put some numbers to the uncertainty since you seem to be questioning my math.

Here is a subset of data from page 7 of your paper:

Time (sec) Roof fall distance (feet)
1.000 11.44
1.167 14.96
1.334 20.24

We can see that the average velocity between points 1 and 2 is 21.08 f/s, and 31.62 f/s between points 2 and 3. This gives an average acceleration of 63.11 f/s2 over these three points. Since this is nearly 2g - which is impossible - we know your error is at least 1g. Back-calculating to the distances, we find that the related error is at most 0.89 feet for any of the three data points, which is a significant percentage of the change in fall distance between data points.
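The arithmetic above is easy to check with a few lines of Python (the values come straight from the quoted table; g is taken as 32.17 ft/s²):

```python
# Check the average velocities and acceleration from the quoted page-7 data.
t = [1.000, 1.167, 1.334]   # time (sec)
d = [11.44, 14.96, 20.24]   # roof fall distance (feet)

v12 = (d[1] - d[0]) / (t[1] - t[0])   # avg velocity between points 1 and 2
v23 = (d[2] - d[1]) / (t[2] - t[1])   # avg velocity between points 2 and 3
a = (v23 - v12) / (t[2] - t[1])       # avg acceleration over the three points

print(round(v12, 2), round(v23, 2), round(a, 2))  # 21.08 31.62 63.11
print(round(a / 32.17, 2))                        # 1.96 (nearly 2 g)
```

Since 63.11 ft/s² is nearly 2g, and the true acceleration cannot exceed 1g, the implied error is at least 1g, as stated.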
 
I still see this implicit assumption that you actually know the resolution which you have in the video. All Tony did in his paper was divide some number of pixels by some other number and correct for what he thought was camera angle. The smallest things that I can see are some of the antennae on the television tower, which are larger than one foot (maybe three?). So it's absolutely no surprise to me that the results might show anything. If you can't demonstrate a specific one foot (one pixel) object, then I think that this whole attempt to detect a jolt based on video evidence is completely flawed at the outset. In the Verinage videos, what is the calculated resolution? No one seems to know that figure either. Making "more accurate" measurements doesn't get over this problem at all. How about this: what's the smallest object you can point to in any of these videos, and how big is it? I must be missing something here since no one else seems to be concerned about this - comment?
Rgrds-Ross
 
I still see this implicit assumption that you actually know the resolution which you have in the video. All Tony did in his paper was divide some number of pixels by some other number and correct for what he thought was camera angle. The smallest things that I can see are some of the antennae on the television tower, which are larger than one foot (maybe three?). So it's absolutely no surprise to me that the results might show anything. If you can't demonstrate a specific one foot (one pixel) object, then I think that this whole attempt to detect a jolt based on video evidence is completely flawed at the outset. In the Verinage videos, what is the calculated resolution? No one seems to know that figure either. Making "more accurate" measurements doesn't get over this problem at all. How about this: what's the smallest object you can point to in any of these videos, and how big is it? I must be missing something here since no one else seems to be concerned about this - comment?
Rgrds-Ross

Full details of various tracing methods are found in the following thread, from about page 15 onwards (a long read):
http://the911forum.freeforums.org/missing-jolts-found-film-at-11-t222.html
I'm not advocating the methods Tony used in the original paper, but the techniques used in that thread have developed such that very small displacements can be detected accurately, well sub-foot. See the image I posted earlier. Forgot to highlight its axes. The *smooth* line is vertical displacement of the NW corner in feet, with 59.94 samples per second base footage, against time in seconds. The *wobbly* line is velocity and is scaled correctly on the time axis, but arbitrarily on the vertical axis.
 
To appreciate the full sophistication of Tony Szamboti's argument, consider these facts concerning the data presented in the paper by MacQueen and Szamboti:

  1. The quantization error in the position data is plus or minus 1/2 pixel.
  2. Velocities are calculated from the positions by differencing (whether simple or balanced).
  3. The quantization errors for adjacent position measurements are independent, so the correct way to calculate the worst case quantization errors for those differences is to subtract the most negative possible position error from the most positive position error, and vice versa.
  4. 1/2-(-1/2)=1
  5. -1/2-(1/2)=-1
  6. Hence the quantization error in the differences used to calculate velocities is plus or minus 1 full pixel.
  7. Each pixel represents about 0.88 feet.
  8. Hence the quantization error in the calculated velocities is plus or minus 0.88 feet per interval.
  9. For simple differencing, the interval is 1/6 second.
  10. For balanced differencing, the interval is 1/3 second.
  11. Hence the quantization error in the calculated velocities is plus/minus 5.28 ft/s for simple differencing (the "per interval" means you divide by 1/6 second, which is the same as multiplying by 6) or plus/minus 2.64 ft/s for balanced differencing (you multiply 0.88 by 3).
  12. Accelerations are calculated from the velocities by differencing (whether simple or balanced).
  13. Although the errors in adjacent velocities are not entirely independent (the estimates for two adjacent velocities cannot both be at the high end of the quantization error, nor can they both be at the low end), the worst case difference for two adjacent velocity estimates comes when the quantization error for one of those estimates is at the high end and the other at the low end. That situation is entirely possible.
  14. Hence the correct way to calculate the worst case quantization errors for those differences is to subtract the most negative possible velocity error from the most positive velocity error, and vice versa.
  15. 5.28-(-5.28)=10.56
  16. -5.28-(5.28)=-10.56
  17. 2.64-(-2.64)=5.28
  18. -2.64-(2.64)=-5.28
  19. Hence the quantization error in the calculated accelerations is plus/minus 10.56 ft/s2 per interval for simple differencing, and plus/minus 5.28 ft/s2 per interval for balanced differencing.
  20. Hence the total quantization error for the calculated accelerations is plus/minus 63.36 ft/s2 for simple differencing (you multiply by 6), and plus/minus 15.84 ft/s2 if the accelerations are also calculated via balanced differencing (you multiply by 3).
  21. Notice, however, that the MacQueen and Szamboti paper never calculates, tabulates, or graphs accelerations at all; the paper tabulates and graphs velocities only, and compares adjacent velocities visually, as Tony has continued to do in this thread.
  22. That's equivalent to calculating the accelerations via simple differencing from velocities calculated via balanced differencing.
  23. The total quantization error for accelerations calculated via simple differencing from velocities calculated via balanced differencing is plus or minus 31.68 ft/s2 (obtained by dividing the plus/minus 5.28 ft/s quantization error of the calculated velocities by the 1/6 second that separates adjacent estimates of the velocity).
  24. 31.68 ft/s2 is approximately 1g.
  25. Hence the quantization error for the accelerations that Tony Szamboti has been discussing in this thread is plus or minus 1g.
  26. To compute the total error bound for the calculated accelerations, we'll have to add measurement error to that plus/minus 1g quantization error.
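The 26-step error propagation above can be condensed into a short Python sketch (constants as stated in the list: 0.88 ft/pixel, ±1/2 pixel position quantization, 1/6 s and 1/3 s intervals):

```python
# Worst-case quantization error propagation for differenced video data.
PIXEL_FT = 0.88        # feet per pixel (step 7)
POS_ERR_PX = 0.5       # +/- half-pixel position quantization (step 1)
SIMPLE_DT = 1 / 6      # interval for simple differencing (step 9)
BALANCED_DT = 1 / 3    # interval for balanced differencing (step 10)

# Differencing two independently quantized positions doubles the
# worst-case error (steps 3-6): +/- 1 pixel, i.e. +/- 0.88 ft.
diff_err_ft = 2 * POS_ERR_PX * PIXEL_FT

# Velocity errors (step 11).
v_err_simple = diff_err_ft / SIMPLE_DT       # +/- 5.28 ft/s
v_err_balanced = diff_err_ft / BALANCED_DT   # +/- 2.64 ft/s

# Differencing velocities doubles the error again (steps 13-19), and
# dividing by the interval gives the acceleration error (steps 20 and 23).
a_err_simple = 2 * v_err_simple / SIMPLE_DT        # +/- 63.36 ft/s^2
a_err_balanced = 2 * v_err_balanced / BALANCED_DT  # +/- 15.84 ft/s^2
a_err_mixed = 2 * v_err_balanced / SIMPLE_DT       # +/- 31.68 ft/s^2
```

The mixed case (balanced velocities, simple-differenced accelerations) gives ±31.68 ft/s², within a couple of percent of 1g (32.17 ft/s²), matching step 24.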

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.
What's going on here is that Tony Szamboti is warning us against taking the data presented in his paper too seriously. He wants us to add some positive measurement error to the inherent plus or minus 1g quantization error of the accelerations he gets by simple differencing of velocities calculated via balanced (symmetric) differencing.

You are somehow skipping several steps when you jump to working in your alleged 1g error with measuring distance in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.
So I filled in the skipped steps. As for that last sentence...

Feckless arrogance rocks.

I see you have since re-examined the collapse using a different tool, but let's put some numbers to the uncertainty since you seem to be questioning my math.

Here is a subset of data from page 7 of your paper:

Time (sec) Roof fall distance (feet)
1.000 11.44
1.167 14.96
1.334 20.24

We can see that the average velocity between points 1 and 2 is 21.08 f/s, and 31.62 f/s between points 2 and 3. This gives an average acceleration of 63.11 f/s2 over these three points. Since this is nearly 2g - which is impossible - we know your error is at least 1g. Back-calculating to the distances, we find that the related error is at most 0.89 feet for any of the three data points, which is a significant percentage of the change in fall distance between data points.
 
Thank you WD and AZCat I love math porn.

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.

Tony
I think by now the data should be "done."
What are those two measurements for the distance traveled?
 
Tony
I think by now the data should be "done."
What are those two measurements for the distance traveled?

The data taken with the Tracker program gives the following values at the times between 1.500 and 2.000 seconds into the fall, which we were discussing. The data is Time (sec), Vertical distance traveled (ft.), and Delta distance traveled (ft.) from the last measurement.

1.500 sec, 17.361 ft.
1.667 sec, 22.055 ft., 4.694 ft.
1.834 sec, 27.395 ft., 5.340 ft.
2.000 sec, 33.487 ft., 6.092 ft.

In the entire overall measurement set, taken over about 3.3 seconds, the distance traveled in a given length time increment is always greater than it was in the previous time increment of equivalent length, so the velocity is constantly increasing. I will be updating the Missing Jolt paper with data taken using the Tracker program.
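The claim about the deltas can be checked directly from the four values posted:

```python
# Tracker data from the post: (time in s, vertical distance traveled in ft).
data = [(1.500, 17.361), (1.667, 22.055), (1.834, 27.395), (2.000, 33.487)]

# Delta distance per 1/6 s interval; strictly increasing deltas mean the
# average velocity increased every interval (no deceleration in this span).
deltas = [d2 - d1 for (_, d1), (_, d2) in zip(data, data[1:])]
print([round(x, 3) for x in deltas])                   # [4.694, 5.34, 6.092]
print(all(a < b for a, b in zip(deltas, deltas[1:])))  # True
```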
 
I will be updating the Missing Jolt paper with data taken using the Tracker program.

Will you, in the meantime, withdraw the current version of the paper, which we now all know is based on a dataset that exhibits noise artefacts that mimic the effect whose absence is claimed in the discussion? Will you also submit your alterations to the same peer review process that failed to spot this glaring error in the paper, or will you be looking for some better reviewers?

Dave
 
The data taken with the Tracker program gives the following values at the times between 1.500 and 2.000 seconds into the fall, which we were discussing. The data is Time (sec), Vertical distance traveled (ft.), and Delta distance traveled (ft.) from the last measurement.

1.500 sec, 17.361 ft.
1.667 sec, 22.055 ft., 4.694 ft.
1.834 sec, 27.395 ft., 5.340 ft.
2.000 sec, 33.487 ft., 6.092 ft.
Just for grins, let's compare that new data to the data on page 7 of the current version of the paper. While we're at it, let's add the velocities computed by dividing the simple differences shown above by 1/6 second.

The new position data, simple differences, and velocities are shown in green, with the old position data, simple differences, and velocities shown in brown.

Position data:
1.500 sec: 17.361 ft versus 25.52 ft
1.667 sec: 22.055 ft versus 32.56 ft
1.834 sec: 27.395 ft versus 38.72 ft
2.000 sec: 33.487 ft versus 45.76 ft

Simple differences:
1.500-1.667 sec: 4.694 ft versus 7.04 ft
1.667-1.834 sec: 5.340 ft versus 6.16 ft
1.834-2.000 sec: 6.092 ft versus 7.04 ft

Velocities calculated from simple differences:
1.500-1.667 sec: 28.164 ft/s versus 42.24 ft/s
1.667-1.834 sec: 32.040 ft/s versus 36.96 ft/s
1.834-2.000 sec: 36.552 ft/s versus 42.24 ft/s
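The velocity tables above follow mechanically from the two position series; here is the computation, using the nominal 1/6 s sample spacing (note that 6.16 ft per 1/6 s works out to 36.96 ft/s):

```python
# New (Tracker) vs old (page 7) positions at 1.500, 1.667, 1.834, 2.000 s.
new = [17.361, 22.055, 27.395, 33.487]   # feet
old = [25.52, 32.56, 38.72, 45.76]       # feet
DT = 1 / 6                               # nominal sample spacing, seconds

def velocities(pos):
    """Velocities from simple differences of adjacent positions."""
    return [(b - a) / DT for a, b in zip(pos, pos[1:])]

print([round(v, 3) for v in velocities(new)])  # [28.164, 32.04, 36.552]
print([round(v, 2) for v in velocities(old)])  # [42.24, 36.96, 42.24]
```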

At least one of the following statements must be true:
  • The raw data presented in the current version of the paper are really, really bad - so bad, in fact, that every argument that has ever been based upon that data should be retracted/rejected pending analysis of better data.
  • The new and old data were measured using different origins for time, which means Tony was wrong when he said above that the new data for 1.5-2.0 seconds correspond to the old data for the interval we've been discussing.
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave

Who knows? Tony is still reading this forum (his last activity was last night, according to his profile) so maybe he'll deign to enlighten us.
 
Oh, BLAH-DEE-BLAH, BLAH! Why does there need to have been a jolt in the first place? This was not exactly like verinage. It was about as close as you can get to it, but not quite.

There would be a jolt IF a separated geometric solid fell onto the standing geometric solid that was the lower floors of the towers. The top part just settled very quickly onto the bottom part. Instead of a jolt, you get the vertical movement converted to horizontal movement at varying points. The rotating of the top of the south tower shows this very clearly.

Stop obsessing over number-crunching and just look at the damned towers. You don't even need to know the compressibility of any of the columns. They weren't destroyed by compression. Bazant was a bit off when he said that crush-up would be the final stage of the collapse, but his reasoning was sound. Just not that great an observer.
 
No sign of the paper having been withdrawn or amended yet. I'd be interested to know whether this is because the editors refuse to withdraw it, because the authors are happy for conclusions from a near-useless dataset to remain in publication under their names, or because nobody thinks any of this is worth bothering to put right. Tony, care to comment?

Dave

I think the quote "the man doth protest too much" is appropriate here.

The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever in the fall of WTC 1, so the premise of the paper is sound.

I will be revising the paper shortly to use the more accurate data which more soundly supports the premise.

Zdenek Bazant had more than just artifacts in his papers on this issue and there has never been a revision to correct those errors let alone a retraction.
 
The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever

I think you need to clarify *no deceleration*.

[image: 378476413.jpg - displacement and velocity graph]


The lower line is velocity, but hope it's clear that at least one velocity *decrease* is evident to you.

I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?
 
The data taken with the more sophisticated Tracker program shows there was no deceleration whatsoever in the fall of WTC 1, so the premise of the paper is sound.
:p

The paper's premise was
...a refutation that is:

  • easy to understand but reasonably precise
  • capable of being stated briefly
  • verifiable by any reader with average computer skills and a grasp of simple mathematics.
As readers with a grasp of grade-school arithmetic have known for several months now, the raw unsmoothed data presented by MacQueen and Szamboti refute the main claim of their paper.

Even if that had not been so, the raw data presented in their paper have neither the accuracy nor the resolution necessary to support the authors' primary claim (that no deceleration occurred).

I will be revising the paper shortly to use the more accurate data which more soundly supports the premise.
The new data could hardly support the premise less soundly than the old.

Tony's been claiming that his raw position data were accurate to within ±0.44 ft. Until, that is, he decided to present "more accurate data" which differ from his allegedly ±0.44 ft data by 8 to 12 feet.

Tony's new claims imply he has been exaggerating the accuracy of his data by a factor of more than 15, which means the accelerations derived from his data contained potential errors of over ±15g.

Remembering that history, I trust femr2's data more than Tony's:
I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?
Yes, there is much to be said for pre-publication peer review.

Had MacQueen and Szamboti availed themselves of competent peer review, the paper's failings would have been communicated to the authors via confidential channels, and all of us would have been deprived of the ensuing public hilarity.
 
I think you need to clarify *no deceleration*.

[qimg]http://femr2.ucoz.com/_ph/6/2/378476413.jpg[/qimg]

The lower line is velocity, but hope it's clear that at least one velocity *decrease* is evident to you.

I assume your new data shows the *decreases in rate of acceleration* previously identified.

Given we've been through the issues with the original data collection methods, it would be useful if you could provide precise details of the data capture methods at the earliest possible point in time. Over at the911forum is fine by me, but it would be counter-productive to go through the process of a paper update to have the capture method criticised, yes?

You should put some labels on your axes so one can tell what you measured.

I haven't taken a second derivative of the Tracker measurements yet and I will say again that decreases in the rate of acceleration aren't germane to the argument. Real deceleration is needed to cause load amplification. It just isn't there.

It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
 
You should put some labels on your axes so one can tell what you measured.
True. The axes are vertical drop of NW corner, in feet, and time, in seconds....for the *smooth* position graph. The *wobbly* velocity graph is correctly aligned on the time axis, but arbitrary on the vertical axis. As my intention was simply to find the low magnitude jolts, it's not too important. If you want the raw position data, no probs. (Am sure I've already given you the link, and the data itself in CSV form)

I haven't taken a second derivative of the Tracker measurements yet and I will say again that decreases in the rate of acceleration aren't germane to the argument.
But as I've tried to remind you, the argument doesn't take account of the lack of upper block rigidity, the probability of CC jolts actually transmitting to the NW corner, or the recent FEA showing rapid jolt magnitude reduction as distance from impact location increases (and that's with simple perimeter assembly FEA).

My point, however, was simply to highlight that stating *no deceleration whatsoever* ain't a great thing to say.

Real deceleration is needed to cause load amplification. It just isn't there.
Please define *real* deceleration, as opposed to acceleration rate reduction.

It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
To be honest, what's the point? As long as the tracking method uses deinterlaced, stabilised DVD quality footage at 59.94fps and preferably automated on visual features to output sub-pixel feature position (by eye is just no good), and again preferably with static feature position subtraction, most definitely using full 24bit colour depth... the resultant base data is all going to be fairly similar. As it's jolts that are being looked for, accounting for viewpoint perspective is not a big issue.

As derivatives are taken, there is likely to be differing smoothing methods used, but with suitably high sample rates (59.94sps) I see no reason that quite wide symmetric differencing should not be acceptable.
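As a sketch of the symmetric differencing described here - central differences over a window of samples at 59.94 sps - a minimal illustration on synthetic free-fall data (the trace is made up; only the method is the point):

```python
# Central (symmetric) differencing of a position trace sampled at 59.94 sps.
# The trace below is synthetic ideal free fall, for illustration only.
FPS = 59.94          # samples per second
G = 32.17            # ft/s^2

pos = [0.5 * G * (n / FPS) ** 2 for n in range(60)]   # ~1 s of ideal fall

def central_velocity(p, half_width=3):
    """Velocity via symmetric differencing over +/- half_width samples."""
    w = half_width
    return [(p[i + w] - p[i - w]) * FPS / (2 * w)
            for i in range(w, len(p) - w)]

v = central_velocity(pos)
# For uniform acceleration the central difference is exact at the midpoint:
# v[i] corresponds to t = (i + half_width) / FPS and equals G * t.
```

Widening half_width trades time resolution for noise suppression, which is exactly the trade-off at issue when looking for short jolts.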

There's no massive jolt. It's not there. Some little ones, but that's all. I think everyone is clear on that.

What would be really interesting is application of some structural engineering knowledge I personally don't have to determine the actual effect of the non-rigid structure (and wotnot, including the very minor tilt) on the transmission of any actual large jolts through the various building elements to the NW corner.

Even if a large section of core *vanished* there are still many building elements colliding, the jolts from which do not reach the roofline in any sort of scale you expect, which is a bit of a paradox for the argument.
 
It would be interesting to see some of the complainers here, like W.D. Clinger, take some of their own measurements.
The claims are yours, Tony. As I told you several months ago, the very first thing you should have done was to determine whether the available data are good enough to detect the jolt you think is missing.

They weren't. Had you taken a moment to consider the Nyquist rate for your alleged 90ms jolt, or performed a forward error analysis as described in chapter 1 of R W Hamming's Numerical Methods for Scientists and Engineers, then you'd have known better than to go public with your argument. Estimating the Nyquist rate should have taken you about two seconds; if your math skills are rusty, then the forward error analysis might have taken you a couple of minutes.
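The Nyquist estimate mentioned above really is a two-second calculation (treating the alleged 90 ms jolt as a feature whose dominant frequency content is around 1/0.090 s):

```python
# Rough Nyquist check: can samples taken every 1/6 s resolve a 90 ms jolt?
jolt_duration = 0.090              # seconds (the alleged jolt)
jolt_freq = 1 / jolt_duration      # ~11 Hz dominant frequency content
nyquist_rate = 2 * jolt_freq       # ~22 samples/s required
paper_rate = 6                     # samples/s in the paper (one per 1/6 s)

print(round(nyquist_rate, 1), paper_rate)   # 22.2 6
print(paper_rate >= nyquist_rate)           # False: the jolt is unresolvable
```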

Your argument divides into two main parts:
  1. The upper block fell so cleanly onto the lower block that one would expect one large, clear jolt instead of a near-continuous cascade of lesser jolts.
  2. Observations show there was no jolt.
Your own data showed an apparent jolt, and it was gob-smackingly obvious that your data weren't good enough to rule out the possibility of other unobserved jolts. That's why we've been discussing your spectacular failure to establish the second point.

Note well, however, that you haven't established the first point either. I expect to see less than 1g acceleration (as in the data) but I do not expect to see a single jolt that's large enough to show up in the best possible analysis of the available data. Yes, I understand how there could be such a jolt; I also understand how there might not be such a jolt. I therefore have no reason to care about the second part of your argument: It doesn't matter whether the downward acceleration was relatively smooth or was punctuated by large jolts.

Because I understand that it doesn't matter, there is no reason for me to take better measurements. If you persist in your two-part argument, however, then you will need far better data to support the second part of your argument, and you will also need a far more convincing argument for the first part as well.
 
The claims are yours, Tony. As I told you several months ago, the very first thing you should have done was to determine whether the available data are good enough to detect the jolt you think is missing.

They weren't. Had you taken a moment to consider the Nyquist rate for your alleged 90ms jolt, or performed a forward error analysis as described in chapter 1 of R W Hamming's Numerical Methods for Scientists and Engineers, then you'd have known better than to go public with your argument. Estimating the Nyquist rate should have taken you about two seconds; if your math skills are rusty, then the forward error analysis might have taken you a couple of minutes.

Your argument divides into two main parts:
  1. The upper block fell so cleanly onto the lower block that one would expect one large, clear jolt instead of a near-continuous cascade of lesser jolts.
  2. Observations show there was no jolt.
Your own data showed an apparent jolt, and it was gob-smackingly obvious that your data weren't good enough to rule out the possibility of other unobserved jolts. That's why we've been discussing your spectacular failure to establish the second point.

Note well, however, that you haven't established the first point either. I expect to see less than 1g acceleration (as in the data) but I do not expect to see a single jolt that's large enough to show up in the best possible analysis of the available data. Yes, I understand how there could be such a jolt; I also understand how there might not be such a jolt. I therefore have no reason to care about the second part of your argument: It doesn't matter whether the downward acceleration was relatively smooth or was punctuated by large jolts.

Because I understand that it doesn't matter, there is no reason for me to take better measurements. If you persist in your two-part argument, however, then you will need far better data to support the second part of your argument, and you will also need a far more convincing argument for the first part as well.

Unfortunately for you, the Verinage demolitions refute what you are saying. They all show large decelerations, which have been observed every time someone measures their falls. The Verinage demolitions need the dynamic load caused by the deceleration of the upper section in order to continue their collapse.

The lack of a jolt in WTC 1 proves there was no dynamic load and there is no other natural way for the building to collapse with the large reserve strength in the columns below. The tilt does not explain it as even separate impacts would show a deceleration and there is no chance all of the columns missed each other.

I think those of you who claim to believe these buildings could have collapsed naturally without a deceleration are playing games. There isn't a chance that could have happened, and all of your posturing won't change that reality.
 
