

It's simple arithmetic. The discrepancy between the raw and the smoothed data comes about because of the smoothing procedure you're applying. It should be obvious that a smoothing algorithm removes spurious peaks and troughs in data; that is, after all, exactly what a smoothing algorithm is supposed to do. Unfortunately, it will just as easily remove significant peaks and troughs in data. Since you're looking for a significant trough, you shouldn't be smoothing the data. If your data is noisy enough that you need to smooth it, then it's too noisy to detect the presence or absence of the feature you're looking for.

This is first year undergraduate stuff, if that.
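
As a minimal illustration of that point (made-up numbers, not anyone's measurement data), a three-point moving average can erase a genuine one-sample drop entirely:

```python
# Made-up series with a genuine one-sample drop at index 3 (2.0 -> 1.4).
raw = [0.0, 1.0, 2.0, 1.4, 4.0, 5.0, 6.0]

# Three-point moving average, the simplest smoothing scheme.
smoothed = [(raw[i - 1] + raw[i] + raw[i + 1]) / 3.0 for i in range(1, len(raw) - 1)]

print(smoothed)  # roughly [1.0, 1.47, 2.47, 3.47, 5.0]: monotonically increasing,
                 # so the drop that is plainly visible in the raw series is gone
```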

Dave

Had WTC 1 collapsed naturally, the size of the discontinuity would be quite large, and it certainly shows itself in the Verinage demolitions.

The large deceleration seen in all of the Verinage demolitions with the same measurement technique completely refutes your argument here that we are smoothing out any real deceleration.

You are forced to argue that we are missing the discontinuity, yet you haven't taken any measurements yourself. The reason for that is that there was no deceleration in the WTC 1 collapse, you have no real argument, and taking measurements would prove you wrong.

Dave, it really is important to your credibility that you tell us what other work you do besides posting on this forum.
 

How many high-rise buildings have you designed?
 
balanced differencing hides the jolt

That is what I believe, and I did explain that: to prove it isn't, one has to answer the question of how the average about the data point in question can be higher than the previous average.
It's simple arithmetic. The discrepancy between the raw and the smoothed data comes about because of the smoothing procedure you're applying. It should be obvious that a smoothing algorithm removes spurious peaks and troughs in data; that is, after all, exactly what a smoothing algorithm is supposed to do. Unfortunately, it will just as easily remove significant peaks and troughs in data. Since you're looking for a significant trough, you shouldn't be smoothing the data. If your data is noisy enough that you need to smooth it, then it's too noisy to detect the presence or absence of the feature you're looking for.

This is first year undergraduate stuff, if that.


Because this is first year undergraduate stuff, it's easy to explain using a simple example. Suppose we have sampled the vertical position s of a particle or other object at times t=0, 1, 2, and so on. Suppose further that there is some controversy as to whether its velocity ds/dt ever goes negative. (For the purposes of this thread, we might think of a negative instantaneous velocity as a "jolt".)

Suppose further that, unbeknownst to us, the true position of the particle is given by s=1+t+cos(pi*t), which means the velocity really does go negative (there really are jolts, and they occur at regular intervals). As it happens, however, the resolution of our data is just barely good enough to reveal those jolts because we sampled the position at its Nyquist rate. The true value of the position is as follows, with our sampled values shown by the plus marks:

[Figure: the true position s = 1 + t + cos(pi*t), with the sampled values marked by plus signs]


The true velocity of this object ranges from a little under -2 to a little more than +4 (from 1-pi to 1+pi). If we calculate the velocity using simple backward differencing, we'll see the velocity alternating between -1 and +3. If we calculate the velocity using balanced (symmetric) differencing as was done by MacQueen and Szamboti, we'll see only the general trend, which is a constant velocity of +1:

[Figure: the true velocity compared with the backward-difference and balanced-difference estimates]


As can be seen from this example, simple differencing tends to underestimate the extremal values of the instantaneous velocity, but it provides a far more accurate picture of the instantaneous velocity than we'd get from balanced differencing.

If we were to rely on balanced differencing only, as advocated by Tony Szamboti, then we would conclude that the object's velocity never goes negative, and there is no jolt. That conclusion would be false; the smoothing performed by balanced differencing would have led us astray.
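
For anyone who wants to check the arithmetic, here is a short sketch of the same example (my own illustration of the two differencing schemes described above, not code taken from the paper under discussion):

```python
import math

# The example position s(t) = 1 + t + cos(pi*t), sampled at t = 0, 1, 2, ...
t = list(range(8))
s = [1 + ti + math.cos(math.pi * ti) for ti in t]      # 2, 1, 4, 3, 6, 5, 8, 7

# Simple backward differencing: v[i] = (s[i] - s[i-1]) / dt, with dt = 1.
backward = [s[i] - s[i - 1] for i in range(1, len(s))]

# Balanced (symmetric) differencing: v[i] = (s[i+1] - s[i-1]) / (2*dt).
balanced = [(s[i + 1] - s[i - 1]) / 2.0 for i in range(1, len(s) - 1)]

print(backward)  # alternates between -1 and +3: the negative values survive
print(balanced)  # constant +1: the negative values are smoothed away entirely
```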
 
And Tony's house of cards falls faster than freefall.

The charlatan has been exposed again.
 
When I first encountered the JOLT paper I wondered how anyone could ever prove (to me) how much detail they could see in the video. I saw no full description of the lens, camera, type of recording, etc., which made me immediately skeptical of any derived values. I decided that it was fair to ask anyone to show me a small, distinct moving object in the video whose size is known to be (say) one foot, which seems to be the resolution claimed. Without that visual proof, just dividing some number of pixels by some other number is just supposition. What seemed to be going on in the paper was that Tony was assuming perfect resolution - that the one-pixel value MUST be the resolving power of the video. Comments, as I like to say, welcome.
Rgrds-Ross
 
I am not sure how accurate the results you can get from counting pixels on a video are. What I am sure of is that there are professionals who are very sensitive about their professional reputation, and are real careful when making statements.

Try to guess who is who on this subforum.
 
I am still puzzled about the concept of a "jolt." As I get it, there should have been a massive deceleration of the upper block of the towers as they contacted the first unbroken floors or columns.

I would submit that, since the mechanism of collapse had nothing to do with vertical compression of any but a few core columns, this is not as big a deal as Tony wants us to think it is.

I get the impression, though, that we do see a manifestation of the "jolt" when the core columns collided in the "kink" near the top of the south tower.

My impression is that the collapse began with the whole upper block moving in one direction, slightly off of a straight perpendicular drop. The kink may have developed when core-column-to-core-column contact arrested movement in that direction, and the force of gravity pulling down on the hat truss was enough to overcome the momentum of the rest of the structure that was pulling it toward the other side of the building.

Sorry I can't put it into engineers' jargon. I'm in this as a firefighter and sometimes construction laborer.
 
I think of Bazant's calculations giving a factor of 10 off, under perfect conditions, and compare that to columns hitting floors. A jolt does not sound too likely to me, but as an electrician my opinion has little backup.
 
No it isn't. I'm pointing out some very obvious faults in your data analysis, not appealing to my own authority. And the deceleration is still visible in your own data.

Dave

If the one point you speak of, at 1.834 seconds, was a real deceleration, then the distance traveled of the data point at 2.000 seconds would have been less because of it. This would have caused the average velocity between the points on either side of 1.834 to be less than the average velocity between the two points on either side of the 1.667 second data point before it. But it isn't, and this is why it cannot be considered a real deceleration.

You haven't even tried to explain that.

As I have said, the same measurement technique picks up the very pronounced decelerations in every one of the verinage demolitions, where a real gravity-only collapse is known to be occurring. These decelerations are also observed over a number of data points, which is what would have happened in WTC 1 if it had a real deceleration.

You are obviously desperate and willing to pick on some small measurement noise to try and find any way of saying there was a possibility of the collapse of WTC 1 being naturally caused. I am actually sorry to say this lack of real deceleration proves it could not have been.
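
To spell out the comparison being described here, a small sketch of the two "averages about" neighbouring data points, using made-up position values (hypothetical numbers, not the measurements from the paper):

```python
# Hypothetical positions (feet fallen) at four consecutive sample times;
# these are NOT the measurements from the Missing Jolt paper.
t = [1.500, 1.667, 1.834, 2.000]   # seconds
d = [30.0, 36.0, 41.5, 48.0]       # feet (made-up values)

# "Average velocity about" a data point = the average taken across its two
# neighbours; note the average about 1.834 uses only the 1.667 and 2.000 samples.
avg_about_1_667 = (d[2] - d[0]) / (t[2] - t[0])
avg_about_1_834 = (d[3] - d[1]) / (t[3] - t[1])

print(avg_about_1_667, avg_about_1_834)   # the two averages being compared
```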
 

Considering what Dave already posted regarding the magnitude of uncertainty of your measurements, I'm not sure this is picking on "some small measurement of noise." Basically, the quality of your data sucks, and it is certainly not sufficient for you to claim that it rules out a deceleration.
 

As I said, the average about the 1.834 second data point would have to be less than the average about 1.667 seconds before it, if the point at 1.834 was a real deceleration. The fact that it isn't proves there was not a real deceleration there.
 
As I said, the next average would have to be less than the average before it, if the point at 1.834 was a real deceleration. The fact that it isn't proves it is not.

Ah, but you're calculating those averages without first knowing the uncertainty of the original data, other than that it shows at least a 1g error, which is about +/- 5 fps at your sampling rate; that is huge compared to the difference between your average velocities.
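
As a rough check of that figure (my own back-of-envelope numbers, assuming roughly a 1/6 s spacing between data points and g = 32.2 ft/s^2):

```python
# Back-of-envelope check (assumed values): an acceleration uncertainty of ~1 g
# acting over one sampling interval maps to a velocity uncertainty of g * dt.
g = 32.2        # ft/s^2
dt = 1.0 / 6.0  # s, roughly the 0.167 s spacing between the data points

print(g * dt)   # about 5.4 ft/s, i.e. roughly the +/- 5 fps quoted above
```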
 

You are somehow skipping several steps when you jump to working with your alleged 1g error while measuring distance in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.
 
Wrong.

You are somehow skipping several steps when you jump to working in g's with measuring distance in feet, which must come from your struggle to understand what constitutes a deceleration.

It seems you have been trying too hard to refute something that unfortunately is irrefutable.

Wrong several times again, Tony. Consistency is certainly your strong point.
 
Tony,

Why are you still arguing about whether your data contains an acceleration rate reduction?

You know that I, OWE, and Achimspok have all generated trace data from the same footage, which is of much higher accuracy and resolution.

You also know that data does indeed contain what I would term *mini jolts*.

I'm aware that you think the jolt magnitude(s) should be larger.

You know the upper block wasn't rigid, that there is no direct reason to think that *jolts* in the core would be transmitted such that those *jolts* would be of large magnitude at the NW corner, and that recent FEA analyses show a much lower *jolt* magnitude the further from the point of impact you look.

I have absolutely no idea why you are not using the higher resolution data in trying to state your case...
[Image: plot of the higher-resolution trace data]

...you have access to the raw data after all.

I definitely think that the lack of upper block rigidity is something you should look into further in terms of progressing your viewpoint.
 
If the one point you speak of, at 1.834 seconds, was a real deceleration, then the distance traveled of the data point at 2.000 seconds would have been less because of it. This would have caused the average velocity between the points on either side of 1.834 to be less than the average velocity between the two points on either side of the 1.667 second data point before it. But it isn't, and this is why it cannot be considered a real deceleration.

Incoherent rubbish. You're trying to claim that any isolated minimum in the data cannot be a real data point, and then concluding that there are no isolated minima in the data. You're specifically excluding from consideration the exact phenomena you claim to be looking for.

Dave
 

The original measurement data in the Missing Jolt paper was taken by hand using a pixel measuring tool called Screen Calipers.

We retook the data last night with a much more sophisticated and automated tool called Tracker, which is meant for just this sort of thing and locks onto the feature to be measured. These measurements show the distance traveled between 1.667 and 1.834 seconds into the fall of WTC 1 is greater, not less, than the distance traveled between 1.500 and 1.667 seconds into the fall.

So it was in fact noise in the hand data, probably caused by not being precisely locked on the point being measured for each measurement.
 

I'm not in the least surprised; as I've said all along, any tilt of the upper block eliminates the possibility of a significant jolt on impact, which is of course the key difference between the WTC collapses and a verinage demolition. However, the data you've published in the paper still contains a feature which your conclusion incorrectly asserts is not present, even though it's simply a noise artefact. In the circumstances, I would suggest that the minimum responsible action right now would be to withdraw your paper from the Journal of 9/11 Studies immediately, pending a re-write.

Dave
 
