• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Szamboti's Missing Jolt paper

I don't have the paper open in front of me right now, but did they even estimate the error anywhere? Seems like that would be pretty crucial in making observations of this type, otherwise you would have no idea as to whether your conclusions were even significant.

No they don't. Which is odd, particularly when they talk about the first (velocity) and second (acceleration) derivatives of the observed position. Differentiation blows up errors. Indeed, when one plots the acceleration using deltas it is basically noise. (I believe I can't post images yet, so I can't show a plot right now. Heck, there is a reason I post like mad :) ).

I believe they were very much aware of errors. They had the desire to arrive at the "actual velocity", the term they indeed use to refer to their plot: seeing the velocity as it really was, unobscured as far as possible by the fog of measurement error. That is why they chose the roundabout way of arriving at the velocity through the acceleration (why not deduce it from the position data directly?). They were thinking "constant acceleration" and used all previous data points to arrive at what they thought was a more accurate value of the velocity at a particular point in time. An idea that is valid when the acceleration is indeed constant, such as for an object in free fall. Not a good idea when the acceleration cannot be assumed constant. Even less so when you want to demonstrate that a 13 ft/s discontinuity is absent.
 
My opinion, which I'm sure impresses many around here, is that the resolution is not a consideration, since the jolt of impact between the two blocks should have been discernible on the video.

The reason that your opinion doesn't impress many around here is that it's not a matter of opinion. It's mathematically demonstrable that the jolt of impact between the two blocks could not possibly have been discernible on the video. Let me try to explain this simply.

Each of the values MacQueen and Szamboti use for the position of the roofline is based on the position of a single pixel, which represents an interval of 0.88 feet. In other words, when they quote a height as, for example, 11.44 feet, all they know about the roof height at that moment is that it was somewhere between 11.00 and 11.88 feet; they have no idea, and we cannot know from the video, where in that interval it lay.
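
To make that concrete, here's a minimal Python sketch of the quantization (the 0.88 ft/pixel scale is the one quoted above; the sample heights are made up purely for illustration):

```python
# One pixel spans 0.88 ft, so every true height inside a pixel-wide
# bin collapses to the same measured value.
PIXEL_FT = 0.88  # feet per pixel at the roofline

def measure(true_height_ft):
    """Quantize a true height to the nearest pixel reading."""
    return round(true_height_ft / PIXEL_FT) * PIXEL_FT

for h in (11.02, 11.44, 11.86):
    print(f"true height {h:.2f} ft -> measured {measure(h):.2f} ft")
# All three true heights print as 11.44 ft; the video cannot
# distinguish anything inside the 11.00..11.88 ft interval.
```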

The velocity of the roofline is the amount by which the position changes in a given time interval. We can therefore very simply calculate the velocity by subtracting the previous value from the current value, and multiplying by 6 (because each time interval is 1/6 of a second). For the point at 1 second, that means that we subtract 7.92 from 11.44, then multiply by 6, to get a velocity of 3.52 x 6 = 21.12 feet per second. However, both the 7.92 and the 11.44 values have uncertainties of 0.44 feet either way. So we're subtracting a number between 7.48 and 8.36 from a number between 11.00 and 11.88, and so all we can say about the result is that it lies somewhere between 2.64 and 4.4, and that the velocity lies somewhere between 15.84 and 26.4 feet per second.

As you can see, calculating the velocity doubles the uncertainty in the value. Since we also have to allow for the time interval, the uncertainty goes from ±0.44 feet to ±5.28 feet per second. When we calculate the acceleration, the uncertainty gets doubled again, and we have to divide by the time interval again, so the uncertainty on the acceleration now increases to ±63.36 feet per second per second. Note, now, that the acceleration due to gravity is only 32.174 feet per second per second. So when we come to calculate the acceleration, our uncertainty is about twice the value we're trying to measure.
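
If you want to check those numbers yourself, here's a minimal Python sketch propagating the ±0.44 ft pixel uncertainty through the two finite differences (the 7.92 and 11.44 ft heights are the values quoted above):

```python
# Worst-case (interval) propagation of the +/-0.44 ft position
# uncertainty through the velocity and acceleration differences.
DT = 1.0 / 6.0   # s between the sampled frames
E_POS = 0.44     # ft, half of one 0.88 ft pixel

# Velocity at t = 1 s from heights 7.92 ft and 11.44 ft:
v_lo = ((11.44 - E_POS) - (7.92 + E_POS)) / DT
v_hi = ((11.44 + E_POS) - (7.92 - E_POS)) / DT
print(f"velocity: {v_lo:.2f} .. {v_hi:.2f} ft/s")  # 15.84 .. 26.40

# Each difference doubles the uncertainty and divides by DT:
e_vel = 2 * E_POS / DT   # +/-5.28 ft/s on every velocity value
e_acc = 2 * e_vel / DT   # +/-63.36 ft/s^2 on every acceleration value
print(f"velocity uncertainty:     +/-{e_vel:.2f} ft/s")
print(f"acceleration uncertainty: +/-{e_acc:.2f} ft/s^2")
```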

Now, let's look at the actual value of acceleration at one second, the point where, according to MacQueen and Szamboti, the acceleration should dip down below zero, and they claim it stays more or less constant. There's a problem with this immediately: the velocities before and after one second are both the same, at 21.12 feet per second. This gives a calculated acceleration at this point of zero. Due to the measurement uncertainty, all we know about the acceleration is therefore that it lies somewhere between +63.36 feet per second per second and -63.36 feet per second per second.

MacQueen and Szamboti are looking for a decrease of 13.13 feet/second in a single time interval. This, with the background acceleration subtracted, corresponds to a net deceleration of 62.13 feet per second per second, which is within the uncertainty of the 1 second value. MacQueen and Szamboti's jolt could be right where they're looking for it, and they wouldn't be able to make it out because the uncertainty in the data is too high.

What MacQueen and Szamboti therefore tried to do was use a mathematical trick to reduce the noise level of their data. The problem with this is that mathematics can't tell the difference between signal and noise. Therefore, if there was any jolt there in the first place, their maths smoothed it away, as I showed earlier.
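
To show what I mean, here is a rough Python simulation (not the paper's exact procedure, just the "constant acceleration over all previous points" idea as I understand it): synthesize a descent with a genuine 13.13 ft/s velocity step at one second, quantize it to 0.88 ft pixels, then compare the raw finite differences with a growing least-squares constant-acceleration fit.

```python
import numpy as np

DT, PIXEL = 1 / 6, 0.88          # s per sample, ft per pixel
G, JOLT_T, JOLT_DV = 32.174, 1.0, 13.13

t = np.arange(0, 3.0 + DT / 2, DT)
# True velocity: gravity, minus a one-off 13.13 ft/s step at t = 1 s.
v_true = G * t - JOLT_DV * (t >= JOLT_T)
x_true = np.concatenate(([0.0],
                         np.cumsum(0.5 * (v_true[1:] + v_true[:-1]) * DT)))
x_meas = np.round(x_true / PIXEL) * PIXEL     # pixel quantization

v_raw = np.diff(x_meas) / DT     # noisy, but the step is in there

# "Constant acceleration so far": fit x ~ c[0]*t^2 + c[1]*t + c[2]
# over ALL samples up to t_i, and read off the velocity at t_i.
for i in range(3, len(t)):
    c = np.polyfit(t[:i + 1], x_meas[:i + 1], 2)
    v_fit = 2 * c[0] * t[i] + c[1]
    print(f"t={t[i]:4.2f}s  raw v={v_raw[i - 1]:6.2f}  fitted v={v_fit:6.2f} ft/s")
# The raw column lurches around the step at t = 1 s; the fitted
# column glides straight through it, precisely because the fit
# assumes the acceleration was constant all along.
```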

The rate at which the roof falls did not experience any deceleration, which NIST admits was at essentially freefall. This is measurable regardless of the resolution of this video.

The statement I've bolded ("This is measurable regardless of the resolution of this video") is a remarkably absurd one. As I've shown, the resolution of the video is crucial to why any such deceleration is not measurable. RedIbis, when you issue pronouncements like this, it might be helpful to know at least the basics of what you're talking about.

Dave
 
For the sake of accuracy: I was mistaken about the time resolution in post #50.

As soon as I read 0.17 second increments, "NTSC, 60 half-frames per second" jumped to my mind, and apparently I assumed that anyone trying to detect a 13 ms spike would make use of the highest time resolution available. I was wrong :). Matters turn out to be even worse. They use only 1 out of every 5 frames, giving them a time resolution of 170 ms. Now, where a time resolution of 17 ms is already insufficient to detect a 13 ms spike, a time resolution of 170 ms does not exactly improve matters.

Why anyone would want to do such a thing is beyond me. If present, such a spike will be thoroughly hidden, that's for sure.
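
For anyone who wants the dilution quantified: a short pulse only contributes to an interval average in proportion to its duty cycle. A quick Python sketch (pure arithmetic, no video data involved):

```python
# A deceleration pulse contributes to an interval average only in
# proportion to its duty cycle: a_avg = a_peak * (pulse / interval).
PULSE = 0.013       # s, hypothetical jolt duration
INTERVAL = 0.170    # s, spacing of the frames actually used
DV = 13.13          # ft/s shed during the jolt

a_peak = DV / PULSE                   # ~1010 ft/s^2 during the 13 ms
a_avg = a_peak * (PULSE / INTERVAL)   # ~77 ft/s^2 seen at 170 ms spacing
print(f"peak deceleration:   {a_peak:6.1f} ft/s^2")
print(f"170 ms interval avg: {a_avg:6.1f} ft/s^2")
# A ~1000 ft/s^2 spike is flattened to ~77 ft/s^2, comparable to
# the +/-63 ft/s^2 noise floor computed earlier in the thread.
```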

They weren't trying to detect a 'completely' transient or isolated spike, but rather a hypothetical event that, were it real, would influence the future dynamical evolution over a period of time much larger than the duration of the spike or jolt. If the event itself lasted only 13 ms, who cares, so long as its effects show clearly for a much longer span of time?
 
Because they wouldn't observe it. To see a spike in a smoothing function with samples over 170 ms is near impossible.

(This is why, incidentally, a calc teacher has you /graph behaviors/ at certain points of a limit instead of using the calculator. It smooths it out and you don't see the behavior of a function near those points.)
 
The velocity of the roofline is the amount by which the position changes in a given time interval. We can therefore very simply calculate the velocity by subtracting the previous value from the current value, and multiplying by 6 (because each time interval is 1/6 of a second).

By such logic, if Szamboti and MacQueen had used all of the data available to them, they would have had to have multiplied the delta height by 30 instead of 6, for a velocity 5 times as great. The uncertainty of the velocity would have increased in absolute value, also.

Hmmm. More data points leading to more uncertainty. What's wrong with that picture?

This is a non-quantum (classical) system, and I don't think anybody, except maybe Judy Woods, would disagree that the descent of the roofline is monotonic. Together with a hard upper bound of downwards acceleration of g, these factors should allow one to narrow down the uncertainty considerably.
 
Because they wouldn't observe it. To see a spike in a smoothing function with samples over 170 ms is near impossible.

(This is why, incidentally, a calc teacher has you /graph behaviors/ at certain points of a limit instead of using the calculator. It smooths it out and you don't see the behavior of a function near those points.)

I don't understand. Why wouldn't they see it? To be clear, when I spoke about the "the future dynamical evolution over a period of time much larger than the duration of the spike or jolt", I'm talking about roofline displacement for ~ 1 < t < 3. The effect is obscured in Dave's graph partly because he used G instead of .75 G. Had he used .75 G, he would have come up short - i.e., less displacement than was observed, which would become more and more obvious as you approach 3 seconds.
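
To put a rough number on "more and more obvious": assuming both curves start from the same velocity, the displacement gap between a G descent and a .75 G descent grows quadratically. A quick sketch:

```python
# Displacement gap between falling at G and at 0.75 G, starting
# from the same velocity: gap = 0.5 * (G - 0.75*G) * t^2.
G = 32.174  # ft/s^2
for t in (0.5, 1.0, 2.0, 3.0):
    gap = 0.5 * (G - 0.75 * G) * t ** 2
    print(f"t = {t:3.1f} s  gap = {gap:5.1f} ft")
# ~1 ft at 0.5 s, but ~36 ft by 3 s: vastly more than the 0.88 ft
# measurement resolution, so the wrong average acceleration cannot
# match the observed displacement for long.
```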

By way of analogy, suppose you deliver a sideways impulse to a body traveling in a straight line. You may not be able to pin down the exact frame that the impulse was delivered in, but at some point, a new trajectory becomes visible, and remains visible.
 
By such logic, if Szamboti and MacQueen had used all of the data available to them, they would have had to have multiplied the delta height by 30 instead of 6, for a velocity 5 times as great. The uncertainty of the velocity would have increased in absolute value, also.

Hmmm. More data points leading to more uncertainty. What's wrong with that picture?

They are looking for a 13.13 ft/s change in velocity over a 1 second interval, not a 5s interval.
 
They weren't trying to detect a 'completely' transient or isolated spike, but rather a hypothetical event that, were it real, would influence the future dynamical evolution over a period of time much larger than the duration of the spike or jolt. If the event itself lasted only 13 ms, who cares, so long as its effects show clearly for a much longer span of time?

Please note we are not talking about an object that is falling at an a priori known acceleration, such as an object in free fall. In that case indeed even a single data point, provided the accuracy is sufficient, would enable us to decide whether it underwent a jolt of a particular magnitude on its way down or not.

Here we are looking at an object that is undergoing an obstructed fall. Basically every pair of consecutive samples will exhibit a lower than free fall acceleration. Everything else we get to know about velocities and accelerations at particular points in time we must derive from observations. The point is, if you want to rule out that such a lower than free fall acceleration can be attributed to a jolt of some magnitude, a low temporal resolution isn't going to help you. Noise in the data and in the estimates involved will leave wiggle room. A sufficiently high temporal resolution will allow you to say "Well, the jolt should stand out above the noise; it's not in the data, so it can be ruled out."
 
However, since I did this, I have found out who Hambone is: he is GregoryUrich. This checks out, since he has been associated with the JONES in the past, and he has a pretty good grip on the science.

So, now, it's up to you guys to prove that he's lying. Because if he's not, the JONES is. We've already proven that it's wrong. Hambone/Gregory's comment also proves that they knew they were wrong. There's no plausible deniability anymore.

I really shouldn't have to explain this in such simple terms to you people.

That is so sad, and yet so funny :D what a freaky episode.
 
By way of analogy, suppose you deliver a sideways impulse to a body traveling in a straight line. You may not be able to pin down the exact frame that the impulse was delivered in, but at some point, a new trajectory becomes visible, and remains visible.

Except it's not a change in trajectory we are looking for.

At the camera's sampling rate we are dealing with what is basically an ideal impulse: no displacement during the collision, an effectively infinite force acting for an infinitesimal time.
 
They weren't trying to detect a 'completely' transient or isolated spike, but rather a hypothetical event that, were it real, would influence the future dynamical evolution over a period of time much larger than the duration of the spike or jolt. If the event itself lasted only 13 ms, who cares, so long as its effects show clearly for a much longer span of time?

You've misunderstood the paper. Look at the dotted line in fig.4, which I reproduce again below. They are looking for a step in the velocity. Their analysis is certain to average out any such step.

By such logic, if Szamboti and MacQueen had used all of the data available to them, they would have had to have multiplied the delta height by 30 instead of 6, for a velocity 5 times as great. The uncertainty of the velocity would have increased in absolute value, also.

Correct; the uncertainty in the position values is unchanged, but the time interval is smaller, hence the error in gradient of position vs. time is greater. However, they'll have 5 times as many points, so if they average over all 5 they will get the same overall uncertainty as before.
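
Here's a quick numerical check of that with made-up noisy positions (a sketch, not a rigorous proof): the intermediate positions cancel, so each block of five adjacent-frame velocities averages to exactly the coarse 1/6 s difference.

```python
import numpy as np

DT = 1 / 30                         # s, full NTSC frame rate
rng = np.random.default_rng(0)

t = np.arange(0, 1.0, DT)           # 30 made-up sample times
x = 16.087 * t ** 2 + rng.uniform(-0.44, 0.44, t.size)  # noisy positions

v_fine = np.diff(x) / DT                          # adjacent-frame velocities
v_avg = v_fine[:25].reshape(5, 5).mean(axis=1)    # average each block of 5
v_coarse = np.diff(x[::5]) / (5 * DT)             # direct 1/6 s differences

print(np.allclose(v_avg, v_coarse))  # True: the sum telescopes
```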

Hmmm. More data points leading to more uncertainty. What's wrong with that picture?

What's wrong with it is that it isn't the whole picture.

This is a non-quantum (classical) system, and I don't think anybody, except maybe Judy Woods, would disagree that the descent of the roofline is monotonic. Together with a hard upper bound of downwards acceleration of g, these factors should allow one to narrow down the uncertainty considerably.

You have missed the entire point of the paper, which is the authors' claim that the "jolt" leads to a velocity curve of the roofline that is not monotonic, having a discontinuity at around 1 second. They then try to infer the absence of this discontinuity from a calculation that uses its absence as a starting assumption. It's a mathematical circular argument.

The effect is obscured in Dave's graph partly because he used G instead of .75 G. Had he used .75 G, he would have come up short - i.e., less displacement than was observed, which would become more and more obvious as you approach 3 seconds.

I'm having trouble uploading the 0.75G graph. Apart from a slightly compressed vertical scale, though, it doesn't look noticeably different. Still no step in the velocity curve.

Dave
 
Even if we had higher resolution in the video, this paper is flawed anyways.

The paper assumes a direct head-on collision and takes no account of rotational kinetic energy during the "collision" of the upper and lower blocks.
 
Correct; the uncertainty in the position values is unchanged, but the time interval is smaller, hence the error in gradient of position vs. time is greater. However, they'll have 5 times as many points, so if they average over all 5 they will get the same overall uncertainty as before.

I don't believe this. Can you show it, rigorously?

You have missed the entire point of the paper, which is the authors' claim that the "jolt" leads to a velocity curve of the roofline that is not monotonic, having a discontinuity at around 1 second. They then try to infer the absence of this discontinuity from a calculation that uses its absence as a starting assumption. It's a mathematical circular argument.
When I wrote that the descent is monotonic, I was referring to displacement, not velocity. Also, note that the the911forum.freeforums.org thread has a plot that shows two kinks in the existing data: "Their own data shows two jolts, maybe not big enough to satisfy them, but they ARE there." Even in this post, though, there is no hint that displacement has somehow lessened during any time interval.

I'm having trouble uploading the 0.75G graph. Apart from a slightly compressed vertical scale, though, it doesn't look noticeably different. Still no step in the velocity curve.

Dave

I'm not saying that there would have been, if you only sample 1/5 of the data points, as was done in the paper. I'm saying that the displacement will differ markedly from what was actually observed, if you use an average acceleration closer to what was actually observed. E.g., your 1G acceleration has shortened what Szamboti and MacQueen call the pre-impact velocity recovery time from 600 ms to something in the neighborhood of 250 ms. Even if you took the acceleration to be .75 G after recovering to the pre-impact velocity, you cannot possibly "tie" the observed descent.

What do you say that your curve will show, however, if you plot points every .034 second, instead? If .83s < t0 < 1s, such that x(t0) ~ 11.08 feet and x(t0 + .034) ~ 11.08 + .88 feet, calculating velocity wrt these adjacent points will yield .88/.034 = 25.9 ft/sec. The velocities that you actually calculated, with .17 s time increments, in [.83s, 1s] are [31.8, 36.9] ft/s, so in this sense the velocity drop would stand out like a sore thumb. To get the full picture, though, one should calculate velocities using all the points every .034 seconds in .83s < t0 < 1s, to see if the "sore thumb" is just one of many, or not. (I.e., just another jaggy point amongst many.)

I don't want to take the time to do this, but as you already have an Excel spreadsheet, it shouldn't take you long to do the honors.
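
If it helps, here is roughly what I have in mind, as a simulated sketch rather than actual measurements (assuming, purely for illustration, a smooth .75 G descent quantized to 0.88 ft pixels):

```python
import numpy as np

DT, PIXEL = 0.034, 0.88
A = 0.75 * 32.174               # ft/s^2, illustrative average acceleration

t = np.arange(0.68, 1.21, DT)   # the neighborhood in question
x = np.round(0.5 * A * t ** 2 / PIXEL) * PIXEL   # pixel-quantized positions
v = np.diff(x) / DT             # adjacent-sample velocities

for ti, vi in zip(t[1:], v):
    print(f"t = {ti:5.3f} s  v = {vi:5.1f} ft/s")
# Every velocity is a multiple of 0.88/0.034 ~ 25.9 ft/s, mostly
# 0 or 25.9 here: jaggy quantization steps, which is exactly the
# "one sore thumb amongst many" question.
```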
 
What do you say that your curve will show, however, if you plot points every .034 second, instead? If .83s < t0 < 1s, such that x(t0) ~ 11.08 feet and x(t0 + .034) ~ 11.08 + .88 feet, calculating velocity wrt these adjacent points will yield .88/.034 = 25.9 ft/sec. The velocities that you actually calculated, with .17 s time increments, in [.83s, 1s] are [31.8, 36.9] ft/s, so in this sense the velocity drop would stand out like a sore thumb. To get the full picture, though, one should calculate velocities using all the points every .034 seconds in .83s < t0 < 1s, to see if the "sore thumb" is just one of many, or not. (I.e., just another jaggy point amongst many.)

I don't want to take the time to do this, but as you already have an Excel spreadsheet, it shouldn't take you long to do the honors.

How exactly are you getting points every 0.034s?
 
I don't believe this. Can you show it, rigorously?

No, it's not particularly relevant. I take no issue whatsoever with the time intervals, as I've repeatedly stated. Sampling more frequently will simply result in more duplicated values, particularly in the acceleration; in fact, as the average movement per sample is less than the measurement resolution, all it will do is add a set of zeroes scattered around the data.

When I wrote that the descent is monotonic, I was referring to displacement, not velocity. Also, note that the the911forum.freeforums.org thread has a plot that shows two kinks in the existing data: "Their own data shows two jolts, maybe not big enough to satisfy them, but they ARE there." Even in this post, though, there is no hint that displacement has somehow lessened during any time interval.

Your statement that "the descent is monotonic" is irrelevant to the question of why the analysis of results is invalid. The analysis assumes that the acceleration up to each point is constant. As you know, I've been involved in the 911 Forum discussion, I've seen the plot shown, and I've reproduced it myself. Given that the analysis removes the kinks that are known to be in the data, how can you not see that there's a flaw in the conclusion that there were never any kinks there in the first place?

What do you say that your curve will show, however, if you plot points every .034 second, instead?

More low-resolution data and no better results. As I keep saying, the problem with the analysis has nothing to do with the time resolution. The conclusion of the paper is as irrelevant as it is fallacious, so I'm really not prepared to spend any more time on it. If MacQueen and Szamboti are prepared to re-do their work with a mathematically valid technique, I'll take a look at it. At the moment, the only response I've had from Szamboti is that it was "disgusting" of me to criticise it at all, so I'm disinclined to respond any further.

Dave
 
If MacQueen and Szamboti are prepared to re-do their work with a mathematically valid technique, I'll take a look at it. At the moment, the only response I've had from Szamboti is that it was "disgusting" of me to criticise it at all, so I'm disinclined to respond any further.

I've had a similar experience... Mr. Szamboti sent me some PMs, but basically claimed that I was "obviously rattled" in my response.

This seems to be standard crank technique. Ace Baker used to do the same thing, even after I had him on ignore.

Anyway, as I've informed him via PM, I'm not interested in giving a private lecture to him, and if he wants to contribute he needs to do so in the open. The problems with his paper are real, well summarized in this thread, and were known to him before publication. That rather moots his objections unless he's now willing to entertain the possibility that he screwed up (again).

Incredibly, he makes an even stronger claim in his PM, which, out of good manners, I will not republish in its entirety, but I will provide the key sentence: "The paper 'the Missing Jolt' doesn't just say there was no 31g jolt, it says there was NO jolt and thus no dynamic or amplified load could have occurred." Namely, that his paper not only disproves the supposed 31 g jolt, but that it disproves any reactive impulse at all; it somehow provides infinite resolution. Unless this is a language error on his part, this is so obviously wrong that I despair of ever reasoning with him.
 
As you know, I've been involved in the 911 Forum discussion, I've seen the plot shown, and I've reproduced it myself. Given that the analysis removes the kinks that are known to be in the data, how can you not see that there's a flaw in the conclusion that there were never any kinks there in the first place?

You didn't reproduce it, you approximated it. And it's a rather poor approximation at t = .17 s and .83 s, where your average acceleration is not only much more than the (smoothed) acceleration that Szamboti and MacQueen calculated, but even more than G!

In fact, now that I'm looking closely at it, I have questions about precisely what you did, also. You say that

Upper block then continues to accelerate downwards at 1G.

and yet your table of "average acceleration" has values that never go above 22.18 ft/s^2 for t > 2.34. How can that be?

Looking at t = 2.34, you show discretized displacement = 60.72
Looking at t = 2.5, you show discretized displacement = 66.88
Looking at t = 2.67, you show discretized displacement = 73.92


average v1 = (66.88 - 60.72) / .17 s = 6.16 / .17 = 36.24 ft/sec
average v2 = (73.92 - 66.88) / .17 s = 7.04 / .17 = 41.41 ft/sec

average acceleration = (41.41 - 36.24) / .17 s = 30.41 ft/s^2 ~ G, which makes sense.

Me thinky you make mistake-y.....

Either that, or my understanding/guess about how Szamboti and MacQueen did their calcs is wrong.
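
For what it's worth, a quick check of the arithmetic above (nothing assumed beyond the three displacements quoted):

```python
# Recompute the averages from the three discretized displacements.
x = [60.72, 66.88, 73.92]   # ft, at t = 2.34, 2.5, 2.67 s
dt = 0.17                   # s

v1 = (x[1] - x[0]) / dt     # 6.16 / 0.17 = 36.24 ft/s
v2 = (x[2] - x[1]) / dt     # 7.04 / 0.17 = 41.41 ft/s
a = (v2 - v1) / dt          # ~30.4 ft/s^2, roughly G

print(f"v1 = {v1:.2f} ft/s, v2 = {v2:.2f} ft/s, a = {a:.2f} ft/s^2")
```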
 
