
The physics toolkit

which probably derives from someone's estimate that the 480-pixel vertical resolution of the video corresponds to 1919 feet.
Aha. That sort of thinking might explain the odd NIST elevations for WTC 7. Hmm.

The "new" Chandler/Szamboti position data contain quantization errors of ±0.125 foot.
The tracked points on the new data look manually placed, badly. Horizontal placement wanders several pixels, and the position chosen in the video (which is poor quality) does not seem accurate even to the nearest pixel.

We might be able to infer jolts that are a little briefer than that by looking at the velocity's recovery time
Had a look at missable jolts...
here
 
which probably derives from someone's estimate that the 480-pixel vertical resolution of the video corresponds to 1919 feet.
Aha. That sort of thinking might explain the odd NIST elevations for WTC 7. Hmm.
Sorry, my mistake: If 480 pixels corresponded to 1919 feet, then the quantum would have been 1919/480, not 480/1919. I don't know where the 1919 came from, and I was just guessing that the 480 comes from a 480-pixel vertical resolution. For all I know, someone could have estimated 480 feet per 1920 pixels on their computer monitor (which would be exactly 3 inches per pixel) but miscounted the pixels. No matter what the source of the 480/1919 quantum may be, it's definitely present in the Chandler/Szamboti data.
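Whatever its origin, the two candidate quanta are easy to tell apart, and the one actually present in the data also accounts for the ±0.125-foot quantization error mentioned above, since a quantized value can be off by at most half a quantum. A quick check (my own sketch, not from the thread):

```python
# The two candidate quanta, in feet per data step
q_data = 480 / 1919   # the quantum actually present in the Chandler/Szamboti data
q_alt  = 1919 / 480   # what "1919 feet over 480 pixels" would have produced

print(round(q_data, 4))      # ~0.2501 ft
print(round(q_alt, 4))       # ~3.9979 ft

# Rounding to the nearest multiple of q_data gives errors of at most
# half a quantum, matching the observed +/-0.125 foot.
print(round(q_data / 2, 4))  # ~0.1251 ft
```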

The tracked points on the new data look manually placed, badly. Horizontal placement wanders several pixels, and the position chosen in the video (which is poor quality) does not seem accurate even to the nearest pixel.
The horizontal placement wanders by 7 quanta (7 times 480/1919). Is that the same as 7 pixels?

Had a look at missable jolts...
here
I'll take a closer look later. I was amused to see this, from Tony:
I told you what the maximum magnitude of the 61 msec jolt would be that could be missed by a 200 msec measurement frequency. It is 0.82 g's.
Tony's argument used data taken from the 0.8-to-1.0-second interval on my time axis, where deceleration does indeed appear unlikely. His one-paragraph argument also contains several mistakes. One of the more obvious problems with his argument is that he can't use the specific data for that one interval to argue that jolts could not have gone undetected anywhere else, such as 1.1 seconds, but that is the conclusion he's trying to promote.

The new Chandler/Szamboti data shows an impulse near 1.1 seconds (on my time axis, not Tony's) that is consistent with all three of the following models (and others as well):
  • 200 msec of net .28g downward acceleration (reduction of .72g from free fall), no actual deceleration;
  • 100 msec of net -.45g downward acceleration (that is, a .45g upward acceleration, for a reduction of 1.45g from free fall); this actual deceleration goes undetected because the 5 Hz sampling rate is too low;
  • 67 msec of net -1.17g downward acceleration (that is, a 1.17g upward acceleration, for a reduction of 2.17g from free fall); this actual deceleration goes undetected because the 5 Hz sampling rate is too low.
That last model involves a 1.17g jolt lasting 67 milliseconds. It's perfectly consistent with Tony's own data, but Tony says it can't exist. In other words, Tony continues to promote a fallacy that's contradicted by his own data.
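The three models really are indistinguishable at a 5 Hz sampling rate: assuming the remainder of each 200 msec interval is free fall, each one produces nearly the same net velocity change over the interval. A rough check (my own sketch, using the durations and accelerations listed above):

```python
g = 9.81   # m/s^2
T = 0.200  # one 5 Hz sample interval, seconds

def net_dv(a_net_g, duration):
    """Net downward delta-vee over one sample interval, assuming the
    impulse lasts `duration` seconds at a net downward acceleration of
    `a_net_g` g, with free fall for the rest of the interval."""
    return (a_net_g * duration + 1.0 * (T - duration)) * g

models = [net_dv(0.28, 0.200),   # model 1: no actual deceleration
          net_dv(-0.45, 0.100),  # model 2: 100 msec at -0.45 g net
          net_dv(-1.17, 0.067)]  # model 3: 67 msec at -1.17 g net

spread = max(models) - min(models)
print(spread < 0.02)  # True: well below what the 5 Hz position data can resolve
```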
 
Had a look at missable jolts...
here
I suspect you wanted comments on your own post, not on Tony's. Here ya go...

...the lower the acceleration between actual samples, the higher the missable jolt magnitude is.
That's true as stated, provided you really do mean acceleration (but I don't think you do; see below) and you're measuring downward acceleration.

Let's prove your statement under the assumption that you know the exact velocity at the beginning of the interval. The greatest acceleration possible is free fall, which (together with the initial velocity) completely determines the position at the end of the interval. Any jolt at all would show up as a different position at the end of the interval, because you can't recover from the jolt within the interval by falling faster than free fall.

You probably don't know the exact velocity at the beginning of the interval, but that just means you can't always detect a jolt. It remains true that free fall during the interval leaves the least room for an undetectable jolt.
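A numerical illustration of that argument (my own sketch, with made-up numbers): given a known initial velocity, free fall maximizes the drop over the interval, so inserting any brief deceleration leaves the end-of-interval position measurably higher.

```python
g = 9.81

def drop(accel_at, v0=5.0, dt=0.2, n=2000):
    """Distance fallen over one sample interval dt, integrating the
    time-dependent downward acceleration accel_at(t) in n small steps."""
    h = dt / n
    x, v = 0.0, v0
    for i in range(n):
        a = accel_at(i * h)
        x += v * h + 0.5 * a * h * h
        v += a * h
    return x

free   = drop(lambda t: g)                      # free fall throughout
# 50 msec of 1 g net upward acceleration (a 2 g reduction from free fall)
jolted = drop(lambda t: -g if t < 0.05 else g)

print(free > jolted)  # True: the jolt shows up as a smaller drop
```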

Unfortunately, I don't think you really meant acceleration when you wrote the passage quoted above. I think you meant delta-vee, not acceleration, because you went on to write:
That would imply that the earlier in the descent any jolts are expected to occur, the higher the magnitude of missable jolts is, for any specified sample interval.
The delta-vee per unit time (or per interval) is smaller during the early part of the descent, but there is no a priori reason to expect the average acceleration to change much during the descent.

In popular usage, the word "acceleration" is often used to refer to delta-vee as well as to true acceleration. In technical work, that confusion will lead you astray, as in the last passage quoted above.

Acceleration is the first derivative of velocity, and the second derivative of position. Delta-vee is the integral of acceleration over some interval. Those are completely different things. A change in velocity (delta-vee) is not a rate of change in velocity (acceleration).
 
That's true as stated, provided you really do mean acceleration (but I don't think you do; see below) and you're measuring downward acceleration.
I just extrapolated an equation from Tony's initial variables, so what I was trying to say was change of the acceleration variable that Tony stated as 0.64g, which is derived from two points on a velocity plot...
Tony said:
I used points 7 and 8 in the Tracker data where the velocities are 4.568 and 5.920 m/s respectively and where the first floor collision would have likely taken place about 1 second into the fall. The slope between these points is 6.28 giving an acceleration of 0.64g. The difference is then 0.36g and the amount of additional velocity that could be gained at 0.36g over 139 msec is 0.49 m/s. This would be the velocity loss due to the jolt. This velocity loss over a 61 msec duration equates to a jolt of about 0.82 g's.
...so I do mean acceleration, even though the scope is a bit vague. That 0.64g variable is required to determine the *headroom*. Making up my own terms now ;)

That said, I think the equation (and so the graph) still holds true, even though I've described the *slope* variable badly (as it's actually the *slope* of a velocity plot, and would be better to be an acceleration derived from three velocity points...)

Am not validating Chandler's underlying data in any way though btw. Just an example used to derive the generalised equation.
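For reference, the arithmetic in Tony's quoted paragraph can be reproduced in a few lines (a sketch of his calculation as stated, not an endorsement of its logic):

```python
g = 9.81

# Tony's velocities at Tracker points 7 and 8 are 4.568 and 5.920 m/s,
# from which he derives a slope of 6.28 m/s^2, i.e. 0.64 g.
a_meas_g = 0.64
headroom_g = 1.0 - a_meas_g   # 0.36 g of "headroom" below free fall

t_sample = 0.200              # 200 msec measurement interval
t_jolt = 0.061                # hypothesized jolt duration
t_regain = t_sample - t_jolt  # 139 msec left to regain the lost velocity

dv_missable = headroom_g * g * t_regain  # velocity loss that could be masked
jolt_g = dv_missable / (t_jolt * g)      # magnitude of the missable jolt

print(round(dv_missable, 2))  # 0.49 m/s
print(round(jolt_g, 2))       # 0.82 g
```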



I'll check it.
 
Before responding to femr2, I need to correct my previous post...

The delta-vee per unit time (or per interval) is smaller during the early part of the descent, but there is no a priori reason to expect the average acceleration to change much during the descent.
I should have said delta-position there instead of delta-vee. So long as the interval size and the acceleration are both constant, the delta-vee will be constant across all intervals, and the delta-vee per interval will be the product of the constant interval width and the constant acceleration.

Acceleration is the first derivative of velocity, and the second derivative of position. Delta-vee is the integral of acceleration over some interval. Those are completely different things. A change in velocity (delta-vee) is not a rate of change in velocity (acceleration).
That's all true, but irrelevant. Delta-position is even more different from acceleration than delta-vee, and I was assuming femr2 was confusing delta-position with acceleration when he wrote of some acceleration-related thing being smaller "earlier in the descent".
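To make the corrected statement concrete (a quick sketch, assuming a constant descent acceleration purely for illustration): with a fixed interval width and fixed acceleration, the delta-vee is the same every interval, while the delta-position grows as velocity builds.

```python
g = 9.81
a = 0.64 * g   # illustrative constant acceleration (Tony's 0.64 g figure)
dt = 0.2       # 5 Hz sampling interval

v = 0.0
dvs, dxs = [], []
for k in range(5):
    dvs.append(a * dt)                      # delta-vee per interval
    dxs.append(v * dt + 0.5 * a * dt * dt)  # delta-position per interval
    v += a * dt

print(len(set(dvs)) == 1)                        # True: delta-vee is constant
print(all(x < y for x, y in zip(dxs, dxs[1:])))  # True: delta-position grows
```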

I just extrapolated an equation from Tony's initial variables, so what I was trying to say was change of the acceleration variable that Tony stated as 0.64g, which is derived from two points on a velocity plot...
Understood. Your plot and the formula on which it is based look fine to me.

That said, I think the equation (and so the graph) still holds true, even though I've described the *slope* variable badly (as it's actually the *slope* of a velocity plot, and would be better to be an acceleration derived from three velocity points...)
Thanks for clarifying. There are several different things you could have meant by "slope", but I was able to figure out what you meant. Because it looked right to me, I didn't comment on it in my previous post.

The only thing that looked wrong was the "earlier in the descent" business, and I botched my response to that (as explained above). I apologize for my confusion.
 
The horizontal placement wanders by 7 quanta (7 times 480/1919). Is that the same as 7 pixels?
Doubt it. Horizontal resolution should be 720 pixels, so it should have a different quantum unless the pixel->feature ratio is exactly 1:1 (which it never is), but no matter. The data shouldn't wander horizontally in the way it does. Poor quality trace.

ETA: The video quality is also poor, and it looks like it's not been deinterlaced properly, so each frame in the video actually contains visual information from 2 distinct points in time merged together.
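The interlacing problem is easy to picture (a toy sketch, not the actual video data): each stored frame interleaves two fields captured roughly 1/60 second apart, so its rows alternate between two instants in time.

```python
# Toy 6-row "interlaced frame": even rows captured at t0, odd rows at t1
frame = [("t0" if r % 2 == 0 else "t1", r) for r in range(6)]

field_t0 = frame[0::2]   # all rows from the first instant
field_t1 = frame[1::2]   # all rows from the second instant

# Tracking a feature on the raw frame mixes the two instants together;
# proper deinterlacing separates (and re-times) the two fields first.
print(all(t == "t0" for t, _ in field_t0))  # True
print(all(t == "t1" for t, _ in field_t1))  # True
```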

Here's a comparison...

A frame from the video Chandler used (upscaled for comparative detail inspection)...
s3.png


Same (I think) frame from my version (native resolution)...
sa1.png
 
:) Poor. Rubbish. Not good. Bad.

ETA: Forgot about the US English thing. Pants=Shorts=Undergarments.

So Chandler is.... underwear.

Is underwear bad?

Victoria's Secret is bad?

I submit that there may need to be a change in thinking for GB's youth.
 
So Chandler is.... underwear.
Is underwear bad?
Victoria's Secret is bad?
I submit that there may need to be a change in thinking for GB's youth.
Though I'd love to say otherwise, I'm afraid I don't fall into any interpretation of *youth*.

All utterly irrelevant of course. I could laugh out loud at the US usage of the word fanny, but it would be somewhat immature.

As you have gotten the gist, I suggest you attempt to contribute more productive input.
 
Doubt it. Horizontal resolution should be 720 pixels, so it should have a different quantum unless the pixel->feature ratio is exactly 1:1 (which it never is),
Someone thinks it's exactly 1:1. If you look at the Chandler/Szamboti XML posted by DGM, you'll see that
  • the "width" property is 720.0
  • the "height" property is 480.0
  • the "xscale" property is 3.3116604938851464
  • the "yscale" property is 3.3116604938851464
The x and y scale factors are exactly the same, and contain far more precision than could be justified by any physical measurement.

How could they have come up with that scale factor? Possible clue: It's almost (but not quite) 1% greater than the correct scale factor for converting meters to feet. Coincidence? Probably not: converting Chandler's metric heights to feet using that scale factor yields better (but still not perfect) agreement with the new Chandler/Szamboti data in feet than using the correct conversion factor. I began to wonder whether the peculiar scale factor could have come from botching a conversion from feet to meters or vice versa, possibly by working part of it out by hand or on a calculator in decimal, rounding to some small number of digits, and then finishing the calculation in binary floating point.
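The "almost (but not quite) 1% greater" observation is easy to verify (my own check, using the xscale value from the posted XML):

```python
xscale = 3.3116604938851464   # scale factor from the Chandler/Szamboti XML
m_to_ft = 1 / 0.3048          # exact meters-to-feet conversion, ~3.2808

ratio = xscale / m_to_ft
print(round((ratio - 1) * 100, 2))  # ~0.94 percent high
```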

I never figured that out, but my pursuit of that conjecture revealed the 480/1919 quantum of the new Chandler/Szamboti data. The horizontal (x) values are quantized with exactly the same 480/1919 quantum as the vertical (y) values.
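The quantization claim is mechanically checkable: a quantized series sits on integer multiples of the quantum, so the residues after dividing by it are essentially zero. A sketch of that test (with made-up illustrative values, not the real trace data):

```python
def max_residue(vals, q):
    """Largest distance from any value to its nearest multiple of q."""
    return max(abs(v - round(v / q) * q) for v in vals)

q = 480 / 1919
quantized = [n * q for n in (2292, 2290, 2287, 2283)]  # hypothetical positions
noisy     = [573.11, 572.63, 571.90]                   # not on the q-grid

print(max_residue(quantized, q) < 1e-9)  # True
print(max_residue(noisy, q) < 1e-3)      # False
```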
 
How could they have come up with that scale factor?

It'll be automated based on the single *ruler* measurement that's been included from within the Tracker program...



(The blue almost vertical line on the West edge)
 
As you have gotten the gist, I suggest you attempt to contribute more productive input.

The only productive thing to say to a truther is that they're insane and need professional psychiatric help.

All else is enabling, as some truthers have realized, their "conspiraspanking".
 
The only productive thing to say to a truther is that they're insane and need professional psychiatric help.

After a year at this place, I can't dispute the above. I got here a year ago, trying to get truthers to post affirmative hypotheses, without much luck. Debating someone who can't or won't state any explanation that better fits the facts is pointless. I think it's Dave Rogers who has the signature line about google removing context automatically. That's what femr2 does: discuss minutiae without any regard for how his special secret knowledge fits into the bigger picture. George Monbiot accurately called this a "displacement activity" -- attacking a straw man rather than effecting real, useful political change. I suppose that there are worse hobbies, and the bright side would be that they learn engineering and physics along the way, like I do.
 
It'll be automated based on the single *ruler* measurement that's been included from within the Tracker program...
Thanks! So the scale factor was just coincidence, or almost so. The user inputs a dimension with fairly nice numbers, and the trigonometric calculation for a nearly vertical or horizontal dimension converts that into a nearby number with random-looking digits.

Your title for that screen grab is "Chandler Thing", but this is the first time I've seen that exact set of measurements. They agree with Chandler's published metric data to within 3 millimeters. They don't agree so well with the new Chandler/Szamboti data in feet. What's the source of that screen grab?

Using an old-fashioned ruler on my monitor's display, it looks as though the total vertical distance in that video (including the parts I can't see in that screen grab) is about 460 feet. If it's really 480 feet (or Tracker were told so), then that would explain the 480 part of the 480/1919 quantum. If the original video has 960-pixel vertical resolution, and Tracker can perform half-pixel tracking for all pixels except for one of the two boundary pixels, then that might explain the 1919 part of the 480/1919 quantum. My new guess is that the 480/1919 quantum in the new Chandler/Szamboti data corresponds to 1/2 pixel width or height in the original 1440x960 video, which corresponds in turn to a full quarter pixel width or height in the 720x480 version.
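The guess at the end of that paragraph can be stated as arithmetic (all of these numbers are assumptions taken from the paragraph above, not established facts): 960 vertical pixels tracked to half-pixel precision give 2*960-1 = 1919 distinct positions, and 480 feet spread over those steps yields exactly the observed quantum.

```python
height_ft = 480.0      # assumed total vertical extent given to Tracker
v_res = 960            # hypothesized original vertical resolution
steps = 2 * v_res - 1  # half-pixel positions, excluding one boundary pixel

quantum = height_ft / steps
print(steps)                  # 1919
print(quantum == 480 / 1919)  # True
```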
 
What's the source of that screen grab?
It's directly from my copy of Tracker with the XML data posted by Tony, and using the video that was actually used. The resources are all in this thread a little earlier. I just opened the thing in Tracker.

If the original video has 960-pixel vertical resolution
No, Chandler's video is 720x480.

My 1440x960 version is derived from an original interlaced DVD version. Instructions to generate are here.
 
My 1440x960 version is derived from an original interlaced DVD version. Instructions to generate are here.

Isn't that attempting to create a silk purse from a sow's ear?

"Original DVD version"? You mean a DVD compression copy of the original video?
BTW what type of recording format was the actual original recording?
DVPro? Digital Beta? mpeg or mov to hard drive? Other proprietary tapeless recording?

Simply put, you cannot "create" better resolution. Yes, some programs will interpolate and derive more pixels and better resolution that way, but geez, given that the TM itself decries such electronic manipulation in other cases, what makes it better here?
 
"Original DVD version"? You mean a DVD compression copy of the original video?
Indeed, an mpeg-2 copy taken directly from DVD without recompression, at the best bitrate I could find. The file is 93 MB @ avg 5960 kbps for a 2:04 clip. Not bad at all. The source file is in the link.

If you are aware of a better quality version, by all means let me know.

If not, there's little point in complaining.

Not sure of the original format, probably DV. Got a copy ?

The method of upscaling uses SuperResolution technology. Read up on it. It doesn't interpolate, and doesn't create anything. It uses a mathematical process of extracting detail from the adjacent frames.

I'm sure you will agree that its fidelity is far superior to the version used by Chandler, which is the point.
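A toy illustration of the multi-frame principle (not the actual SuperResolution algorithm, just its core idea): if two low-resolution frames sample the same scene at a half-pixel offset, combining them recovers detail that neither frame contains on its own.

```python
# A 1-D "scene" at high resolution
scene = [3, 1, 4, 1, 5, 9, 2, 6]

# Two low-res "frames": each keeps every other sample, and they are
# offset by half a low-res pixel (one high-res sample) from each other
frame_a = scene[0::2]   # [3, 4, 5, 2]
frame_b = scene[1::2]   # [1, 1, 9, 6]

# Multi-frame reconstruction: interleave the sub-pixel-shifted frames
recon = [s for pair in zip(frame_a, frame_b) for s in pair]
print(recon == scene)   # True
```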
 
Indeed, an mpeg-2 copy taken directly from DVD without recompression, at the best bitrate I could find. The file is 93 MB @ avg 5960 kbps for a 2:04 clip. Not bad at all. The source file is in the link.

If you are aware of a better quality version, by all means let me know.

Not sure of the original format, probably DV. Got a copy ?

If not, there's little point in complaining.
The point in complaining is that you are not using the 'original' recording, and thus there has been some loss in converting and/or compressing from the original.
No, I do not have a copy but I am not the one doing an essentially forensic investigation with this video.



The method of upscaling uses SuperResolution technology. Read up on it. It doesn't interpolate, and doesn't create anything. It uses a mathematical process of extracting detail from the adjacent frames.

I'm sure you will agree that its fidelity is far superior to the version used by Chandler, which is the point.

Oh yes, it looks better in resolution than Chandler's does. Yes, Chandler is a skid mark (more North American slang verbiage than "pants").

I don't really care too much how it accomplishes this resolution. You do acknowledge, though, that if debunkers were using such manipulation there would be a great cry and gnashing of teeth by the 9/11 TM about it? If the TM uses it, though, they are quite satisfied with it... odd.

Still, in order to have more pixels than in the first rendition, the program simply must 'create' those pixels, and it does this by making assumptions. It may not be a direct averaging interpolation, but you even mention that it uses a mathematical manipulation of what is actually present in the pixels of the recording to, again I use that word, "create" new pixels.

Now you must ensure that the error margin in this process is at least an order of magnitude less than all other errors in the following forensics you do with it.

Hell, go for it. I admit it's my own personal incredulity that is holding me back in this.
 
The point in complaining is that you are not using 'original ' recording and thus there has been some loss in converting and/or compressing from original.
No. Absolutely zero *loss*. All steps from the original use raw binary data. Zero recompression. The upscaling method does not negatively affect the underlying data at all. If the adjacent frames include appropriate additional information, it is included; otherwise the underlying data is passed through untouched. At absolute worst, trace accuracy matches that obtainable from the original (deinterlaced) video.

It is also impossible (well, very incorrect) to use the *original* interlaced mpeg-2 video data for tracing purposes, as each video frame contains two distinct frames from different points in time.

The method and procedures have been extensively tested on blind video data with known underlying movements, in which an original high resolution video has been significantly downscaled, and the methods used to trace feature movements. Results for SR upscaled video have consistently proven excellent, as have the tracing algorithms within SynthEyes (the application used to perform the automated tracing).

No, I do not have a copy but I am not the one doing an essentially forensic investigation with this video.
Shame. And, yes, I am applying methods honed and developed over quite a long period of time for the specific purpose in hand. I make the data public, so anyone is very welcome to verify the data in any way they please. I perform the steps I do to try and ensure the absolute highest quality and accuracy possible. As included above, the procedure itself is public too, so it can be replicated and repeated by all.
 
