• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

The physics toolkit

I'm not yet convinced that the initial G is this high. But I will believe good data.

A factor to bear in mind is that it's not just my data that exhibits above-G segments, NIST being one of 'em...

http://i33.tinypic.com/34gllzs.gif


Also, the purpose of posting the data initially was your statement about the descent time for WTC 7.

I've provided my raw data, the procedures applied are in the spreadsheet, and there's also a video showing the feature positioning.

I've highlighted the scaling factor progress, and as I've said, my current dataset maxes out around the 36 ft/s^2 mark. Not much above NIST's curve-fit max.

I'm happy to generate and provide additional data for whatever purpose, but if it's going to evolve into another endless process involving you trying to *debunk* my data, or *discredit* the methods, then I'll apply my time and effort in more personally productive arenas. You're welcome to generate your own data of course. You'll note I've made no claims about the data, other than to defend its integrity.
 
A quick and dirty trace of the three features suggested (NW corner, Near Kink & NE corner)

http://femr2.ucoz.com/_ph/3/280647948.png


Just eyeballing suggests that, as NIST stated 32.196ft/s^2 for their linear fit (32.196 indeed, fractionally above g at 32.174), we can expect a 2D trace of the NW corner to exceed that.
 
femr,

tfk said:
I do not know femr's techniques for producing these numbers.

femr said:
Quite happy to go into as much detail as is necessary.

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

Care to give that a whirl?


Tom
 
A factor to bear in mind is that it's not just my data that exhibits above-G segments, NIST being one of 'em...

http://i33.tinypic.com/34gllzs.gif

Also, the purpose of posting the data initially was your statement about the descent time for WTC 7.

I've provided my raw data, the procedures applied are in the spreadsheet, and there's also a video showing the feature positioning.

I've highlighted the scaling factor progress, and as I've said, my current dataset maxes out around the 36 ft/s^2 mark. Not much above NIST's curve-fit max.

I'm happy to generate and provide additional data for whatever purpose, but if it's going to evolve into another endless process involving you trying to *debunk* my data, or *discredit* the methods, then I'll apply my time and effort in more personally productive arenas. You're welcome to generate your own data of course. You'll note I've made no claims about the data, other than to defend its integrity.

I mean no disrespect to you. But you & I butt heads.

The reasons why are probably best left as water under the bridge. So I'll keep the snark out of the discussion and simply address the technical side.
___

This is good raw data.

You're to be commended for the (clearly extensive) work that you've put into it. You're to be especially commended for your willingness to share your raw data. Something that is exceedingly rare in the truther world.

We have disagreements about several things that overlap into my area: engineering. Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.

The biggest lesson from just a few hours with this data is something that is readily evident in your excel data: the fact that very slight variations in the position vs. time model result in sizable variations in the velocity vs. time model. And then result in enormous variations in the acceleration vs. time model.

With the sensitivity that I've seen, it doesn't surprise me in the slightest that NIST's acceleration exceeded G. It would not surprise me now if their MODEL's result had produced momentary accelerations that approached 2G.

This does not mean that the north wall necessarily fell with this acceleration.

It far more likely means that the MODEL's acceleration curve is extraordinarily sensitive to the slight variations in the acquisition, massaging & HONEST manipulation of the data that is typically used by competent engineers & scientists to try to get the best results possible.
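The sensitivity being described here can be sketched in a few lines. This is a toy model with assumed numbers (30 fps, +/-0.2 ft of position noise), not NIST's or femr's actual trace data:

```python
import numpy as np

# Toy free-fall trace: assumed 30 fps video and +/-0.2 ft position noise.
rng = np.random.default_rng(0)
dt = 1.0 / 30.0                        # frame interval, seconds
t = np.arange(0.0, 2.0, dt)
g = 32.174                             # ft/s^2
true_pos = 0.5 * g * t**2              # ideal drop, feet
noisy_pos = true_pos + rng.normal(0.0, 0.2, t.size)

vel = np.gradient(noisy_pos, dt)       # position -> velocity
acc = np.gradient(vel, dt)             # velocity -> acceleration

pos_err = np.std(noisy_pos - true_pos) # stays around 0.2 ft
acc_err = np.std(acc - g)              # blows up well past g itself
print(pos_err, acc_err)
```

The position error never changes, but two numerical differentiations multiply it by roughly 1/dt^2, which is exactly why the acceleration model swings so wildly.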


Tom
 
This is good raw data.
It's as good as I can get using my current methods and the available footage. I use the very best footage I can lay my hands upon, but it would be better still if I could use the *original* media. SynthEyes does a great job of feature tracking, and I've seen no better. Plotting points manually by eye cannot reliably compare. I'm sure there are improvements possible, including in terms of signal processing, but with diminishing gains.

We have disagreements about several things that overlap into my area: engineering.
Most of what I do is based on the visual record. Engineering doesn't really play much part; take the orientation study, for example. It's just a close look at the visual record using tools and methods honed over time. I think most of the wall between us is built on lack of trust, to be honest.

Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.
I probably don't use formal methods, but it's all done as well as possible with nothing hidden or deliberately distorted. Orientation study: NIST's trajectory and orientation are wrong. Exactly how much is probably the source of some more discussion at some point. The implications are a different kettle of fish. No idea. The only way to know would be a re-run of the simulation with more accurate input parameters.

The biggest lesson of just a few hours for me on this data is something that is readily evident in your excel data: the fact that very slight variations in the position vs. time model result in sizable variations in the velocity vs. time mode. And then result in enormous variations in the acceleration vs. time model.
Absolutely. The biggest gain in generating high fidelity and high sample-rate data is the opportunity to have enough of it to cut through the fuzz by *losing* some of it through things like smoothing. Choice of smoothing method then becomes important, as is understanding the nature of the source (such as the immediate need to deal effectively with deinterlace jitter). An alternative route for jitter is to leave the video unfolded and perform two separate traces of the same feature at half framerate, as each field will not suffer from jitter. I tend to try both and see how they compare. For finding *mini-jolts* sample rate is critical. For finding average acceleration, less so.
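A minimal sketch of that trade-off, using a simple moving average (the window length, field rate and noise level are illustrative assumptions, not femr2's actual settings):

```python
import numpy as np

# Oversampled noisy drop: assumed ~60 Hz field rate, +/-0.1 ft noise.
rng = np.random.default_rng(1)
dt = 1.0 / 59.94
t = np.arange(0.0, 3.0, dt)
g = 32.174
pos = 0.5 * g * t**2 + rng.normal(0.0, 0.1, t.size)

# *Lose* data via a 21-sample moving average before differentiating.
win = 21
smooth = np.convolve(pos, np.ones(win) / win, mode="same")

acc_raw = np.gradient(np.gradient(pos, dt), dt)
acc_smooth = np.gradient(np.gradient(smooth, dt), dt)
core = slice(win, -win)                # skip window edge effects
raw_err = np.std(acc_raw[core] - g)
smooth_err = np.std(acc_smooth[core] - g)
print(raw_err, smooth_err)             # smoothing cuts the noise hugely
```

A symmetric moving average leaves the second derivative of a parabola unchanged, so for constant-acceleration motion it reduces the noise without biasing the recovered acceleration; it would, of course, blur out any genuine short jolts.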

With the sensitivity that I've seen, it doesn't surprise me in the slightest that NIST's acceleration exceeded G.
It would be handy to have their raw data ;)
It does surprise me that their linear fit works out at 32.196ft/s^2 though.

This does not mean that the north wall necessarily fell with this acceleration.
Using wider and wider sample ranges and linear fits would give good approximation without amplifying low level noise.

I tried a running 29 sample wide symmetric difference drop->velocity->acceleration test with very favourable results. Still over-G for the NW corner, but I do think that is to be expected (and actual).
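That kind of windowed fit can be sketched as follows, with a 29-sample symmetric window as mentioned (the frame rate and noise figures are assumed for illustration):

```python
import numpy as np

def rolling_slope(y, dt, win):
    """Least-squares slope of y over a centred win-sample window."""
    half = win // 2
    x = (np.arange(win) - half) * dt
    denom = np.sum(x * x)
    out = np.full(y.size, np.nan)
    for i in range(half, y.size - half):
        seg = y[i - half : i + half + 1]
        out[i] = np.sum(x * (seg - seg.mean())) / denom
    return out

rng = np.random.default_rng(2)
dt = 1.0 / 59.94
t = np.arange(0.0, 4.0, dt)
g = 32.174
drop = 0.5 * g * t**2 + rng.normal(0.0, 0.1, t.size)

vel = rolling_slope(drop, dt, 29)      # drop -> velocity
acc = rolling_slope(vel, dt, 29)       # velocity -> acceleration
mid = acc[60:-60]                      # away from the NaN edges
print(np.nanmean(mid))                 # recovers something close to g
```

Because the window is symmetric, the fitted slope of a parabola equals the true derivative at the window centre, so the wide fit suppresses noise without biasing the recovered acceleration.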

I shall apply some time as soon as possible to traces of the NE corner and kink location, and post the data when I can. Might be a couple of days (quite a time-consuming process; I just did a quick run to generate the previous graph).
 
femr,

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

Care to give that a whirl?

Tom

No problem. Have done so several times though...

I use the professional feature tracking system SynthEyes
http://www.ssontech.com/synsumm.htm

Check out the list of Movies SynthEyes is used on

SynthEyes employs various methods to track features including pattern matching, bright spot, dark spot & symmetric spot.

ETA: I could go into the whole pattern match and spot processes, but essentially it's like the process used by NIST in NCSTAR 1-9 Vol2 C.1.3 p680, but automated and honed over many years of upgrades :)

Its output is sub-pixel 3DP.

To give you an idea of its applications, it will also take a number of tracked points and *solve* the scene into three dimensions (including camera motion, lens distortion, ...) in order to facilitate the inclusion of three-dimensional models into a composite piece of video footage. It's used regularly in the movie industry for live-footage->cgi composite work.

The tracker itself is excellent. As I've said, I've blind-tested it using video generated at high resolution with known feature movements, downscaled and added noise to the video, then used SynthEyes to track the relevant points. Results were shockingly good.

My prerequisite to tracking is very careful preparation of the video footage, performing steps such as interlace unfold and correctly applied bob doubling. All video preparation steps are chosen such that they *cannot* degrade the quality of the original video signal. They can only improve it, or, at worst, match the original.
 
A quick and dirty trace of the three features suggested (NW corner, Near Kink & NE corner)

http://femr2.ucoz.com/_ph/3/280647948.png

Just eyeballing suggests that, as NIST stated 32.196ft/s^2 for their linear fit (32.196 indeed, fractionally above g at 32.174), we can expect a 2D trace of the NW corner to exceed that.


Thanks for providing unequivocal proof that NIST's conclusions are absolutely true.

Do you see it in this graph? It's sitting right in front of you.


Tom
 
Congratulations, femr.

IMNSHO, this is the first significant piece of data that I've seen you produce.

I don't know if anyone has published this data yet. But if not, it is significant. You should be proud.
___

Time out for a sports metaphor...

Unfortunately for you, THIS is what you've accomplished.

Don't worry. It's not like the score was tied. Or that the score was close. Or that the game was even still going on. The game ended about 3 years ago. It was a rout.

Enough with the ESPN moment. Back to the data.
___

ASSUMING THAT both femr & the video software did their jobs (that is, the software has identified & tracked a fixed, well defined location on the northeast corner), then this graph puts the last of 100 nails in the coffin of the Truther nonsense regarding the collapse of WTC7.

It does not seem possible that the motion of the northeast corner is an artifact. Regardless, this will be easily confirmed or denied with a little more investigation. femr, you can do this and produce something really meaningful. First steps will be to carefully check, recheck, and attempt to carefully quantify your info. If you do this well, there's only one thing that I can see that keeps it from being publication-worthy: the fact that the issue was settled a long, long time ago. And the info that you've discovered here simply reinforces, very slightly, what the pros have said for years.

What is absolutely clear from this is that the northeast corner of the building was slowly collapsing for over 4 seconds before the kink developed in the center of the north wall. Just as NIST said. And, as was already known, it was then approximately one second after the kink that the global collapse of the northwest corner began.

There are several critical points.

1. The motion of the northeast corner does not appear to be an artifact. The stability of the building's center and northwest corner DURING the time that the northeast corner was moving rules out most other explanations for artifact motion.

2. Just as NIST said and truthers deny, the collapse of the building was NOT a sudden event. It was instead a continuous process that extended over many seconds before the LAST act: the fall of the North wall. The slow, gradual motion of the northeast corner proves this beyond doubt.

In some fantasy delusion, Truthers have claimed that the collapse of the east penthouse was somehow a separate, unrelated event from the collapse of the North wall. Which they mistakenly refer to as "the collapse of the building". This data completely negates that illusion.

Something the pros knew all along, of course.

This is why there is absolutely zero problem, danger, or risk associated with any additional competent investigation.

Nice catch, femr.


Tom
 
tfk said:
[paraphrase] Please explain exactly how you achieve the "sub-pixel" accuracy that you claim.

No problem. Have done so several times though...

And all of your explanations (that you've provided to me) have amounted to the same result: no explanation at all.

I use the professional feature tracking system SynthEyes
http://www.ssontech.com/synsumm.htm

I know that.

Check out the list of Movies SynthEyes is used on

I'm not particularly interested in movies.

SynthEyes employs various methods to track features including pattern matching, bright spot, dark spot & symmetric spot.

Not what I asked.

ETA: I could go into the whole pattern match and spot processes, but essentially it's like the process used by NIST in NCSTAR 1-9 Vol2 C.1.3 p680, but automated and honed over many years of upgrades

Not an explanation.

Its output is sub-pixel 3DP.

Assertion. Not explanation.

Please don't refer me to someone or someplace else. Just a simple 2 or 3 paragraph explanation in your own words, please.


Tom
 
Just to make the math simple, I indexed z to the start of downward motion.
You mean, of course, that your t=0 corresponds to downward motion of the northwest corner of the roof, which occurs well after the start of downward motion for the east side and center of the roof. That's why you got average accelerations of greater than 1g for the first 1.3 seconds.

3. Hang a heavy weight out in space off of the roof of a building. Attach it to an object on the roof with a beam. Put pivot joints at each end of the beam. Drop the weight. The weight falls at g. Initially, the beam pivots, and the object to which it is tied stays stationary on the roof. Finally, the beam can pivot no more, and the object is jerked off the roof. The object's initial acceleration will be greater than G.

This is similar to the description that you proposed, WD.
Yes, but I didn't postulate a heavy weight; the weight of the roof itself will do fine.

What I described is more similar to a plank (the roof) resting on top of two empty beer cans placed at its extreme ends. Knock the leftmost can (the support for the east corner) away at some time t0. The plank (roof) begins to rotate about the hinge at its rightmost end.

A little after t0, at some time t1, the rightmost end of the plank slips off of the rightmost can. The plank has acquired some angular momentum (counterclockwise) before t1, but the rightmost end was stationary up to t1. After t1, the counterclockwise rotation continues (which means the downward velocity of the rightmost end is less than the downward velocity of the leftmost end) but you have to add the motion of that rotation to the motion caused by downward acceleration of the plank's center of gravity.

The downward acceleration due to gravity began at t0, not at t1. If you look only at the position of the plank's rightmost end starting at t1, you'll see an acceleration greater than 1g but won't understand why.

If you do the math, you'll find that the delta-vee of the rightmost end is greater than would result from an acceleration of 1g for some period of time starting at t1, but is less than 1g's worth for intervals starting at t0 (until the plank rotates to vertical or the leftmost end hits something, which is your scenario 4).
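WD's point about starting the clock at t1 can be checked with trivial numbers (all assumed, none measured from the videos): a point that already has downward velocity v0 when the clock starts will fit to an apparent acceleration above g if you wrongly assume it started from rest.

```python
import numpy as np

g = 32.174                      # ft/s^2
v0 = 10.0                       # ft/s, velocity already present at t1
t = np.linspace(0.05, 1.0, 50)  # time measured from t1, seconds
s = v0 * t + 0.5 * g * t**2     # true displacement after t1

# Least-squares fit of s = 0.5 * a * t^2 (rest at t1 wrongly assumed)
basis = 0.5 * t**2
a_fit = np.sum(s * basis) / np.sum(basis**2)
print(a_fit)                    # apparent acceleration, well above g
```

Nothing super-gravitational is happening: the fit simply absorbs the pre-existing velocity into a larger apparent acceleration.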

4. Everything starts just as WD described it. The wall fails first near the east end, where we know the initial failure occurred (i.e., at the kink). The east end of the wall falls nearly at G because it has a multi-story buckling failure. The wall as a whole falls & pivots counterclockwise, west end high, because the west end has not yet failed. The bulk of the wall & attached structure builds up momentum.

This is just like the situation you described, WD.
Yes, that is what I described. In free fall, however, the left end of the plank (the eastern corner of the roof) could experience an acceleration of greater than 1g because the center is accelerating at almost 1g and the left end of the plank is accelerating downward faster than the center.

Suddenly, some point at the bottom of the falling section near the east end hits some significant resistance. As we know it ultimately will.

As long as the impact point was east of the wall's c.g., this impact would transmit a huge dynamic load thru the wall, perhaps instigating the failure at the west end of the wall.

If true (and it does make sense), there should be a sudden, perhaps measurable drop in both the CCW angular velocity of the wall & the downward linear velocity of the wall towards the east end just before the west end starts its fall.
That's entirely plausible, but I'd like to emphasize that no such event is needed to explain the northwest corner's acceleration at more than 1g when measured from time t1 (your t=0) or the east corner's acceleration at more than 1g when measured from time t0 (before your t=0).

femr,

I've asked you several times to describe in some detail exactly how you get your "sub-pixel" resolution.

femr2's just using the software; he didn't write it and probably doesn't understand all of its algorithms. I don't either, but I can demystify this a bit by explaining a couple of standard techniques.

The first technique is easiest to explain if we assume the software is tracking a light-colored feature against a dark background, and the light-colored feature is approximately one pixel in size.

If the light-colored feature is perfectly centered within a pixel, then that pixel would be the color of the feature while all 8 pixels that surround it would be the color of the dark background.

If the light-colored feature is exactly halfway between the centers of two pixels, then both of those pixels would be the same color, lighter than the background but not as bright as a single pixel would be if the feature were centered within it.

If we do the math and work out a threshold for distinguishing between the two scenarios above, we'll get half-pixel resolution. (The math is a little more complicated than I made it sound, because a pixel-sized feature could overlap with as many as 4 pixels.) If you iterate that process a second time, you can get quarter-pixel resolution; I think that's pretty close to the practical limit of this technique.
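One standard way to turn those inter-pixel intensities into a position is the intensity-weighted centroid, a close relative of the thresholding described above (shown here as an illustration of the principle, not as SynthEyes' actual algorithm):

```python
import numpy as np

def subpixel_centroid(img):
    """Intensity-weighted centroid of an image patch: a standard
    sub-pixel locator for a bright feature on a dark background."""
    ys, xs = np.mgrid[0 : img.shape[0], 0 : img.shape[1]]
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# A one-pixel bright feature straddling two pixels equally:
frame = np.zeros((5, 5))
frame[2, 2] = 0.5
frame[2, 3] = 0.5
x, y = subpixel_centroid(frame)
print(x, y)   # x = 2.5, y = 2.0: a half-pixel position
```

With realistic noise the centroid degrades gracefully rather than failing outright, which is one reason trackers can quote sub-pixel precision at all.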

A second technique exploits the improved spatial resolution that can be obtained by integrating successive frames of a video. Synthetic aperture radar is a well-known application of a similar technique.
 
Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

As I said, I'll do the traces over the next couple of days.

What would you interpret from this animated GIF ?

http://femr2.ucoz.com/NIST_WTC7_redo-newoutsmallhy.gif


As for how sub-pixel tracing accuracy is possible, I think the clearest way is to use a visual example (as is my wont).
I'm not really into protracted technical verbiage.

I generated the following animation of a simple circle moving in a circle itself...
[attached image: 26543418.gif]


I then downscaled it until the circle is a small blob...
[attached image: 372817686.gif]


If we look at a blow-up of the shrunken image, such that we can see the individual pixels, it looks like this...
[attached image: 477542845.gif]


Sub-pixel feature tracking essentially works by enlarging the area around the feature you want to track, and applying an interpolation filter to it. Lots of different filters can be used with varying results.

Applying a Lanczos3 filter to the previous GIF, to smooth the colour information between each pixel, results in the following...
[attached image: 371929626.gif]


I think you will see that there will be no problem for a computer to locate the centre of the circle quite accurately in that smoothed GIF, even though the circle in the original tiny image was simply a random looking collection of pixels. This process of upscaling and filtering generates arguably more accurate results than simply looking at inter-pixel intensities.

The resulting position determined is therefore clearly sub-pixel when translated back into the units of the original tiny source.
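A minimal 1-D sketch of that upscale-and-interpolate step, using a Catmull-Rom cubic as a simple stand-in for the Lanczos3 filter named above (same principle, easier to write out; the blob position and sizes are made-up illustration numbers):

```python
import numpy as np

def catmull_rom_upscale(y, factor):
    """1-D Catmull-Rom cubic interpolation of samples y, evaluated on
    a grid 'factor' times finer (interior samples only)."""
    n = y.size
    grid = np.linspace(1, n - 3, (n - 4) * factor + 1)
    out = np.empty_like(grid)
    for k, t in enumerate(grid):
        i = int(t)
        f = t - i
        p0, p1, p2, p3 = y[i - 1], y[i], y[i + 1], y[i + 2]
        out[k] = 0.5 * (2 * p1 + (-p0 + p2) * f
                        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * f**2
                        + (-p0 + 3 * p1 - 3 * p2 + p3) * f**3)
    return grid, out

# A blurred blob whose true centre (3.3) sits between pixel centres.
xs = np.arange(8)
samples = np.exp(-((xs - 3.3) ** 2) / 2.0)

grid, fine = catmull_rom_upscale(samples, 64)
peak = grid[fine.argmax()]
print(peak)   # lands between pixels 3 and 4: a sub-pixel estimate
```

Bilinear interpolation would not work here, since a piecewise-linear surface can only peak at the original sample positions; that is one reason higher-order kernels like Lanczos are used for this job.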

It is a side effect of aliasing that small movements of the object cause slight variation in inter-pixel intensity, saturation and colour.

Tracing the position of the small blob on the tiny version results in the following...
[attached image: 323039059.png]


The raw data is here...
http://femr2.ucoz.com/SubPixelTracing.xls


The graph shows accurate (not perfect) sub-pixel location data for the position of the small blob.

I could go into more detail, but hope that clarifies.

ETA: Another test using exactly the same small blob rescale, but extended such that it takes 1000 frames to perform the circular movement. This results in the amount of movement being much much smaller between frames. This will give you an idea of how accurate the method can be...

[attached graph (click to zoom)]

Would you believe it eh.

Here's the first few samples...
Code:
0	0
-0.01	0
-0.024	-0.001
-0.039	-0.001
-0.057	-0.001
-0.08	-0.002
-0.106	-0.002
-0.136	-0.002
-0.167	-0.002
-0.194	-0.004
-0.214	-0.005
-0.234	-0.005
-0.251	-0.007
-0.269	-0.008
-0.289	-0.009
-0.31	-0.009
-0.337	-0.012
-0.365	-0.014
-0.402	-0.015
-0.431	-0.018
-0.455	-0.019
-0.48	-0.02
For this example, I'll quite confidently state that the 3rd decimal place is required, as accuracy under 0.01 pixels is clear. There are other sources of distortion, such as the little wobbles in the trace, which are caused by side-effects of the smoothing and upscaling when pixels cross certain boundaries. This reduces the *effective* accuracy. It can be quantified by graphing the difference between the *perfect* path and the trace location, but I'm not sure how much it matters.
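That quantification (the difference between the *perfect* path and the trace) reduces to a couple of lines once both are arrays. Synthetic stand-in numbers here, not the actual xls data:

```python
import numpy as np

# Known circular path vs. a traced version with ~0.01 px of wobble
# (the wobble level is an assumed stand-in for the real trace error).
theta = np.linspace(0.0, 2.0 * np.pi, 200)
perfect = np.column_stack([np.cos(theta), np.sin(theta)])
rng = np.random.default_rng(3)
traced = perfect + rng.normal(0.0, 0.008, perfect.shape)

err = np.linalg.norm(traced - perfect, axis=1)   # per-frame error, px
rms = np.sqrt(np.mean(err**2))
worst = err.max()
print("RMS error (px):", rms)
print("worst frame (px):", worst)
```

Reporting both the RMS and the worst-case frame gives a defensible *effective* accuracy figure rather than a best-case one.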

Now, obviously this level of accuracy does not directly apply to the video traces, as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.
 
Hey WD,

You mean, of course, that your t=0 corresponds to downward motion of the northwest corner of the roof, which occurs well after the start of downward motion for the east side and center of the roof. That's why you got average accelerations of greater than 1g for the first 1.3 seconds.

I see what you're saying. That's an interesting thought.

And all of your comments apply not only for motion & dynamics within the plane of the north wall, but for those perpendicular to it as well.

femr should easily be able to catch the dynamics if they are within the plane of the wall. In fact, it's likely that he already has. It'd be a lot more difficult, of course, to see them if it's the collapse of the internal framework that drags it down in the way that you're describing. Perhaps that conclusion will be reached by the process of eliminating the within the north wall plane possibility.

It'd be very useful if he'd provide the same curves for the motion of several (say about 3 additional) points along the roofline. Also, the motion of the four corners of the black louvres on that wall would be very useful to give angular rotation of the wall.

Yes, but I didn't postulate a heavy weight; the weight of the roof itself will do fine.

Well, the weight of all the structure, of course.

That's entirely plausible, but I'd like to emphasize that no such event is needed to explain the northwest corner's acceleration at more than 1g when measured from time t1 (your t=0) or the east corner's acceleration at more than 1g when measured from time t0 (before your t=0).

I agree completely. And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.

femr2's just using the software; he didn't write it and probably doesn't understand all of its algorithms. I don't either, but I can demystify this a bit by explaining a couple of standard techniques.

Thanks.

I wanted a clear explanation of the specific techniques that he used in this case.

Even now, he's given "what can be done". And alluded to techniques "similar to what NIST used".

My skepticism is based on the fact that NIST had access to a boatload of world-class experts in videogrammetry.

As I said, the NIST report gives NIST's estimate of their error as ±6'. femr is claiming 6 inches.

I wasn't born a giant pain-in-the-ass skeptic. But I did grow into it.


Tom
 
Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

I'm not really into protracted technical verbiage.

It seems Tom's (tfk) unwarranted arrogance and pomposity just doesn't lend itself to being appreciated.

The guy (if he is actually a guy) writes like he thinks he is the only one in the world who knows something about anything, and he doesn't seem to mind being abusive.

I have a feeling that the real reason for his not being forthcoming with his identity is that he feels his anonymity allows him to project an air of authority, as a ploy in the discussion, in a way that he wouldn't be able to if his identity were known. His excuse for not revealing his identity, that he is afraid of some form of retaliation by those who don't buy the government line on the events of Sept. 11, 2001, has no real basis. There are others, like Ryan Mackey and Ron Wieck, who publicly support the official story, and nobody has threatened them or attempted to harm them in any way that I know of.

There can be no credibility given to anonymous posters since they face no risk of losing that credibility.
 
femr should easily be able to catch the dynamics if they are within the plane of the wall.
It may also be possible to resolve the movement into three dimensions. Deformation of the facade is the only stumbling block to doing that with ease.

Perhaps that conclusion will be reached by the process of eliminating the within the north wall plane possibility.
NW and NE corners are clearly moving in three dimensions. Flex behaviour of the north face can be seen simply by scrubbing through video of the descent.

It'd be very useful if he'd provide the same curves for the motion of several (say about 3 additional) points along the roofline.
The lack of decent contrast at the roofline makes tracing features along it very prone to error. It will only be possible to provide traces on the roofline for sections of the full clip length. The Near Kink trace provided uses the TL corner of the *black box* on the facade. It's simply not possible to identify the actual position of the roofline for much of the width of the building. Yes, NIST's raw data would be quite useful.

Also, the motion of the four corners of the black louvres on that wall would be very useful to give angular rotation of the wall.
Okay. In addition I'm tracking a number of points down the East edge.

And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.
Until I have access to a longer version of the Camera #3 footage, there is no raw data available to confirm that timing.

I wanted a clear explanation of the specific techniques that he used in this case.

Even now, he's given "what can be done". And alluded to techniques "similar to what NIST used".
I've provided you with a very clear example, including the very simple process (upscaling and interpolating) that allows sub-pixel tracing to actually work. As I said, I can go into more detail, though it shouldn't really be necessary.

If you want more hand-holding, fine. We can go into pattern matching algorithms, FFT, search ranges, all manner of gibberish (aka unnecessary protracted technical banter)

My skepticism is based on the fact that NIST had access to a boatload of experts in vidoegrammetry. Guys that are world class experts in these techniques.
I'll take that as a compliment, regardless of your intent.

As I said, the NIST report gives NIST's estimate of their error as ±6'. femr is claiming 6 inches.
You're confusing metrics between the Flight 175 video studies and WTC 7. If you study the Camera 3 details in the NIST report, you'll find they state rather higher levels of accuracy post-noise-treatment.

My noise level estimate for the NIST Camera #3 footage traces currently stands at +/- 0.7ft. I think even you are capable of working out that's a 1.4ft (16.8 inch) band.

However, my stated estimation is noise level. I'm inclined to suggest that post-noise removal (smoothing) it's valid to state a higher level of accuracy. Haven't done so yet though.

Try and remain accurate Tom.

It's worth pointing out again that the roofline does not contain appropriate contrast detail for accurate tracing to be performed. NIST's use of a (badly defined) point on the roofline is therefore a source of lack-of-confidence in their trace graphs (the raw data for which has not been released).

I wasn't born a giant pain-in-the-ass
Bearing in mind that you appear to have first encountered noisy data only a couple of days ago, have no obvious personal experience with or understanding of any of the techniques being used, are clearly prepared to draw conclusions from a simple graph of *quick and dirty* trace data, hand-wave away the proof-of-process details provided, and seem not to have looked at the movement of WTC 7 on video in any kind of detail before (as you seem surprised by movement many have seen since their first scrub through the video)...forgive me for stating the obvious...

You're not really in a position to be complaining Tom.

If there are things you don't understand, by all means ask.

Is there some additional detail about sub-pixel tracing techniques you don't understand ?

Obviously the exact inner-workings of SynthEyes are not public, but feature tracking systems all share a common set of baseline methods, which I'll go into further detail upon if absolutely necessary.

Oh, and just a small point...NIST do not appear to have extracted static point data from their traces, and also appear to have overlooked horizontal perspective implications (as their vertical distance metric was derived from a different horizontal position to their roofline trace position). Ho hum eh.
 
There can be no credibility given to anonymous posters since they face no risk of losing that credibility.
Any information I provide is attached to my *name*, and it is simply the responsibility of others to focus on the validity of the information, rather than the person it originates from.

I have no desire, nor intention, to converse under any name other than femr2.
 
femr,

Tom,

I do not appreciate you using your posts to me as some form of venting opportunity.
If you must vent, separate it from your dialogue with me.

This is not "venting". It's stating one of the most fundamental principles of Measurements and Analysis 101.

I've posted THIS for you before. Please watch it. Pay close attention. It'll only take about 10 seconds.

The KEY comment:
MIT physics professor said:
"There is an uncertainty in every single measurement. Unless you know the uncertainty, you know absolutely NOTHING about your measurement."

When I was an engineering student, if we turned in our lab results without an error analysis, it was not accepted. If we didn't get it done, we got a failing grade. Even if we did all the analysis exactly right.

In a previous post, you replied to my comment:

tfk said:
[You & I have disagreed about] ... Especially the recognition & quantification of data errors and their huge effect on the interpretation of data.

femr said:
I probably don't use formal methods, but it's all done as well as possible with nothing hidden or deliberately distorted.

You seem to be suggesting that you think an error analysis is a statement of a researcher's mistakes or dishonesty. Nothing could be further from the truth.

It is an acknowledgment of the inescapable fact that there is an inherent error in every single measurement that anybody, anywhere takes. And a formal analysis of how much those errors will impact any calculated conclusion.

This is an absolutely crucial component of any measurement analysis.

The LACK of an understanding and clear statement of the sources & magnitudes of one's measurement errors is incompetence.

The LACK of an acknowledgment of those errors is dishonesty.

What I can tell you is this: the single biggest lesson of error analysis is a shocked: "I cannot BELIEVE that, after taking all those measurements so carefully, we've got to acknowledge THIS BIG an error." But that was precisely the message that our instructors were trying to get us to appreciate.
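To make that lesson concrete, here's a toy Python sketch (my illustrative numbers, nobody's measured values) propagating a per-frame position uncertainty through a central second difference — the naive way to get acceleration from a position trace:

```python
import math

def accel_uncertainty(sigma_px, ft_per_px, fps):
    """Uncertainty of a = (x[i-1] - 2*x[i] + x[i+1]) * fps**2 when each
    position sample carries an independent uncertainty sigma_px (pixels).
    The finite-difference coefficients (1, -2, 1) combine in quadrature:
    sqrt(1 + 4 + 1) = sqrt(6)."""
    sigma_ft = sigma_px * ft_per_px
    return math.sqrt(6.0) * sigma_ft * fps ** 2

# Illustrative only: 0.2 px jitter, 0.5 ft per pixel, NTSC 29.97 fps
print(round(accel_uncertainty(0.2, 0.5, 29.97)))  # ~220 ft/s^2 per raw sample
```

A fifth of a pixel of jitter swamps 1 g (~32 ft/s^2) many times over at the raw-sample level, which is why any acceleration claim needs smoothing or curve fitting, and a stated uncertainty to go with it.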

What would you interpret from this animated GIF ?
http://femr2.ucoz.com/NIST_WTC7_redo-newoutsmallhy.gif

Are you talking about "what this video says about the continuum vs. discrete process of collapse"?

If that's your topic, the drop of the penthouse says to an experienced structural engineer the same thing that the collapse of the previous (east) penthouse said, and that the subsequent collapse of the north wall confirmed: "the building is in the process of collapsing".

And a careful analysis - like your "east corner" graph confirms that.

And there is absolutely nothing that can be seen in this gif to deny that.

The resulting position determined is therefore clearly sub-pixel when translated back into the units of the original tiny source.

There are a half-dozen or more features of your tiny, circling pixel that are not available to you in the WTC videos.

That's not to say that Syntheyes cannot do significant image enhancement. That is to say that the results of that enhancement are not going to be anywhere near as effective as your example.

It is a side effect of aliasing that small movements of the object cause slight variation in inter-pixel intensity, saturation and colour.

And a principal source of aliasing in video images is leakage in the stop bands of image processing filters. Did you get that? "Filters". As in "things that ALTER the exact rendition of your measured quantity." Filters that have altered your video before you ever got your hands on it.

Not hugely. Subtly.

UNTIL you get down to the individual pixel level.

... The graph shows accurate (not perfect) sub-pixel location data for the position of the small blob.

I could go into more detail, but hope that clarifies.

No, actually it does not.

I didn't ask what type of sub-pixel techniques your program CAN use. I asked which specific ones you DID use.

ETA: Another test using exactly the same small blob rescale, but extended such that it takes 1000 frames to perform the circular movement...

Utterly irrelevant to WTC video. You don't have repetitive, oscillating motion that you can sample multiple times. You don't have 1000 frames of motion to analyze.

For this example, I'll quite confidently state that the 3rd decimal place is required, as accuracy under 0.01 pixels is clear.

Any talk about 1/100 of a pixel in any discussion of available WTC videos is fantasy.

Can be quantified, by graphing the difference between *perfect* and the trace location, but not sure how much it matters.

Stop talking about quantifying it. Stop demeaning the valuable act of quantifying your error. Do it.

But do it right. Not like you have been half-assedly doing it. (See next paragraph.)

Now, obviously this level of accuracy does not directly apply to the video traces,

That's right. It doesn't apply.

(As an aside: You probably shouldn't have brought it up, then. It makes it look like you are indulging in "baffling with the bs".)

This whole method that you use to guess your accuracy - creating perfect graphic images, with 100% contrast, with perfectly defined edges, moving in perfectly geometric, repeating motions, at any number of frame acquisitions, at any spatial resolution, applying perfectly symmetrical blurs & then using filters & processing to reconstruct your original shape & motion - is self-deluding.

Your artificially created reference video ignores the 20 - 100 sources of unpredictable, asymmetric, non-constant in space & time distortions that occur before the image gets to disk.

as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.

My guess, based on measurements that I've made in the past: If you performed a real error analysis, and you used re-interlaced video, you'd find that during the collapse (the time of interest, of course), you've got uncertainties of about ± 2 to ±3 pixels before image enhancements & ±1 to ±2 pixels after image enhancements.

If you use single frames, then your uncertainties will be about twice as large.

Now, I could be wrong about that. Yeah, it's happened before.

Do you know what it would take to convince me that I'm wrong? And to get me to immediately change my mind?

Funny thing. It'd take a competent error analysis.
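The arithmetic of such an analysis is trivial once both series exist — a Python sketch with made-up numbers standing in for a reference track and a measured trace:

```python
import numpy as np

# Hypothetical positions in pixels: a known reference vs. a traced result
reference = np.array([10.0, 10.5, 11.0, 11.5, 12.0])
traced    = np.array([10.1, 10.4, 11.2, 11.4, 12.1])

residual = traced - reference
rms  = np.sqrt((residual ** 2).mean())   # root-mean-square error
peak = np.abs(residual).max()            # worst-case deviation
print(round(rms, 3), round(peak, 3))     # 0.126 0.2
```

Hard numbers like these, computed on a synthetic clip degraded to match the real footage, would settle the ±0.2 px versus ±2 px disagreement one way or the other.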


Tom
 
tfk said:
Time out for a sports metaphor...

Unfortunately for you, THIS is what you've accomplished.

Don't worry. it's not like the score was tied. Or that the score was close. Or that the game was even still going on. The game ended about 3 years ago. It was a rout.

Enough with the ESPN moment. Back to the data.

This is not "venting". It's stating one of the most fundamental principles of Measurements and Analysis 101.

No Tom, it is venting your personal viewpoint of what you perceive to be *the truth movement*, its viewpoint, its goals and intentions.

Unfortunately for me ? What kind of idiot are you Tom ?

Keep your personal crusade out of posts to me.

I've posted THIS for you before.
No, you haven't.

Please watch it.
When I have time perhaps.

It is an acknowledgment of the inescapable fact that there is an inherent error in every single measurement
No excrement Sherlock. Moving on.

Are you talking about "what this video says about the continuum vs. discrete process of collapse"?
No, I'm asking you what you see :)

there is absolutely nothing that can be seen in this gif to deny that.
Just WHAT is your problem Tom ? Who exactly is denying what exactly ?

There are a half-dozen or more features of your tiny, circling pixel that are not available to you in the WTC videos.
Such as...

That's not to say that Syntheyes cannot do significant image enhancement. That is to say that the results of that enhancement are not going to be anywhere near as effective as your example.
Feature tracking is not image enhancement Tom. I'm fully aware of the difference between a perfect test case and real video, and made such clear earlier...
femr2 said:
Now, obviously this level of accuracy does not directly apply to the video traces, as they contain all manner of other noise and distortion sources. For previous descent traces I've estimated +/-0.2 pixels taking account of noise.
Why is it that AFTER you have read such statements, you seem to reword them in a shielded derisory manner then state them as if they are something you had known for ever and ever eh ? :)

And a principle source of aliasing in video images is leakage in the stop bands of image processing filters. Did you get that?
HA HA. You're looking in from the wrong end of the video chain Tom. The primary source of aliasing in live video footage is that the lens focusses light from a region of space onto a single CCD receptor. It's a side effect of the optical hardware first, the CCD properties second, the internal storage compression artefacts third, then gets some messing up from subsequent processes such as format conversion and contrast enhancement (which IS applied to the NIST video, by NIST).
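Whichever end of the chain you start from, the core effect is easy to demonstrate: each sensor element integrates light over a finite area, so a sub-pixel shift of a feature changes the sampled values. A toy Python sketch (idealized step edge, no optics or compression, all numbers invented):

```python
import numpy as np

def sample_edge(edge_pos, n_px=8, oversample=100):
    """Area-sample a perfect dark-to-bright step edge located at
    `edge_pos` (in pixel units): each pixel averages a finely sampled
    signal across its width, mimicking a CCD element collecting light
    over a finite region."""
    fine = (np.arange(n_px * oversample) / oversample) >= edge_pos
    return fine.reshape(n_px, oversample).mean(axis=1)

a = sample_edge(3.25)
b = sample_edge(3.75)   # same edge, moved half a pixel
print(a[3], b[3])       # boundary pixel: 0.75 vs 0.25
```

The boundary pixel's grey level encodes where the edge sits inside it — which is what sub-pixel trackers exploit, and also why every later stage that blurs or requantizes those grey levels matters.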

This whole method that you use to guess your accuracy
You just can't get it can you Tom. It's not any kind of attempt to guess WTC trace accuracy. It's an example of the validity of sub-pixel feature tracing.

re-interlaced video
The simplest way to illustrate you talking out of your ass Tom.

Now, I could be wrong about that. Yeah, it's happened before.
You've been learning lots recently. No need to stop now.

Do you know what it would take to convince me that I'm wrong? And to get me to immediately change my mind?
Frankly I really don't care Tom. You are not going to change. You are a pompous idiot with delusions of grandeur. If you can separate your *snark* (as you put it) from courteous dialogue, I'll continue discussion with you. If not, do one. I've got better things to do.
 
femr,

Okay. In addition I'm tracking a number of points down the East edge.

Good. I'll look forward to seeing your data.

tfk said:
And we know that the start of the building's collapse began about 9 seconds (IIRC) before the beginning of the collapse of the northwest corner.

Until I have access to a longer version of the Camera #3 footage, there is no raw data available to confirm that timing.

We "know" it in the same way that we know the Challenger blew up because of an o-ring failure. And the Columbia blew up because of wing damage: "Because competent experts analyzed the situation, and stated clearly that there was sufficient hard evidence to support those conclusions."

I've provided you with a very clear example, including the very simple process (upscaling and interpolating) that allows sub-pixel tracing to actually work. As I said, I can go into more detail, though it shouldn't really be necessary.
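For anyone following along, the upscale-and-interpolate idea can be sketched in a few lines of Python — a rough illustration only (an intensity centroid on a bilinearly upscaled blob, not SynthEyes' pattern matcher), with every number invented:

```python
import numpy as np

def upscaled_centroid(img, factor=8):
    """Bilinearly resample `img` onto a grid `factor` times finer, then
    take the intensity-weighted centroid in original pixel units.
    Illustrates how interpolation exposes sub-pixel position."""
    h, w = img.shape
    ys = np.linspace(0.0, h - 1.0, h * factor)
    xs = np.linspace(0.0, w - 1.0, w * factor)
    up = np.empty((h * factor, w * factor))
    for i, y in enumerate(ys):
        y0 = min(int(y), h - 2)          # row pair to blend
        fy = y - y0
        row = img[y0] * (1.0 - fy) + img[y0 + 1] * fy
        up[i] = np.interp(xs, np.arange(w), row)
    total = up.sum()
    cy = (up * ys[:, None]).sum() / total
    cx = (up * xs[None, :]).sum() / total
    return cy, cx

# Synthetic blob whose true centre (4.3, 5.7) falls between pixels
yy, xx = np.mgrid[0:10, 0:12]
blob = np.exp(-((yy - 4.3) ** 2 + (xx - 5.7) ** 2) / 4.5)
print(upscaled_centroid(blob))   # close to (4.3, 5.7)
```

The recovered position is clearly sub-pixel on a clean blob; how far that accuracy survives real noise and compression is precisely the ±0.2 px question.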

If you want more hand-holding, fine. We can go into pattern matching algorithms, FFT, search ranges, all manner of gibberish (aka unnecessary protracted technical banter)

And, once again, I know that there are dozens of techniques used to perform this sort of enhancement. I am NOT asking "what generic techniques are available?"

I am asking, "which specific ones did you use in THIS specific analysis?"

You're getting confused about the metrics between the Flight 175 video studies and WTC 7. If you study the Camera 3 details in the NIST report, you'll find they state rather higher levels of accuracy post-noise-treatment.

No, I'm not getting confused.

NIST used moire analysis to enhance their resolution of the lateral movement of WTC7. You haven't discussed that technique in your discussion of the building's collapse.

And moire techniques were only available with WTC7 because the building had a repeating pattern of straight vertical lines. Something that the airplane lacked. So that technique was out.

But all of the techniques that you have discussed would have been readily available to NIST video analysts when attempting to measure the speed of the plane. They went thru great effort to get as accurate info as possible. And they still ended up with video based errors of something around ±40 mph.

Try and remain accurate Tom.

Always do.

NIST's use of a (badly defined) point on the roofline is therefore a source of lack-of-confidence in their trace graphs (the raw data from which has not been released).

It's a source of "lack of confidence" in the baselessly suspicious & paranoid.

There are 100,000 trivial little details that they didn't specify. Most of them because they were analyzed & tossed out, deemed irrelevant or trivial.

Otherwise the report would have been 10x as big & taken 10x as long.

Bearing in mind you appear to have come into contact with data containing noise only a couple of days ago,

try 40 years' worth of noise-laden data...

have no obvious personal experience or understanding of any of the techniques being used,

Wrong.

It would be a mistake for you to assume that my asking you to explain these processes in your own words is so that I can learn them.

It would be a mistake for you to assume that I have no professional experience in video processing of moving images.

are clearly prepared to make conclusions on a simple graph of *quick and dirty* trace data,

Wrong.

I have several conclusions based on my spending about 3 hours running many analyses of various models. After spending about 6 hours writing & debugging a program to do the analysis. It'll take me little time to write up the results. I'll post them here within the next week.

hand-wave away provided proof-of-process details,

You've provided "proof-of-process details"??

That dot & moving circle was your "PROOF of process" for your sub-pixel accuracy claims in the WTC video?

really...?

and it seems you have not actually looked at the movement of WTC 7 in video form in any kind of detail before (as you seem surprised by movement many have seen since the first scrub through video)

You're correct that I haven't extracted my own data from video. No need, IMO. I haven't been the one claiming to do video analysis. And for commenting on others' (like yours), I'm perfectly happy to use their raw data.

Unless of course, like Chandler, they refuse to supply it. Then I'll just discount them as secretive, insincere investigators.

...forgive me for stating the obvious...

Be my guest. I've been known to do the same on occasion.

You're not really in a position to be complaining Tom.

Please... Complaining is for drama queens. I haven't been complaining. I've been "commenting".

Is there some additional detail about sub-pixel tracing techniques you don't understand ?

Yes. What I've asked you about 10x in the last week.

Which SPECIFIC technique did you use in your specific analyses?

Obviously the exact inner-workings of SynthEyes are not public, but feature tracking systems all share a common set of baseline methods, which I'll go into further detail upon if absolutely necessary.

Consider my request to be, IMO, "absolutely necessary".

Oh, and just a small point...NIST do not appear to have extracted static point data from their traces, and also appear to have overlooked horizontal perspective implications (as their vertical distance metric was derived from a different horizontal position to their roofline trace position). Ho hum eh.

NIST did not "extract static data points"??

What do you think that the dots on their "drop distance vs. time" graph were?

Or do you mean "location of the roof prior to collapse"?

If that is the case, what do you think all that commentary about the horizontal movement of the roof prior to global collapse was about?

Perhaps, rather than my guessing, you should tell me what you mean by "static data points".


Tom
 