
Merged Discussion of femr's video data analysis

How did NIST err?

The NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

  • NIST did not deinterlace their source video. This has two main detrimental effects: 1) each image they examine is actually a composite of two separate points in time, and 2) the number of available frames is instantly halved, discarding half of the available video information. Tracing features using interlaced video is not a good idea; I have gone into detail on issues related to tracing features in interlaced video previously.
  • NIST did not sample every frame, reducing the sampling rate considerably and reducing available data redundancy for the purposes of noise reduction and derivation of velocity and acceleration profile data.
  • NIST used an inconsistent inter-sample time-step, skipping roughly 56 of every 60 available unique images. They ignored over 90% of the available positional data.
  • NIST likely used a manual (hand-eye) tracking process on a single pixel column, rather than a tried-and-tested feature tracking method such as those provided in systems like SynthEyes. Manual tracking introduces a raft of accuracy issues. Feature tracking systems such as SynthEyes employ an automated region-based approach entailing upscaling of the target region, application of Lanczos3 filtering, and pattern matching (with a figure of merit, FOM) to locate the initial feature pattern in subsequent frames to sub-pixel accuracy (a minimal sketch of this idea follows the list).
  • NIST tracked the *roofline* using a single pixel column, rather than an actual feature of the building. The trace is therefore not of an actual point on the building, as the building does not descend completely vertically; the tracked pixel column corresponds to a rather meaningless point on the roofline which wanders left and right as the building moves east and west.
  • NIST used the Cam#3 viewpoint, which includes significant perspective effects (such as early motion that is actually north-south rather than up-down, yet appears to be vertical motion). It also means that each horizontal position across the facade requires calculation of a unique scaling metric, which NIST do not appear to have done.
  • NIST did not perform perspective correction upon the resultant trace data.
  • NIST did not appear to recognise that the initial movement at their chosen pixel column was primarily north-south movement resulting from twisting of the building before the release point of the north facade.
  • NIST did not perform static point extraction (H, V). Even when the camera appears static, there is still (at least) fine movement. Subtracting static-point movement from trace data significantly reduces camera-shake noise, and so reduces track data noise.
  • NIST did not choose a track point which could actually be identified from the beginning to the end of the trace, and so they needed to splice together information from separate points. Without perspective correction the scaling metrics for these two points resulted in data skewing, especially of the early motion.
  • NIST performed only a linear approximation for acceleration, choosing not to differentiate their chosen displacement function further.
  • NIST's displacement function, if differentiated to obtain acceleration over time, contains a ~1s period of over-g acceleration (a differentiation sketch follows this post).
  • NIST's displacement function, if differentiated to obtain acceleration over time, does not suggest a 2.25s period of roughly gravitational acceleration.
  • The displacement data appears to have been extracted initially from the T0 pixel column, but using the scaling factor determined for a point above Region B, further skewing the displacement data.
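
The tracking bullet above is the easiest to make concrete. Below is a minimal sketch of region-based sub-pixel matching, assuming OpenCV is available; the upscale factor, parameter names and helper function are illustrative choices, not SynthEyes internals (note OpenCV offers Lanczos4 rather than Lanczos3 interpolation).

```python
import cv2

UPSCALE = 8  # illustrative: upscaling lets an integer-pixel match land on a sub-pixel grid

def track_feature(frame, template, search_tl, search_size):
    """Locate `template` inside a search window of `frame`, returning
    sub-pixel coordinates in the original frame. Hypothetical helper,
    not SynthEyes code."""
    x0, y0 = search_tl
    w, h = search_size
    region = frame[y0:y0 + h, x0:x0 + w]

    # Upscale both the search region and the template before matching.
    big_region = cv2.resize(region, None, fx=UPSCALE, fy=UPSCALE,
                            interpolation=cv2.INTER_LANCZOS4)
    big_tmpl = cv2.resize(template, None, fx=UPSCALE, fy=UPSCALE,
                          interpolation=cv2.INTER_LANCZOS4)

    # Pattern matching; the normalised score acts as the figure of merit (FOM).
    scores = cv2.matchTemplate(big_region, big_tmpl, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_loc = cv2.minMaxLoc(scores)

    # Map the best match back to (fractional) source-frame coordinates.
    return x0 + best_loc[0] / UPSCALE, y0 + best_loc[1] / UPSCALE, best_score
```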

Not a great show of competence. Let us hope that a similar raft of issues is not present for other sections of the report. It'll all come out in the wash.
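
On the over-g point: given any displacement-versus-time trace, differentiating twice shows directly whether the implied acceleration ever exceeds g. A minimal sketch with NumPy; the displacement series here is a made-up placeholder, not NIST's or femr2's data.

```python
import numpy as np

G = 9.81          # m/s^2
dt = 1.0 / 59.94  # one sample per deinterlaced NTSC field

# Placeholder displacement trace (metres) -- a stand-in curve, not real trace data.
t = np.arange(0.0, 6.0, dt)
displacement = 0.5 * G * 0.9 * t**2

velocity = np.gradient(displacement, dt)      # first derivative
acceleration = np.gradient(velocity, dt)      # second derivative

over_g = acceleration > G
if over_g.any():
    print(f"over-g from t={t[over_g][0]:.2f}s to t={t[over_g][-1]:.2f}s")
else:
    print("no over-g interval in this trace")
```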
 
How do you know that all these algorithms don't smooth away signal?
Of course some signal is lost, especially high-frequency signal data. Of course. I'm not looking for mini-jolts. I'm looking at the trend. Your point?

Answer: You don't, unless you know already what's going on.
LOL. I do. Some signal is lost during smoothing. So is some noise. Some noise remains. Some signal remains. Frequency domain is useful ;)
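
To unpack the frequency-domain quip: broadband tracking noise spreads across the whole spectrum, while the collapse trend sits at low frequency, so a low-pass step removes mostly noise and comparatively little signal. A toy sketch with synthetic data; the 1 Hz cut-off is an arbitrary illustration.

```python
import numpy as np

dt = 1.0 / 59.94
t = np.arange(0.0, 8.0, dt)
trend = 0.49 * t**2                                       # slow "signal"
noise = np.random.default_rng(0).normal(0, 0.5, t.size)   # broadband noise
trace = trend + noise

spectrum = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(t.size, dt)
spectrum[freqs > 1.0] = 0          # crude low-pass at 1 Hz (illustrative only)
smoothed = np.fft.irfft(spectrum, n=t.size)

# Compare the low-passed trace with the clean trend:
print("RMS error vs clean trend:", np.sqrt(np.mean((smoothed - trend) ** 2)))
```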

Yes. No smoothing.
ROFL. Nope. Wrong. Try again.
 
So Femr, when will your research be published or presented in/at any reputable journal or conference on structural engineering or fire science?
 
The NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

<snip for brevity>
Not a great show of competence. Let us hope that a similar raft of issues is not present for other sections of the report. It'll all come out in the wash.
I knew it, this is about NIST. You don't have anything to support your fictional official theory stand, so you attack NIST. Did I predict this or what?

Why does NIST have to study the collapse? There is no purpose. Your study is a waste of time, and you can't tie your work to a conspiracy theory to help debunk the lies and delusions of 9/11 truth. What good is smoothing the acceleration until it looks like music?
 
The NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

<snip for brevity>

Not a great show of competence. Let us hope that a similar raft of issues is not present for other sections of the report. It'll all come out in the wash.

Have you presented your findings to the scientific community through any 'normal' channels that the scientific community might recognise and respect?

I mean ... internet CT debate can be quite stimulating (and even fun) but you have devoted such a huge amount of time, effort and (no doubt) money to your investigation that it's almost a shame that it languishes in obscure internet backwaters. Try a respected engineering journal.
 
The NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

<snip for brevity>

Not a great show of competence. Let us hope that a similar raft of issues is not present for other sections of the report. It'll all come out in the wash.

As was expected, the only time you get off your high horse long enough to answer a question posed by the unwashed masses such as me, you disregard the 2nd part of the question. So here's one with only 1 part, that WILL BE IGNORED BY YOU:

Why'd you ignore the other part of my question?

-------------------------
After reading your lengthy 1/2 answer to my questions, it occurs to me you still, not surprisingly, haven't told the class how their conclusions are wrong, nor will you ever, because you lack an e-spine.

Which is itself pretty silly, seeing as we'll never meet. So how can you be embarrassed by your answer? I'm just some dude on the interwebs.
 
So, lots of posturing and an inept interpretation of beachnut's assertion that he has a better smoothing method than Savitzky-Golay, along with the ridiculous assertion that NO smoothing would be better (lol), but still no posting of any better methods.

Go figure :rolleyes:

Unsmoothed acceleration data...
http://femr2.ucoz.com/_ph/7/2/796449867.jpg


+/- 4000 ft/s^2. Yeah, that's much more betterer :rolleyes:

This is much more accurate...
http://femr2.ucoz.com/_ph/7/2/590673176.jpg

Savitzky-Golay curve, not the NIST curve, of course ;)
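
For anyone wanting to reproduce the contrast between those two graphs: double-differencing a noisy displacement trace amplifies noise by roughly 1/dt², while a Savitzky-Golay filter fits a local polynomial and differentiates that instead. A minimal sketch with SciPy on synthetic data; the window length and polynomial order are illustrative, not femr2's actual settings.

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 1.0 / 59.94
t = np.arange(0.0, 6.0, dt)
true_disp = 0.5 * 9.81 * 0.8 * t**2                      # placeholder trend (metres)
noisy_disp = true_disp + np.random.default_rng(1).normal(0, 0.02, t.size)

# Raw second difference: ~2 cm of position noise explodes as noise/dt^2.
raw_accel = np.diff(noisy_disp, 2) / dt**2
print("raw accel range:", raw_accel.min(), raw_accel.max())   # far beyond +/- g

# Savitzky-Golay: fit a cubic over a sliding window and take its 2nd derivative.
sg_accel = savgol_filter(noisy_disp, window_length=61, polyorder=3,
                         deriv=2, delta=dt)
print("S-G accel range:", sg_accel.min(), sg_accel.max())
```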
 
I'm not sure I agree with all of the points listed.
I would need to look at the information closer, but I would suggest that you send your findings to NIST and see what happens. It can't hurt.

I would suggest changing the tone a little and e-mailing the information to:
william.pitts@nist.gov
 
Well why won't you?

If I ask you a 2-part question, I expect two answers, not the childish crap typical of your truther brethren. Shall I ask it yet again, or will you just ignore it yet again?
 
<snip for brevity>

This is much more accurate...
http://femr2.ucoz.com/_ph/7/2/590673176.jpg
Savitzky-Golay curve, not the NIST curve, of course ;)
So you throw away anything above 1/2 Hz. Good choice...
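
Whether the smoothing really behaves like a ~0.5 Hz cut-off can be computed rather than argued: a Savitzky-Golay smoother is just an FIR filter, so its frequency response is directly available. A sketch assuming SciPy; the window/order values match the earlier illustrative sketch, not any actual settings from the thread.

```python
import numpy as np
from scipy.signal import savgol_coeffs, freqz

fs = 59.94                       # field rate of deinterlaced NTSC video
kernel = savgol_coeffs(61, 3)    # illustrative window length and polynomial order

w, h = freqz(kernel, worN=4096, fs=fs)
mag = np.abs(h)

# First frequency where the response drops 3 dB below unity (0 Hz if it never does).
cutoff = w[np.argmax(mag < 10 ** (-3 / 20))]
print(f"approx -3 dB point: {cutoff:.2f} Hz")
```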
 
You can expect whatever you please. You have not posted a smoothing method which produces results superior to the S-G method...

Other people have taken care of that request - just not to your exacting specifications.

No, instead I just ask a general question that has been posed to you seemingly for months, and you refuse to answer it, acting like a petulant child who insists he's always right.

This leaves us with the impression that while your graphs are really groovy, you've got no clue what you're talking about. Unless this ground-breaking boredom culminates in something we've not seen yet (aka a different conclusion, in the "they did it" file), I see no reason to even keep this thread here. Perhaps it should be relegated to the math and science section alongside your fallen brethren MT.
 
