
Merged Discussion of femr's video data analysis

femr,

A couple of questions about your data.

You've described it as "assuming it adhered to ITU-R BT.601". Having been broadcast in the US, would it not originally have been in NTSC-M? From what you can tell, was the original source taken from a recording of a broadcast? Frame rate of original broadcast?

This chart shows your static points, NormY & DJ_Y. (What do "Norm" & "DJ" stand for?)


staticpointsnormydjy.png



I understand that you're getting rid of jitter by averaging points.

Couple of questions:

I assume that your program takes data points from both frames, has a built in factor that gives an "interlace offset Y" value that it adds to one frame's data, to calculate an accurate "interlaced" resultant Y.

1. Looking at your NormY graph, the jitter is mostly about 0.5 pixel. But occasionally reduces to about 0.1 pixel. Any reason for that?

2. If that "jitter" goes to 0.1 pixel periodically, does that mean that the interlace offset is correct, and the excess "jitter" is related somehow to the algorithm that picks out the feature?

I've got a couple of other questions, but they depend on that first assumption being correct. So I'll ask if you can just describe the process that SynthEyes uses to determine these points.


tom
 
femr,

A couple of questions about your data.
Okay.

You've described it as "assuming it adhered to ITU-R BT.601".
I've provided some aspect ratio data with the assumption that the camera adhered to that standard.

Having been broadcast in the US, would it not originally have been in NTSC-M?
It would originally be in the format recorded by the camera. The copy I use is correctly interlaced, so the original must have been (highly probably) 59.94fps. NTSC-M is a 30 (29.97) fps format.

From what you can tell, was the original source taken from a recording of a broadcast?
No. As it's correctly interlaced it has to be a digital copy. I'd hazard a guess that it was provided in digital form to the folk that made the DVD it was taken from.

Frame rate of original broadcast?
The DVD data is 29.97fps interlaced. It's unlikely that there has been any framerate resampling. The video shows no signs of pulldown, frame duplication or missing frames.

What do "Norm" & "DJ" stand for?
Norm - normalised - static point data subtracted from NW corner data.

DJ - simple 2 point moving average DeJitter.
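As a reader's sketch of what those two steps amount to (hypothetical function names and toy numbers, not femr's actual code or dataset):

```python
# Hypothetical sketch of the "Norm" and "DJ" steps described above.
# The numbers are toy data, not values from the actual trace.

def normalise(corner_y, static_y):
    """Norm: subtract the static-point trace from the corner trace,
    cancelling motion common to both (camera shake, frame jitter)."""
    return [c - s for c, s in zip(corner_y, static_y)]

def dejitter(ys):
    """DJ: simple 2-point moving average."""
    return [(a + b) / 2.0 for a, b in zip(ys, ys[1:])]

corner = [100.0, 100.6, 100.1, 100.7, 100.2]   # raw corner Y (pixels)
static = [50.0, 50.5, 50.0, 50.5, 50.0]        # static-point Y (pixels)

norm = normalise(corner, static)   # jitter shared with the static point removed
dj = dejitter(norm)                # one sample shorter than its input
```

Note the moving average trades a half-sample time shift and one lost sample for reduced jitter, which is presumably why the untreated data is also published.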

I understand that you're getting rid of jitter by averaging points.
Yep, but the untreated data is there if you want to use a different method.

I assume that your program takes data points from both frames
Traces are performed on both frames, yes.

has a built in factor that gives an "interlace offset Y" value that it adds to one frame's data, to calculate an accurate "interlaced" resultant Y.
No, though you could do so manually if you like. Both datasets are *zeroed*, so the assumption is that the first samples from each dataset are at the same height.

1. Looking at your NormY graph, the jitter is mostly about 0.5 pixel. But occasionally reduces to about 0.1 pixel. Any reason for that?
No, but will have a look at the visual trace to see if there is a reason.

2. If that "jitter" goes to 0.1 pixel periodically, does that mean that the interlace offset is correct, and the excess "jitter" is related somehow to the algorithm that picks out the feature?
See previous comment re offset. Will have a look at the visual trace and make a few comments on jitter a bit later.

So I'll ask if you can just describe the process that SynthEyes uses to determine these points.
Not sure what you mean. I choose the initial point (region). SynthEyes tracks that region on subsequent video frames.
 
As usual, I've got my suspicions. But we've still got a few things to sort out.

I'm not sure what the doubt is.

The vertical trace remains within ~+/- 0.2 pixel margin for 12s.

The test video (the blob) showed that SynthEyes is technically capable of determining position to the third decimal place.

The West edge trace on the Cam#3 footage was comparable with NIST's (imo) dodgy moire method...
89078455.png


Note the vertical axis on my data is from a *2 upscaled video, so it's actually +/- 1 pixel range, not +/- 2. Very similar to the NIST results (better imo) and well sub-pixel.

So, what is the question ? Is it whether SynthEyes is capable of tracking feature position to sub-pixel accuracy, or whether the noise in the video results in an over 1 pixel margin ?

I think it's been made clear that SynthEyes is more than capable of sub-pixel positional accuracy, and the variance in traces of WTC7 is low. There are all manner of sources of noise which could cause the trace position to vary, but they are nothing to do with SynthEyes. It simply determines the location which best matches its reference image. An example of what I'm talking about there is the draft NE corner trace. That showed the NE corner gradually descending over a long period. In reality the corner didn't move, but the *bleed* of the image data reduced. As SynthEyes was told to track that location, it did exactly what it said on the tin. Changing the trace location to the windows near the NE corner negated that particular video artefact.

Is there something specific I could do that would end the doubt ?
 
The original source video can be found here...

Download

Bear in mind it is in interlaced form, and if you choose to use it, the first thing to do is unfold each interlace field.

Any processing of the video should use a lossless codec such as RAW or HuffYUV.
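For anyone following along, the "unfolding" step can be sketched like this (a minimal illustration of the idea, not femr's actual tooling):

```python
def separate_fields(frame):
    """Split one interlaced frame (a list of scanlines) into its two
    fields. Each field is a half-height image captured at a different
    instant, roughly 1/59.94 s apart for NTSC material."""
    upper = frame[0::2]   # even scanlines (upper/top field)
    lower = frame[1::2]   # odd scanlines (lower/bottom field)
    return upper, lower

# Toy 6-line "frame": scanline i is filled with the value i.
frame = [[i] * 4 for i in range(6)]
upper, lower = separate_fields(frame)
# upper now holds lines 0, 2, 4; lower holds lines 1, 3, 5
```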


femr,

this link doesn't work (for me, anyway).

Do you have another?

Highest possible quality...

tom
 
the jitter is mostly about 0.5 pixel. But occasionally reduces to about 0.1 pixel. Any reason for that?
Had a look at the video, and I think it may be due to slight bleed variation of the bottom field roofline.

Might be prudent to try a slightly different trace point for that field to see if I can eliminate it.

The jitter on the other points (static) is quite even, so, yes, probably a problem with the bottom field initial trace region location.

Can redo it, but it'll mean dropping the trace region down a bit, so it won't be centered on the corner itself, but would include the bottom edge of the *black line* that delineates the roofline.
 
With all due respect to everyone posting, why is it necessary to debate the collapse speed of WTC7? As we studied some of the videos of the collapse in firefighter training, I can't see where the speed it fell is of any consequence. My instructors, who are both trained in fire science and engineering, both explained rather in-depth as to why the structure failed. No CD, no thermite, etc.

Seeing that the building took serious damage from the fall of towers 1 and 2, in addition to the fact that the fires burned uncontrolled on multiple levels of 7 for many hours, it's a minor miracle that 7 lasted as long as it did.

Why expend the time on such a moot point?

Again, not looking to ruffle feathers, just looking for an honest clarification. Maybe I'm misinterpreting the point?
 
With all due respect to everyone posting, why is it necessary to debate the collapse speed of WTC7? As we studied some of the videos of the collapse in firefighter training, I can't see where the speed it fell is of any consequence. My instructors, who are both trained in fire science and engineering, both explained rather in-depth as to why the structure failed. No CD, no thermite, etc.

Seeing that the building took serious damage from the fall of towers 1 and 2, in addition to the fact that the fires burned uncontrolled on multiple levels of 7 for many hours, it's a minor miracle that 7 lasted as long as it did.

Why expend the time on such a moot point?

Again, not looking to ruffle feathers, just looking for an honest clarification. Maybe I'm misinterpreting the point?
I have lost track of the details but I am pretty sure the origin of this type of discussion was the "truther" false claim that free fall must mean demolition. That claim is, of course, completely false.

Some people seeking to address such a claim set out to test whether or not there was free fall. It is a point of possible interest in its own right. But a complete waste of time if your interest is the base question "Demolition or not?" There was no demolition so whether there was free fall or not does not change the answer to "Demolition or not?"

There has been a similar situation with discussion of Thermite and derivatives. Particularly the Jones Harrit paper on allegations of nano-thermxte at ground zero.
The question has interest in itself as a matter of scientific curiosity.

BUT to a person like me whose focus is "Demolition or not?" the discussion of the pros and cons of the Jones Harrit allegations is a waste of time. There was no demolition.
...THEREFORE there was no use of thermite or any derivative to assist demolition ...
...THEREFORE I wouldn't care if there was a ten tonne stockpile of thermxte on site it wasn't used.
(And all that despite the simple fact that the extraordinary claims made for heating by nano-thermxte are simply impossible. e.g. The commonest implicit claim that a paint film on steel will develop enough heat to melt the steel. Ridiculous.)

Similarly the discussion in this thread has interest of itself BUT it is absolutely irrelevant to the question "Demolition or not?"

So the origin may be in the falsehood many truthers rely on. They state either explicitly or by implication that "FreeFallAcceleration == CD" and expect gullible people to fall for it. Surprisingly many seem to accept it and then try to disprove FFA but that is a side track for my current comments. Again surprisingly it is rare for debunkers to "call" the truthers on the "FFA == CD" claim. Why we let them get away with it I don't know.
 
why is it necessary to debate the collapse speed of WTC7?
The OP is fairly clear on the purpose of the thread, and it's not about freefall/not freefall; rather, it's analysis of the techniques I use for tracing video features.

ozeco41 said:
I have lost track of the details but I am pretty sure the origin of this type of discussion was the "truther" false claim that free fall must mean demolition.
Assuming you have taken the time to read the thread, it has already been made clear that is not the origin of this discussion. The OP also made it very clear that posts should remain focussed upon the technical details, not the usual rambles.

Again, so it's perfectly clear...

"Discussion of femr2's video data analysis [techniques]"

Not freefall/no freefall/faster than freefall.

I have not really performed any analysis of the resultant trace data yet, and so what is actually being discussed is whether the trace data is *valid*, what level of accuracy can be claimed, how inherent noise can be reduced, where noise originates...all that sort of thing.

Please ensure any posts you make on this thread remain within the technical domain, as requested in the OP.
 
Hey Sabre,

With all due respect to everyone posting, why is it necessary to debate the collapse speed of WTC7? As we studied some of the videos of the collapse in firefighter training, I can't see where the speed it fell is of any consequence. My instructors, who are both trained in fire science and engineering, both explained rather in-depth as to why the structure failed. No CD, no thermite, etc.

Seeing that the building took serious damage from the fall of towers 1 and 2, in addition to the fact that the fires burned uncontrolled on multiple levels of 7 for many hours, it's a minor miracle that 7 lasted as long as it did.

Why expend the time on such a moot point?

Again, not looking to ruffle feathers, just looking for an honest clarification. Maybe I'm misinterpreting the point?

Since I'm the one of the guys (on our side) perpetuating this conversation, I guess I've gotta pick up this question.

First, let me ask you a question. Do you find it (pick your adjective) annoying, vexing, bothersome, pissing you off, that I am in this conversation?

The reason that I'm in this is a couple of (interesting to me, doubtless snore-inducing to most) engineering points, and a couple of important philosophical ones.

The philosophical ones are the same one that supports the real engineering debate that still goes on about some fine details about the collapse of the buildings:

1. There are a few details that are unknown about those events, and the big picture of what happened has nothing to fear by addressing them. (e.g., the work of Dr. Quintiere regarding the frailty of thin trusses in fires.)

2. Even though a bunch of truther pudknockers are almost certain to take honest, lively debate and twist it into "see, they still can't tell you this tiny detail, therefore how can we trust them about anything", you can't let this inconsequential annoyance toss a wet blanket on real inquiry or open discourse. That would give them a power & influence way, way beyond their significance.

The best analogy for me is that we've got one of those 10,000 piece jig saw puzzles. It's 98% finished. It shows a cohesive, consistent picture of, say, a schooner at anchor in some beautiful South Pacific Lagoon.

There are some pieces that we just haven't figured out, & appear to be several missing pieces, and a couple of pieces that look like they'll never fit.

The fact is that there are obvious pedestrian answers for the ones we haven't figured out (just a matter of time), the missing pieces (ever lived in a house with a couple of ferrets running free?) and even the ones that look like they'll never fit (I've got relatives whose "prankish nature" is one wild hair short of torture ... and they have access to paint).

The point is that, no matter what the reasons for the few anomalous pieces, no matter whether or not we ever figure out those reasons, there is no way that we're going to disassemble the whole puzzle, start over and end up with, say, a climber summiting Mt. Everest.

The big picture is simply far too complete for that to ever happen.

You can have the (mostly Hollywood fantasy) situation of "pulling one thread & the whole garment unravels". But only when you're dealing with 2% knowledge & 98% ignorance of the whole story.

You can't have that situation when you have 98% knowledge & 2% ignorance.

Last point: when you adopt the position that "there ain't nothing to fear by looking at any aspect of the story", you can't just say it & be convincing. You have to be willing to do it.

Reasonable?


tom
 
... oh yeah, and the technical points relate to what femr said above.

tom
 
I answered a question by commenting on what led to "this type of discussion" viz:
I have lost track of the details but I am pretty sure the origin of this type of discussion was....
Then commented (twice):
...The question has interest in itself as a matter of scientific curiosity...
I have no problem with your detailed discussion of a technique of scientific analysis.
But you are posting in the 9/11 conspiracies forum and that carries the implication that the discussion is linked somehow to 9/11 matters.
 
Tom,

Is it approaching time for you to be willing to make some conclusions about the technical aspects of the tracing methods ?

I think I've made the case for the levels of accuracy attainable, along with making the limitations fairly clear. You might not agree, and if not, I think now is the time to be clear as to why.

Compared to the methods others have used to generate positional data from video, I think it's fair to say my data is of clearly superior quality, and comparable with the NIST moire method in attainable accuracy.

That the NIST moire method can only be used for one single point, while my method can be used anywhere in the frame, leads me to suggest that I'm generating the best-quality positional data yet presented for the other building locations. It is not at all clear how NIST produced their roofline positional data, but its description suggests it was not great quality (their described initial point cannot be determined accurately, and there's no way to determine where the roofline actually is near their described location). Then there's the fact that they used a point somewhere around the mid-point of the building width in the Cam#3 footage, which implies they've interpreted the flexing and twisting motion as vertical movement, and ignored perspective correction. All these issues can be cleared up by using better, and public, trace data.

If you can accept the validity of the tracing methods, we can move forward into interpretation of the various traces that have been performed.

For WTC7 that will mean quantifying things like...

a) Was there vertical kink of the roofline ?
b) Did the NE and NW corners release at the same time ?
c) How early did movement begin ?
d) How early did vertical movement begin ?
e) Implications for NIST report...
etc, etc.

Following that, it'll be the turn of WTC1, with a plethora of traceable observations possible.

So, happy with the tracing methods, or not ?

---

ozeco41,

As above, once the hurdle of agreeing trace method accuracy has been crossed, there will be many an observation dropping out of actually analysing some of the data. Freefall or not is not really one of those, but it's a useful exercise for calibrating the measurements at the moment.

For instance, the same tracing methods have been used on WTC 1 videos, and indicate that south wall failure was not the event which triggered initiation; rather, core failure came first. Until the methods have been explored to the satisfaction of others it's not appropriate to throw that sort of thing in the pot. Shall be done later tho.
 
femr,


Let's use only one definition of the word "frame", please. (I am skeptical that the industry uses the same word to mean two separate things. I've never seen anyone but you make that claim. If you have a reference to anyone doing so, I'd love to see it.)

Two (fields upper & lower) make up one frame.

"fps" = frames per second.

I also finally dug up the correct definition of the term "deinterlacing".

The terminology, as you stated it, made no sense (to me, anyway). Now it does.

Deinterlacing is the process of eliminating interlace artifacts when interlaced video is shown on a progressive scan display.

Apparently deinterlacing can be done with hardware (keeping the elements on for a longer period, or line doublers or filters) while sending the interlaced video to a progressive display.

Or it can be done by "reinterlacing" the fields into frames, before sending them in a progressive format to "p" type displays.

So, in essence, you can deinterlace ("eliminate interlace artifacts") by reinterlacing (weaving the two fields back together into one frame, before sending it in p-format to a progressive scan display).

tomk said:
Having been broadcast in the US, would it not originally have been in NTSC-M?
It would originally be in the format recorded by the camera. The copy I use is correctly interlaced, so the original must have been a (highly probable) 59.94fps. NTSC-M is a 30 (29.97) fps format.

I understand that NTSC-M is 30 fps.

What I was saying is: the camera that recorded that video belonged to CBS, an American company, and broadcast it to American TVs.

Do American TV stations record at 60 fps, and then broadcast at 30 fps?

Please note that your own reference refers to a "525-line systems with a 59.94 Hz field rate". (Not frame rate)


No. As it's correctly interlaced it has to be a digital copy. I'd hazard a guess that it was provided in digital form to the folk that made the DVD it was taken from.

Any chance that you could go back to the source to find out these details?

tomk said:
Frame rate of original broadcast?
The DVD data is 29.97fps interlaced. It's unlikely that there has been any framerate resampling. The video shows no signs of pulldown, frame duplication or missing frames.

And this is where it gets squirrely.

A minute ago, you said 60 fps. Now you say that it is 30 fps.

There are two answers that make sense:
The data is streaming at 60 fields per second, producing 30 frames per second.
Or the data is streaming at 120 fields per second, producing 60 frames per second.

Please don't tell me that it is streaming at 60 frames per second, producing 30 frames per second.

tomk said:
I understand that you're getting rid of jitter by averaging points.
Yep, but the untreated data is there if you want to use a different method.

Would you post the raw data for the static locations, please. It's clear from the curves that you posted the DJ points previously.

tomk said:
I assume that your program takes data points from both frames
Traces are performed on both frames, yes.

I think both of us meant "both fields of a given frame".

Does it assemble the fields into one frame first, and then do its analysis?
Or does it do the analysis on each field separately?

Both datasets are *zeroed*, so the assumption is that the first samples from each dataset are at the same height.

Clarify what you mean by this, please.

tomk said:
the jitter is mostly about 0.5 pixel. But occasionally reduces to about 0.1 pixel. Any reason for that?
Had a look at the video, and I think it may be due to slight bleed variation of the bottom field roofline.

Clarify, please.

___

For the record, one of the engineering points goes beyond "can Syntheyes track to sub-pixel resolution".

It goes to the use of this information: What are the correct ways to determine velocity & acceleration from this data?

Which is, of course, the ultimate purpose for which the data was generated.
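One standard (though not the only) answer to that question is finite differences with smoothing. A hedged sketch, assuming evenly spaced samples, one per field:

```python
def central_difference(ys, dt):
    """Estimate a derivative by central difference:
    d[i] = (ys[i+1] - ys[i-1]) / (2*dt).
    Differentiation amplifies noise, so positional data is usually
    smoothed first; acceleration, a second derivative, doubly so."""
    return [(ys[i + 1] - ys[i - 1]) / (2.0 * dt)
            for i in range(1, len(ys) - 1)]

dt = 1.0 / 59.94                     # one field interval (NTSC)
y = [0.0, 0.1, 0.4, 0.9, 1.6]        # toy positions, pixels
v = central_difference(y, dt)        # velocity, pixels/s (3 samples)
a = central_difference(v, dt)        # acceleration, pixels/s^2 (1 sample)
```

The two-sample loss at each differentiation, and the noise amplification, are exactly why the claimed positional accuracy matters so much before any velocity or acceleration is quoted.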


tom
 
As a lurker on this thread, I've been trying to follow the discussion, mainly curious about the measurements of acceleration.
Very pleased that TFK has chosen to take on the Femr2 data and examine it - I do think this is constructive (peer-review, after all). But as always when one starts getting into deeper engineering or technical discussions, one inevitably gets further and further away from any notion of 'conspiracy', and consequently it seems less and less relevant to be posting this on a conspiracy forum.

Don't get me wrong, I think this is a 'good thing' - there really is no evidence of a conspiracy to demolish the WTC buildings anyway, so naturally a serious discussion of any aspect of the collapses is nothing more than an engineering or technical one.

But should it even be on this forum? I think only as a citation or reference in a conspiracy discussion.

The conspiracy-related question, whether FFA = CD has still not been addressed by the truthers who made the claim, but it would be helpful if TFK and Femr2 would turn their attention to an actual controlled demolition and make the same kinds of measurements.
There are many to choose from, perhaps you gents would care to pick one and do your excellent analysis on it. :)
 
First, let me ask you a question. Do you find it (pick your adjective) annoying, vexing, bothersome, pissing you off, that I am in this conversation?

tom

No, not at all.
I guess the reason I asked is that I'm concerned this will most likely resolve nothing. The truthers will still believe what they want to believe, regardless of whatever facts and evidence we present to them.

I guess I just see it as a waste of your time...to put it bluntly.
 
Let's use only one definition of the word "frame", please. (I am skeptical that the industry uses the same word to mean two separate things. I've never seen anyone but you make that claim. If you have a reference to anyone doing so, I'd love to see it.)

Two (fields upper & lower) make up one frame.

"fps" = frames per second.
Thought we'd been through this before, but...

What *frame* means is entirely dependant upon context.

A frame of a video which contains two interlaced fields is called a frame.

But if you separate those two fields into their correct two separate images, each of them is also called a frame.

Deinterlacing is the process of eliminating interlace artifacts when interlaced video is shown on a progressive scan display.
I don't agree. There are many ways of deinterlacing video, many of which are born of the desire to retain as much vertical resolution as possible. However, the purest meaning of deinterlacing is simply the separation of the two fields into their correctly separate frames. They are, after all, two separate images taken at separate times. The only reason they are interlaced in the first place is a hangover from old broadcast bandwidth limitations, and the fact that human eyes are not as sensitive to vertical resolution as they are to horizontal resolution. One thing to do on that front is look at digital TV from the top of the telly. Looks awfully odd when you make the height of the image shrink.

Apparently deinterlacing can be done with hardware (keeping the elements on for a longer period, or line doublers or filters) while sending the interlaced video to a progressive display.
Best to simply focus on the basic premise of what is contained within an interlaced frame, ie two separate frames (called fields only when part of the interlaced frame), rather than get into all the various ways that software (even if running on some hardware) is used to *attempt* to retain as much vertical detail as possible. There's all manner of assumptions that occur when *blending* interlace fields in any kind of way, with varying levels of success. None of them are, however, *correct*.

Or it can be done by "reinterlacing" the fields into frames, before sending them in a progressive format to "p" type displays.
No. You're talking about either blending interlace fields there, or DEinterlacing them. They are already interlaced. If you deinterlace, then reinterlace, you're back where you started.

So, in essence, you can deinterlace ("eliminate interlace artifacts") by reinterlacing (weaving the two fields back together into one frame, before sending it in p-format to a progressive scan display).
No.

I understand that NTSC-M is 30 fps.

What I was saying is: the camera that recorded that video belonged to CBS, an American company, and broadcast it to American TVs.

Do American TV stations record at 60 fps, and then broadcast at 30 fps?
Yes. Your TV then deinterlaces the signal and shows it to you at 60fps :)

Any chance that you could go back to the source to find out these details?
Can try, but not sure it's at all necessary.

There are two answers that make sense:
Okaaay...

The data is streaming at 60 fields per second, producing 30 frames per second.
Yes(ish).

Or the data is streaming at 120 fields per second, producing 60 frames per second.
No, definitely not.

Please don't tell me that it is streaming at 60 frames per second, producing 30 frames per second.
:) Okay I won't, but that's what's occurring. 60 separate images are generated by the camera. If the camera outputs interlaced video data, it takes two of those images and combines them into one interlaced image, then outputs 30 images per second.

Seemples :)
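What femr describes the camera doing (weaving two captured images into one interlaced frame, halving the output rate) can be sketched as the inverse of field separation. Illustrative only:

```python
def interlace(upper, lower):
    """Weave two half-height fields into one interlaced frame:
    upper field on even scanlines, lower field on odd scanlines."""
    frame = []
    for top_line, bottom_line in zip(upper, lower):
        frame.append(top_line)
        frame.append(bottom_line)
    return frame

# NTSC rate arithmetic: the exact ratios, not the rounded 60/30.
field_rate = 60000 / 1001        # ~59.94 images captured per second
frame_rate = field_rate / 2      # ~29.97 interlaced frames output per second

# Toy example: two 2-line fields weave back into a 4-line frame.
frame = interlace([[0], [2]], [[1], [3]])   # scanline order 0, 1, 2, 3
```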

Would you post the raw data for the static locations, please. It's clear from the curves that you posted the DJ points previously.
Already did yonks ago...
http://femr2.ucoz.com/load/wtc7_dan_rather_extra_static_points/1-1-0-30

Does it assemble the fields into one frame first, and then do its analysis?
No. That's exactly what we start with, unless you're suggesting one of the dodgy field blending methods, again, none of which are at all correct for use in this context.

Or does it do the analysis on each field separately?
Yes, because I make it do so, and for very good reason.

Clarify what you mean by this, please.
Er, probably. Each location is traced twice. One for the upper field, another for the lower. Each trace value has the initial pixel location subtracted from all data points, so the first one is always zero (you should be able to see this in all of the data provided to you). I don't apply an interlace offset to the second field data, as it is simply assumed that the first samples specify the same height. Subsequent value changes are all relative to the zero, so offset is not necessary. Clear(ish) ? :)
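A minimal sketch of the zeroing described there (toy numbers, not the actual trace data):

```python
def zero_trace(ys):
    """Subtract the first sample from every sample, so each field's
    trace starts at exactly 0; subsequent values are relative changes,
    which is why no interlace Y offset needs to be applied."""
    return [y - ys[0] for y in ys]

upper_trace = zero_trace([240.3, 240.8, 241.5])   # starts at 0.0
lower_trace = zero_trace([240.9, 241.4, 242.1])   # also starts at 0.0
# Both traces are now comparable, on the assumption that the first
# sample of each field corresponds to the same physical height.
```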

Clarify, please.
Er, telling me what isn't clear about the statement is probably the easiest way to remove the confusion.

It goes to the use of this information: What are the correct ways to determine velocity & acceleration from this data?

Which is, of course, the ultimate purpose for which the data was generated.
Absolutely not. The purpose for which the data was generated has nothing to do with determining velocity and acceleration. Those are certainly valid questions if the data is used for that purpose, but they are not why the data was generated at all. The list of additional questions provided above should give some indication that differentiating the data is not necessary for quite a few of the reasons it was generated.

Think there may be some word salad in this post, but it's a bit off-putting to be presented with such an odd interpretation of interlaced video again. Ho hum. We'll get there.
 
the fact that human eyes are not as sensitive to vertical resolution as they are to horizontal resolution. ...
...
Why?
Source?

Where was the camera position for the video, lens focal length? The camera stand was? Where is your paper published? Where are the details for the setup?

Where did you go to school for video? Who did you study under? Why are you qualified? And how does this fit into the CD delusion? What is the conclusion of this video data analysis? What video expert (like one who has a PhD) has peer reviewed your analysis? Do you have a PhD?
 
Perhaps a clearer statement...persistence of vision and the timing between interlaced frame display, along with better horizontal FOV than vertical FOV, results in the eyes perceiving fewer vertical artefacts than horizontal ones. Better to display interlaced video as alternate horizontal lines than vertical ones. Consider it a personal opinion if y'like. Not interested in a protracted discussion of human eye sensitivities.

Where was the camera position for the video
Have already told you I'll dig it out. Patience.

lens focal length?
No idea and unlikely to find out. Not going to make any appreciable difference either. Could see if SynthEyes can solve the scene and determine it, but there's probably not enough resolution.

The camera stand was?
Not fixed.

Where is your paper published?
What paper ?

Where are the details for the setup?
What setup ?

Stick to the technical detail beachnut. OP is clear on this request. Please don't continue with your usual rhetoric. If there is an element of information on video data you think is wrong, by all means question it.
 