
Merged Discussion of femr's video data analysis

... I haven't done the analysis yet. If there's something I want you to know, I'll let you know ;)
No source for your BS; got it ;)

No goal; and your work is not related to anything about 9/11 conspiracy theories. Got it. ;)

When compared to the methods others have used to generate positional data from video, I think it's fair to say my data is of clearly superior quality, and comparable with the NIST moire method in attainable accuracy.
I think your data is not better than others. You have not qualified your background or sourced your methods. It is clear there is no science in your overall approach. You dismiss the errors, skip them. Then you think you can make up data from video which lacks the accuracy you come up with.

I think your data is worse after your efforts to "analysis" it. What undergraduate studies did you have in this area?
 
So you wouldn't classify refutation of multiple NIST conclusions under that moniker then ? That's interesting.


Actually, with regard to WTC 7, no, it does not constitute evidence of any conspiracy.

NIST was not attempting a forensic conclusion about the short duration of FFA. Others however are trying to do exactly that and stating that NIST's not doing so is the CT.


Fact is that this very thread indicates that a conclusion about the detailed cause of the FFA period has little or no use in determining what minute details of the overall collapse caused this short duration of FFA. The whole exercise is pretty much irrelevant then and simply indicates why NIST would not go down such a path and waste more time and taxpayers' money.
 
NIST was not attempting a forensic conclusion about the short duration of FFA.
Neither am I. NIST's moire method should however be classified as forensic, as it involved determining building movement to inch accuracy.

Others however are trying to do exactly that
I'm not interested in what others are trying to do.

Fact is that this very thread indicates that a conclusion about the detailed cause of the FFA period has little or no use in determining what minute details of the overall collapse caused this short duration of FFA.
I have made it repeatedly clear that I'm not really interested in freefall, near freefall or over freefall. The purpose of the thread, started by tfk, is primarily to examine my suggestion that the positional data is +/- 0.2 pixel accurate.

beachnut said:
I think your data is not better than others.
What you think is not really relevant, beachnut. You say many things, none of which you provide any reason for. By all means provide your reasoning for making such a claim, including specific detail for *others*. I shall of course respond with as much detail as necessary.

You have not sourced your methods.
Incorrect. I have explained my methods in great detail.

You dismiss the errors, skip them.
Incorrect. The entire purpose of this thread is for tfk to examine the ability to produce sub-pixel accurate positional data. Determination of error has been the primary focus of the topic, until the obstructive and rather pointless influx of recent posts from members such as yourself.

Then you think you can make up data from video which lacks the accuracy you come up with.
Incorrect. The data is not made up in the slightest, and full detail has been provided for anyone to replicate the same data. If you wish to suggest an alternate noise level estimation, then by all means do so. The links to the data are provided earlier in the thread. Dan Rather data is what you'll be-a-wantin there.

I think your data is worse after your efforts to "analysis" it.
What? I haven't presented any analysis of the data yet, beachnut. Again, what you think is irrelevant and substanceless. By all means qualify what you are attempting to say, though if you could ensure it makes sense it would be appreciated. Thanks pal ;)
 
The entire purpose of this thread is for tfk to examine the ability to produce sub-pixel accurate positional data.

A slight correction...

The purpose of this thread is for tfk to examine the ability to produce sub-pixel accurate data for change in position of specified features in video.

It is important to make that distinction.
 
Actually, none of this is important, because you have no hypothesis about the events of 9/11, but by all means carry on.
 
Actually, none of this is important, because you have no hypothesis about the events of 9/11, but by all means carry on.

Very true. I have been a member here for two years and no truther has ever put forward anything even remotely resembling a hypothesis. Still, one lives in hope.
 
tfk,

It seems the thread is heading towards a potentially unrecoverable tangent of substanceless and off topic posts from others. Quite how folk have so much free time as to be motivated to comment on threads they clearly have no interest in is beyond me. Bizarre.

However, there's always hope that they simply choose to ignore topics they have nothing to contribute to, allowing the thread to continue without the dilution of focus and increase in noise that such interruptions result in. Let's hope that's not the actual intention eh. I'm partially surprised that actions from moderators have been conspicuous by their absence, but whatever.

Assuming such folk decide to go and do something they are interested in, rather than make noise here...

1) Are you satisfied that position change trace data can be sub-pixel accurate ?

89078455.jpg


349864523.jpg


If not, why not ?

2) Do you require any further detail on the treatment of the actual video data, such as deinterlacing ?

3) Are you planning on presenting any position/time or derivative data smoothing methods in detail ?

4) Are you planning on presenting the error analysis previously mentioned, which if I recall included quantifying the various sources of noise ?

5) Have you managed to download the video yet ?

6) Have you found any additional building measurement data for building elements, to assist in refining scaling metrics further ?

7) Have you applied your derivation methods to the Cam#3 data ?
 
I agree that the expressed purpose of this thread does not include discussion of the how and why of the period of free fall or faster than free fall acceleration of the north side of WTC 7.
I suggest that those who wish to pursue that line start a new thread:
"If a portion of WTC 7 achieved free fall or faster than free fall, how relevant is it?"
 
TJ, this charlatan uneducated fraud simply doesn't get it. It is a bad investment of time to discuss this analysis, as it is both flawed in its 'opinion related' conclusion and lacking in accuracy. Lil femr - this kid without a degree - is simply trying new ways to slip in his delusions of CD.

femr--do you think either 1, 2 or 7 was a CD?

Yes or no?

If no, then what is your point? Should this whole elementary exercise not be moved to the science forum, where it can be accurately assessed and your flaws pointed out? Remember the Physics forum where your diluted incorrect answer to the high school physics question I asked got posted, and you were soon told how incorrect you were? It took a total of 2 posts.

If yes, then what evidence do you have to support your delusions? We all know it's 'Yes', after all, not too long ago, you were arguing that there was no energy from the collapse to hurl the WTC debris to Winter Garden roof. So, since it is Yes....where, from this flawed SynthEyes program incorporation, are you looking to derive your evidence?


Sorry guys,

I've been caught up in work the last week. Very little spare time.

Carl, your opinion is well known. Please keep it elsewhere. It doesn't help the conversation.

Thanks.


tom
 
I was interested in your qualifications to do the work; I assume you have zero since you post opinions which are wrong and not based on reality.

You can't source your work to specific qualified research. Or you would.

You can't source your qualifications. Your data is not an improve data set, and you can't prove it because you are not using methods which would improve the accuracy of the data, you are waving your hands and making up methods to make the data look better to you.

How does this relate to any Conspiracy! This is a CT sub-forum. Connect the dots.
 
I was interested in your qualifications to do the work
I have exactly zero intention of posting any personal details, and that is never going to change.

You can assume whatever you please beachnut.

Your data is not an improve data set
As opposed to which other available "Dan Rather footage" point trace dataset would that be beachnut ?

And please sort out the grammar.

you are not using methods which would improve the accuracy of the data
Improve the accuracy ? What are you talking about ? I use procedures to ensure the best trace accuracy possible, which I've explained in detail. But you are, er, delusional if you think any subsequent post-processing changes the data accuracy. It's possible to filter or smooth noise, sure, but it sounds like you're talking out of your proverbial.

you are waving your hands
I suggest you have a serious look in the mirror beachnut.

Connect the dots.
There's an idea. Am sure your local newsagent will have several colouring books with suitable pictures. You may even get a free pen.

Enough of your nonsense beachnut. You have no interest in this thread. Simply ignore it. Thanks matey.
 
femr,

Sorry, real life intrudes... And it will continue to intrude for about a week.

1) Are you satisfied that position change trace data can be sub-pixel accurate ?

I am satisfied that it CAN be done.

I am not satisfied that you have demonstrated that level of precision in these data sets.

I am not saying that you have not. I am saying that I'm not yet convinced.


I am not simply rejecting your claim for capricious reasons.

The fact is that it is up to you to prove two things:

1. that SynthEyes can generate subpixel resolution.

2. that it can generate them IN THESE VIDEOs (given the unknowns of source, the camera motions, heat refraction, etc.)

This is why I asked you for static points on the other buildings.

BTW, you gave me "2 point averaged" static points. Not the individual field static points.

Could you post the individual field data, please.

Also, btw, looking at the video, just outside the heavy black lines that outline the building is a border of extra light, translucent pixels. It looks about as wide as the black line. Does this look like a compression artifact to you?

2) Do you require any further detail on the treatment of the actual video data, such as deinterlacing ?

No thanks. I found it.

I gotta say, your explanations kinda suck.

After a significant amount of searching, I found ONE source (out of about 100) that referred to fields as frames.

And that one source was careful enough to clarify that a) it's a holdover term used in analog signals, and (MOST important) b) that definition of Frame means 262.5 lines. Not 525 lines.

[You could have cleared a bunch up with this simple comment. Sometime I get the impression that "clearing things up for others" is not your top priority.]

The other 99% of references (INCLUDING YOUR OWN) refers to the individual fields as fields, and define 2 fields per frame. Your source refers to "525-line systems with a 59.94 Hz FIELD rate."

3) Are you planning on presenting any position/time or derivative data smoothing methods in detail ?

Yup. Planning on it. Soon as I get time.

4) Are you planning on presenting the error analysis previously mentioned, which if I recall included quantifying the various sources of noise ?

Nope. That's your job. I'm not going to do it for you. I can not do it for you, because there are far too many details about your analysis that are obscure & not well explained.

5) Have you managed to download the video yet ?

Yup.

6) Have you found any additional building measurement data for building elements, to assist in refining scaling metrics further ?

Nope.

7) Have you applied your derivation methods to the Cam#3 data ?

Yup. But only looked at it momentarily.
__

I'll get this stuff posted as soon as it's finished. I won't just delay for any reason.

But my other work takes precedence.

Your work does not depend on anything that I do.

It's your job to provide all the information required to convince anyone who reads your work.

The information that you've provided thus far does not convince me. Part of that is my inability to decipher what you are saying. A big part of that is having to go back to you to get your definition of terms, trying to pry details out of you (IMO), etc.

Plus you & I don't communicate all that great in the first place.


tom
 
4) Are you planning on presenting the error analysis previously mentioned, which if I recall included quantifying the various sources of noise ?
Although that was addressed to tfk, I have a couple of comments.

The forward error analyses I have offered provide bounds for the worst case quantization error in the calculated acceleration, as a function of the worst-case quantization error in the position and the sampling rate used to calculate the acceleration. It is easy to prove that an extreme worst-case signed quantization error cannot persist throughout the entire data set, and possible (but not quite so easy) to prove that the expected quantization error is much smaller than the worst case.
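(For concreteness, and only as an assumed example since the exact estimator isn't restated here: if the acceleration is computed with a simple central second difference, a ≈ (x_{i+1} − 2·x_i + x_{i−1}) / Δt², then a position error bounded by ±ε can contribute at most 4ε/Δt² to any single acceleration sample, which is why the sampling interval Δt matters so much in these bounds.)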

There's a very easy practical check on the error in femr2's data:
  1. Discard every other data point, because the deinterlacing led to some obvious bias in at least one data set.
  2. Downsample the remaining data to a sampling rate at which the worst-case quantization error is reasonable.
  3. Let's say we downsample from 30 Hz to 5 Hz. That means we get 6 independent downsampled data sets, each offset by 1/6 second. Compute accelerations for all 6, and compare the differences.
If there's a lot of error in each position, then the 6 computed accelerations will look pretty different, even if their average (smoothed) acceleration is about the same. If the 6 computed accelerations look similar, with the differences dominated by the 1/6-second phase shifts, then there can't be much independent error in each position sample.
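A minimal numerical sketch of that check in Python, using simulated free-fall data as a stand-in for the real trace (the noise level and durations are made up purely for illustration):

import numpy as np

fs = 30.0                                   # original sampling rate, Hz
t = np.arange(0.0, 5.0, 1.0 / fs)
g = 9.81
pos = -0.5 * g * t**2                       # idealised free-fall position, metres
pos = pos + np.random.normal(0.0, 0.02, pos.size)   # pretend ~2 cm position noise

# Step 1: discard every other point (keep one field), leaving 15 Hz.
pos15 = pos[::2]

# Steps 2-3: decimate to 5 Hz; from the 15 Hz series that gives 3 interleaved
# subsets (keeping both fields at 30 Hz would give the 6 subsets described above).
step = 3
dt = step / 15.0
for phase in range(step):
    p = pos15[phase::step]
    accel = np.diff(p, n=2) / dt**2         # second difference -> acceleration
    print(f"phase {phase}: mean = {accel.mean():+.2f}, std = {accel.std():.2f} m/s^2")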

That check can't rule out systematic error, but it could eliminate lingering doubts about subpixel resolution.
 
real life intrudes
No worries.

I am satisfied that it CAN be done.
Okay.

I am not satisfied that you have demonstrated that level of precision in these data sets.
Darn. Okay, we'll get there in the end.

1. that SynthEyes can generate subpixel resolution.
Think it's important to make sure we are both speaking da same lingo yeah :) SynthEyes tracking sub-pixel position CHANGE of tracked features, yes ? I hope that's been pretty much nailed with the *blob* trace...
156677416.png


Do you accept that the blob trace shows that SynthEyes tracks position change to a sub-pixel accuracy ? I'd say the data behind that trace shows SE is capable of determining movement down to ~0.002 pixels ;)

2. that it can generate them IN THESE VIDEOs (given the unknowns of source, the camera motions, heat refraction, etc.)
The first thing to highlight here is the difference between the noise level in the blob trace and the noise level in the WTC7 trace.

As far as I am concerned, the difference in noise levels is a very good way to quantify the variance added by most of the elements you mention.

The technical method employed to track the feature is exactly the same between the two of course, and the potential accuracy shouldn't change, only the video data quality itself changes.

Would it help your perspective if I said...

The traces show the change in position of the specified feature on the video to sub-pixel accuracy. That does not necessarily mean that *in the real world* the pixelised feature video data represents the exact position of that feature. The tracking method cannot *make up* movement. When SynthEyes determines that the pixelised feature has moved left 0.1 pixels, the upscaling, filtering and pattern matching algorithms employed are reporting a true shift in the pixel data.
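To illustrate, here's a tiny self-contained Python sketch, emphatically not the SynthEyes algorithm itself (its internals aren't documented here), showing how plain correlation-based pattern matching can report feature movement to a fraction of a pixel: correlate a small blob against a copy of itself shifted by 0.3 pixels, then refine the whole-pixel peak with a parabolic fit.

import numpy as np

def subpixel_shift(reference, moved):
    """Estimate how far `moved` is shifted relative to `reference`, in pixels."""
    n = len(reference)
    # Circular cross-correlation via FFT.
    corr = np.fft.ifft(np.fft.fft(moved) * np.conj(np.fft.fft(reference))).real
    k = int(np.argmax(corr))                        # nearest whole-pixel shift
    y0, y1, y2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabolic peak refinement
    shift = k + delta
    return shift if shift <= n / 2 else shift - n   # unwrap negative shifts

x = np.arange(64, dtype=float)
blob = np.exp(-0.5 * ((x - 32.0) / 3.0) ** 2)       # a small Gaussian "blob" feature
true_shift = 0.3
moved = np.interp(x - true_shift, x, blob)          # blob shifted right by 0.3 px
print(subpixel_shift(blob, moved))                  # reports ~0.3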

BTW, you gave me "2 point averaged" static points. Not the individual field static points.

Could you post the individual field data, please.
Can do.

Does this look like a compression artifact to you?
It looks more like a halo to me.

I gotta say, your explanations kinda suck.
We speak in slightly differing tongues I rekn. Perhaps it's that things like interlaced video are a bit trivial to me, and I can't really see how there can be any confusion.

After a significant amount of searching, I found ONE source (out of about 100) that referred to fields as frames.
But Tom, Tom, Tom....it's just a word. What I've been saying is correct: a frame is called a field only when it's treated as part of an interlaced frame. A picture is an image is a frame is a field (when part of an interlaced frame). It's all just terminology, so as long as you understand how I've been treating the data, it's all cool.

that definition of Frame means 262.5 lines. Not 525 lines.
Absolutely not. How many frames are there on your old analogue camera film ? How many lines are there on a 4k HD video frame ? etc, etc. Frame is a universal word with many meanings even within the video arena. We're all clear though I hope. Not splitting hairs, but I'll probably use the word frame in numerous contexts as appropriate.

Nope. That's your job. I'm not going to do it for you. I can not do it for you, because there are far too many details about your analysis that are obscure & not well explained.
No problem on retracting the analysis you said you were going to provide, but the rest is a bit ridiculous.

Get SynthEyes (or equivalent)
Get the video.
Unfold it.
Trace the NW corner in both frames (fields if you will)
Export the traces.
Open in Excel.
Save.
Upload.

Job done.

That's about the extent of the *analysis* you say is not being explained. Easy as pie. You'll then have generated the raw data and published it. Anything there obscure or difficult to understand ?

It's your job to provide all the information required to convince anyone who reads your work.
That's fine up to a point. You started this thread with a purpose. Have you achieved it ? If not, is that because you think I've not provided information of some sort ? If so, what ? Really can't think of much that could be missing. Generating the data is a piece of pie. The only processing I've done on the data supplied to you is to *zero* it and combine both field traces into a single timeline. Bit-o jitter treatment, sure.
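For what it's worth, that processing is small enough to sketch in a few lines of Python (the file and column names below are only placeholders, not the actual exports):

import numpy as np
import pandas as pd

upper = pd.read_csv("nw_corner_upper_field.csv")    # hypothetical per-field exports
lower = pd.read_csv("nw_corner_lower_field.csv")

for trace in (upper, lower):
    trace["y"] -= trace["y"].iloc[0]                # "zero" each trace at its start

n = min(len(upper), len(lower))
combined = np.empty(2 * n)
combined[0::2] = upper["y"].to_numpy()[:n]          # field order assumed upper-first;
combined[1::2] = lower["y"].to_numpy()[:n]          # it depends on the source footage
time = np.arange(2 * n) / 59.94                     # one field every 1/59.94 s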

The information that you've provided thus far does not convince me. Part of that is my inability to decipher what you are saying. A big part of that is having to go back to you to get your definition of terms, trying to pry details out of you (IMO), etc.
Well you know I'll provide you with answers. Whadaya need ?

Plus you & I don't communicate all that great in the first place.
That's true. Agreement ;) Woot, woot.
 
That means we get 6 independent downsampled data sets, each offset by 1/6 second. Compute accelerations for all 6, and compare the differences.
Couldn't agree more. Have the results sitting in the wings, though I've left the data as its two separate field traces and done 3-per. (3/29.97s interval)

Will get some graphs done and post them a bit later.
 
After a significant amount of searching, I found ONE source (out of about 100) that referred to fields as frames.
femr2, for tfk's and maybe someone else's ease following your use of the words "field" and "frame", please let me know if your use matches this description:

- A field is a subimage of an interlaced image, but it is called field only as long as it is part of such interlaced image.
- After an interlaced video is deinterlaced, what were fields are now promoted to full frames and you get a doubled-framerate video made only of non-interlaced frames in which there's no concept of field.
- For treatment with SynthEyes, you use deinterlaced video, meaning you talk about frames all the time. However, they come from upper or lower fields and that distinction is significant in some cases.

That's what I understand, and I think that tfk was trying to refer to the frames in the deinterlaced video as fields too. For what I've seen, that seems to have been the cause of confusion.
 
femr2, for tfk's and maybe someone else's ease following your use of the words "field" and "frame", please let me know if your use matches this description:
Okay.

- A field is a subimage of an interlaced image, but it is called field only as long as it is part of such interlaced image.
Yes.

- After an interlaced video is deinterlaced, what were fields are now promoted to full frames and you get a doubled-framerate video made only of non-interlaced frames in which there's no concept of field.
Yes. That is termed a bob-doubled form. I leave the deinterlaced video in unfolded form though, like this...
125286220.jpg


- For treatment with SynthEyes, you use deinterlaced video, meaning you talk about frames all the time.
Yes, though in unfolded form as described above.
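To keep the terminology concrete, here's a couple of lines of Python (the frame is just random stand-in data, and which field comes first in time depends on the source footage):

import numpy as np

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in interlaced frame

upper_field = frame[0::2, :]    # even lines, 240 x 640
lower_field = frame[1::2, :]    # odd lines, 240 x 640, 1/59.94 s apart from the other

# "Promoting" the fields to frames means treating each half-height image as its
# own picture; stacking them one above the other is one way to lay out the
# "unfolded" form described above.
unfolded = np.vstack([upper_field, lower_field])    # one tall 480 x 640 image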

That's what I understand, and I think that tfk was trying to refer to the frames in the deinterlaced video as fields too. For what I've seen, that seems to have been the cause of confusion.
Hope we're all clear :)
 
femr2, for tfk's and maybe someone else's ease following your use of the words "field" and "frame", please let me know if your use matches this description:

- A field is a subimage of an interlaced image, but it is called field only as long as it is part of such interlaced image.
- After an interlaced video is deinterlaced, what were fields are now promoted to full frames and you get a doubled-framerate video made only of non-interlaced frames in which there's no concept of field.
- For treatment with SynthEyes, you use deinterlaced video, meaning you talk about frames all the time. However, they come from upper or lower fields and that distinction is significant in some cases.

That's what I understand, and I think that tfk was trying to refer to the frames in the deinterlaced video as fields too. For what I've seen, that seems to have been the cause of confusion.

Does this all not presume that the original video was presented to a digital recording device from a digital camera?
If either one was an analogue device then each field was originally 262.5 lines with an equal number of blank lines (no information) between each line. Furthermore, a couple dozen of those lines would contain no image information as they are merely placeholders in the vertical blanking interval.

If each field contained a full raster then there is no "field/frame" distinction, and it is not interlaced video; it is progressive.
 
Regarding the question of sub-pixel accuracy, I think it's important to make a careful distinction between the accuracy of a measured change in position of an image within a video frame, and the accuracy of a measured change in the position of the real object in space that generated the image.

For the former, I'm pretty thoroughly convinced that sub-pixel accuracy has been shown. But this:

- The position of images in the frame can be measured to +/- y pixel accuracy.
- Measurement of the image size (of e.g. a number of building floors) relative to the known dimensions of the object (building) yields a scale factor of s meters per pixel.
- Therefore, changes in the position of the object can be measured to +/- s*y meter accuracy.

... does not necessarily follow. Noise and error occur in the process of projecting the image from the object onto the camera sensor. Camera movement and heat refraction shimmer are among the obvious possible sources. These are not represented in any stage of the moving-dot test.
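(To put hypothetical numbers on it: with s ≈ 0.3 m per pixel and y = 0.2 pixel, the naive conclusion would be ±0.06 m real-world accuracy; the question is whether camera motion, refraction, and other projection noise let the image motion represent the object motion at that scale.)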

You will probably be able to reduce the camera movement artifacts by subtracting out position data from fixed objects in the scene. Especially if you can make, and justify, the simplifying assumption that all of the camera movement is due to changes in the camera's orientation, rather than changes in the camera's absolute position in space, on the basis of the former having a much greater influence (due to the distance) on the position of the image in the frame. Heat distortion is more problematic, because there is probably no definite separation between the characteristic frequencies of heat distortion and camera shake, nor between the characteristic frequencies of heat distortion and the movement you are measuring.
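A rough sketch of that correction in Python (the file names are placeholders; the real traces would come from the exported data):

import numpy as np

feature_y = np.loadtxt("nw_corner_trace.csv", delimiter=",")       # hypothetical trace, pixels
static_y = np.loadtxt("static_building_trace.csv", delimiter=",")  # point assumed fixed in scene

jitter = static_y - static_y.mean()         # apparent motion of the "fixed" point
corrected = feature_y - jitter              # remove motion shared with the camera

# This only cleans things up if camera rotation dominates, so that every image
# point shares (nearly) the same jitter; heat shimmer is local and is not removed.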

When characteristics of both your signal and your noise sources are unknown, there are limits to how much certainty you can achieve or claim. You cannot even be sure low-pass filtering (smoothing) isn't clobbering a real component of your signal (e.g. vibrations or shock waves in the building frame during the collapse) unless you've analytically ruled out the possibility of those having occurred. More sophisticated filtering affecting lower frequencies (e.g. when looking for "jolts" in the wtc tower collapses) are even more problematic.

With those caveats, I applaud your efforts so far.

Respectfully,
Myriad
 
