
Merged Discussion of femr's video data analysis

However, they come from upper or lower fields and that distinction is significant in some cases.
Yes.

The reason I leave the video in unfolded form is that, as there is an inherent *shift* in vertical position between each field, the *shape* of small features is much more consistent between consecutive upper fields and consecutive lower fields.

Performing two traces, one for the feature on the upper field, and another for the same feature on the lower field, yields higher quality positional data.

It's then a simple matter of combining both zeroed sets of trace data to obtain the full double framerate trace data.
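As a minimal sketch of that combining step (the array contents and the top-field-first field order are assumptions for illustration, not the actual workflow or data):

[code]
import numpy as np

# Hypothetical per-field vertical traces (pixels), each already zeroed to the same
# static reference. In this sketch the upper-field sample is assumed to precede the
# lower-field sample of the same frame by ~1/59.94 s.
upper = np.array([0.000, 0.011, 0.024, 0.038])
lower = np.array([0.006, 0.017, 0.031, 0.044])

# Interleave the two zeroed field traces into a single trace at field rate.
combined = np.empty(upper.size + lower.size)
combined[0::2] = upper
combined[1::2] = lower

field_rate = 60000.0 / 1001.0                 # 2 x (30*1000/1001) fps
t = np.arange(combined.size) / field_rate     # timestamps for the combined trace
[/code]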
 
Does this all not presume that the original video was presented to a digital recording device from a digital camera?
Not specifically, but I know where you're coming from.

I strongly suggest that the video in question was digital, as there is no apparent leakage of interlace field data.

If transferred from, say, interlaced VHS, the bleed and leakage over field boundaries should be noticeable.

Either that or an original analogue video source was digitally interlaced.

If either one was an analogue device then each field was originally 262.5 lines, with an equal number of blank lines (no information) between each line.
The video being used is of course digital, and the interlacing is very clean. No overscan or blank-retrace present, though the timecode lines are present.

Furthermore a couple dozen of those lines would contain no image information as they are merely placeholders in the vertical blanking interval.
Yes, but that raw form very rarely makes it into consumer video. The source used is extracted directly from DVD without recode.

If each field contained a full raster then there is no field/frame distinction; it is not interlaced video, it is progressive.
It's interlaced MPEG-2, 704×480, at 30×1000/1001 fps.
 
Yes. That is termed a bob-doubled form. I leave the deinterlaced video in unfolded form though, like this...
[qimg]http://femr2.ucoz.com/_ph/7/2/125286220.jpg[/qimg]
So the frame rate is still the original, I understand.

And do you also call each of the halves a frame? Because that would indeed be confusing. To me that format is in some way a "weirdly interlaced image" in the sense that the concept of a field is somewhat valid, though it's "left field" and "right field" instead.
 
Regarding the question of sub-pixel accuracy, I think it's important to make a careful distinction between the accuracy of a measured change in position of an image within a video frame, and the accuracy of a measured change in the position of the real object in space that generated the image.
Agreed.

For the former, I'm pretty thoroughly convinced that sub-pixel accuracy has been shown.
Cool.

But this:

- The position of images in the frame can be measured to +/- y pixel accuracy.
- Measurement of the image size (of e.g. a number of building floors) relative to the known dimensions of the object (building) yields a scale factor of s meters per pixel.
- Therefore, changes in the position of the object can be measured to +/- s*y meter accuracy.

... does not necessarily follow. Noise and error occur in the process of projecting the image from the object onto the camera sensor. Camera movement and heat refraction shimmer are among the obvious possible sources. These are not represented in any stage of the moving-dot test.
Correct. The moving dot is a test case to try and show the sub-pixel precision possible when supplied with ideal video data.

For the WTC7 traces, I perform static point tracing and subtraction to try and reduce the effect of camera movement as much as possible.

No specific perspective correction has been performed for the Dan Rather data yet, but for the Cam#3 footage I have performed quite a bit of additional work to account for it as much as possible.
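As a toy illustration of the error-budget point above (all numbers invented, not taken from any trace), the naive scale-factor propagation only sets a floor on the real-world error; any independent projection noise sits on top of it:

[code]
# Invented numbers, purely to illustrate the structure of the error budget.
pixel_error = 0.2        # claimed image-plane accuracy, +/- pixels
scale       = 0.28       # assumed scale factor, metres per pixel

naive_error = scale * pixel_error                      # ~0.056 m: image-plane term only

# Hypothetical extra contributions from the projection process, assumed
# independent and therefore combined in quadrature.
camera_motion_residual = 0.05                          # metres
heat_shimmer           = 0.08                          # metres
total_error = (naive_error**2 + camera_motion_residual**2 + heat_shimmer**2) ** 0.5

print(f"naive: +/-{naive_error:.3f} m   with projection noise: +/-{total_error:.3f} m")
[/code]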

You will probably be able to reduce the camera movement artifacts by subtracting out position data from fixed objects in the scene.
Yes.
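For what it's worth, the subtraction step itself is simple; a sketch with made-up arrays (not the real trace data):

[code]
import numpy as np

# feature_y : vertical trace of the moving building feature, pixels (assumed values)
# static_y  : vertical trace of a point known to be fixed, e.g. a distant rooftop
feature_y = np.array([100.00, 100.05, 100.02, 99.40, 98.10])
static_y  = np.array([ 50.00,  50.04,  50.01,  49.98,  50.03])

# Remove each trace's own starting offset, then subtract the static trace so that
# common-mode camera movement cancels and only relative feature motion remains.
corrected = (feature_y - feature_y[0]) - (static_y - static_y[0])
[/code]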

Especially if you can make, and justify, the simplifying assumption that all of the camera movement is due to changes in the camera's orientation, rather than changes in the camera's absolute position in space, on the basis of the former having a much greater influence (due to the distance) on the position of the image in the frame.
Yes. Footage is selected where there is the highest probability of the camera being on a tripod.

Heat distortion is more problematic, because there is probably no definite separation between the characteristic frequencies of heat distortion and camera shake, nor between the characteristic frequencies of heat distortion and the movement you are measuring.
With the relatively low-resolution footage in use, it is unlikely that heat/smoke distortion could be separated from other noise sources. Perhaps comparison of multiple static point traces may assist, but there is then the problem that such distortion is unlikely to be consistent in different regions of the frame.

When characteristics of both your signal and your noise sources are unknown, there are limits to how much certainty you can achieve or claim.
Agreed. The suggested noise level is simply based on rough deviation from *zero* over the period of time the building feature is essentially static in the following trace...

...which is a zoomed view of the full trace...
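One way to turn that "deviation from zero while static" into a number (a sketch with a synthetic stand-in trace; the window boundaries would be chosen by eye from the real data):

[code]
import numpy as np

# Synthetic stand-in for a camera-corrected trace, in pixels (the real one comes
# from the exported trace data). The first 300 samples are assumed to cover the
# period when the building feature is essentially static.
rng = np.random.default_rng(1)
corrected = rng.normal(0.0, 0.07, size=500)

static_window = corrected[:300]
noise_rms  = static_window.std()
noise_peak = static_window.max() - static_window.min()

print(f"rms noise ~{noise_rms:.3f} px, peak-to-peak ~+/-{noise_peak / 2:.2f} px")
[/code]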


You cannot even be sure low-pass filtering (smoothing) isn't clobbering a real component of your signal (e.g. vibrations or shock waves in the building frame during the collapse) unless you've analytically ruled out the possibility of those having occurred.
Yes. Many methods were applied to similar data when looking for *mini jolts* in the Sauret footage, with reasonable success. For first and second order derived data I am of the opinion that it's okay to smooth out the wobbles...there's unlikely to be a way of separating them from the noise. Definitely so for second order data.
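A sketch of the trade-off being described (the synthetic trace and window length are assumptions): any real oscillation shorter than the smoothing window is removed along with the noise before the derivatives are taken.

[code]
import numpy as np

# Assumed synthetic trace: a feature that starts free-falling at t = 2 s, plus noise.
rng = np.random.default_rng(2)
fps = 60000.0 / 1001.0                        # field rate of the unfolded traces
t = np.arange(600) / fps
position = -0.5 * 9.81 * np.clip(t - 2.0, 0.0, None) ** 2 + rng.normal(0, 0.02, t.size)

window = 15                                   # assumed smoothing window, in fields
kernel = np.ones(window) / window
smoothed = np.convolve(position, kernel, mode="same")

# First- and second-order derived data; the second derivative is the noisiest and
# benefits most from smoothing, at the cost of any short "wobble" in the signal.
velocity     = np.gradient(smoothed, t)
acceleration = np.gradient(velocity, t)
[/code]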

More sophisticated filtering affecting lower frequencies (e.g. when looking for "jolts" in the WTC tower collapses) is even more problematic.
Pretty good results were attained, but that was with much better quality video...
[image: 378476413.jpg]

(Will need some explanation, but I'll add it when asked)

With those caveats, I applaud your efforts so far.
Ta.
 
So the frame rate is still the original, I understand.

And do you also call each of the halves a frame? Because that would indeed be confusing. To me that format is in some way a "weirdly interlaced image" in the sense that the concept of a field is somewhat valid, though it's "left field" and "right field" instead.
It's a minefield of terminology :) I only leave it in that form for convenience. I'm okay calling them fields. I'm okay calling them sequential images. Prior discussion evolved out of tfk stating that each image in normal deinterlace form was still called a field. I'll call 'em fields whenever I remember to do so :)
 
Heat distortion is more problematic, because there is probably no definite separation between the characteristic frequencies of heat distortion and camera shake, nor between the characteristic frequencies of heat distortion and the movement you are measuring.
An additional note that's been made before but relevant here...

SynthEyes is used in a mode where a rectangular region is traced, rather than a single pixel.

Focus on a particular feature, such as the NW corner, is obtained by moving the initial region such that the NW corner is in the centre of the region (though it doesn't have to be, and there are sometimes good reasons not to make it so).

Region placement can be performed to sub-pixel accuracy.

SE then scans a larger defined region centered on the same location, using pattern matching of the region with 8× upscaling and Lanczos3 filtering applied, for a best fit match in the next frame.

It provides a Figure of Merit metric for each frame region match (0 perfect to 1 no match), which is included in the data made available.

Next frame region is of course automatically re-centered on the newly found sub-pixel location.

Subsequent matches are performed using the initial pattern, or, when keyframing is selected, it updates the base pattern every (n) frames to handle time based variations. I can explain the benefits/drawbacks of using keyframing if necessary.
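SynthEyes' implementation is proprietary, so the following is only a generic sketch of the same idea (upscaled region pattern matching on grayscale images; cubic spline interpolation stands in for the Lanczos3 filter, and the figure-of-merit normalisation is my own guess):

[code]
import numpy as np
from scipy.ndimage import zoom

def track_region(prev_frame, next_frame, centre, half=8, upscale=8):
    """Generic sketch of region-based sub-pixel tracking (not SynthEyes' actual code).

    A reference pattern around `centre` in prev_frame (2-D grayscale array) is
    compared, after 8x upscaling, against every candidate offset within a search
    window of next_frame. Returns the best-fit offset in original pixel units plus
    a crude 0 (good match) .. 1 (poor match) score.
    """
    cy, cx = centre
    ref = prev_frame[cy - half:cy + half, cx - half:cx + half].astype(float)
    search = next_frame[cy - 2 * half:cy + 2 * half, cx - 2 * half:cx + 2 * half].astype(float)

    ref_up = zoom(ref, upscale, order=3)        # 8x upscaled reference pattern
    search_up = zoom(search, upscale, order=3)  # 8x upscaled search window

    best, best_err = (0, 0), np.inf
    span = search_up.shape[0] - ref_up.shape[0]
    for dy in range(span + 1):
        for dx in range(span + 1):
            patch = search_up[dy:dy + ref_up.shape[0], dx:dx + ref_up.shape[1]]
            err = np.mean((patch - ref_up) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err

    offset_y = (best[0] - span / 2) / upscale   # sub-pixel shift, original pixel units
    offset_x = (best[1] - span / 2) / upscale
    merit = best_err / (np.ptp(ref_up) ** 2 + 1e-9)
    return (offset_y, offset_x), merit
[/code]

In this sketch the 8× upscaling limits the step size to 1/8 pixel; the re-centering and keyframing described above would sit in the loop that calls this per frame.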

Where I'm going with this is...

As a region is used, SE can use numerous pixels to determine feature location. As the region is upscaled, this translates to many pixels, even with a small region size.

I use regions as large as is practical for the purpose in hand. This is a subjective process, and region size selection and shape is pretty intuitive with experience of performing the traces.

For example, I use a larger region size when tracing static features, and a smaller region size when tracing features such as the NW corner to provide more *focus* and negate the effect of background features and suchlike.

The localised effect of elements such as heat and smoke is, imo, minimised by...

a) Using region based tracing in the first place

and

b) Making the regions as large as possible.

Here is an example image which shows relative region sizes...

[image: 759212128.png]
 
femr,

A lot of this is fine detail stuff.

But when you're pushing the limits of what is possible (0.2 pixel), all the tiny details really matter.

Do you accept that the blob trace shows that SynthEyes tracks position change to sub-pixel accuracy? I'd say the data behind that trace shows SE is capable of determining movement down to ~0.002 pixels

2/1000ths of a pixel? Where did you get that number?

My problem with this is that it is unrealistic when compared to real video info.

Most significantly, it doesn't capture any of the features of how video cameras digitize continuous spatial info into discrete pixels.

I'll be more impressed if you take a non-symmetric color shape, bounded on one side only, apply a non-uniform blur, place it over a variable background, mimic how video CCDs digitize that info, pull out every other line and then watch SE do its stuff on THAT digitized info.

I'll be more impressed if you do some actual, proper calibration: using a test set-up where you KNOW what the motion is, and compare that to what SE gives you when you run thru the same algorithms.

But I supposed that's out of the question, and we're stuck with trying to pull what info we can out of this video.

The answer for me is going to reside within the static point data. Precisely because we KNOW that those points are not moving. You cannot tell for certain if the points on the WTC7 roof line are changing because the building is moving or if they are moving because the algorithm is giving you variable results.

You also can't tell with a single reference point. That's why I asked for multiple ones.

Let me give you a couple of other things to consider:

1. You use a reference point to compensate for camera motion. The absolute error in your position is now going to be twice the error of any given point: the errors add.

2. You have mentioned only a single static reference point from which you subtract all your data.

Using one reference point, you cannot compensate for rotational variations. You'd need to use at least two reference points.

3. Let's assume that you use only one field of data. If you have a really well defined edge (assume that it transitions within a single pixel, with no edge blur), then as the edge moves down in 0.25 pixel steps thru the field, this is how it appears in the video output (assuming a perfect input/output transform):

Actual position / Apparent position
0.00 0.00
0.25 0.25
0.50 0.50
0.75 0.75
1.00 1.00 (this scan line is missing from this field)
1.25 1.00 |
1.50 1.00 |
1.75 1.00 V
2.00 1.00 (reaches, but doesn't encroach into pixel)
2.25 2.25
2.50 2.50
2.75 2.75
3.00 3.00
3.25 3.00 (pixel not in this field)
3.50 3.00
3.75 3.00
4.00 3.00
4.25 4.25

This is the transform out of which you're trying to get sub-pixel location.

Of course, this just applies to a point, or a line parallel to the pixel layout. In an unusual twist, points that have edge blurs that extend at least 2 pixels can actually improve your resolution.
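That stair-step can be checked with a few lines of synthetic code (a sketch under the same simplifying assumptions: a hard edge, box-filter pixel sampling, the edge measured as total white coverage, and a field keeping only the even scan lines). The single-field column plateaus while the edge crosses the missing lines; whether a region-based tracker on the unfolded video behaves the same way is the point under dispute.

[code]
import numpy as np

def sample_column(edge_pos, n_rows=8):
    """Row intensities for a white-above / black-below edge at `edge_pos` (pixels)."""
    rows = np.arange(n_rows)
    return np.clip(edge_pos - rows, 0.0, 1.0)   # fraction of each row that is white

for actual in np.arange(0.0, 4.5, 0.25):
    full  = sample_column(actual)
    field = full[0::2]                           # only even scan lines in this field
    print(f"actual {actual:4.2f}   full-frame coverage {full.sum():4.2f}"
          f"   single-field coverage {field.sum():4.2f}")
[/code]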

As I said at the beginning, all of the above are fine points. But when you're claiming 0.2 pixel accuracy, then every possible source of error matters. And watching things like, for example, the camera panning back & forth (not hard mounted) leaves me very skeptical.

One thing that I got to do several days ago was to watch some motion tracking programs attempt to track points in videos. I was underwhelmed. The people who were doing the video work had to manually adjust the reference point (establish key points) every (it appeared) 20 frames or so.

Perhaps SE has much better algorithms than those programs. Again, something that needs to be proven.

The first thing to highlight here is the difference between the noise level in the blob trace and the noise level in the WTC7 trace.

As far as I am concerned, the difference in noise levels is a very good way to quantify the variance added by most of the elements you mention.

You'd have to prove this correlation.

And "quantify" doesn't mean "it kinda looks like that to me".

The technical method employed to track the feature is exactly the same between the two of course, and the potential accuracy shouldn't change, only the video data quality itself changes.

You're asserting that the accuracy doesn't depend on the local "video data quality". I find that hard to believe. Proof would help.

It looks more like a halo to me.

A halo due to ...?

Care to elaborate on what halo might do to SE's interpolation algorithms. Care to prove it?

I don't know the various compression algorithms that may (or may not) be applied to standard TV broadcasts. But I'd be really surprised if there are none. Care to elaborate?

But Tom, Tom, Tom....it's just a word. What I've been saying about a frame being called a field of an interlace frame only when it's treated as part of that interlace frame is correct. A picture is an image is a frame is a field (when part of an interlaced frame). It's all just terminology, so as long as you understand how I've been treating the data, it's all cool.

femr, femr, femr. The words are how we attempt to communicate. And using the same word to mean two completely different things doesn't help.

And you've used the same word, in the same discussion, to mean 1) an assemblage of 2 deinterlaced fields, 525 lines, and 2) a single field of 262.5 lines.

Absolutely not. How many frames are there on your old analogue camera film ? How many lines are there on a 4k HD video frame ? etc, etc. Frame is a universal word with many meanings even within the video arena. We're all clear though I hope. Not splitting hairs, but I'll probably use the word frame in numerous contexts as appropriate.

We weren't talking about camera film, etc. We were talking about video frames in THIS application.

No problem on retracting the analysis you said you were going to provide, but the rest is a bit ridiculous.

I'm not retracting anything. I never said that I'd do an error analysis for you.

I said that I thought that I could demonstrate that you weren't getting accuracies down in the fractional pixel range. That I thought that they were up close to a full pixel.

I'll continue that discussion.

Get SynthEyes (or equivalent)
Get the video.
Unfold it.
Trace the NW corner in both frames (fields if you will)
Export the traces.
Open in excel.
Save.
Upload.

Job done.

What are you talking about? Replicating your study is not on my agenda.

Well you know I'll provide you with answers. Whadaya need ?

How about the original data (not the 2 point averaged) on those static points.


tom
 
I have exactly zero intention of posting any personal details, and that is never going to change.


You can assume whatever you please beachnut.


As opposed to which other available "Dan Rather footage" point trace dataset would that be beachnut ?

And please sort out the grammar.


Improve the accuracy ? What are you talking about ? I use procedures to ensure the best trace accuracy possible, which I've explained in detail. But you are, er, delusional if you think any subsequent post-processing changes the data accuracy. It's possible to filter or smooth noise, sure, but it sounds like you're talking out of your proverbial.


I suggest you have a serious look in the mirror beachnut.


There's an idea. Am sure your local newsagent will have several colouring books with suitable pictures. You may even get a free pen.

Enough of your nonsense beachnut. You have no interest in this thread. Simply ignore it. Thanks matey.
What training have you had in video as it relates to your attempt to get data accuracy that is not there? Any goal yet? Any chance under the sky that you can relate this work to one of the delusional 911 truth CT claims?

Have you had a course on sampling theory? Any engineering courses? Any video analysis courses? I only ask this because your work seems to be complete BS made up by yourself; not a single reference to academic work to corroborate your claims. I mean you might think you are making video with better resolution, but looking good is not better accuracy, and you are making major errors and leaving out extensive background information that is important.

This is not a "make it look better" deal; you are stuck with bad resolution, and there is not much you can do about it. Generation…

You know the sub-forum subject - debunking 911 truth idiotic claims, not attacking NIST and other reality based works.

Use the ignore button; you and Major Tom need to use the buttons. But the best forum for your opinion-based analysis is one where lies are weighted equally with reality: ATS, or that forum where you and Tom can go unchallenged as you post nonsense about 911, attacking NIST, Bazant and other reality-based work you don't understand. And believe me, you don't understand NIST, and proof is in the thread; and you have no idea what I refer to.


Use the ignore button, Luke

My opinions? (Your work is opinion.) Lol, I would show your work to a PhD expert in video analysis, but he would laugh at me for bringing in such obvious nonsense. This is why you guys never publish, because it is all BS and worthless. It is as if you are studying my father collapsing to figure out whether he had a stroke or a heart attack. You are studying the façade of a building abandoned by its interior support, due to interior collapse, evidenced by the penthouse falling through the interior of WTC 7 seconds before collapse; making the total collapse of 7 twice as slow as freefall. My goodness, you are studying the wrong junk.
Any chance you can tie this great junk to 911 conspiracy theories?
 
you might think you are making video with better resolution
This shows your utter lack of understanding. I am not making video at all.

you are making major errors
What errors ?

and leaving out extensive background information that is important.
What information ?

Use the ignore button
No.

And believe me
lol

you don’t understand NIST, and proof is in the thread; and you have no idea what I refer to.
Show me.

I would show your work to a PhD expert in video analysis
Go ahead.
 
femr,

A lot of this is fine detail stuff.

But when you're pushing the limits of what is possible (0.2 pixel), all the tiny details really matter.
Fine, though 0.2 pixels is far from pushing the limits in technical terms.

2/1000ths of a pixel? Where did you get that number?
Thought you might question that :)

It's from the blob trace data. I posted the first few samples a while back. First few y values increment in 0.001 pixel increments as expected. I'll post the full dataset if y'like.

My problem with this is that it is unrealistic when compared to real video info.
You've missed the purpose. Its purpose is to highlight the level of accuracy possible with SE. Call it the limiting case.

Most significantly, it doesn't capture any of the features of how video cameras digitize continuous spatial info into discrete pixels.
Not entirely true, as it's still pixelated. And it's not supposed to. Limiting case test video ;)

I'll be more impressed if you
That's fine, but as I've indicated the blob test has a purpose. I can perform additional tests, and was going to suggest that you produce a video within which you already know the sub-pixel movements of a feature, then it can be blind-tested, yeah ;) (A video from W.D.Clinger would be trusted more personally though tbh) But it's the attitude that grates a bit. Ho hum...

take a non-symmetric color shape
Okay
bounded on one side only
What do you mean ? Trace only one side/section of it ?

apply a non-uniform blur
Hmm. How about adding random noise instead ? Or I suppose I could blend some smoke over the top ?

place it over a variable background
Okay.

mimic how video CCDs digitize that info
Doesn't make much sense.

pull out every other line
Makes no sense at all. You mean replicate the result of deinterlacing an interlaced video of similar format ?

and then watch SE do its stuff on THAT digitized info.
Okay.

I'll be more impressed if you do some actual, proper calibration: using a test set-up where you KNOW what the motion is
Already have. Blob test. I assume you mean some video footage of something ? If so, it can be done, but it's all manner of hassle ensuring movement is *known* and I'm not at all sure you'd trust the results anyway.

But I supposed that's out of the question
Not out of the question, but it's all getting a bit ridiculous for what is really rather simple. The problem as I see it is still about trust, and the fact that this is far from your arena rather than a matter of impartial skepticism. But ho hum...

and we're stuck with trying to pull what info we can out of this video.
Nope.

The answer for me is going to reside within the static point data. Precisely because we KNOW that those points are not moving. You cannot tell for certain if the points on the WTC7 roof line are changing because the building is moving or if they are moving because the algorithm is giving you variable results.
The purpose of the blob test is to show that SE is capable of detecting position change far finer than +/- 0.2 pixels. There's always going to be some noise after static point subtraction, of course. I can look at the blob test in more detail of course, and provide a clearer picture of the higher levels of accuracy attainable. I've suggested a figure, which is maximum sensitivity. There are a few obvious deviations from that during the blob test, reasons for which should be fairly obvious. I'll explain in more detail, but I do request that you have a stab at suggesting what those deviations may be caused by first. I'm not really here to teach you from the ground up about all this stuff.

You also can't tell with a single reference point. That's why I asked for multiple ones.
I have provided you with multiple static point data.
There are issues using multiple static points, but in principle I have no issue including further static points in the resultant data. Indeed I already have for certain tasks not being discussed here. The data has been provided to you with a single static point to keep the spreadsheet manageable. If I provided you with a spreadsheet with 50 static points and 16 NW corner traces, you'd have a hissy, complaining about being swamped with data. Sigh.

Let me give you a couple of other things to consider:
You are too gracious...

1. You use a reference point to compensate for camera motion. The absolute error in your position is going to now be twice the error of any given point: the errors add.
And the variance is STILL only +/- 0.2 pixels. Good, innit.
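Side note on "the errors add": if the two traces' errors are independent, the subtraction grows the noise by about √2 rather than the factor-of-2 worst case. A quick check with assumed Gaussian noise:

[code]
import numpy as np

# Assumed, idealised noise levels, just to compare quadrature addition with the
# worst-case doubling described above.
rng = np.random.default_rng(0)
sigma = 0.07                                 # assumed per-trace noise, pixels
feature_noise = rng.normal(0, sigma, 100_000)
static_noise  = rng.normal(0, sigma, 100_000)

combined = feature_noise - static_noise      # static-point subtraction
print(sigma, combined.std())                 # ~0.07 vs ~0.099 (= 0.07 * sqrt(2))
[/code]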

2. You have mentioned only a single static reference point from which you subtract all your data.
See above.

Using one reference point, you cannot compensate for rotational variations. You'd need to use at least two reference points.
True, though cutting through the noise between separated static point traces to determine a rotational variable would MOST probably introduce more error than leaving it as it is, which is clearly not rotating to any great extent, and assuming it's on a tripod (which it really does appear to be) the likelihood of rotation is very slim, and very slight otherwise. By all means have a go at totally negating noise from two separate static point traces and determining rotation ;) (It's much easier when using two moving points, for obvious reasons ;) )
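For reference, the two-point roll estimate itself is only a couple of lines (hypothetical coordinates below); the issue raised above is that the per-point trace noise propagates straight into the recovered angle:

[code]
import numpy as np

# Hypothetical (x, y) traces of two static points, one row per frame.
p1 = np.array([[120.0, 80.0], [120.03, 80.02]])
p2 = np.array([[560.0, 95.0], [560.01, 95.06]])

vec = p2 - p1                                      # baseline between the two points
angles = np.arctan2(vec[:, 1], vec[:, 0])          # orientation of that baseline, radians
roll = np.degrees(angles - angles[0])              # apparent camera roll relative to frame 0
print(roll)
[/code]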

3. Let's assume that you use only one field of data. If you have a really well defined edge (assume that it transitions within a single pixel, with no edge blur), then as the edge moves down in 0.25 pixel steps thru the field, this is how it appears in the video output (assuming a perfect input/output transform):

Actual position / Apparent position
0.00 0.00
0.25 0.25
0.50 0.50
0.75 0.75
1.00 1.00 (this scan line is missing from this field)
1.25 1.00 |
1.50 1.00 |
1.75 1.00 V
2.00 1.00 (reaches, but doesn't encroach into pixel)
2.25 2.25
2.50 2.50
2.75 2.75
3.00 3.00
3.25 3.00 (pixel not in this field)
3.50 3.00
3.75 3.00
4.00 3.00
4.25 4.25

This is the transform out of which you're trying to get sub-pixel location.
I'm not using one field. There's a valid point in there, sure, but there are no *jumps* in the output data, as expected. Are you sure that each interlace field discarded alternate lines ? :)

Of course, this just applies to a point, or a line parallel to the pixel layout. In an unusual twist, points that have edge blurs that extend at least 2 pixels can actually improve your resolution.
See prior post on regions.

As I said at the beginning, all of the above are fine points. But when you're claiming 0.2 pixel accuracy, then every possible source of error matters. And watching things like, for example the camera panning back & forth (not hard mounted), leave me very skeptical.
Being skeptical is fine, but a lot of this comes from this not being a field you are experienced in. That's fine, but it would be appreciated if you chill out a bit. I'm willing to do the leg work, but there's a limit :)

One thing that I got to do several days ago was to watch some motion tracking programs attempt to track points in videos. I was underwhelmed. The people who were doing the video work had to manually adjust the reference point (establish key points) every (it appeared) 20 frames or so.
Perhaps they need some tips :) I perform zero manual adjustment.

Perhaps SE has much better algorithms than those programs. Again, something that needs to be proven.
SynthEyes should be paying me for this. lol.

You'd have to prove this correlation.
Okay. First intention would be variance from the perfect known movement.

You're asserting that the accuracy doesn't depend on the local "video data quality". I find that hard to believe. Proof would help.
No I'm not. I'm saying that if you accept that SE is capable of detecting movement to +/- 10 pixels, then if the noise level in a trace is +/- 100 pixels, the additional noise is caused by noise in the video, not noise in the detection algorithms.

A halo due to ...?
Impossible to say for sure. Probably the edge contrast. Makes me think you don't know what I'm talking about though.

Care to elaborate on what halo might do to SE's interpolation algorithms. Care to prove it?
:) Read up on what halo effects in video are. It shouldn't affect a trace whose region includes such data, as it's consistent across frames.

I don't know the various compression algorithms that may (or may not) be applied to standard TV broadcasts. But I'd be really surprised if there are none. Care to elaborate?
There are definitely compression artefacts. I don't think the feature you mentioned is such, as I said.

femr, femr, femr. The words are how we attempt to communicate. And using the same word to mean two completely different things doesn't help.
My usage of the words has been correct.

And you've used the same word, in the same discussion, to mean 1) an assemblage of 2 deinterlaced fields, 525 lines and 2) a single 262.5 lines.
Correctly.

We weren't talking about camera film, etc. We were talking about video frames in THIS application.
In context, we were talking about you not understanding the variable correct contextual use of the word frame. As I've said, to make things easier on ya, I'll try and use the word field more often. I might also say, sod it, I'm tired of the attitude, and call them all images instead.

I'm not retracting anything. I never said that I'd do an error analysis for you.
Incorrect, but no matter.

I said that I thought that I could demonstrate that you weren't getting accuracies down in the fractional pixel range.
Have you ?

That I thought that they were up close to a full pixel.
And now ?

I'll continue that discussion.
Okay.

What are you talking about? Replicating your study is not on my agenda.
You said *there are far too many details about your analysis that are obscure & not well explained*. I just provided you with the procedure to perform *my analysis* in full.

How about the original data (not the 2 point averaged) on those static points.
Again, no probs.
 
I'm not using one field. There's a valid point in there, sure, but there are no *jumps* in the output data, as expected. Are you sure that each interlace field discarded alternate lines ? :)
Again, for the sake of clarity, let me know if this is correct.

The interlaced image comes from a double speed (59.94 fps in this case) source, after applying an interlacing technique. It doesn't matter much who does the interlacing, be it the camera or the DVD maker. The point lies in how the interlacing is done.

The interlacing process does not involve erasing every other row of each frame, but downsampling the original frame to half the height of the final video's resolution (to 240px in this case), maybe with a 0.5 pixel offset. The result is then put into the interlaced frame as a field, in a process that can be better defined as an encoding than as a composition, in the sense that it can be decoded (deinterlaced) again to obtain the downsampled image without loss of information, and that's what the TV does when the video is reproduced.

If that's the case, it should be clear that due to the downsampling process, SE's algorithms can be applied without problems to the result of deinterlacing.

As an aside, since the interlacing process seems to have been done digitally from a digital source, I'd be curious to see the difference in trace results between the "bob-doubled" form and the unfolded form. No need to satisfy my curiosity, though, because that's beside the point, but maybe you already tried and can briefly comment on the results.
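In code terms, the field structure described above looks like this (a sketch only; an assumed 704×480 top-field-first frame, and the real extraction was done with video tools rather than by hand):

[code]
import numpy as np

frame = np.zeros((480, 704, 3), dtype=np.uint8)    # stand-in for one decoded MPEG-2 frame

top_field    = frame[0::2]     # 240 lines sampled at one instant
bottom_field = frame[1::2]     # 240 lines sampled ~1/59.94 s later, offset 0.5 lines vertically

# "Unfolded" presentation: the two half-height fields side by side in one image,
# rather than line-doubling each field back to full height (bob-doubling).
unfolded = np.hstack([top_field, bottom_field])
[/code]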
 
The interlaced image comes from a 59.94 fps source
Yes. (I've edited your sentence a little)

It doesn't matter much who does the interlacing, be it the camera or the DVD maker. The point lies in how the interlacing is done.
Yes.

The interlacing process does not involve erasing every other row of each frame, but downsampling the original frame to half the height of the final video's resolution (to 240px in this case), maybe with a 0.5 pixel offset.
It's not always done this way, but yes.

The result is then put into the interlaced frame as a field
Yes.

it can be decoded (deinterlaced) again to obtain the downsampled image without loss of information, and that's what the TV does when reproduced.
Minefield. Differing DVD players will treat the data in a multitude of ways. Some taking account of frame flags, some not. Some TVs will also treat the input signal in various ways.

Essentially, most modern devices will make attempts to either reconstruct the image into a correct form, or display the interlace frames in a way which preserves as much vertical resolution as possible.

If that's the case, it should be clear that due to the downsampling process, SE's algorithms can be applied without problems to the result of deinterlacing.
Yes. Jumps over pixel boundaries would be detectable if this was not the case.

As an aside, since the interlacing process seems to have been done digitally from a digital source, I'd be curious to see the difference in trace results between the "bob-doubled" form and the unfolded form. No need to satisfy my curiosity, though, because that's beside the point, but maybe you already tried and can briefly comment on the results.
No problem. I've got a bit of a backlog of *tasks* on this thread, but I'll get there.
 
femr2 said:
when you're pushing the limits of what is possible (0.2 pixel)
Fine, though 0.2 pixels is far from pushing the limits in technical terms.

Thought I'd pre-empt the inevitable *disbelief* that my response is bound to provoke.

Consider a single white pixel feature on a black background.

If that feature moves left by one pixel, gradually, the aliasing end result is that the intensity of the original pixel drops, and the intensity of the adjacent pixel increases.

Assuming simple 8-bit greyscale colour depth, that alone allows for detection of 255 positions, translating to 1/255th of a pixel (0.0039 pixel accuracy if you will).

Ramp this up with...

a) Full 24bit RGB colour (3 planes of 8 bit data)
b) Region based pattern matching (normally involving well over 64 separate pixels. Hundreds in the case of static point traces)
c) 8× upscaling
d) Lanczos3 filtering

...and I hope you can appreciate that potential technical sub-pixel position change determination accuracy can be...awesome :)
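The single-white-pixel argument can be illustrated directly (an idealised, noise-free sketch; the rendering and centroid recovery below are my own stand-ins, not SE's algorithm):

[code]
import numpy as np

def render(pos, width=8):
    """1-D row of 8-bit pixels containing a 1-pixel-wide white feature at `pos`."""
    row = np.zeros(width)
    left = int(np.floor(pos))
    frac = pos - left
    row[left] = (1.0 - frac) * 255       # brightness shared between two pixels...
    row[left + 1] = frac * 255           # ...according to the sub-pixel position
    return np.round(row)                 # quantise to integer intensity levels

def centroid(row):
    """Intensity-weighted centroid recovers the position to ~1/255 pixel."""
    return np.sum(np.arange(row.size) * row) / np.sum(row)

for true_pos in (3.0, 3.1, 3.25, 3.504, 3.9):
    print(true_pos, round(centroid(render(true_pos)), 4))
[/code]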
 
Thought I'd pre-empt the inevitable *disbelief* that my response is bound to provoke.

Consider a single white pixel feature on a black background.

If that pixel moves left by one pixel, gradually, the aliasing end result is that the intensity of the original pixel drops, and the intensity of the adjacent pixel increases.

Assuming simple 8-bit greyscale colour depth, that alone allows for detection of 255 positions, translating to 1/255th of a pixel (0.0039 pixel accuracy if you will).

Ramp this up with...

a) Full 24bit RGB colour (3 planes of 8 bit data)
b) Region based pattern matching (normally involving well over 64 separate pixels. Hundreds in the case of static point traces)
c) 8× upscaling
d) Lanczos3 filtering

...and I hope you can appreciate that potential technical sub-pixel position change determination accuracy can be...awesome :)
You think you are getting more accurate? You are not. You can't source any of this nonsense.

Source your nonsense. Pixel moves, does the Sun rotate around the earth? lol - When will you reveal how this relates to 911 CT?

You make up your analysis as you go. Like a BS artist talking it up. Why are you qualified to make up this analysis? Please explain your studies and courses. Degrees. Engineering degree?

But most of all reference actual scientific work which backs up your claims.

Next of all, reference the 911 CTs your work refutes or tries to support.

What was the original camera, and what are its specs?
The lens.
The position and elevation.
The temperature.
The wind.
The weather conditions.
What generation is the video you are using?

Which PhDs in this field have you worked with, and what do they say? What CT was this about? What errors? Did you realize what you are doing to the data? lol - you are making the data worse - you are making it pretty, not more accurate. Source your methods and verify why they give more accuracy as you make pretty pictures, not better data. Major problems - you can't source your methods as ways to do what you are doing. In your analysis, it is like you are taking 5.5 and rounding it to 4 sometimes and 7 other times. This is funny stuff if you think about it. You are studying the facade of a building which is collapsing internally. That is funny, as you are looking at the wrong things and using a method/means which is limited, and you don't realize why.
 
beachnut said:
You are studying the facade of building which is collapsing internally.
This is a great point.

tfk and femr2, at the risk of piling on, can either of you tell me what any of this has to do with 9/11 Conspiracy Theories? I reported the thread, and it didn't get moved to Science, so may I assume that there is / was some conspiracy afoot that sub-pixel resolution will ultimately reveal?
 
This is a great point.
Not really. This thread is examining tracing techniques. Those techniques are used for many purposes, primarily for WTC 1 rather than WTC 7 to date. There are many implications which will be presented over time once those skeptical of the methods have been satisfied of their validity. This thread is an offshoot of threads related to things like *jolts*, the NIST initiation sequence for WTC 1, WTC 7 failure propagation, ROOSD. All that sort of thing.
 
You can't source any of this nonsense...Source your nonsense...reference actual scientific work which backs up your claims...Source your methods...you can't source your methods...
Try NIST :)

The fact that you continually ask for *someone else* to provide you with information about a subject you clearly do not understand, in the slightest, speaks volumes. Am sure I've heard the phrase *appeal to authority* before somewhere. This is not rocket science. It's all quite simple really. A rather obscure field, perhaps, but fundamentally simple.

I have no desire to hear anything further from you at all, but if you must insist on *wearing a funny hat and stepping into the room shouting nonsense* please be very specific about what you believe I have gotten wrong, prove so, and provide correction. I'm afraid I'm going to report further similar posts from yourself.

Cheers matey. x
 
Yes. That is termed a bob-doubled form. I leave the deinterlaced video in unfolded form though, like this...
[qimg]http://femr2.ucoz.com/_ph/7/2/125286220.jpg[/qimg]

Here is why I have trouble with the way you utilize these fields.

The two images above are in (approx) a 2:3 height to width ratio, whereas the full raster we see on TV is 4:3. Obviously half the height is missing, gone, not there, in both fields, yet you seem to treat each pixel in each field as if it represents full height when in fact it represents half of the full height, the other field containing the other half (and removed in time by nearly 1/60th of a second).
 
orly?

There are many implications which will be presented over time...
Again, with due respect, I don't believe you. No implications will be presented. No conclusions will be drawn. You and yours will whinge about anomalies with NIST or whatever until the end of time, just like the JFK people. In the meantime, life goes on, building codes get revised, new skyscrapers have concrete cores, etc.

If this discussion is purely technical, it belongs in Science and Technology, because you refuse to state the "conspiracy" involved.

ETA - never has the phrase "counting the number of angels dancing on the head of a pin" been more apt.
 
you seem to treat each pixel in each field as if it represents full height
Incorrect. I apply separate vertical and horizontal scaling metrics when translating from pixel to real world units, as has been discussed numerous times during this thread.
 
