Merged Discussion of femr's video data analysis

Oh, you DID read it
Correct.

and still didn't understand
Incorrect.

Allow me to spoon-feed you your own data.
No thanks. I'm rather familiar with it already.

Your images are still invisible, but I had a look at your profile. Almost invisible and the size of a postage stamp.

The relevant point is your inept claim that it was over filtered. Tell me, Tom, what filtering did I apply to the data ?

Here it is...
http://femr2.ucoz.com/_ph/7/508127617.png


You said in the last post that this data had long since demonstrated free fall. And that you stand by that statement today.
Incorrect. *freefall*. I certainly stand by *freefall*. If you really think that after our several months of discussion about over-g and the shape of the acceleration curve that you can get away with such a ridiculous and petty attempt at distorting meaning, then in your own words...I'm pointing and laughing.

On this graph, I've charted 3 data points as they SHOULD be drawn
Priceless. I have repeatedly told you about your very foolish mistakes on this front. (Am sure W. D. Clinger will be more than unimpressed also)
A great example of how NOT to approach the actual data.

Pulled from your ***.

What you think that the numbers "-120" & "120" mean
That you think NIST are a bunch of idiots, wearing capes, who get tucked in by their mommy before bedtime ;)

3. the correct numbers are: your velocity uncertainty (i.e., error band) for the 1 second interval is really ±2 ft/sec.
You use a +/-1 ft/s initial value, which you pulled from your *** :)

The velocity uncertainty for the 1/60th second interval is ±120 ft/sec.
Only a complete idiot would use such an interval tom, of course.

You don't know WTF you are talking about.
Incorrect.

You are innumerate. "numerically illiterate".
Incorrect.

Given a constant position error, the uncertainty in your velocity calculations increases dramatically simply because you take data at a higher rate. Which is exactly what I said.
Only an idiot would use near-adjacent samples to determine velocity from relatively high sample-rate data containing noise of any kind.

This is exactly how error analysis works.
Otherwise known as *how to post really big error numbers through use of inept data treatment in order to waffle on ad infinitum*
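The numbers being argued over drop out of a one-line worst-case bound. This is a sketch under my own assumptions (a fixed ±1 ft position error and a simple two-point velocity estimate; neither poster lays the working out in this form):

```python
def worst_case_velocity_error(pos_error_ft: float, dt_s: float) -> float:
    """Worst-case error bound on a two-point velocity estimate
    v = (x2 - x1) / dt.

    If each position reading can be off by +/- pos_error_ft, the
    difference (x2 - x1) can be off by up to 2 * pos_error_ft, so the
    velocity error bound scales inversely with the sample interval.
    """
    return 2.0 * pos_error_ft / dt_s

# +/-1 ft position error over a 1 s interval -> +/-2 ft/s
print(worst_case_velocity_error(1.0, 1.0))
# The same position error over a 1/60 s interval -> +/-120 ft/s
print(worst_case_velocity_error(1.0, 1.0 / 60))
```

This is the whole dispute in miniature: the numerator's error is fixed by the positional accuracy, while the denominator shrinks with the sampling interval, so differentiating near-adjacent samples of noisy data amplifies the noise.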

I note you've ignored my comments regarding NIST, so one initial question...

How stooopid must NIST be to suggest a positional accuracy of +/- 1 inch for the data I provided from them ?

They should hire you, so you can tell them that their positional accuracy is actually +/- 1 foot, and any velocity derivations are only accurate to +/- 60 ft/s.

I'd pay good money to see you tell them that when the West edge of WTC7 moved 4 inches in 4 seconds, that it was ACTUALLY moving at 60 ft/s.

Priceless.

You published your data. Darned sporting of you, old chap.
You're welcome.

It ain't "your data" any more.
Incorrect.

I'll be using & citing your published data.
You're welcome to. Though why you'd choose to is interesting.

You don't have any more right to restrain my use of your published data
Eh ? You must be seriously paranoid if you think I'm trying to *restrain your use of my data*. Maybe some time on the beach ?
 
Yawn. Already have, numerous times. Re-read the thread.

If you still don't understand, given the ridiculous amount of detail already provided to you, there is no hope for you.

If you think your TV displays 30fps...lol :jaw-dropp

You "explain" nothing.
Ever.
You merely "assert".

Just like the article that you've posted here. It says almost nothing about broadcast video. It talks about making DivX videos.

But the little it does say contradicts you.

Allow me to quote from the very article that you have so yawningly referred us to...

First note near the bottom:
"In NTSC countries (USA, Japan,...) it's ca. 30 fps (59.94 fields per second)"

So, yeah, I believe that they broadcast at 30 full frames per second. And that they implement that by broadcasting 60 half-frames (alternatingly upper & then lower scan lines) per second.

You like to alternate your definition of "frame", one time using the term to define 525 scan lines, and another time using the same term to define 262.5 scan lines.

It amuses you, for some childish reason.

And I believe that you get off on obfuscating, because it is the only way that you can tell yourself that you're privy to some superior knowledge.

You don't communicate in order to spread clarity & understanding. You communicate as a game. A contest.

Now, back to this video:

It was generated by a CBS news crew. Presumably using a 2000 era professional ENG camera. And broadcast live.

At NTSC standards.

Which means it was broadcast at:

60 fields per second, or
60 "262.5 scan line" images per second, or
60 half-frames per second,

All of the above, you like to call "frames".

And if it was recorded at 60 fields per second, or 60 "262.5 scan line frames" per second then there is no way to generate 60 full ("525 line") frames per second from that data.

Feel free to try to offer a clear explanation if you disagree.

But why don't you try backing up your "exposition" with a references which doesn't directly contradict you this time. (See "NTSC = 30 fps" above.)
 
FWIW, the whole discussion on the frame rate boils down to what the sampling rate of the data that femr2 has provided is, and that's ~60 Hz.

Data is sampled (by the camera) at 60 Hz, then scaled down, then encoded, two frames at a time, into interlaced frames, then stored or transmitted as a 30 Hz stream; femr2 then reverses the encoding to obtain a 60 Hz stream, which he uses to draw the data from. This is an overview, some of the details are or can be different.
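The field-separation step described above can be sketched in a few lines. My assumption here is the usual convention that the interlaced frame carries one field on its even scan lines and the other on its odd scan lines:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Separate an interlaced frame into its two fields.

    The even scan lines form one field and the odd scan lines the
    other; the two fields were exposed roughly 1/60 s apart, so
    splitting them recovers a ~60 Hz sample stream from a 30 Hz
    interlaced frame stream.
    """
    top = frame[0::2, :]     # even scan lines (first field)
    bottom = frame[1::2, :]  # odd scan lines (second field)
    return top, bottom

# Toy 8-line frame in which every row is labelled with its scan-line index
frame = np.repeat(np.arange(8)[:, None], 4, axis=1)
top, bottom = split_fields(frame)
print(top[:, 0])     # [0 2 4 6]
print(bottom[:, 0])  # [1 3 5 7]
```

Note that each field samples *different* scan lines, which is exactly the complication argued over later in the thread.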

Yup. Have been over it time and time again with some folk here. I note those responsible for returning to the topic have neither admitted their error nor apologised for their disruptive behaviour. Ho hum.

I think this simple frame is a great *picture tells a thousand words* example of why the fields must be separated...

556666931.png


The question to ask being: What is the location of the NW corner.
 
You like to alternate your definition of "frame", one time using the term to define 525 scan lines, and another time using the same term to define 262.5 scan lines.
The meaning and valid use of the word frame is indeed very flexible. At no time does any of my usage of the word ever relate to a definition of specific scanline quantities. Drowning in your own waffle.

It amuses you, for some childish reason.
Incorrect. This is very tedious.

And I believe that you get off on obfuscating, because it is the only way that you can tell yourself that you're privy to some superior knowledge.
If you think that the very basics of interlacing are *secret privy knowledge* you simply make it clear you have no idea what you are talking about.

And if it was recorded at 60 fields per second, or 60 "262.5 scan line frames" per second then there is no way to generate 60 full ("525 line") frames per second from that data.
Correct, IF there is movement. (And mostly correct even if there isn't)

But why don't you try backing up your "exposition" with a references which doesn't directly contradict you this time. (See "NTSC = 30 fps" above.)
:) fps means both frames per second and fields per second. Can perhaps use ips if it stops you getting so very very confused, but I doubt it would make any difference.

556666931.png


Please tell me the location of the NW corner in the frame above.
 
It's not arrogance at all. It's utter disbelief at the lack of understanding.

What's the refresh rate of your telly ?

Mine's 50Hz.

It displays 50 pictures per second.

PAL is 25i; NTSC is 30i (actually 30*1000/1001).

The i refers to interlaced, not progressive.

Each frame contains two separate images...called fields...which are separated and displayed sequentially.

:rolleyes:

Yup, "i" refers to interlace.

Now, what did you say that the "25" refers to? Or the "30" in the NTSC spec?

Oh, yeah. You didn't mention those blatantly obvious numbers.

Because they refer to frame rates.

Lame...
 
Whilst NIST is on the slab...

NCSTAR 1-9 Vol 2 Appendix C - VIDEO ANALYSIS OF WTC 7 BUILDING VIBRATIONS BEFORE COLLAPSE

A few quotes:

  • The west edge of WTC 7 (to the right in the frame) was of the most interest in this analysis. This was the northwest corner of the building, which was clear of smoke throughout the recorded period.
  • From the camera’s perspective, the angle of this edge appears close to vertical.
  • The tripod mount kept fluctuations in viewpoint to a minimum, although they were not negligible.
  • In the previous WTC 2 moiré analysis, frame-to-frame and slow motions of a tripod-mounted camera were found to be a source of error.
  • Due to the camera location, points on the north face near the northwest edge were closer to the camera than points near the northeast edge. However, this distortion was small, since the width of the north face was much smaller than the distance of WTC 7 from the camera.
  • The perspective view of the camera looking up at WTC 7 also introduced some error into the measurement of the number of pixels for the width, as did the uncertainty of a couple of pixels in the exact location of the edges defining the north face.
  • Given these sources of error, an estimate for this video of the width of the north face of WTC was 301 ± 4 horizontal pixels.
  • Since the true dimension of the north face was 329 ft, the conversion factor was 1.09 ft ± 0.02 ft per horizontal pixel. Combining this with the equivalence of 100 ± 10 vertical pixels for each horizontal pixel gave the final conversion factor of 1.1 ft ± 0.1 ft (13 in. ± 1 in.) for each 100 pixels of vertical marker motion.
  • To prepare for analysis, the video clip was exported into a sequence of images, with each image carrying the data for a single frame of the video. Each frame was then converted from the original RGB color into grayscale values.
  • One source of uncertainty in this analysis was the curve-fitting process; another arose from defects in the video images.
  • Many of the frames from this video contained defects, such as the color and black-and-white patterns in the lower center and lower left of Figure C-1. Similar defects occurring along the northwest edge being used for the analysis were responsible for some outlier points in the results.
  • To find the intersection point of the pixel intensity plots for these two pixel columns, the data for each column were first fitted to a smooth curve. A third-order polynomial least-squares fit gave a good compromise between the variance (a measure of the distance of each data point from the curve) and an estimate of the error in determining the intersection point.
  • Starting at about 6 s before the penthouse began its downward movement into the building, there was an abrupt change in the slope of the data that marked the beginning of oscillations that continue until collapse. There was also a second abrupt change in the data at about 1.5 s before the penthouse started moving downward.
  • there are major changes in the location of the marker point that occurred over long time intervals of 20 s or greater. These changes were likely due to movement of the camera.
  • Of primary interest in this analysis was any information that could shed light on the collapse sequence of WTC 7.
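For concreteness, the scaling arithmetic in those quotes can be reproduced as follows. Combining the two relative errors in quadrature is my assumption; NIST does not spell out how it combined them:

```python
import math

TRUE_WIDTH_FT = 329.0                    # stated width of the north face
width_px, width_px_err = 301.0, 4.0      # measured width, +/- pixels
vpix_per_hpix, vpix_err = 100.0, 10.0    # vertical px per horizontal px

# Horizontal scale: ft per horizontal pixel
h_scale = TRUE_WIDTH_FT / width_px                 # ~1.09
h_scale_err = h_scale * (width_px_err / width_px)  # ~0.015 (NIST quotes 0.02)

# Vertical scale: ft per 100 vertical pixels of marker motion
v_scale = h_scale * 100.0 / vpix_per_hpix          # ~1.1

# Combine the two relative errors in quadrature (assumption)
rel_err = math.hypot(width_px_err / width_px, vpix_err / vpix_per_hpix)
v_scale_err = v_scale * rel_err                    # ~0.1 ft, i.e. ~1 inch

print(f"{h_scale:.2f} +/- {h_scale_err:.2f} ft per horizontal pixel")
print(f"{v_scale:.1f} +/- {v_scale_err:.1f} ft per 100 vertical pixels")
```

The ±10 vertical pixels term dominates, which is why the final figure is ±0.1 ft rather than the ±0.02 ft of the horizontal scale alone.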

Tom,

If you have been following the detail of our *discussion*, it should be immediately clear to you why I have highlighted the quotes above.

If so, please state clearly why each is significant.

If not, say so, and I shall.

Please. You first.

It's always entertaining...
 
Correct.
Your images are still invisible, but I had a look at your profile. Almost invisible and the size of a postage stamp.

They are invisible to you because of your paranoia. Accept cookies from JREF & see the images. I'm tired of having to post them in separate locations to coddle your paranoia.

The relevant point is your inept claim that it was over filtered. Tell me, Tom, what filtering did I apply to the data ?

No, that is NOT the "relevant point".

I brought up the point. Not you. I made the relevant point when I brought it up.

Your "relevant point" is your typical tap dance to evading direct questions.

Why don't you answer my question directly.

Here it is...
http://femr2.ucoz.com/_ph/7/508127617.png


Incorrect. *freefall*. I certainly stand by *freefall*. If you really think that after our several months of discussion about over-g and the shape of the acceleration curve that you can get away with such a ridiculous and petty attempt at distorting meaning, then in your own words...I'm pointing and laughing.

Oh, this should be rich.

Are you saying that the blue line that I overlaid on your graph is NOT the acceleration curve for something that goes into real "free fall"?

Are you saying that your curve does represent the acceleration curve for something that IS in free fall?

Please answer these questions as clearly as you know how…

We'll get to the rest of your post after you reply to these points.
 
Please. You first.
No problem. It is obviously not *immediately clear to you why I have highlighted the quotes*, and your *snark* (as you would put it) is simply a screen to hide that fact (or you'd simply reel off a response). lol.

The west edge of WTC 7 (to the right in the frame) was of the most interest in this analysis. This was the northwest corner of the building, which was clear of smoke throughout the recorded period.
Not clear of smoke. No attempt to quantify the effect of smoke distortion upon their method. Their method is very sensitive to such minor pixel intensity fluctuations. Mine isn't.

From the camera’s perspective, the angle of this edge appears close to vertical.
But it is not vertical, and no attempt to correct their readings for it was performed.

The tripod mount kept fluctuations in viewpoint to a minimum, although they were not negligible.
Yet no attempt was made to extract camera movement from their data.

In the previous WTC 2 moiré analysis, frame-to-frame and slow motions of a tripod-mounted camera were found to be a source of error.
Yet, again, no attempt to quantify or extract such movement from their data.

The perspective view of the camera looking up at WTC 7 also introduced some error into the measurement of the number of pixels for the width, as did the uncertainty of a couple of pixels in the exact location of the edges defining the north face.
Yet no perspective correction was applied to the data.

Given these sources of error, an estimate for this video of the width of the north face of WTC was 301 ± 4 horizontal pixels.
Why +/- FOUR pixels ? No detail on where the number came from, but okay...+/-4 pixels. Poor.

Since the true dimension of the north face was 329 ft, the conversion factor was 1.09 ft ± 0.02 ft per horizontal pixel. Combining this with the equivalence of 100 ± 10 vertical pixels for each horizontal pixel gave the final conversion factor of 1.1 ft ± 0.1 ft (13 in. ± 1 in.) for each 100 pixels of vertical marker motion.
+/- 1 inch.

To prepare for analysis, the video clip was exported into a sequence of images, with each image carrying the data for a single frame of the video. Each frame was then converted from the original RGB color into grayscale values.
They threw away 66% of the image data. Bad idea.

To find the intersection point of the pixel intensity plots for these two pixel columns, the data for each column were first fitted to a smooth curve. A third-order polynomial least-squares fit gave a good compromise between the variance (a measure of the distance of each data point from the curve) and an estimate of the error in determining the intersection point.
A ridiculous process which adds unquantified noise.
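For readers unfamiliar with the step being criticised, here is a toy version of the fit-and-intersect procedure NIST describes. The intensity data below is invented for illustration; NIST fits the measured pixel-intensity profiles of two adjacent pixel columns:

```python
import numpy as np

rng = np.random.default_rng(0)
rows = np.arange(20, dtype=float)  # scan-line positions along the edge

# Invented intensity profiles for two adjacent pixel columns:
# one darkens going down the edge, the other brightens, plus noise.
col_a = 200.0 - 6.0 * rows + rng.normal(0.0, 2.0, rows.size)
col_b = 80.0 + 2.0 * rows + rng.normal(0.0, 2.0, rows.size)

# Third-order polynomial least-squares fit to each column's profile
fit_a = np.polynomial.Polynomial.fit(rows, col_a, 3)
fit_b = np.polynomial.Polynomial.fit(rows, col_b, 3)

# The edge position is taken as the crossing point of the fitted curves
diff = fit_a - fit_b
real_roots = [r.real for r in diff.roots()
              if abs(r.imag) < 1e-9 and rows[0] <= r.real <= rows[-1]]
print(real_roots)  # the noise-free lines cross at exactly 15
```

The noise shifts the recovered crossing point slightly away from 15; that residual shift is the curve-fitting uncertainty NIST refers to.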

One source of uncertainty in this analysis was the curve-fitting process; another arose from defects in the video images.
Neither of which are quantified.

Many of the frames from this video contained defects, such as the color and black-and-white patterns in the lower center and lower left of Figure C-1. Similar defects occurring along the northwest edge being used for the analysis were responsible for some outlier points in the results.
Their method is particularly prone to picking up such defects, as it uses the entire West edge of the building. My method is much less prone as it uses only the local trace point region.

Starting at about 6 s before the penthouse began its downward movement into the building, there was an abrupt change in the slope of the data that marked the beginning of oscillations that continue until collapse. There was also a second abrupt change in the data at about 1.5 s before the penthouse started moving downward.
A metric clearly insensitive to specific determination of positional error, as CHANGES are relative.

there are major changes in the location of the marker point that occurred over long time intervals of 20 s or greater. These changes were likely due to movement of the camera.
Which, again, they have not bothered to quantify. Likely ? Perhaps camera movement, perhaps actual building movement.

Of primary interest in this analysis was any information that could shed light on the collapse sequence of WTC 7.
Absolutely. Nary a mention of velocity or acceleration.


Any idea how this relates to our *discussion* ?

Firstly, I suggest you, er, critique the NIST error analysis.
 
I brought up the point. Not you. I made the relevant point when I brought it up.
Your momma. (It's my ball and I wanna play with it). LMAO. What filtering did I apply tom ?

Are you saying that the blue line that I overlaid on your graph is NOT the acceleration curve for something that goes into real "free fall"?
No.

Are you saying that your curve does represent the acceleration curve for something that IS in free fall?
No.

Please answer these questions as clearly as you know how…
Just did.

You really are reaching brave new heights of pedantry, Tom.

There are two (infinitely small) periods of time on the acceleration curve at which acceleration could be described as *at freefall*.

Yawn.
 
FWIW, the whole discussion on the frame rate boils down to what the sampling rate of the data that femr2 has provided is, and that's ~60 Hz.

Data is sampled (by the camera) at 60 Hz, then scaled down, then encoded, two frames at a time, into interlaced frames, then stored or transmitted as a 30 Hz stream; femr2 then reverses the encoding to obtain a 60 Hz stream, which he uses to draw the data from. This is an overview, some of the details are or can be different.


pg,

Yes & no...

[I think that you're aware of all of this. I'm pointing it out for others.]

He does get an image every 1/60th of a second, but he does not get an image of the same physical locations every 1/60th second. He gets every other scan line every 1/60th second.

He gets an image of the same physical locations every 1/30th of a second, when he rescans the same field.

[If the format were different, say, to have broken the image up into left half for field 1 & right half of the image for field 2, then he'd be taking half images (L then R) at 60 fields/sec, but his data set would only resolve motion of any object fully within either field with 30 image/second resolution.]

In other words, scan line (e.g.) 25 in the upper field does not represent the same physical location (e.g., height) as scan line 25 in the lower field. The lower field is imaging a line in space that is offset by (one vertical pixel x the actual vertical dimension / vertical pixel scale factor).

You'll have to ask femr, but I do not believe that he corrects his data set for this. If he doesn't, then all his data for one field is in error by one full vertical pixel. Which, if true, would by itself seriously undermine his claim of an accuracy of ±0.2 pixels.

Note that slight vertical camera motions also shift a given physical line from one field to the other of both stationary & moving objects. Another complication.

And as a given single pixel feature (such as a roofline edge) descends thru the image, it is going to alternately appear in different fields. As the descent speeds up, its appearance in one or the other field will have a "beat frequency" phenomenon similar to wagon wheels appearing to spin backwards as they speed up. Another complication.

To some extent, this can be compensated by the SynthEyes' ability to recognize large area patterns that stretch across many scan lines. But it still does impact the program's vertical resolution.

For this reason, I have serious doubts that the program's resolution is going to be the same in the horizontal & vertical direction. A consideration that I don't recall femr ever addressing. More diminished confidence in his ability to provide a convincing error analysis.

Other than "you should accept my assertion..."

tom

 
He does get an image every 1/60th of a second, but he does not get an image of the same physical locations every 1/60th second. He gets every other scan line every 1/60th second.
You are making an assumption about the method of interlacing, namely that it was performed by the capture of two full height frames with the interlace frame being constructed by alternating between each captured frame on each scanline. That is generally not the case.

He gets an image of the same physical locations every 1/30th of a second, when he rescans the same field.
I trace each field separately. Two separate 30*1000/1001 sample per second traces, each offset by 1/(60*1000/1001)s.

If the format were different, say, to have broken the image up into left half for field 1 & right half of the image for field 2, then he'd be taking half images (L then R) at 60 fields/sec, but his data set would only resolve motion of any object fully within either field with 30 image/second resolution.
I've repeatedly given you full detail on the methods, and included details of the unfolded frames...
http://www.internationalskeptics.com/forums/showpost.php?p=6261449&postcount=138
125286220.jpg


In other words, scan line (e.g.) 25 in the upper field does not represent the same physical location (e.g., height) as scan line 25 in the lower field.
Generally correct. (Method of interlace determines correctness)

The lower field is imaging a line in space that is offset by (one vertical pixel x the actual vertical dimension / vertical pixel scale factor).
Again, depends upon the method of interlace.

You'll have to ask femr, but I do not believe that he corrects his data set for this.
You believe incorrectly, even though I have explained to you numerous times.

Separate field traces are joined relative to one single sample point, the first, which is set to position (0). All subsequent changes in position are relative to that initial location.
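A sketch of that joining step, under my own assumptions about the trace format (two per-field position lists at the ~30 Hz field-repeat rate, the second field lagging the first by one field interval):

```python
FIELD_RATE = 60 * 1000 / 1001   # NTSC fields per second
FIELD_DT = 1.0 / FIELD_RATE     # ~1/60 s between successive fields

def merge_field_traces(field1, field2):
    """Interleave two per-field position traces into one ~60 Hz trace.

    Each input holds positions sampled every 2 * FIELD_DT; field2 lags
    field1 by FIELD_DT. All positions are zeroed on the first field1
    sample, so every value is relative to that initial location.
    """
    origin = field1[0]
    merged = []  # list of (time_s, relative_position) pairs
    for i in range(max(len(field1), len(field2))):
        if i < len(field1):
            merged.append((2 * i * FIELD_DT, field1[i] - origin))
        if i < len(field2):
            merged.append(((2 * i + 1) * FIELD_DT, field2[i] - origin))
    return merged

trace = merge_field_traces([10.0, 10.4, 11.2], [10.1, 10.7])
print([round(p, 1) for _, p in trace])  # [0.0, 0.1, 0.4, 0.7, 1.2]
```

Zeroing everything against one initial sample means any fixed vertical offset between the two fields appears as a constant bias in one sub-trace, not as an error that grows over time.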

If he doesn't, then all his data for one field is in error by one full vertical pixel. Which, if true, would by itself seriously undermine his claim of an accuracy of ±0.2 pixels.
Not true.

Note that slight vertical camera motions also shift a given physical line from one field to the other of both stationary & moving objects. Another complication.
Correct, if a specific method of interlace was used when constructing the original interlaced frames.

If such a method was used, then separate field traces would show obvious *leaps* at hpix intervals...

45980309.gif


For the Dan Rather footage, the method of interlace is unclear, as one field does show occasional hpix jumps, whereas the other does not.

That is why I prefer to keep them separate.

The simple *jitter* smoothing minimises the effect of such.

And as a given single pixel feature (such as a roofline edge) descends thru the image, it is going to alternately appear in different fields. As the descent speeds up, its appearance in one or the other field will have a "beat frequency" phenomenon similar to wagon wheels appearing to spin backwards as they speed up. Another complication.
It's possible, though not apparent in either of our derived datasets, yes ?

The variance in data over long periods of time, including quite significant camera movement, does not show significant instances.

A point (which I'm sure you'll love) is that the 0.2 pixel variance suggested applies to the Dan Rather viewpoint only, which you were told many months ago was far inferior to the Cam#3 viewpoint. Here's some data from the new copy of the Cam#3 data...
943943983.png

Interesting vertical axis scale there.

To some extent, this can be compensated by the SynthEyes' ability to recognize large area patterns that stretch across many scan lines.
Agreed.

For this reason, I have serious doubts that the program's resolution is going to be the same in the horizontal & vertical direction.
The program's resolution doesn't change, but both scaling metric accuracy and effective positional accuracy are different for horizontal and vertical movement, for interlaced video, yes. (Cam#3 horizontal movement accuracy is about +/- 1 inch. Dan Rather vertical movement accuracy, with +/- 0.2 pixel, is about +/- 0.7 ft)
 
If a person reads through the thread from the beginning, it is interesting to see how many wrong claims TFK has made. Remember, on page 1 he had serious doubts that sub-pixel tracking was even possible.

Phenomenal the number of mistakes he made without acknowledging them.

Asking questions is fine, but why all the insults without admitting any past mistakes?

Has he ever produced the error analysis he promised a while ago? Nothing.
 
If a person reads through the thread from the beginning, it is interesting to see how many wrong claims TFK has made. Remember, on page 1 he had serious doubts that sub-pixel tracking was even possible.

Phenomenal the number of mistakes he made without acknowledging them.

Asking questions is fine, but why all the insults without admitting any past mistakes?

Has he ever produced the error analysis he promised a while ago? Nothing.

How does this thread, in your opinion, relate to 911 delusions? How is this related to 911 conspiracy theories?
 
If a person reads through the thread from the beginning, it is interesting to see how many wrong claims TFK has made. Remember, on page 1 he had serious doubts that sub-pixel tracking was even possible.

Phenomenal the number of mistakes he made without acknowledging them.

Asking questions is fine, but why all the insults without admitting any past mistakes?
Indeed, though one issue is whether he actually recognises or learns from any of those mistakes.

For example, from post #1 of this thread...
tfk said:
The really interesting part (to me) has been the enormous impact on the calculated velocity & acceleration that results from the simple act of increasing the sampling rate.

And yet over 500 posts later, he is still making the fundamental mistake of attempting to justify use of near-adjacent samples to determine derived velocity and acceleration data.

The new Cam#3 data will be on the table fairly soon, so shall see what has actually sunk in along the way.
 
No problem. It is obviously not *immediately clear to you why I have highlighted the quotes*, and your *snark* (as you would put it) is simply a screen to hide that fact (or you'd simply reel off a response). lol.

You're right about one thing. I never would have come up with THIS laundry list of "revelations".

It's tough to read minds when minds are this dysfunctional.

But I've learned to let you go first. It's ALWAYS so entertaining.

Just like this post is…

The three main points, before I launch into this mishmash, are:

1. the horizontal motion of the building was important to them. They provided a clear, concise description of their analysis. They did an error analysis, as evident from their constant inclusion of "±tolerances" on all their dimensions.

2. the beginning of the descent of the building was important to them. The same comments as 1. above apply.

3. the exact velocity & acceleration of the collapse, the stuff that you kids get all a-twitter over, was utterly inconsequential. This is proven by the fact that it changed not one conclusion one iota.

Not clear of smoke. No attempt to quantify the effect of smoke distortion upon their method. Their method is very sensitive to such minor pixel intensity fluctuations. Mine isn't.

Wrong. 100% backwards. Their moiré technique is singularly insensitive to smoke, because it gives you large lengths of repeating patterns, all segments of which contain the important results; if one area is obscured by smoke, another area can easily provide the info.

You don't understand Moire.

Your technique is far more inhibited by smoke than NIST's. And overall, for the horizontal motion of the building, their Moire technique is more sensitive than yours.

You claim 0.2 pixels. (Horizontal resolution, you might get close to this.) The horizontal scaling factor is 1.09 ± .02 ft/pixel. Ergo, you claim 13.1"/pixel x 0.2 pixels = 2.62" horizontal resolution.

Keep that number in mind...
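That figure is just the quoted scale factor times the claimed tracking resolution (a check of the arithmetic, nothing more):

```python
h_scale_ft_per_px = 1.09   # NIST: ft per horizontal pixel
claimed_res_px = 0.2       # femr2's claimed tracking resolution

res_inches = h_scale_ft_per_px * 12.0 * claimed_res_px
print(f"{res_inches:.2f} inches")  # 2.62 inches
```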

But it is not vertical, and no attempt to correct their readings for it was performed.

LMAO.

They didn't want it to be vertical. If it were perfectly vertical, then the Moire technique would not have worked.

They didn't need it to be vertical. There is no "correction" that is required.

Yet no attempt was made to extract camera movement from their data.

If the camera had been moving during the time that they were trying to extract horizontal motion data, then they would have gotten no data.

Ergo, I conclude that the errors due to motion during the data acquisition time were minimal.
___

And this group was my favorite…

NIST said:
In the previous WTC 2 moiré analysis, frame-to-frame and slow motions of a tripod-mounted camera were found to be a source of error.
Yet, again, no attempt to quantify or extract such movement from their data.

"No attempt to quantify"? Wrong.

NIST said:
The perspective view of the camera looking up at WTC 7 also introduced some error into the measurement of the number of pixels for the width, as did the uncertainty of a couple of pixels in the exact location of the edges defining the north face.
Yet no perspective correction was applied to the data.

"No attempt to quantify"? Wrong.

NIST said:
Given these sources of error, an estimate for this video of the width of the north face of WTC was 301 ± 4 horizontal pixels.

Why +/- FOUR pixels ? No detail on where the number came from, but okay...+/-4 pixels. Poor.

No, ya moron. Excellent technique. Exactly the type of technique that you should learn from, and attempt to emulate.

Read again - and try to understand this time - what they just said. Start with "Given these sources of error …" and finish with "… 301 ± 4 pixels."

THIS WAS THE CALCULATED RESULT OF THE QUANTIFICATION OF THE ERRORS THAT YOU JUST SAID THEY DIDN'T DO.

LMFAO …

"No attempt to quantify …" And then you read off the quantification. Are you always this clueless?

No need for an answer. Rhetorical.

BTW, this entire exercise was a first-rate example of disclosing the significant sources of error in one's measurement, attempting to quantify each, and then reporting the ultimate error bands associated with the technique.

And ± FOUR pixels sounds just about right to me. By a group of professionals.

±0.2 pixels sounds like the braggadocio of a clueless idjit who hasn't a clue what he is doing. And simply dismisses the whole exercise. While "suggesting" that people simply take his clueless word for things.
___

A close second, in cluelessness quotient...

NIST said:
Since the true dimension of the north face was 329 ft, the conversion factor was 1.09 ft ± 0.02 ft per horizontal pixel. Combining this with the equivalence of 100 ± 10 vertical pixels for each horizontal pixel gave the final conversion factor of 1.1 ft ± 0.1 ft (13 in. ± 1 in.) for each 100 pixels of vertical marker motion.

+/- 1 inch.

I understand that every time you actually make a statement, it comes back to bite you in the ass. And that is why you write things that don't say squat.

Like "+/- 1 inch."

Please elaborate. Tell me what you think that "±1 inch" means in this context…

I'm almost certain you're gonna get it wrong...
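Incidentally, NIST's arithmetic here is easy to check with standard error propagation. A minimal sketch in Python, assuming independent relative errors combine in quadrature (NIST does not state its combination method, so that is my assumption):

```python
# Check the NIST conversion-factor arithmetic with simple error propagation.
# Assumption: independent relative errors combine in quadrature (root-sum-
# square); NIST does not state how it combined them.

def rss(*rel_errors):
    """Root-sum-square combination of relative errors."""
    return sum(e * e for e in rel_errors) ** 0.5

# Horizontal scale: 329 ft true width over 301 +/- 4 horizontal pixels.
width_ft = 329.0
width_px, width_px_err = 301.0, 4.0
h_scale = width_ft / width_px                      # ~1.093 ft per horizontal pixel
h_scale_err = h_scale * (width_px_err / width_px)  # ~0.015 ft; NIST rounds up to 0.02

# Vertical scale: 100 +/- 10 vertical pixels correspond to 1 horizontal pixel,
# so 100 vertical pixels of motion correspond to ~1.09 ft of movement.
v_per_h, v_per_h_err = 100.0, 10.0
v_scale = h_scale                                  # ft per 100 vertical pixels
v_scale_err = v_scale * rss(width_px_err / width_px, v_per_h_err / v_per_h)

print(f"horizontal: {h_scale:.3f} +/- {h_scale_err:.3f} ft/pixel")
print(f"vertical:   {v_scale:.1f} +/- {v_scale_err:.2f} ft per 100 pixels")
```

The computed ±0.015 ft rounds up to NIST's stated ±0.02, and the combined vertical figure lands right on the stated 1.1 ft ± 0.1 ft per 100 pixels of marker motion.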
___

And blah, blah, blah … a bunch more crappola.

NIST said:
Of primary interest in this analysis was any information that could shed light on the collapse sequence of WTC 7.

Absolutely. Nary a mention of velocity or acceleration.
Any idea how this relates to our *discussion* ?

Sure. First, you're wrong.

NIST specifically gave an empirical equation for velocity of descent of WTC7 north wall. Along with their position data. (Which Chandler won't do.)

Then they did talk about acceleration. Or are you forgetting "Stage 1, 2 & 3"??

They talked about velocity of LOTS of things on 9/11. From speed of jets, to spread of fire, to fall of debris, to speed of seismic wave, to wind speed & direction, etc. And pretty much every one of these velocities had an error band attached to it, IIRC.

Now that you're learning a little about it, pretty good technique, no?!!

Second, velocity & acceleration of the fall of those buildings were singularly uninteresting & irrelevant to NIST's conclusions. By the time the Towers had barely begun to fall, the collapse mechanisms (& NIST's job) were essentially over.

With WTC7 it was a little more involved. So they continued thru the vertical & horizontal collapse progression. Did a pretty fair job of it, too.

But, by the time the outer wall began to collapse, the rest of the building was already history.

And again, NIST's job was over.

So the velocities & accelerations after this point in time were pretty much irrelevant.

They'd been on it for about 5 years, at that point. Pretty anxious to put it to bed, I'd imagine.

Firstly, I suggest you, er, critique the NIST error analysis.

Already did.

Did you notice how many times you quoted "± some tolerance", just in this one post?

Every single one of those numbers that follows the "±" is the result of an error analysis.

The fact that they didn't put all that a-figgurin' in the report is not surprising in the slightest. I used to include it in my reports when I was younger. But for anything going out of the engineering department, it just makes people's eyes glaze over.

So you just note the sources of error & include the total tolerance that you figure in the conclusion.

Just like NIST did…

Wasn't that … "special"…?!!
 
There are two (infinitely small) periods of time on the acceleration curve at which acceleration could be described as *at freefall*.

Yawn.

In other words,
So, you appear to agree that my "blue line" does describe an object that is falling "in free fall".

You now appear to be agreeing with what I said over a year ago: that "the acceleration profile of the north wall does [ETA] (by Chandler's data & now your data) NOT look anything like an object that is 'in freefall'."

And that would mean that, if you are honest, you'll be stating that Chandler is also wrong when he says that the building fell "in free fall".

And you appear to be disagreeing with troofers when they say that the north wall of WTC7 "had ALL of its supports removed".

Good.

We appear to be making progress.

Took ya long enough...

LoL.
 
3. the exact velocity & acceleration of the collapse, the stuff that you kids get all a-twitter over, was utterly inconsequential. This is proven by the fact that it changed not one conclusion one iota.
The very thing I have been saying to you Tom. I'm not particularly interested in your obsessive compulsive desire to discuss velocity and acceleration.

Wrong. 100% backwards. Their moire technique is singularly insensitive to smoke, because it gives you large lengths of repeating patterns. All segments of which contain all the important results, so if one area is obscured by smoke, another area can easily provide the info.
A full moire technique, sure, but that's not what their automation implemented. They looked at intensities of two pixel columns, with image data that had been downsampled into greyscale. Subtle pixel intensity changes due to smoke would make a significant difference to each pixel-column plot.

Did they mitigate some of that later by applying a third-order polynomial least-squares fit? Sure, but they also added a layer of noise in the process.
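For the record, a third-order polynomial least-squares fit of the kind mentioned is a one-liner. A minimal sketch on synthetic data (my own toy numbers, not NIST's code or data) showing what such a fit does to pixel-level noise:

```python
import numpy as np

# Generic sketch: fit a third-order polynomial to noisy position samples,
# as the NIST report describes, and compare fit error to the injected noise.
# The data here are synthetic; this is not NIST's code or data.
rng = np.random.default_rng(0)

t = np.linspace(0.0, 2.0, 121)                       # ~60 samples/sec for 2 s
true_pos = 0.5 * 32.2 * t**2                         # idealized free-fall drop, ft
noisy_pos = true_pos + rng.normal(0.0, 0.5, t.size)  # +/-0.5 ft of pixel noise

coeffs = np.polyfit(t, noisy_pos, 3)                 # least-squares cubic fit
fitted = np.polyval(coeffs, t)

resid_rms = np.sqrt(np.mean((fitted - true_pos) ** 2))
noise_rms = np.sqrt(np.mean((noisy_pos - true_pos) ** 2))
print(f"RMS error vs truth: raw {noise_rms:.3f} ft, fitted {resid_rms:.3f} ft")
```

Whether the fit smooths or distorts in practice depends entirely on the window length and on how well the underlying motion matches a cubic over that window.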

You don't understand Moire.
Incorrect.

Your technique is far more inhibited by smoke than NIST's.
Incorrect. Trace location is determined by multi-pixel pattern matching. Not specific pixel intensity, but pattern.
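Multi-pixel pattern matching of this general kind is typically a sliding normalized cross-correlation: score a small template against every offset of the trace and take the best. A generic sketch (my own illustration, not femr's actual tracker):

```python
import numpy as np

# Generic sketch of multi-pixel pattern matching: find where a small
# template best matches a 1-D intensity trace using normalized
# cross-correlation. An illustration only, not femr's actual tracker.

def best_match(trace, template):
    """Return the offset at which `template` best matches `trace`."""
    n = len(template)
    tpl = (template - template.mean()) / template.std()
    scores = []
    for i in range(len(trace) - n + 1):
        win = trace[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        scores.append(float(np.dot(win, tpl) / n))
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 200)           # synthetic intensity column
template = signal[120:140].copy()            # 20-pixel pattern taken at offset 120
noisy = signal + rng.normal(0.0, 0.2, 200)   # re-observe the column with noise

print(best_match(noisy, template))           # recovers the pattern near offset 120
```

Because the score depends on the whole multi-pixel pattern rather than any single pixel's brightness, a uniform intensity shift across the window (e.g. light smoke) largely cancels out of the normalized correlation.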

And overall, for the horizontal motion of the building, their Moire technique is more sensitive than yours.
I'd disagree, but regardless, the results are very similar...

[Image 89078455.png: the two horizontal-motion plots compared]


By the way, the little *wobbles* in my plot...are present on the video data...easy to prove.

You claim 0.2 pixels.
Incorrect. You are getting yourself all confused again Tom. The +/- 0.2 pixels metric applies only to the original Dan Rather dataset.

The WTC 7 horizontal motion graphs that I compare to those of NIST are done with the Cam#3 footage.

You have been told this numerous times, and yet are still confusing the two.

(Horizontal resolution, you might get close to this.) The horizontal scaling factor is 1.09 ± .02 ft/pixel. Ergo, you claim 13.1"/pixel x 0.2 pixels = 2.62" horizontal resolution.

Keep that number in mind...
No thanks. You've wandered off on a tfk tangent. Getting your metrics all confuzzled.

They didn't want it to be vertical. If it were perfectly vertical, then the Moire technique would not have worked.

They didn't need it to be vertical. There is no "correction" that is required.
Of course there is. The relationship between horizontal position and moire-based vertical pixel intensity match is not linear, as the building was not perfectly aligned with the camera.
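The underlying geometry is simple to state: an edge tilted a small angle θ from vertical crosses a fixed pixel column at a row that moves by dx / tan θ for a horizontal shift dx, so a near-vertical edge amplifies horizontal motion, while a perfectly vertical edge never changes which row it crosses at all. A sketch of that geometry, in my own notation (not NIST's):

```python
import math

# Sketch of the edge-tilt geometry behind a moire-style measurement:
# an edge tilted theta degrees from vertical crosses a fixed pixel column
# at a row that moves by dy = dx / tan(theta) for a horizontal shift dx.
# Notation is mine, not NIST's.

def vertical_shift(dx_px, theta_deg):
    """Rows the column crossing moves for a horizontal shift of dx_px pixels."""
    return dx_px / math.tan(math.radians(theta_deg))

dx = 0.5  # half-pixel horizontal motion
for theta in (0.5, 1.0, 2.0):
    print(f"tilt {theta:3.1f} deg -> crossing moves {vertical_shift(dx, theta):6.1f} rows")

# Amplification ~ 1/tan(theta): smaller tilt gives a larger vertical
# response, but also greater sensitivity to any variation in tilt
# along the face (which is where non-linearity creeps in).
```

This is also why the amplification is not constant if the tilt varies along the face, e.g. from perspective.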

If the camera had been moving during the time that they were trying to extract horizontal motion data, then they would have gotten no data.
The camera was moving at ALL times, Tom.

Ergo, I conclude that the errors due to motion during the data acquisition time were minimal.
LMAO. Quite slight, sure. I'll be showing you when I release the Cam#3 data (gave you a glimpse in the image earlier).

Excellent technique. Exactly the type of technique that you should learn from, and attempt to emulate.
Nope. Most of those factors I have treated specifically for the Cam#3 data. Perspective correction, vertical and horizontal skew, static point extraction...

Start with "Given these sources of error …" and finish with "… 301 ± 4 pixels."
A figure which is not broken down at all. They don't bother to attempt any kind of perspective or camera movement correction and bolt a figure from their *** onto the global width metric. Nice one.

BTW, this entire exercise was a first-rate example of disclosing the significant sources of error in one's measurement, attempting to quantify each, and then reporting the ultimate error bands associated with the technique.

And ± FOUR pixels sounds just about right to me. By a group of professionals.

±0.2 pixels sounds like the braggadocio of a clueless idjit who hasn't a clue what he is doing. And simply dismisses the whole exercise. While "suggesting" that people simply take his clueless word for things.

LMAO. What is my error estimate for building width scaling factor Tom ? ;)

The NIST positional error estimate is NOT 4 pixels Tom, it's +/- 1 inch.


Oh, wait, you don't think it's POSSIBLE to determine movement of less than 6ft if I recall correctly, so NIST can't possibly be suggesting their horizontal movement data is accurate +/- 1 inch.

:)
 
So, you appear to...
Appear to ?

You may enjoy your condescending waffle Tom, but it's more than tedious.

My answers are clear and were not in any doubt.

Your inept attempts to apply fictional inference to your posts will be ignored as appropriate.
 
And the shoe drops...

The meaning and valid use of the word "frame" is indeed very flexible.

All this firkin' nonsense, all this time…

All because this
[Edited by LashL for civility]
wants to dick around with the definition of a word.

Well,
[Edited by LashL for civility]
if the definition of "Frame" is so friggin' "flexible", then all of your crappola about 25 frames per second & 30 frames per second was just plain wrong.

Most importantly, all of this could have gone away with a simple statement at the beginning, to the effect that:

"The word 'frame' is confusing because the same word is used to refer to two different things: a full-frame (i.e., two interlaced fields: 525 NTSC scan lines, 625 PAL scan lines), and also a half-frame (essentially a field: 262.5 NTSC scan lines or 312.5 PAL scan lines) when transmitting to interlaced systems."

Exactly the approach that I suggested months ago.
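For reference, both usages pin down cleanly once the standard published NTSC/PAL broadcast numbers are written out:

```python
# Standard interlaced-video numbers behind the "frame" vs "field" dispute.
# NTSC: 525 lines per full frame at 30000/1001 (~29.97) frames/s.
# PAL:  625 lines per full frame at 25 frames/s.
# Each full frame is two interlaced fields, so the field rate is double.

SYSTEMS = {
    "NTSC": {"lines": 525, "frame_rate": 30000 / 1001},
    "PAL":  {"lines": 625, "frame_rate": 25.0},
}

for name, s in SYSTEMS.items():
    field_rate = 2 * s["frame_rate"]    # two interlaced fields per frame
    lines_per_field = s["lines"] / 2    # e.g. 262.5 for NTSC, 312.5 for PAL
    print(f"{name}: {s['lines']} lines/frame, "
          f"{s['frame_rate']:.2f} frames/s, {field_rate:.2f} fields/s, "
          f"{lines_per_field} lines/field")
```

Sampling motion at the field rate rather than the frame rate is exactly what doubles the effective temporal resolution of interlaced footage.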

At no time does any of my usage of the word ever relate to definition of specific scanline quantities. Drowning in your own waffle.

And yet, while you are SEEMING to disagree with something that I said here, you are not. You agree that those statements that I made identifying the number of scan lines in your two definitions of frame are exactly correct.

Again, this exemplifies PERFECTLY your deceitful approach to "communication".

This is the heart of the reason that communication with you is doomed to failure.

You have zero interest in speaking plainly. In defining your terms. In agreeing upon & sharing your definitions.

You are utterly untrustworthy. You're a deceiver.

It's exactly why you fit in so well with the twoofers.

Christ, you can't even be honest about being a Twoofer.

And, now that your deceptions have been shown plainly, I'm gonna enjoy the rest of my weekend.

... except that I noticed one more post ...

Removed breaches of Rule 0.
Replying to this modbox in thread will be off topic  Posted By: LashL
 