
Merged Discussion of femr's video data analysis

The original source video can be found here...

Download

Bear in mind it is in interlaced form, and if you choose to use it, the first thing to do is unfold (separate) each interlaced field.

Any processing of the video should keep it uncompressed (RAW) or use a lossless codec such as HuffYUV.
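
For illustration, a minimal Python/numpy sketch of what "unfolding the fields" means - splitting each interlaced frame into its even-row and odd-row fields, so a 29.97 fps stream yields 59.94 field samples per second. The frame array here is a dummy stand-in, not the actual video.

```python
import numpy as np

def unfold_fields(frame):
    """Split one interlaced frame (H x W, grayscale) into its two fields.

    Even rows form one field, odd rows the other; each field has half the
    vertical resolution but its own moment in time, so a 29.97 fps stream
    yields 59.94 field samples per second.
    """
    top_field = frame[0::2, :]     # rows 0, 2, 4, ...
    bottom_field = frame[1::2, :]  # rows 1, 3, 5, ...
    return top_field, bottom_field

# Usage with a dummy 480x640 frame:
frame = np.zeros((480, 640), dtype=np.uint8)
top, bottom = unfold_fields(frame)
print(top.shape, bottom.shape)  # (240, 640) (240, 640)
```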
 
A very *loose* curve fit.

The initial polynomial is simply done with excel, then a quick 2nd order plot in Maple...

http://femr2.ucoz.com/_ph/7/761812080.png


Probably useless, but a *start point*. I *very* rarely use Maple, so will dig in and see whether I can use the raw data rather than the poxy excel poly.
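
For anyone wanting to reproduce the idea without Excel/Maple, here is a rough Python equivalent of the workflow described above - a 5th-order least-squares fit of position vs. time, differentiated twice to get an acceleration curve. The data below is a toy placeholder, not the actual trace.

```python
import numpy as np

# Placeholder position/time data; the real trace covers roughly t = 10..17 s
# with the vertical drop expressed in feet (not the actual measurements).
t = np.linspace(10.0, 17.0, 420)
y = -0.5 * 32.2 * np.clip(t - 11.0, 0.0, None) ** 2  # toy free-fall-ish drop

# 5th-order least-squares polynomial fit, as Excel's trendline would produce.
coeffs = np.polyfit(t, y, 5)

# Differentiate the polynomial twice (the equivalent of the diff commands in
# Maple) to get an acceleration curve in ft/s^2. The result is a cubic,
# since the fit was 5th order.
accel_coeffs = np.polyder(coeffs, 2)
accel = np.polyval(accel_coeffs, t)

print("peak |acceleration| (ft/s^2):", np.abs(accel).max())
```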
 


What value is that equation supposed to represent?

Vertical position vs. time?

Over what domain?

tom
 
Initial equation is polynomial curve fit of position/time data within excel.

Graph is the second derivative of that curve - acceleration.

x - time (s)
y - ft/s^2

Time is 10 to 17 seconds in the supplied data.
 

That's what I thought.

That doesn't demonstrate a particularly astute understanding of curve fitting & function range (+30' to -30' to +30' ?) or domain ...

Overlay your polynomial onto your raw data & you'll see that they look nothing alike. So your empirical equation is pretty useless for giving you velocity & then acceleration values, which, of course, would be one of the principal reasons for generating that curve in the first place.

BTW, it's a 5th order poly fit. Not a 2nd.


tom
 
For reference, here are the trace locations for the data provided...

http://femr2.ucoz.com/_ph/7/2/759212128.jpg

The image is in deinterlaced form, showing both fields.

The size of the box indicates the feature search area.

The large search area used for the static feature trace allows for slightly increased accuracy.


Thanks.

What is the point on the building within large green rectangle that is your static point? (The rectangle's too big to tell.)

Could you do the points that are inside the red boxes as well.




Also, what the heck is the solid black line that rises up the right side & roof line of WTC7?

Is that in the original CBS video? Why does it look completely fake, like it's been added after the fact?


tom
 
That's what I thought.

That doesn't demonstrate a particularly astute understanding of curve fitting & function range (+30' to -30' to +30' ?) or domain ...

Overlay your polynomial onto your raw data & you'll see that they look nothing alike. So your empirical equation is pretty useless for giving you velocity & then acceleration values, which, of course, would be one of the principal reasons for generating that curve in the first place.



tom
I said it's a very loose curve (though it's not *that* far off - have you plotted the equation itself?), and the graph is probably useless (as it's very low order, due to maxing out the poxy Excel poly fit - am looking at an order-56 plot at the moment, but there's no rush). lol. Bit of a lack of curve fitting tools in this building, and we've done it all before anyway. Call it a prompt to produce...whatever...

As far as my understanding goes, the intention of this thread was to prove claims of sub-pixel position accuracy wrong. The post containing the +/- 1 pixel graph a couple of posts ago could do with a response.

btw - Yes, the initial equation is a 5th order poly, however, the graph is the second derivative of it (the diff commands in Maple). If you thought that was a graph of the initial 5th order equation, it's no wonder you're saying it's not a good fit to the position/time data. Note it comes out around the 32ft/s^2 mark at peak ;)

I included the Maple commands to make sure that was clear. I might have to include some additional verbiage to make sure my posts are simply clear, but please take a little extra time looking at them, rather than assume I'm doing something *stoooopid* eh.
 
Could you do the points that are inside the red boxes as well.
Can do.

Ta. Can see it no problem.

Also, what the heck is the solid black line that rises up the right side & roof line of WTC7?
It's simply a high contrast region. It's not an overlay. Have a look at the linked mpeg video zoomed. As I said, the Dan Rather footage is great from a perspective, er, perspective, but the quality is not great. Video artefacts galore.

Is that in the original CBS video?
Yep.

Why does it look completely fake, like it's been added after the fact?
Video CCDs do lots of odd things like that. Am sure the CBS overlay titling doesn't help things either.
 
Could you do the points that are inside the red boxes as well.

staticpointstracereques.png

A draft vertical trace...
925179743.png


There is not enough contrast on the background building, and the trace drifts quite badly. So I've omitted it.
 
Re: My misreading of your chart explanation: My apologies. It demonstrated my failure to read to the end of your post.

Acceleration makes more sense, but the results are still way off. And this, ultimately, is going to be the cornerstone of my point in this thread: Obtaining a reasonable, plausible result in velocity & acceleration is one of the few objective checks that one can employ to get a sense of accuracy & error in the position vs. time graph.

The extended +30g values at the beginning & end of your acceleration graph don't make any sense whatsoever.

And now, I'm going to jump to the end and may lose you. It's better to go step by step, but I'll post a peek at where we're going.

There are folks who are "leveraging" the calculation of acceleration from position data to sell "controlled demolition".

The motivations are irrelevant to this conversation. What is relevant is not "what results can I get from the data and the analysis?"

What is relevant is "What results are really justifiable & defensible from the data & the analysis?"

In order to do this right, it is imperative to have a sense of the "sensitivity" of the analysis. That is, how sensitive the answer is to different variables.

When one is taking only a few data points, the issue of "smoothing the data" (i.e., frequency filtering) is irrelevant. As soon as one starts gathering data at higher speeds, there is a metric that is going to be of the form ∂y * ƒ (where ∂y = position error & ƒ = frequency) which sets a limit on the known accuracy of your results.
___

[Aside]
It's fascinating that this is virtually identical to a statement of the Heisenberg Uncertainty Principle, which of course applies only to atomic scale objects. But the math may well apply in both areas...

WD, I'm certain that there is some fundamental theorem in Data Acquisition that addresses this issue directly. I suspect that your reference in Numerical Methods probably addresses it. Can you put a name to it?
[/Aside]
___

The end result of all of this is, right now, "art". As soon as someone can point us to the theory, it'll become engineering, where we can quantify the relationship between DAQ ("Data Acquisition") rate and measurement error.

The engineering says that there is error in all measurements and therefore in all calculations. The art says that there are acceptable & unacceptable levels of error.

The acceleration graph that you produced has a clearly unacceptable level of error. It shows the building having an average of about +10G over the first 1.2 seconds or so. Which, from a zero initial velocity, means that the building would have jumped upwards about 180 feet.

I trust that you agree that this is an unacceptable level of error...

I've found out, by playing with these equations, that it is pointless to bother trying to fit any polynomial to the position data if you are going to do the differentiating for velocity & acceleration. If you set the degree of the polynomial high enough to capture the various characteristics of the position over the whole time interval of the fall, you will inevitably be introducing pure artifacts that will blow up when you calculate the velocity and (especially) acceleration.

Instead you have to use "interpolating functions". Both Mathematica (the program that I use) & Maple have them. These don't try to fit the same polynomial over the whole range, but string a bunch of them together, making sure to match values and derivatives at all "connections" between the curve segments.

You can choose polynomial interpolation (for which you can specify the polynomial order) or spline interpolation.

The spline interpolation is designed to give you continuous 2nd derivatives (no kinks). While splines seem like a natural choice, Mathematica will not integrate spline-based interpolation functions. Integrating is a good reality check on your results: integrate twice back up to get the total drop, and make sure that it agrees with the original data.

Getting the right drop is a necessary - not sufficient - check on your acceleration results. You can have higher accelerations for shorter periods of time or lower accelerations for longer periods of time, and achieve the same displacement.
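
A minimal Python sketch of that interpolate-differentiate-integrate-back check, using scipy's CubicSpline rather than the Mathematica/Maple interpolating functions, and toy constant-acceleration data in place of the real trace (initial velocity and position assumed zero):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder smoothed/decimated drop data (feet vs. seconds), not the real trace.
t = np.linspace(0.0, 5.0, 44)
drop = 0.5 * 29.0 * t**2           # toy data: constant ~29 ft/s^2 descent

spline = CubicSpline(t, drop)       # piecewise cubic, continuous 2nd derivative

velocity = spline.derivative(1)     # ft/s
accel = spline.derivative(2)        # ft/s^2

# Reality check: integrate the acceleration back up twice and compare the
# total drop with the original data (necessary, though not sufficient).
# Constants of integration (initial velocity and position) are taken as zero.
recovered_velocity = accel.antiderivative(1)
recovered_drop = recovered_velocity.antiderivative(1)
total_from_accel = recovered_drop(t[-1]) - recovered_drop(t[0])
print("original total drop :", drop[-1] - drop[0])
print("recovered total drop:", total_from_accel)
```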


Here are the curves that I got with your old Camera 2 (NIST ID, the Dan Rather) video. You said that you couldn't see them before.

This curve is your data (smoothed & decimated) vs. NIST's data after I've adjusted the time offset and applied a 0.93 scale factor to your descent. It matches quite well. There are small differences at the start of the collapse, which is not surprising because you & NIST used different points on the roof line.

I got this scale factor by comparing NIST's drop to your drop over a central (i.e., "constant acceleration") section of the curve (from about 25' of drop to about 170' of drop). By staying in the middle of the range, I was able to avoid the "where is t0?" question & various dynamics at the end of the drop too.

This is puzzling, and deserves an examination. There is no reason that I can see that your scale factor should be off. Could you re-check it against known points on the building, please.
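
For reference, one hypothetical way such a scale factor could be estimated over the central "constant acceleration" window (toy arrays standing in for the NIST and femr2 traces, and assuming the time offset has already been removed):

```python
import numpy as np

def central_scale_factor(t_ref, drop_ref, t_test, drop_test,
                         lo_ft=25.0, hi_ft=170.0):
    """Estimate the factor that scales drop_test onto drop_ref.

    Both curves are restricted to the central portion of the fall
    (between lo_ft and hi_ft of drop on the reference curve), which
    sidesteps the 'where is t0?' question and the end-of-drop dynamics.
    """
    # Resample the test curve onto the reference time base.
    drop_test_on_ref = np.interp(t_ref, t_test, drop_test)

    mask = (drop_ref >= lo_ft) & (drop_ref <= hi_ft)
    # Least-squares ratio between the two curves over the central window.
    return np.dot(drop_ref[mask], drop_test_on_ref[mask]) / \
           np.dot(drop_test_on_ref[mask], drop_test_on_ref[mask])

# Toy usage: the test curve is just the reference stretched by 1/0.93.
t = np.linspace(0.0, 6.0, 200)
ref = 0.5 * 29.0 * t**2
test = ref / 0.93
print(central_scale_factor(t, ref, t, test))   # ~0.93
```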




[Graph: FEMR's drop vs. time - smoothed & decimated]

This curve shows just your data, 21 sample smoothed & decimated (every 7th point), along with the interpolation curve that fits the data. You can see that the fit is excellent. Much better than any single polynomial will give you.
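
A rough numpy sketch of that "21 sample smoothed & decimated (every 7th point)" step, assuming a simple centred moving average (the actual smoothing method isn't specified in the post) and placeholder data:

```python
import numpy as np

def smooth_and_decimate(samples, window=21, stride=7):
    """Centred moving average over `window` samples, then keep every
    `stride`-th point (mode='valid' trims the edges rather than padding)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(samples, kernel, mode="valid")
    return smoothed[::stride]

# Placeholder 59.94 samples/s position trace (not the real data).
raw = np.cumsum(np.random.default_rng(0).normal(-0.1, 0.02, size=600))
reduced = smooth_and_decimate(raw)
print(len(raw), "->", len(reduced))   # 600 -> 83
```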



[Graph: Velocity (smoothed) vs. time - from smoothed drop data]



[Graph: Acceleration (smoothed) vs. time - from smoothed velocity data]



At the end of the day, we're gonna find out that the acceleration results are very sensitive to the filtering & decimation values used. This makes one nervous that your results might not be objective, but merely a reflection of your prejudices regarding "acceptable values".

Which is precisely why I started this thread. To get some feedback from some of the EEs with background in DAQ and/or signal processing. To see if there is a more objective way to determine proper filtering of the data.

Nonetheless, I believe this acceleration data to be "reasonable", which gives a "sanity check" on the filtering.

Final point: You'll see that the curve I've posted only covers 5 seconds, whereas the one you posted covered 7.

BTW, small point: you'll find that (being the 2nd derivative of a 5th order eqn) the curve you plotted is a 3rd order poly. (Not particularly important.)

That's enough for now.

More later.


tom
 
femr,

Could you post the numerical data on the static points for those other 3 buildings, please.

It's immediately clear that there is a very high correlation between the jitter of all three of those static points. This goes a long way to suggest that the jitter is really in the video, and not in the software algorithms.
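
A small numpy sketch of that correlation check, using synthetic stand-in traces (a shared "camera wobble" component plus independent tracker noise) rather than the real exported data:

```python
import numpy as np

# Stand-in static-point traces: in practice these would be the exported
# vertical-position columns for the three static features (pixels).
rng = np.random.default_rng(1)
camera_jitter = rng.normal(0.0, 0.3, size=500)        # shared camera wobble
static_a = 120.0 + camera_jitter + rng.normal(0.0, 0.05, size=500)
static_b = 340.0 + camera_jitter + rng.normal(0.0, 0.05, size=500)
static_c = 510.0 + camera_jitter + rng.normal(0.0, 0.05, size=500)

# Remove each trace's mean so only the jitter remains, then correlate.
jitter = np.vstack([s - s.mean() for s in (static_a, static_b, static_c)])
print(np.corrcoef(jitter))
# Off-diagonal values near 1 mean the points wobble together, i.e. the
# jitter is in the video (camera shake, encoding) rather than in the
# tracking algorithm.
```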

Btw, was this data taken from interlaced video? Did you do any "jitter correction" on it?


tom

Do you have a URL reference back to an original of the CBS video? (BTW, are you "xenomorph"? I'm just asking whether the origin of all those videos is you or someone else.)
 
First thread in a while with a plethora of educational material in it. Bump to keep it at the top...not that it needs it at this point.

TAM:)
 
Acceleration makes more sense, but the results are still way off.
As the initial equation is a 5th order poly, the accel derivative is only order 3, which is why it's the shape it is.

And this, ultimately, is going to be the cornerstone of my point in this thread:
OK.

Obtaining a reasonable, plausible result in velocity & acceleration is one of the few objective checks that one can employ to get a sense of accuracy & error in the position vs. time graph.
Not sure I agree with that. I think I've shown that the positional accuracy is pretty hot, but certainly finding techniques to reduce the noise level for subsequent derivations will be productive.

The extended +30g values at the beginning & end of your acceleration graph don't make any sense whatsoever.
They wouldn't. The vertical axis is in ft/s^2, not G. As I indicated, the *peak* is about 32ft/s^2, which is why I included the graph. Using higher order initial polys still results in over-g accel, but I'm looking at various ways of eliminating noise before I post further accel graphs.

What is relevant is "What results are really justifiable & defensible from the data & the analysis?"
My viewpoint is even simpler...what does the data actually show. Thus far the tracing process for WTC7 has shown that movement spans a wide timeframe, building corner release points can be quantified, that sort of thing...

The acceleration graph that you produced has a clearly unacceptable level of error. It shows the building having an average of about +10G over the first 1.2 seconds or so. Which, from a zero initial velocity, means that the building would have jumped upwards about 180 feet.
See prior comments. ft/s^2, not G.

I've found out, by playing with these equations, that it is pointless to bother trying to fit any polynomial to the position data if you are going to do the differentiating for velocity & position. If you set the degree of the polynomial high enough to capture the various characteristics of the position over the whole time interval of the fall, you will inevitably be introducing pure artifacts that will blow up when you calculate the velocity and (especially) acceleration.
It's a problem, yes. Am looking at the effect of sequential passes of downsampling and interpolation at the moment to smooth it out.

While splines seem like a natural choice, Mathematica will not integrate spline based interpolation functions.
The route forward there would be to generate a new data-set from the interpolation function. Match it to the original sample rate, then use the more basic symmetric differencing on that data perhaps?
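
A quick sketch of that route in Python - resample the interpolation function at the original 59.94 samples/s, then take the symmetric (central) second difference - with toy data and scipy's CubicSpline standing in for whatever interpolation is actually used:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder trace; the real data runs at 59.94 samples per second.
fps = 59.94
t = np.arange(0.0, 5.0, 1.0 / fps)
drop = 0.5 * 29.0 * t**2                     # toy constant-acceleration drop

spline = CubicSpline(t, drop)

# Resample the interpolation function back onto the original sample rate...
t_uniform = np.arange(t[0], t[-1], 1.0 / fps)
y_uniform = spline(t_uniform)

# ...then take the symmetric (central) second difference for acceleration:
# a[i] = (y[i+1] - 2*y[i] + y[i-1]) / dt^2
dt = 1.0 / fps
accel = (y_uniform[2:] - 2.0 * y_uniform[1:-1] + y_uniform[:-2]) / dt**2
print(accel[:3])   # ~29 ft/s^2 for the toy data
```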

This is a good reality check on your results. Integrate twice back up to get total drop, and make sure that it agrees with the original data.
:)

Here are the curves that I got with your old Camera 2 (NIST ID, the Dan Rather) video. You said that you couldn't see them before.
I see piccys.

There is no reason that I can see that your scale factor should be off. Could you re-check it against known points on the building, please.
Sure, though there is very scant information available on building size metrics.

At the end of the day, we're gonna find out that the acceleration results are very sensitive to the filtering & decimation values used.
Absolutely.

This makes one nervous that your results might not be objective, but merely a reflection of your prejudices regarding "acceptable values".
Oi. Enough of that.

Which is precisely why I started this thread. To get some feedback from some of the EEs with background in DAQ and/or signal processing. To see if there is a more objective way to determine proper filtering of the data.
OK.

BTW: small point. You'll find that (being 2nd derivative of 5th order eqn) that the curve your plotted is a 3rd order poly. (Not particularly important.)
Aiii.
 
femr,

Could you post the numerical data on the static points for those other 3 buildings, please.
Sure. Will upload after I've been t'pub. Thirsty :)

It's immediately clear that there is a very high correlation between the jitter of all three of those static points. This goes a long way to suggest that the jitter is really in the video, and not in the software algorithms.
For sure. The pure noiseless video testing (the little rotating blob) allows quantifying the actual tracing algorithm variances. There are many sources of noise in the video itself.
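
For illustration, a toy version of that kind of noiseless synthetic test: render a small Gaussian blob at known sub-pixel positions and recover them with an intensity-weighted centroid (just a stand-in for the actual tracker), then look at the worst-case error:

```python
import numpy as np

def render_blob(cx, cy, size=32, sigma=1.5):
    """Render a noiseless Gaussian blob centred at sub-pixel (cx, cy)."""
    y, x = np.mgrid[0:size, 0:size]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def centroid(img):
    """Intensity-weighted centroid - a simple stand-in sub-pixel tracker."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# Sweep the true position in 0.1-pixel steps and record the tracking error.
errors = []
for true_x in np.arange(14.0, 16.0, 0.1):
    img = render_blob(true_x, 15.5)
    est_x, _ = centroid(img)
    errors.append(abs(est_x - true_x))
print("worst-case error (pixels):", max(errors))
```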

Btw, was this data taken from interlaced video?
No. Unfolded video in the form of the image posted earlier, so there are two traces for each point which are then combined to form the 59.94 sample per second data-set.

Did you do any "jitter correction" on it?
Yes. (n1+n2)/2.

Do you have a url reference back to an original of the CBS video?
No, that's the best I have access to.

are you "xenomorph"?
No.
 
First thread in a while with a plethora of educational material in it. Bump to keep it a top...not that it needs it at this point.

TAM:)


In coarse, Hollywood, over-the-top Hispanic accent:
"Do ju know wha' a "plethora" ees?"

Name that movie ...
(& show your age ...)

tom
 
I dunno, but I would guess Cheech and Chong....something or other....lol

As for my age, I'll be 40 in 6 weeks.

TAM:)
 
What is relevant is "What results are really justifiable & defensible from the data & the analysis?"

In order to do this right, it is imperative to have a sense of the "sensitivity" of the analysis. That is, how sensitive the answer is to different variables.

When one is taking only a few data points, the issue of "smoothing the data" (i.e., frequency filtering) is irrelevant. As soon as one starts gathering data at higher speeds, there is a metric that is going to be of the form ∂y * ƒ (where ∂y = position error & ƒ = frequency) which sets a limit on the known accuracy of your results.
___

[Aside]
It's fascinating that this is virtually identical to a statement of the Heisenberg Uncertainty Principle, which of course applies only to atomic scale objects. But the math may well apply in both areas...

WD, I'm certain that there is some fundamental theorem in Data Acquisition that addresses this issue directly. I suspect that your reference in Numerical Methods probably addresses it. Can you put a name to it?
[/Aside]
___
The name I would give to it is "forward error analysis of quantization error". Quantization error behaves somewhat like roundoff error, but tends to be much larger in simple calculations such as ours.

Unfortunately, most numerical analysis textbooks concentrate on algorithmic error (aka formula error) and algorithmic stability, discuss roundoff error only in passing, and don't mention quantization error at all. The reason for this, I suspect, is that numerical analysts have historically been less concerned with solving problems given by noisy data than with solving problems expressed by mathematical formulas. The textbooks and research literature of numerical analysis have not fully caught up with the data revolution.

Numerical differentiation via differencing is ill-conditioned. That much is a well-known fact:
Wikipedia said:
An important consideration in practice when the function is approximated using floating point arithmetic is how small a value of h to choose. If chosen too small, the subtraction will yield a large rounding error and in fact all the finite difference formulae are ill-conditioned...
What isn't so widely appreciated is that the ill-conditioning of numerical differentiation amplifies quantization error much as it amplifies roundoff error.

Bottom line: For our purposes, calculating accelerations from positions by taking second differences, the error in the accelerations is linear in the position error but quadratic in the sampling rate.

That means femr2's subpixel resolution is helpful, but the higher sampling rates are substantially less helpful.
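
A small numerical illustration of that claim (not from the thread): quantize an ideal free-fall drop to a fixed position step, take second differences, and watch the worst-case acceleration error grow with the square of the sampling rate.

```python
import numpy as np

g = 32.2          # ft/s^2
step = 0.05       # position quantization step, ft (a stand-in for pixel size)

for fps in (10.0, 30.0, 60.0):
    dt = 1.0 / fps
    t = np.arange(0.0, 3.0, dt)
    y = 0.5 * g * t**2                       # ideal free-fall drop
    y_q = np.round(y / step) * step          # quantized positions

    # Second difference -> acceleration estimate at each interior sample.
    a = (y_q[2:] - 2.0 * y_q[1:-1] + y_q[:-2]) / dt**2
    err = np.abs(a - g).max()
    print(f"{fps:5.1f} samples/s: worst acceleration error ~ {err:8.1f} ft/s^2")
# The worst-case error scales roughly like step * fps^2: linear in the
# position error, quadratic in the sampling rate.
```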

The end result of all of this is, right now, "art". As soon as someone can point us to the theory, it'll become engineering, where we can quantify the relationship between DAQ ("Data Acquisition") rate and measurement error.

The engineering says that there is error in all measurements and therefore in all calculations. The art says that there are acceptable & unacceptable levels of error.
I think I've explained the theory, and pointed you guys toward some references. Unfortunately, you'll probably have to study up on conditioning and roundoff error, and then figure out how to apply those principles to quantization error.
 
