
Merged Discussion of femr's video data analysis

NIST's 3 parameters vs femr2's 11 parameters: preliminary report, part 2

Three days ago, I used femr2's data for the NW corner of WTC 7 to compare the accuracy of my reverse-engineered version of his Poly(10) model with NIST's nonlinear, nonpolynomial model. Now that I have the correct coefficients for Poly(10), it's time to repeat that calculation. While I'm at it, I'll report on three more versions of NIST's model.

As explained in the first part of this report, I'm using an objective measure for goodness of fit between model and empirical data: the residual sum of squares.
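
For readers who want to reproduce the numbers, here is a minimal sketch of that measure in Python (the function and variable names are mine, not femr2's or NIST's):

[code]
import numpy as np

def rss(predicted, measured):
    """Residual sum of squares: the sum of squared differences
    between model predictions and measured values.
    Lower scores indicate a better fit."""
    residuals = np.asarray(measured) - np.asarray(predicted)
    return float(np.sum(residuals ** 2))
[/code]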

[size=+1]Translation between NIST's and femr2's time scales[/size]

NIST and femr2 use different time scales. NIST chose its time scale so t=0 would correspond to a certain event that NIST interpreted as close enough (for government work) to the beginning of collapse initiation. femr2 chose his time scale so t=0 would arrive 11.8785 seconds before the moment that femr2 interprets as collapse initiation.

To compare NIST's models with femr2's data, we need to know how to translate between NIST's time scale and femr2's. That translation is expressed by a signed number: the offset that must be added to NIST's time scale to obtain the corresponding moment on femr2's time scale.

That offset is needed only to compare NIST's models with femr2's data. For femr2's models we know there is no offset, simply because femr2's model and data are known to use exactly the same time scale.

In what follows, I will refer to the time offset as t_o (with the subscript "o" abbreviating "offset"). Although the value of t_o is likely to be close to the value of femr2's T0 = 11.8785 s, those two numbers should not be confused with each other: t_o describes the translation between NIST's time scale and femr2's, whereas T0 marks the left endpoint of the training set for femr2's models.
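
In code form the translation is a single addition; a trivial sketch (the function names are mine):

[code]
def nist_to_femr2(t_nist, t_o):
    """Add the signed offset t_o to a moment on NIST's time scale
    to obtain the corresponding moment on femr2's time scale."""
    return t_nist + t_o

def femr2_to_nist(t_femr2, t_o):
    """The inverse translation."""
    return t_femr2 - t_o
[/code]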

[size=+1]The models.[/size]

Poly(10)

femr2 has chosen to model the vertical displacement (position) of WTC 7's north wall by a polynomial. His Poly(10) model uses a polynomial of degree 10. That model has 11 parameters (not the 13 parameters I had assumed before femr2 explained that his polynomial of degree 10 describes the vertical displacement, not the acceleration).

All 11 of the Poly(10) parameters were tuned to describe femr2's measurements for the NW corner of WTC 7, as extracted from the Dan Rather video, during an interval of time that runs from approximately 12 to 17 seconds on femr2's chosen time scale. I will refer to that data as the training set for the model.
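
A minimal sketch of how such a fit could be reproduced with numpy; the arrays below are placeholders standing in for femr2's actual NW-corner measurements, which I am not reproducing here:

[code]
import numpy as np

# Placeholder arrays standing in for femr2's training set:
# time (s, femr2's scale) and vertical displacement (ft).
t_train = np.linspace(11.8785, 17.1171, 158)
y_train = np.zeros_like(t_train)  # dummy values, not real data

# A polynomial of degree 10 has 11 coefficients: the 11 parameters.
# numpy may warn about poor conditioning at this degree, which is
# itself a hint about the sensitivity discussed later in this post.
coeffs = np.polyfit(t_train, y_train, deg=10)
poly10 = np.poly1d(coeffs)
[/code]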

NIST original

As I have already explained, NIST's nonlinear model has three parameters. As stated in NCSTAR 1-9 section 12.5.3, NIST's values for those parameters are

A = 379.62
λ = 1/0.18562
k = 3.5126

To obtain those values, NIST used its own measurements of features on the north wall.
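
Reading those three values together with the closed-form equation quoted later in this thread, NIST's model can be written as a one-line function; a sketch (t is on NIST's own time scale):

[code]
import numpy as np

def nist_displacement(t, A=379.62, lam=1/0.18562, k=3.5126):
    """NIST's 3-parameter model (NCSTAR 1-9 section 12.5.3):
    z(t) = A * (1 - exp(-(t/lam)**k)), i.e. with these defaults
    z(t) = 379.62 * (1 - exp(-(0.18562*t)**3.5126)).
    Valid for t >= 0 on NIST's time scale."""
    return A * (1.0 - np.exp(-(t / lam) ** k))
[/code]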

In addition to those three parameters, we need to know the value of t_o. Estimating by eye, I had guessed that t_o was about 10.9 seconds, but a more careful calculation suggests that t_o = 10.85 seconds produces better results when used with the parameter values shown above.

NIST alternative 1

If we use femr2's data as the training set, we get different values for those parameters. In the first part of this ongoing report, I calculated such values mostly by hand. Now that I have written a little computer program to help with that task, I'm getting different numbers. Using the same training set that femr2 used for his Poly(10) model, I get

A = 422.14
λ = 1/0.17874
k = 3.645

NIST alternative 2

The parameters of NIST alternative 1 assume t_o = 10.85 s. Although t_o is not a parameter of NIST's model (because its definition involves femr2's time scale), we might get better results by tuning our estimate of t_o along with the three parameters A, λ, and k. Doing so, I get the following (a sketch of both fits appears after the list):

A = 434.41
λ = 1/0.17978
k = 3.50956
t_o = 10.96 s
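
As promised above, here is a sketch of both refits using scipy's curve_fit; the training arrays are placeholders for femr2's data, and NIST's published values serve as the starting guesses:

[code]
import numpy as np
from scipy.optimize import curve_fit

def model(t_femr2, A, lam, k, t_o):
    """NIST's functional form, with femr2-scale time shifted by t_o."""
    return A * (1.0 - np.exp(-((t_femr2 - t_o) / lam) ** k))

# Placeholder training set (femr2's time scale, displacement in ft).
t_train = np.linspace(11.8785, 17.1171, 158)
y_train = model(t_train, 379.62, 1/0.18562, 3.5126, 10.85)  # dummy

# Alternative 1: hold t_o fixed at 10.85 s and fit A, lam, k.
fit1 = lambda t, A, lam, k: model(t, A, lam, k, 10.85)
(A1, lam1, k1), _ = curve_fit(fit1, t_train, y_train,
                              p0=[379.62, 1/0.18562, 3.5126])

# Alternative 2: let t_o float as a fourth tuned quantity.
(A2, lam2, k2, t_o2), _ = curve_fit(model, t_train, y_train,
                                    p0=[379.62, 1/0.18562, 3.5126, 10.85])
[/code]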

NIST alternative 3

The choice of training set can have a profound effect. Although models that have many parameters tend to be more sensitive to the training set, its influence is easy to see even with NIST's relatively spare 3-parameter model. To demonstrate that influence, I used the interval from 11 to 13 seconds to estimate the following values:

A = 1626.4
λ = 1/0.18412
k = 3.73668
t_o = 11.69 s

Note that the above value for t_o is very close to the value femr2 selected for his T0.

[size=+1]And the winner is...[/size]

It turns out that the most accurate model depends upon the interval of time used for the comparison.

NIST's models are considerably more accurate than femr2's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale), but are a great deal less accurate near the end of femr2's data (at 17.2 seconds on femr2's time scale).

Here are some examples; a sketch of the windowed computation follows the table. (Because the residual sum of squares is a measure of error, lower scores are more accurate.)

Model              | 11 to 17 s | 11 to 13 s | 15 to 17 s
Poly(10)           |       2008 |       1978 |         12
NIST original      |      10858 |       1315 |       7372
NIST alternative 1 |       1946 |        743 |        452
NIST alternative 2 |       1760 |        703 |        346
NIST alternative 3 |   18269958 |         52 |   18034838
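
Each cell above is simply the residual sum of squares restricted to a window of time; a sketch (model_fn and the data arrays are placeholders):

[code]
import numpy as np

def rss_on_window(model_fn, t, y, t_lo, t_hi):
    """Residual sum of squares over t_lo <= t <= t_hi only."""
    mask = (t >= t_lo) & (t <= t_hi)
    residuals = y[mask] - model_fn(t[mask])
    return float(np.sum(residuals ** 2))

# e.g. one row of the table, given arrays t_data, y_data and a model:
# [rss_on_window(poly10, t_data, y_data, lo, hi)
#  for lo, hi in [(11, 17), (11, 13), (15, 17)]]
[/code]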

If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose. If you're more interested in learning what happened during and after the period of acceleration at approximately 1g, then femr2's Poly(10) model is more accurate for that purpose.

[size=+1]Explanation.[/size]

The choice of training set makes a big difference. femr2 tuned his Poly(10) model on the NW corner data beginning at 11.8785s and continuing to 17.1171s.

Note also that we're evaluating the accuracy of femr2's Poly(10) model on data that are almost (but not quite!) identical to its training data.

As Myriad explained so well:

It is unusual, but the reason for it is easily understood.

A polynomial model of a short time series with eleven coefficients is essentially a lossy compression of the data itself. The curve fitting procedure acts as the compression algorithm.


When we evaluated femr2's Poly(10) model on the interval running from 15 to 17 s, the evaluation data lay entirely within the data on which Poly(10) was tuned. Of course it's going to do well.

When we evaluated femr2's Poly(10) model on the interval running from 11 to 17s, more than 80% of the evaluation data overlapped with the training data. You'd expect it to do extremely well, but it didn't score quite as well as NIST alternatives 1 or 2, both of which were trained on almost exactly the same data as Poly(10).

When we evaluated femr2's Poly(10) model on the interval running from 11 to 13s, we found that it is somewhat less accurate than NIST's nonlinear model as described in NCSTAR 1-9 section 12.5.3, and is considerably less accurate than versions of that model that have been trained on the same data as Poly(10).

Why? Part (but not all!) of the explanation is that femr2's Poly(10) model has too many parameters, which makes it overly sensitive to its training set. The Poly(10) model is spectacularly accurate on the evaluation data that overlap with its training set, but is so inaccurate between 11 and 12 seconds that it loses out to NIST's 3-parameter model (when that model and Poly(10) use essentially the same training set).

NIST alternative 3 demonstrates the same principle. That model is very accurate for the interval on which it was trained, but is spectacularly inaccurate for the collapse as a whole.
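
That failure mode is easy to reproduce with synthetic data; a toy sketch of my own (not femr2's data), fitting a degree-10 polynomial to a narrow window and evaluating it outside:

[code]
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(11, 17, 200)
y = 0.5 * 32.2 * (t - 11) ** 2 + rng.normal(scale=0.5, size=t.size)

window = (t >= 15) & (t <= 17)            # narrow training window
fit = np.poly1d(np.polyfit(t[window], y[window], deg=10))

rss_inside = float(np.sum((y[window] - fit(t[window])) ** 2))
rss_outside = float(np.sum((y[~window] - fit(t[~window])) ** 2))
# rss_outside dwarfs rss_inside: the high-degree fit memorises its
# window and diverges rapidly beyond it, like NIST alternative 3.
print(rss_inside, rss_outside)
[/code]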

[size=+1]Future work.[/size]

NIST was attempting to model the movement of the entire north wall, not just the NW corner of that wall. The accuracy of its approximations when applied to other features on the north wall remains to be determined, as does the accuracy of femr2's Poly(10) model.
 
Reviewing yesterday's six-hour correspondence with femr2, I now understand why he is celebrating yet another in his series of victories over the forces of incivility and sloppy nonsense.
What a bizarre thing to do. I'm not celebrating, whatever that is supposed to mean. I do think your behaviour reveals a lot about you, sure. Given that, I have no idea what you think you understand.

Now he's really upset. Look what I made him do.
:boggled: As I said, quite an interesting insight into your personality.

I assume you have all the data you need now.

Most of your questions will be answered by the analyses I'll perform after I have your coefficients.
Splendid.
 
I'm mapping to data which follows a particularly chaotic behaviour. There's no reason to think behaviour in the second half of the data could possibly be predicted by a model determined from inspection of the first half. Good to not forget what the data actually relates to.


In other words, you understand that your model has no predictive value, not even within the time interval that it was fit to. Overfitting.

Again the purpose is to get accurate data about the velocity and acceleration behaviour over time, and for that purpose the data, derived curves and additional smoothed datasets are accurate and insightful.
(emphasis added)


Unfortunately, there is no accuracy or insight gained using a model with no predictive value. You can only lose accuracy with each step you depart from the raw data.

Predictive value indicates that the model behaves similarly to ("predicts") the physical system that generated the data. So you can study the model for insight about the system, where additional directly measured data might not be available.

For example, NIST's linear fit of velocity for the WTC 7 collapse data reflects the way mass accelerates linearly in a gravitational field, which was without a reasonable doubt the dominant physical process that was generating that data during that period. The quality of the fit then tells you how closely that model comes to completely explaining the data, and what effects are left over to account for (e.g. by measurement error, or possible additional processes not represented by the model such as interaction of the curtain wall with the core). That is insightful, even though not terribly dramatic or surprising.

You don't really think there were any dynamics describable by tenth-order or fiftieth-order equations going on in the collapse, I hope. So besides unreliable false certainty about details the data is simply not sufficient to establish, such as how long to the hundredth of a second the fall was "above g," what insights about the actual collapse of the actual building are you gaining?
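
To illustrate that kind of check concretely, a toy sketch (made-up numbers, not the WTC 7 data): fit a straight line to velocity samples and compare the slope with g.

[code]
import numpy as np

# Made-up velocity samples (ft/s) at 0.1 s spacing, standing in for
# measurements taken while gravity dominated the motion.
t = np.arange(0.0, 2.0, 0.1)
v = 32.2 * t + np.random.default_rng(1).normal(scale=1.0, size=t.size)

slope, intercept = np.polyfit(t, v, deg=1)
# If gravity is the dominant process, slope should be near
# g = 32.2 ft/s^2; the residual scatter bounds everything else.
print(slope)
[/code]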

Respectfully,
Myriad
 
Estimating by eye, I had guessed that t_o was about 10.9 seconds, but a more careful calculation suggests that t_o = 10.85 seconds produces better results when used with the parameter values shown above.
Am fine with that. NIST's T0 is about a second early, so your effective alignment point is pretty close to my T0 of 11.8785s.

[size=+1]And the winner is...[/size]

It turns out that the most accurate model depends upon the interval of time used for the comparison.
I do hope you're not going to use time before my T0 = 11.8785s...

NIST's models are considerably more accurate than femr2's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale)
Oh. What is the point ? I've given you the Poly(10) ROI.

RSS values outside the ROI for Poly(10) are pretty useless.

If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose.
Near the beginning ? You mean before T0 ? If so, you already know Poly(10) has a limited ROI.

I could generate some Poly data which IS valid before T0, but it's really not my purpose.

What do you gain by repeatedly making accuracy assertions for periods of time outside the ROI of the curve being tested ?

femr2 tuned his Poly(10) model on the NW corner data beginning at 11.8785s and continuing to 17.1171s.
Absolutely, yet you have performed RSS computations outside that region again. Why ?

When we evaluated femr2's Poly(10) model on the interval running from 15 to 17 s, the evaluation data lay entirely within the data on which Poly(10) was tuned. Of course it's going to do well.
You are losing sight of the purpose of the curve fits.

Do you think the intention has anything to do with modelling the behaviour outside the ROI ?

Do you think that the performance of the Poly(10) curve outside of the ROI has any relevance to the purpose for which it was determined ?

To shorten the discussion, the end-of-chain purpose is the velocity/time and acceleration/time graphs.

The purpose is to be able to graph what happens to the acceleration of the NW corner over time as accurately as possible.

I have managed to extract acceleration/time with undoubtedly unprecedented detail for the event.

It's not actually a predictive modelling exercise.

It's a data extraction and visualisation exercise.

The Poly(10) model is spectacularly accurate on the evaluation data that overlap with its training set
Thank you. It has no relevance outside that ROI.

[size=+1]Future work.[/size]

NIST was attempting to model the movement of the entire north wall, not just the NW corner of that wall. The accuracy of its approximations when applied to other features on the north wall remains to be determined, as does the accuracy of femr2's Poly(10) model.
I have stacks of additional trace data from various points.
 
Just to amplify this point, if one makes no reasonable restriction on the number of fit parameters or the analytical expression of the fit equation, it is trivial to obtain an absolutely perfect fit (residuals = 0) to any data set. However, this will tell you absolutely nothing about the system or about your data.

The NIST fit is superior where it matters. Look at it from the standpoint of propagation of errors, and all will become immediately clear.

This is the kind of thing hammered into students in undergraduate physics laboratory classes. It's very important.
 
In other words, you understand that your model has no predictive value, not even within the time interval that it was fit to.
Please see my previous post for reminder of purpose.

It's a data extraction and visualisation exercise.

Enabling the derivation of accurate velocity/time and acceleration/time profiles for the NW corner.

There's nothing to predict. It already happened.

Unfortunately, there is no accuracy or insight gained using a model with no predictive value.
Again, it enables the production of derived velocity and acceleration profile graphs, the purpose.

You can only lose accuracy with each step you depart from the raw data.
Indeed, and so the initial raw data has been extracted with as much care and accuracy as possible.

what insights about the actual collapse of the actual building are you gaining?
At base level, accurate velocity/time and acceleration/time profiles.

I think we have reached the point where the *argument* is migrating away from how accurate the resultant velocity/time and acceleration/time graphs are.

I say...what they show is true.
 
The end result...

[image: acceleration/time graph for the NW corner, showing the Savitzky-Golay smoothed curve (red) and the Poly(10) curve (click to enlarge)]

The red line (Savitzky-Golay Smoothed) represents by far the most detailed and accurate acceleration profile of the NW corner available. (What's the alternative, Chandler ?)
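
For reference, smoothing of this general kind is available in scipy; a sketch with placeholder data (the window length and polynomial order here are illustrative guesses, not necessarily femr2's settings):

[code]
import numpy as np
from scipy.signal import savgol_filter

# Placeholder displacement samples at roughly 30 samples/s.
t = np.linspace(11.8785, 17.1171, 158)
z = np.zeros_like(t)  # dummy values standing in for traced data
dt = t[1] - t[0]

# Smoothed displacement, velocity and acceleration in one pass each:
z_s = savgol_filter(z, window_length=31, polyorder=3)
v_s = savgol_filter(z, window_length=31, polyorder=3, deriv=1, delta=dt)
a_s = savgol_filter(z, window_length=31, polyorder=3, deriv=2, delta=dt)
[/code]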

In conjunction with other observations, it also assists in highlighting a number of inaccuracies within the relevant section of the NIST report.

I hope there is little or no doubt that the trend is a true reflection of actual behaviour. Never exact, but very good.

The same techniques can be applied to other regions with similar levels of accuracy.

Poly(10) and Poly(50) fits correlate very well.

The profile enables more detailed understanding of acceleration/time behaviour.
 
Just to amplify this point, if one makes no reasonable restriction on the number of fit parameters or the analytical expression of the fit equation, it is trivial to obtain an absolutely perfect fit (residuals = 0) to any data set. However, this will tell you absolutely nothing about the system or about your data.
Please see posts above highlighting the purpose.

The NIST fit is superior where it matters.
Have you actually read any of the content of this thread ?

NIST made quite a slip with the early motion.

Where in your opinion does it *matter* ? Note that W.D.Clinger seems to have no issue with my T0 = 11.8785s

Look at it from the standpoint of propagation of errors, and all will become immediately clear.
Undoubtedly the velocity curves are less accurate than the displacement, and the acceleration more so, of course. That is why I have endeavoured to achieve the very highest accuracy for the initial displacement fit. The end result is significantly more detailed and accurate than what has been previously available.
 
Lest it become forgotten in the evolving spherical cow...

The NIST data suffers from the early motion it captures being primarily non-vertical, because NIST performed its motion trace using the Cam#3 viewpoint, which, due to perspective effects, shows up-down motion in the image frame for motion that is primarily North-South in the real world.

The early motion present in the NIST data is not valid for use in determining vertical displacement, vertical velocity and vertical acceleration.

The result is that the NIST T0 is ~1s early, and the shape of the displacement curve is skewed, becoming more so when velocity and acceleration derivations are performed.
 
And, lest it also be forgotten...

The point of fitting a poly to the displacement data is to enable derived velocity and acceleration functions to be determined (and thus the resultant graphs plotted).

As W.D.Clinger has admirably stated...
The Poly(10) model is spectacularly accurate on the evaluation data that overlap with its training set
...and indeed has a full timespan training set ssresid of 20.36.

However, the value of that accuracy is not realised until the fitted function is differentiated to give velocity and acceleration, and it is there that the benefits of the high-degree poly are seen when comparing accuracy to the NIST model (and in the detail available to plot, which is the primary point of the exercise after all).

I hope that when W.D.Clinger approaches these derivations, agreement can be reached on the method used to derive the actual raw data, specifically the window size for the central difference approximation, and also agreement to perform accuracy metrics within the training set limits.
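
For reference, the central difference approximation femr2 mentions takes this form; a sketch (the half-window n is the quantity to be agreed on):

[code]
import numpy as np

def central_difference(y, dt, n=1):
    """Estimate dy/dt with a central difference of half-window n:
    d[i] ~= (y[i+n] - y[i-n]) / (2*n*dt).
    Larger n smooths more but blurs rapid changes; endpoints are NaN."""
    y = np.asarray(y, dtype=float)
    d = np.full_like(y, np.nan)
    d[n:-n] = (y[2 * n:] - y[:-2 * n]) / (2.0 * n * dt)
    return d
[/code]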
 
I say...what they show is true.


And I say, what the poly(10) model shows is that at t = 11.8785, the displacement was following separate superimposed position trends displacing it downward 603270, 271880, 15038, 30404, and 196 feet (the t, t^3, t^4, t^8, and t^10 terms respectively) while also following other position trends displacing it upward 161995, 766831, 65340, 47226, 5306, and 9409 feet (the constant, t^2, t^4, t^5, t^6, and t^9 terms respectively), with a net result of about 1 foot of downward displacement from its zero position.

Yes, that's exactly what your poly(10) model is saying. Don't blame me.

So now we know, the collapse was a result of a vertical tug-o-war between five of the major gods of the Greek pantheon at the bottom, and six Norse gods at the top. (The one pulling as t^4 was probably mighty Thor, especially when his team got beat despite outnumbering the opponents.)

Or, the model is a ludicrous mathematical fantasy resulting from blindly misapplied polynomial over-fitting, and adds nothing to anyone's understanding of the data or the actual event, instead providing a false pretense of accuracy and "truth."

Crap statistics, or the plot of next summer's CGI action fantasy thriller? I'll let concerned readers decide.

Respectfully,
Myriad
 
with a net result of about 1 foot of downward displacement from its zero position.
Correct, that zero position being relative to the initial trace position 11.8785s earlier in the trace.

A little further on in time it will give you a number which specifies the relative downward displacement at that time with significant accuracy too. Handy.

You also seem to be questioning the validity of an equation describing displacement/time, which is a bit odd bearing in mind that NIST calculated one...
z(t) = 379.62{1 - exp[-(0.18562t)^3.5126]}

Which Norse Gods is that one a battle between ? :eye-poppi

Yes, that's exactly what your poly(10) model is saying. Don't blame me.
And NIST's displacement/time equation ?

So now we know, the collapse was a result of a vertical tug-o-war between five of the major gods of the Greek pantheon at the bottom, and six Norse gods at the top. (The one pulling as t^4 was probably mighty Thor, especially when his team got beat despite outnumbering the opponents.)
We know that the Poly(10) equation will provide relative change in displacement data with increasing time to an exceedingly high accuracy throughout the entire ROI, thus enabling velocity and acceleration changes with increasing time to be determined with significant accuracy.

Or, the model is a ludicrous mathematical fantasy resulting from blindly misapplied polynomial over-fitting
Nonsense. The model captures the behaviour of the real world displacement behaviour of the NW corner with *spectacular* accuracy. I shudder to think what your opinion would be for the Poly(50) model ;)

I draw your attention to the acceleration graph posted earlier. Note the correlation between the Savitzky-Golay smoothed data curve and the Poly(10) curve. Also note that the S-G curve did not employ what you call *blindly misapplied polynomial over-fitting*. Consider the implications of that correlation.

and adds nothing to anyone's understanding of the data or the actual event
Perhaps not to yours, but otherwise, nonsense. See numerous velocity and acceleration profile graphs and interpretation throughout this thread.

instead providing a false pretense of accuracy and "truth."
How bizarre. Vigorous hand-waving.
What false pretense of accuracy is being provided ?
What false pretense of truth is being provided ?
Please be specific and detailed.
 
Thanks Myriad. I have seen over-fitting before, but I didn't know that there was a word for it. Most of the stats I use (multivariate regression analysis in market research data, for instance) have R^2 of .4 to .6, and they are considered quite predictive. 0.99999 just looks really weird. And as tfk notes, femr's tendency to over-estimate his data accuracy (R^2 to 15 decimal places, srsly yo?) made me wonder.

femr2, I know you hate this, but - you have a polynomial model that describes the movement of the NW corner as it collapses for a few seconds almost perfectly. May I be so rude as to ask "so what?" You've now spent years hinting that there might be another shoe to drop. Any plans on dropping it?
 
Correct, that zero position being relative to the initial trace position 11.8785s earlier in the trace.
Building on my point above - you got a 30 frames per second video to show you precision to ten-thousandths of a second?
 
you have a polynomial model that describes the movement of the NW corner as it collapses for a few seconds almost perfectly. May I be so rude as to ask "so what?"
As has been stated numerous times in the last few posts, but in your terms above...

It allows derivation of a curve that describes the velocity of the NW corner as it collapses for a few seconds exceedingly accurately.

It allows derivation of a curve that describes the acceleration of the NW corner as it collapses for a few seconds very accurately.

With those additional curves the numerous observations discussed within this thread (and others) can be supported with accurate data.

If you don't see any use or purpose, don't worry about it.
 
Still having the problem with fields and frames, I see. Brutal.
If you're going to attempt to make a witty comeback after being corrected on the simplest of items, make sure you get it right :rolleyes:

As you no doubt know well (as I've said it enough times for it to have sunk in, surely), a field is only validly called such whilst it is within the container of an interlaced frame. "Field" has a meaning, you know. Once separated from that container through the process of deinterlacing and stored sequentially within another video container, each of those separate images is called a frame.

/derail.
 
“When I use a word,” Humpty Dumpty said, in a rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
 
