• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Discussion of femr's video data analysis

... And we have strong evidence that femr2's methods yield more accurate results. The post by W.D.Clinger, whilst critical of excessive detail and some other matters of femr2's approach, actually confirms that his work is better than NIST on the factors in question.

So isn't it 'case proved' unless someone rebuts both femr2's claims and the support given by W.D.Clinger's post?
No.

For all your talk about context, you're paraphrasing me out of context so you can commit a fallacy of equivocation with respect to "femr2's methods".

femr2's methods for extracting data points from video are better than NIST's, and I'm inclined to think that several of femr2's related methods are also better than NIST's. Some of femr2's other methods, such as his cavalier use of high-degree polynomial curve fitting, are worse than NIST's methods. femr2 relied on those inferior methods when he argued that his results will convince cmatrix.

I do not know whether femr2's methods yield more accurate results overall. I certainly do not know of any strong evidence for that conclusion.

Part of the problem is that femr2 has not described his methods in enough detail for their accuracy to be evaluated properly. If you don't understand what I'm talking about, then I suggest you pick one of femr2's acceleration curves and attempt to calculate its residual sum of squares.
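In Python, that check is only a few lines. A minimal sketch, assuming the traced data and a model's predictions are available as arrays (the names and values below are hypothetical stand-ins, not femr2's actual data):

[code]
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(11.0, 17.2, 373)        # stand-in time axis (s)
y_obs = rng.normal(0.0, 1.0, t.size)    # stand-in observed displacements
y_model = np.zeros_like(t)              # stand-in model values at the same times

# Residual sum of squares: the quantity a least-squares fit minimizes.
rss = float(np.sum((y_obs - y_model) ** 2))
print(rss)
[/code]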
... I know NIST described their methods and stated their conclusions with greater professionalism.
No goal/point to the work, appears to be a vendetta against NIST on a topic that means nothing. Your post is good, to the point, another good closing.
 
Like with WTC 1 tilt, I'll go with the real measurements and leave such arguments to you experts.

Do you remember how badly you screwed up the WTC1 tilt measurements and never admitted to your mistakes? Or the way you screwed up so many times when discussing subpixel tracking? I expect the same from you here.

I got what I wanted when femr made quality measurements. I'll leave you to suffer the NIST defense. I don't think I would have defended my very first love the way you have defended the NIST's every blunder.

Puppy love can do that.

LOL, maybe we should go back and discuss your myriad of math errors.

Remember the ones you were going to fix in a few days? What happened?

Did the guy doing your calculations quit? Or did you just make up that nonsense on your own?

Useless pixel measurements, meaningless graphs and faulty math, derailing the truth movement for almost a decade.

Bravo!
 
NIST's 3 parameters vs femr2's 13 parameters: preliminary report

If you have read this entire thread with considerably more care than it deserves, you know that femr2 has done an excellent job of extracting time series from several videos, improving upon the fairly crude feature extraction described in NIST's reports.

You also know that both NIST and femr2 have used nonlinear models to approximate the vertical motion of WTC 7's north wall during the first few seconds of its collapse. Whether femr2's model improves upon the accuracy of NIST's has become a common topic of speculation in this thread. We can compare the accuracy of those approximations by computing their residual sum of squares for several features on the north wall.

For this preliminary report, I used just one feature: the NW corner of WTC 7. Because that's the feature femr2 used to derive his approximation, my choice of that feature should give an advantage to femr2's model. To assess the magnitude of that advantage, I estimated new values for NIST's parameters from femr2's data for the NW corner, and computed the residual sum of squares for those new estimates as well as for NIST's original model. If I get around to it, I may be able to compare these mathematical models using features of the north wall for which none of them were tuned.

[size=+1]The models.[/size]

NIST's model is described in NCSTAR 1-9 Volume 2, section 12.5.3, and femr2 has quoted it many times. NIST used a 3-parameter model of the form
z(t) = A{1 – exp[–(t/λ)^k]}
where the parameters A, λ, and k were determined by the method of least squares (minimizing the residual sum of squares) applied to NIST's own data. NIST's values for those parameters were

A = 379.62
λ = 1/0.18562
k = 3.5126

When I recalculated those values using femr2's data for the NW corner, I got

A = 379.69
λ = 1/0.1897
k = 3.796

Although NIST's formula models the downward displacement, it can be converted to femr2's convention by prefixing a minus sign. NIST's models for velocity and acceleration are obtained by differentiation.
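For anyone who wants to reproduce that re-estimation, here is a minimal Python sketch using SciPy's least-squares fitter. The synthetic trace below is a hypothetical stand-in; the real NW-corner data would be substituted:

[code]
import numpy as np
from scipy.optimize import curve_fit

def nist_model(t, A, lam, k):
    # NCSTAR 1-9 section 12.5.3: z(t) = A * (1 - exp(-(t/lam)**k))
    return A * (1.0 - np.exp(-(t / lam) ** k))

# Hypothetical stand-in for a displacement trace on NIST's time scale.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 6.0, 360)
z = nist_model(t, 379.62, 1.0 / 0.18562, 3.5126) + rng.normal(0.0, 2.0, t.size)

# Minimize the residual sum of squares, starting near NIST's published values.
(A, lam, k), _ = curve_fit(nist_model, t, z, p0=[380.0, 5.4, 3.5])
print(A, lam, k)
[/code]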

Within this thread, most of femr2's graphs have shown acceleration, which he models using polynomials of high degree. For this comparison, I have used his Poly(10) model, which is a polynomial of degree 10. By the time that polynomial has been integrated twice to obtain a model for the displacements, it will have 13 different parameters.

So far as I know, femr2 has not stated the numerical coefficients for his polynomials, so I had to estimate those coefficients from his graph. When I did so, I found that a polynomial of degree 5 would work as well as his Poly(10) approximation, and I used that polynomial for this comparison. After integrating twice, it has 8 different parameters.
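The parameter counting is easy to verify in NumPy: a degree-n least-squares fit has n+1 coefficients, and each integration adds one constant, so a degree-10 acceleration polynomial becomes a 13-parameter displacement model. A sketch with hypothetical stand-in data:

[code]
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 6.2, 373)                # stand-in time axis (s)
a = rng.normal(-32.0, 3.0, t.size)            # stand-in acceleration samples

accel = np.poly1d(np.polyfit(t, a, deg=10))   # 11 coefficients
vel = accel.integ(k=0.0)                      # +1 integration constant
disp = vel.integ(k=0.0)                       # +1 more: 13 parameters in total
print(len(disp.coeffs))                       # prints 13
[/code]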

[size=+1]And the winner is...[/size]

It turns out that the most accurate model depends upon the interval of time used for the comparison.

NIST's models are considerably more accurate than femr2's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale), but are considerably less accurate near the end of femr2's data (at 17.2 seconds on femr2's time scale).

Here are some examples. (Because the residual sum of squares is a measure of error, lower scores are more accurate.)

interval of time | NIST original | NIST tuned | Poly(10)
11 to 17 s | 14861 | 2879 | 4628
11 to 13 s | 962 | 428 | 4234
15 to 17 s | 12964 | 1751 | 296

If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose. If you're more interested in learning what happened well after the period of acceleration at approximately 1g, then femr2's polynomials are more accurate for that purpose.

That may be related to a controversy concerning the time at which the collapse begins. If femr2 believes the collapse began later than NIST says, then femr2 may have tuned his polynomials to model the data starting around 12 seconds instead of 11.

[size=+1]The graphs.[/size]

As you can see from the graph of vertical displacement (position), most of the error in the NIST-style models comes after 16 seconds:

danY.png



The following graph displays the instantaneous velocity computed by central differencing, as described in NIST section 12.5.3, with absolutely no smoothing of position or velocity. (I didn't want to run the risk that my choice of smoothing algorithm might favor one of the models being compared.) Once again, you can see that most of the error in the NIST-style models comes after 16 seconds.

danV.png
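(For anyone replicating this step, central differencing is a one-liner in Python; the array names are hypothetical.)

[code]
import numpy as np

def central_difference(t, y):
    # v[i] = (y[i+1] - y[i-1]) / (t[i+1] - t[i-1]), with no smoothing at all
    return t[1:-1], (y[2:] - y[:-2]) / (t[2:] - t[:-2])
[/code]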


The acceleration graph below shows why femr2's model has so much error before 12 seconds. femr2's curve is very steep in the vicinity of 12 seconds. Before 12 seconds, femr2's approximation models a substantial and sustained (although rapidly diminishing) upward acceleration, which does not actually occur within femr2's data.

danA.png


Although both of the NIST-style approximations show accelerations beyond 1g, those accelerations come about a second later than in femr2's model.

Inferring instantaneous accelerations from noisy time series cannot be done with great confidence. I wouldn't try to read too much into the details of that acceleration graph. As femr2 has said, the big picture is more significant than the details.

[size=+1]Future work.[/size]

NIST was attempting to model the movement of the entire north wall, not just the NW corner of that wall. The accuracy of its approximations when applied to other features on the north wall remains to be determined.
 
If you have read this entire thread with considerably more care than it deserves, you know that femr2 has done an excellent job of extracting time series from several videos, improving upon the fairly crude feature extraction described in NIST's reports.
Bold...bad. Italic...Good, good.

You also know that both NIST and femr2 have used nonlinear models to approximate the vertical motion of WTC 7's north wall during the first few seconds of its collapse. Whether femr2's model improves upon the accuracy of NIST's has become a common topic of speculation in this thread. We can compare the accuracy of those approximations by computing their residual sum of squares for several features on the north wall.
I have no issue with that if you compare NIST data to my own, and take account of pre-existing dialogue relating to trend.

I have produced numerous acceleration profile graphs, as you know well, and each is, of course, slightly different.

For this preliminary report, I used just one feature: the NW corner of WTC 7. Because that's the feature femr2 used to derive his approximation, my choice of that feature should give an advantage to femr2's model.
So you are not using NIST's data at all, but comparing a relatively low order (for me) curve fit to a linear fit over certain periods of time, using my data ?

To assess the magnitude of that advantage, I estimated new values for NIST's parameters from femr2's data for the NW corner, and computed the residual sum of squares for those new estimates as well as for NIST's original model.
Then, again, you are not comparing NIST data to my own, but instead a linear fit over certain periods of time of your choosing to your own lower degree representation of one of my low degree curve fit graphs.

Needless to say, I'm becoming less impressed as I go along.

Why not use the high degree curve I was using earlier...
http://femr2.ucoz.com/_ph/7/805384013.png


...rather than the lower degree curve produced for clarification of trend...
http://femr2.ucoz.com/_ph/7/628055186.png


You can see the differences in *trend*, including early gradient, time of peak and post peak oscillation differences.

It's not like you don't already know how the profile changes as poly degree is increased. You've seen this...
http://femr2.ucoz.com/_ph/7/408829093.gif

...more than enough times.

In fact, why would you choose to not check the accuracy of this acceleration profile graph...?
http://femr2.ucoz.com/_ph/7/350095033.png


As you can see the early motion is pretty darn linear.

If I get around to it, I may be able to compare these mathematical models using features of the north wall for which none of them were tuned.
I don't tune the poly fits to a particular T0 btw, which is one reason I use higher degree curves.

When I recalculated those values using femr2's data for the NW corner, I got
A = 379.69
λ = 1/0.1897
k = 3.796
Meaning you chose the velocity fit.

Within this thread, most of femr2's graphs have shown acceleration, which he models using polynomials of high degree.
...and have stated repeatedly the purpose being to highlight trend, and also that the higher the degree the closer to actual, providing an animated graph of how the trend changes as the degree is increased...so that there is no confusion or misrepresentation of what an individual graph shows.

For this comparison, I have used his Poly(10) model, which is a polynomial of degree 10. By the time that polynomial has been integrated twice to obtain a model for the displacements, it will have 13 different parameters.
I'm afraid you have made an assumption about the method by which the graph is produced.

So far as I know, femr2 has not stated the numerical coefficients for his polynomials
Correct. (I would only be able to do so up to degree 16 with current facilities. I prefer the degree 50 curve, and ultimately the Savitzky-Golay smoothed data curve, if I was going to talk about accuracy.)

...and thus introducing an amount of error.

When I did so, I found that a polynomial of degree 5 would work as well as his Poly(10) approximation, and I used that polynomial for this comparison.
It's not an exact match. (I'm looking at both curves overlaid). Your version has a *lower* peak, affecting early data *gradient* of course (as we can see when viewing the animation provided, the steepness of the early period of the profile increases as degree is increased, indicating that the motion tends towards higher gradient in *the real world*)

It turns out that the most accurate model depends upon the interval of time used for the comparison.
See above.

NIST's models are considerably more accurate than femr's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale), but are considerably less accurate near the end of femr2's data (at 17.2 seconds on femr2's time scale).
Whilst I have no issue with the actual words there, I think it prudent to point out a few items which might escape the casual reader...

a) You are comparing NIST's velocity equation to your own reverse engineered version of my acceleration profile with a relatively low order poly fit. Your version being even lower degree.
b) You are not comparing NIST's data to my data at all.
c) You have chosen the graph with the highest likelihood for early motion variance from true (see comments about behaviour of curve as degree is increased)

To make the end result of what you are saying clearer to others...

During the early motion, a straight line is a closer fit than a low degree curve fit.

That's fine. You've already seen the higher degree curves, and the smoothed data, and the early motion on the acceleration graphs is indeed more linear.

Given that you will clearly already know this, I have to raise my eyebrows somewhat at the effort you're going to to prove the obvious (whilst at the same time knowing many will wrongly interpret your words as some kind of refutement).

If I may be so bold...tsk :(

If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose.
A misleading assertion I'm afraid.

You are comparing a singular velocity equation (which you call NIST-style models) to a single instance of a curve fit with a low degree.

Why are you not using the term FEMR2-style models ?

What happens if you re-do your hitpiece analysis using the degree 50 profile ?

If you're more interested in learning what happened well after the period of acceleration at approximately 1g, then femr2's polynomials are more accurate for that purpose.
Now we have polynomials ?

You've only tested your own representation of one of mine but are now applying your conclusion well beyond appropriate scope.

That may be related to a controversy concerning the time at which the collapse begins. If femr2 believes the collapse began later than NIST says, then femr2 may have tuned his polynomials to model the data starting around 12 seconds instead of 11.
Again, I don't force a T0, but I am sure, given the extreme low degree, that your choice of T0 will have had some effect in your, er, comparison.

As you can see from the graph of vertical displacement (position), most of the error in the NIST-style models comes after 16 seconds
How did you determine T0 alignment ?

Before 12 seconds, femr2's approximation models a substantial and sustained (although rapidly diminishing) upward acceleration, which does not actually occur within femr2's data.
That rather depends upon where you have placed T0...

http://femr2.ucoz.com/_ph/7/349864523.png


Inferring instantaneous accelerations from noisy time series cannot be done with great confidence.
Totally agree, which is why I focus on trend.

I prefer the Savitzky-Golay curve myself if speaking of actual instantaneous acceleration.

I'll go into some more detail later.
 
NIST's model is described in NCSTAR 1-9 Volume 2, section 12.5.3, and femr2 has quoted it many times. NIST used a 3-parameter model of the form
z(t) = A{1 – exp[–(t/λ)^k]}
where the parameters A, λ, and k were determined by the method of least squares (minimizing the residual sum of squares) applied to NIST's own data.
Do you not think it would be a good idea to be looking at velocity plots ? I have a few... ;)

http://femr2.ucoz.com/_ph/7/656459373.png

http://femr2.ucoz.com/_ph/7/756527984.png



Haven't produced any graphs of velocity recently with the *new and improved* ;) labelling, but can do.

Again, worth pointing out that I have repeatedly stated the acceleration graphs in terms of trend.

If you want to change the focus to instantaneous acceleration accuracy, then it must be made clear what we are comparing.

NIST's best effort...a straight line.

My best effort to date (imo)...

http://femr2.ucoz.com/_ph/7/350095033.png



Indeed NIST's effort equates almost exactly to the blue *freefall* line on my graph.

I suggest I'm providing a rather more accurate and detailed representation of acceleration over time.
 
mt,

Major_Tom said:
Can you try adding a little proof to the posturing? I don't see any in that last post.


So, you are saying that you ALSO don't understand exactly where NIST said that their estimate of the acceleration was "approximately g"?

Thanks for making my "incompetence" point above.

And, once again, you run from another opportunity to show that you might know some small, pertinent fact.

How unsurprising.

Like with WTC 1 tilt…

Only to change the subject. (And misrepresent the diversion.)
Again. Unsurprising.

I'll go with the real measurements and leave such arguments to you experts.

Yes, MT. When you can't address a simple issue, as neither you nor femr can in this case, I know that you'll "leave the argument."

It's what you do.

c'mon, MT. femr? Change your MO.

Try to point out where NIST says explicitly that their accel number is an approximation.

I got what I wanted when femr made quality measurements.

femr didn't make any "quality measurements"
femr bought a piece of software.
The program made quality measurements.

femr's contribution is to constantly screw up the precision & interpretation of those measurements.

I'll leave you to suffer the NIST defense. I don't think I would have defended my very first love the way you have defended the NIST's every blunder.

Puppy love can do that.

LOL.

Your conclusion of "puppy love" is a respect & admiration for a bunch of first rate, acknowledged professionals who each have several decades of accomplishment in the pertinent fields.

Your conclusion of "puppy love" is my laughing at a couple of incompetent ankle-biters who are beneath the notice of accomplished professionals.

And who know that, in order to not get slapped silly in public, your only successful strategy is to:

1. draw no conclusions
2. publish nowhere
3. obfuscate at every opportunity
4. toss accusations of incompetence at professionals behind their backs, instead of directing questions at them directly & honorably.
5. muster a few equally incompetent cheerleaders to rah-rah each other
6. write. and write. and write. and write. and write. and write. claim victory.
 
Tom, everyone can see you screwing up the WTC 1 angle measurements to exaggerate south tilt. Anyone can see R Mackey lie about it on video.

Anyone who wants to check can see the NIST screwed it up, too.

You, R Mackey and the NIST screwed up on a correct geometrical description of the WTC1 initial movements so badly that my job of exposing the mistakes is just a matter of reminding you of past mistakes and cutting and pasting your own poor work.

I'm glad we have it all recorded.

Anyone can verify how poorly you have approached the initial movements of WTC1, so why would anyone listen to your sentimentality over your brotherly bond with NIST engineers?


Those are also measurements by femr so this is a good thread to discuss them. They show how much better he is at this than you.

>>>>>>>>>>>>>

I have always gotten quality measurements from femr for about 2 years now. He has a long history of quality at this point and you have none whatsoever.

The wonderful thing about his measurements is that they do not come attached to some pre-existing agenda.

He seems to want to know how the damn buildings really moved. I have never seen him fudge or fake movement like you or R Mackey. After a couple of years he has no history of falsifying imagery or fudging data to invent features that do not exist.

But, as a true skeptic, I still don't trust the dude to be perfect, so I use at least 2 other sources of independent measurements before "believing" him.

Even though he is very good, I still check with 2 other sources capable of measuring to this level of accuracy.

So while you whine about him, and while groups of truthers call for a new investigation which will probably be like the old one if it ever occurs, we went ahead and made real measurements and observations to replace all the fake ones. When you see them, you go through a cycle of denial and name calling. It is a predictable pattern.
 
He seems to want to know how the damn buildings really moved.
Quite right.

But, as a true skeptic, I still don't trust the dude to be perfect, so I use at least 2 other sources of independent measurements before "believing" him.
A view I totally support.

There should be at least 5 others within this thread capable of replicating the data and derived graphs using either the methods I have described, or their own. I looked at the NIST data and was not satisfied with it, the most blindingly simple reason for which is that they utterly ignore over 90% of the available motion data due to only sampling around 4 out of every 60 frames. Some folk are seemingly not satisfied with my data, yet none of their own is forthcoming. Instead a continual series of *discredit* attempts.

The most recent one by W.D.Clinger is much, much better than most, but does make the fundamental mistake of *attacking* the accuracy of instantaneous acceleration when the graph used was clearly presented to gain a clearer understanding of trend.

W.D.Clinger is fully aware of other graphs that have been provided which provide a much more accurate representation of instantaneous acceleration over time, but chose not to focus upon them.

It's a to-and-fro. Perhaps, over time, the paranoia that others have that I'm out to deceive will diminish and the focus will return to making things better.

I had hoped that one result of this thread would have been discussion of techniques for improving noise treatment, but alas, very little input has been received.

I'm quite pleased with the fairly recently applied Savitzky-Golay smoothing method. Good balance between smoothing out the noise and retaining detail. Must try a range of additional parameters and see to what extent the output differs.
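For anyone wanting to experiment along the same lines, SciPy ships an implementation. A minimal sketch; the window length and polynomial order here are illustrative guesses, not my actual settings, and the trace is a synthetic stand-in:

[code]
import numpy as np
from scipy.signal import savgol_filter

fps = 59.94
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0.0, 0.1, 400))   # stand-in noisy displacement trace

# Smoothed displacement, plus smoothed derivatives in the same step.
y_s = savgol_filter(y, window_length=31, polyorder=3)
v_s = savgol_filter(y, 31, 3, deriv=1, delta=1.0 / fps)
a_s = savgol_filter(y, 31, 3, deriv=2, delta=1.0 / fps)
[/code]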
 

The best discussions on data are when a few people are trying to compare and replicate results.

That never seems to happen in this forum. Just a rather anal attack on you.

I have never approached the study of motion as an attempt to verify something I want to believe in.

I don't care what the results are. I just want good ones. I don't care what the motion is. I just want to map it accurately.

I don't get the religious devotion to certain data, like your girlfriend generated it and everything she does is wonderful. I don't understand the mentality.
 
(whilst at the same time knowing many will wrongly interpret your words as some kind of refutement).
That's why I thought the reasons for the lower quality standards of NIST's tracing and conclusions had to be clarified. Thanks for making that clear.

Oh, and pot, meet kettle.
 
NIST's model is not linear

The most recent one by W.D.Clinger is much, much better than most,
Thank you.

In return, I will try to explain NIST's model to you. I believe your misunderstanding of NCSTAR 1-9 section 12.5.3, and that model in particular, has been a major obstacle to productive conversation since 20 October 2010 or earlier.

As I explained in my most recent post, NIST section 12.5.3 uses a nonlinear model with 3 parameters to describe the downward displacement of the north wall:
y(t) = A{1 – exp[–(t/λ)^k]}
(NIST writes the left hand side as z(t), but I have changed that to y(t) to be consistent with your spreadsheet data.)

As will be shown below, you have consistently referred to that as a linear model, but it is not linear. It isn't even a polynomial. It involves an exponential. The argument to exp isn't a polynomial either, because k is generally not an integer.

NIST's velocity and acceleration models are immediate consequences (via freshman calculus) of that nonlinear model. If we define B = 1/λ, then the vertical velocity is
v(t) = (dy/dt)(t) = A B k (Bt)^(k–1) exp[–(t/λ)^k]
and the acceleration is
a(t) = (dv/dt)(t) = (d²y/dt²)(t) = A B² k [(k–1)(Bt)^(k–2) – k(Bt)^(2k–2)] exp[–(t/λ)^k]
Please notice that v(t) and a(t) are nonlinear also.
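(If you would rather not trust my freshman calculus, the derivatives can be checked symbolically. A quick sketch in Python with SymPy:)

[code]
import sympy as sp

t, A, B, k = sp.symbols('t A B k', positive=True)
y = A * (1 - sp.exp(-(B * t) ** k))    # NIST's model, with B = 1/lambda
v = sp.simplify(sp.diff(y, t))         # A*B*k*(B*t)**(k-1)*exp(-(B*t)**k)
a = sp.simplify(sp.diff(y, t, 2))
print(v)
print(a)
[/code]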

In my previous post, I stated NIST's values for A, λ, and k. Those values come from the boxed legend in the upper left of NIST's Figure 12-76, which you displayed in post #864, post #991, post #994, and post #1105. When you plug those values into the formula for v(t) above, which is derived from the formula for y(t), and do all the subtractions and multiplications you can to simplify the resulting formula, you get the formula for v(t) that's inside the boxed legend in the upper left of NIST's Figure 12-77, which you displayed in post #864, post #991, post #994, post #1105, and post #1147.

To use NIST's model with your data from the Dan Rather video, it is necessary to convert your time scale to NIST's by subtracting about 10.9 seconds, or to convert from NIST's scale to yours by adding 10.9 seconds.

From those 4 numbers (A, λ, k, 10.9), combined with your data for vertical displacement, anyone in the world can duplicate my calculations of the residual sum of squares for the NIST-like models.
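Concretely, the recipe is: shift the time axis, evaluate the model, flip the sign to match femr2's upward-positive convention, and sum the squared residuals. A minimal sketch (array names hypothetical):

[code]
import numpy as np

def nist_model(t, A, lam, k):
    return A * (1.0 - np.exp(-(t / lam) ** k))

def rss_against_nist(t_femr, y_femr, A=379.62, lam=1/0.18562, k=3.5126, t0=10.9):
    tt = np.clip(t_femr - t0, 0.0, None)   # convert femr2's time scale to NIST's
    y_hat = -nist_model(tt, A, lam, k)     # minus sign: femr2 measures upward-positive
    return float(np.sum((y_femr - y_hat) ** 2))
[/code]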

It is considerably more difficult to calculate the residual sum of squares for your models, because you have not revealed the numerical values of your models' parameters. Keeping those numbers to yourself is okay so long as your only purpose is to discuss vague trends, but you should not make any claims concerning the accuracy of your models without stating or publishing the numbers necessary to evaluate your claims properly.

What I have written above may seem elementary and pedantic, but it will not be possible for you to discuss NIST's section 12.5.3 intelligently until you understand those fundamentals. You have been denying them, and you denied them again just this morning. Here's an example from six months ago:
Could you show me where NIST provided anything other than a linear fit... ?
http://femr2.ucoz.com/_ph/7/2/155958691.jpg
(full report section...http://femr2.ucoz.com/_ph/7/563913536.png)

(Might have missed it, but only see a linear regression.)
You did miss it. The image you displayed immediately under your question contained the equation for NIST's nonlinear curve fit.

The result of a curve fit is better than a linear approximation,
That was just yesterday. You were telling us that your polynomials must be better than NIST's approximations because (you thought) NIST's approximations were linear.

You are also carrying forward your personal opinion that a linear fit is superior to a curve fit. I'm afraid I don't agree.
You misrepresented my opinion because you didn't realize that NIST's approximations were nonlinear.

Come now. Two totally differing methods, one employing curve fitting, the other employing Savitzky-Golay smoothing, with significant similarity in profile trend...
http://femr2.ucoz.com/_ph/7/350095033.png
http://femr2.ucoz.com/_ph/7/628055186.png
...one of...
http://femr2.ucoz.com/_ph/7/408829093.gif

...as opposed to...

Edited by LashL: 
Removed quoted repetitive oversized images


..the red line.
You ignored the black line and its formula in the boxed legend in the upper left. The red line is NIST's linear approximation to part of NIST's nonlinear approximation. The small but obvious differences between the red line and the nonlinear black line indicate that even in Stage 2, NIST knew the acceleration was not constant during that stage, and NIST did not even approximate the acceleration by a constant in their primary (black line) approximation.

Let's keep some perspective eh.


Again, I suggest that rather than apply your time to reverse engineering pre-existing graphs (as you apparently cannot find enough detail in the thread to replicate) that you simply use the base data to produce your own.
It would not have been necessary for me to reverse engineer your graphs had you provided the numerical details necessary to evaluate them. With what little you have told us, we don't even know the inputs you gave to your Excel plug-in (such as the time interval over which you asked it to minimize the residual sum of squares).

I now turn to what you wrote this morning, with some things highlighted in yellow.

For this preliminary report, I used just one feature: the NW corner of WTC 7. Because that's the feature femr2 used to derive his approximation, my choice of that feature should give an advantage to femr2's model.
So you are not using NIST's data at all, but comparing a relatively low order (for me) curve fit to a linear fit over certain periods of time, using my data ?
No, I was comparing your curve fit to NIST's nonlinear curve fit, using your data.

To assess the magnitude of that advantage, I estimated new values for NIST's parameters from femr2's data for the NW corner, and computed the residual sum of squares for those new estimates as well as for NIST's original model.
Then, again, you are not comparing NIST data to my own, but instead a linear fit over certain periods of time of your choosing to your own lower degree representation of one of my low degree curve fit graphs.
No, I was using your data to compare NIST's nonlinear 3-parameter model to an 8-parameter model whose accuracy is very close to that of your 13-parameter Poly(10) model.

Needless to say, I'm becoming less impressed as I go along.

Why not use the high degree curve I was using earlier...
Edited by LashL: 
Removed quoted repetitive oversized images


...rather than the lower degree curve produced for clarification of trend...
Edited by LashL: 
Removed quoted repetitive oversized images




You can see the differences in *trend*, including early gradient, time of peak and post peak oscillation differences.
Near the beginning of the collapse, where the "early gradient" makes a difference, your Poly(50) model would have performed even worse than your Poly(10) model because its slope is steeper and it intersects the zero-acceleration line at a later time than your Poly(10) model. The "time of peak and post peak oscillation differences" can't make much difference because your Poly(10) model was already pretty accurate in that region.

(As noted below, it is possible that you graphed your Poly(50) curve incorrectly.)

It's not like you don't already know how the profile changes as poly degree is increased. You've seen this...
http://femr2.ucoz.com/_ph/7/408829093.gif
...more than enough times.
More than enough, yes. As an undergraduate, I took two semester courses in numerical analysis. I invented one of the (less important) numerical algorithms your software is using. It is entirely reasonable for you to assume that I understand the consequences of increasing a fitting polynomial's degree.

If I get around to it, I may be able to compare these mathematical models using features of the north wall for which none of them were tuned.
I don't tune the poly fits to a particular T0 btw, which is one reason I use higher degree curves.
Your statement makes no sense, and has nothing to do with what I wrote.

When I recalculated those values using femr2's data for the NW corner, I got
A = 379.69
λ = 1/0.1897
k = 3.796
Meaning you chose the velocity fit.
No. I got those values by minimizing the residual sum of squares, which is exactly what your Excel plug-in does. The difference is that you have been telling your Excel plug-in to assume a polynomial model, while I was assuming NIST's nonlinear model described in section 12.5.3 and explained in even more detail at the top of this post.

For this comparison, I have used his Poly(10) model, which is a polynomial of degree 10. By the time that polynomial has been integrated twice to obtain a model for the displacements, it will have 13 different parameters.
I'm afraid you have made assumption about the method by which the graph is produced.
Had you described your methods as well as NIST described theirs, I wouldn't have to make assumptions.

So far as I know, femr2 has not stated the numerical coefficients for his polynomials
Correct.
Which is why I had to reverse-engineer your Poly(10) model.

...and thus introducing an amount of error.
Yes, but not very much. You could have calculated the residual sum of squares before you wrote the above, in which case you would have known exactly how much (or how little) difference it would have made.

Or you could provide the necessary numbers, and let me recalculate it for you.

When I did so, I found that a polynomial of degree 5 would work as well as his Poly(10) approximation, and I used that polynomial for this comparison.
It's not an exact match. (I'm looking at both curves overlaid). Your version has a *lower* peak, affecting early data *gradient* of course (as we can see when viewing the animation provided, the steepness of the early period of the profile increases as degree is increased, indicating that the motion tends towards higher gradient in *the real world*)
You could have calculated the residual sum of squares before you wrote the above, in which case you would have known exactly how much (or how little) difference it would have made.

Or you could provide the necessary numbers, and let me recalculate it for you.

NIST's models are considerably more accurate than femr2's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale), but are considerably less accurate near the end of femr2's data (at 17.2 seconds on femr2's time scale).
Whilst I have no issue with the actual words there, I think it prudent to point out a few items which might escape the casual reader...

a) You are comparing NIST's velocity equation to your own reverse engineered version of my acceleration profile with a relatively low order poly fit. Your version being even lower degree.
b) You are not comparing NIST's data to my data at all.
c) You have chosen the graph with the highest likelihood for early motion variance from true (see comments about behaviour of curve as degree is increased)
With respect to b):
I have stated several times that I regard your data as superior to NIST's. For the calculations I performed, the idea of comparing your data to NIST's doesn't even make sense. Had I performed my calculations using NIST's data, it would have given NIST's model an advantage because it was fit to NIST's data. I used your data instead, which gave your model the advantage.

With respect to c):
I have already explained why your Poly(50) model would have even more error near the beginning of the collapse than your Poly(10) model (and have acknowledged the possibility that you may have graphed your Poly(50) model incorrectly). You appear to be assuming that increasing the degree of a polynomial curve fit can't do any harm. That's false. The problem of minimizing a residual sum of squares tends to become ill-conditioned as the degree is raised. At some point, the benefit of increasing the degree is likely to become less than the harm done by round-off error in that ill-conditioned problem.

That harm is especially likely to become important at the end-points of the interval over which you are minimizing the residual sum of squares. If you care more about one of the endpoints than about the rest of the interval, as might happen if you care more about the beginning of the collapse than about its middle or end, then curve-fitting with unnecessarily high-degree polynomials can really hurt you.
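The ill-conditioning is easy to demonstrate for yourself. A short Python sketch: the condition number of the Vandermonde matrix behind a least-squares polynomial fit, over the 11 to 17.2 s window, explodes as the degree rises (it may even print inf at degree 50):

[code]
import numpy as np

t = np.linspace(11.0, 17.2, 373)    # about one sample per frame at ~60 fps
for deg in (5, 10, 50):
    V = np.vander(t, deg + 1)       # design matrix of the least-squares fit
    print(deg, np.linalg.cond(V))   # grows by many orders of magnitude
[/code]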

To make the end result of what you are saying clearer to others...

During the early motion, a straight line is a closer fit than a low degree curve fit.
No, that's not what I am saying. What you are saying is that you still don't understand the nonlinear model used in NIST's section 12.5.3.

It is true, however, that NIST's nonlinear model has the virtue of approximating a horizontal line at both t=0 and at infinity. That may be one of the reasons why NIST decided to use that model. NIST's choice of model reveals considerable sophistication.

A rank amateur would be more likely to choose polynomial approximations, not realizing that their accuracy often degrades near the endpoints of the interval you're fitting.

Given that you will clearly already know this, I have to raise my eyebrows somewhat at the effort you're going to to prove the obvious (whilst at the same time knowing many will wrongly interpret your words as some kind of refutement).

If I may be so bold...tsk :(
I made that effort because I'm genuinely interested in numerical methods, and because I had never before used the method of least squares to fit an exponential model. Doing that minimization partly by hand was an educational experience, and it gave me an excuse to read several chapters of Hamming's Numerical Methods for Scientists and Engineers.

If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose.
A misleading assertion I'm afraid.

You are comparing a singular velocity equation (which you call NIST-style models) to a single instance of a curve fit with a low degree.
The highlighted term is new to me, and a Google search shows your post as its only match. I suspect your statement has something to do with your failure to realize that NIST's model is nonlinear.

What happens if you re-do your hitpiece analysis using the degree 50 profile ?
Run the numbers and find out, or tell me its numeric parameters and I'll find out for you. I've already told you what I think is likely to happen, but I could be wrong.

That may be related to a controversy concerning the time at which the collapse begins. If femr2 believes the collapse began later than NIST says, then femr2 may have tuned his polynomials to model the data starting around 12 seconds instead of 11.
Again, I don't force a T0, but I am sure, given the extreme low degree, that your choice of T0 will have had some effect in your, er, comparison.
It is certainly necessary to translate between NIST's time scale and yours, but your highlighted phrase doesn't have anything to do with that.

As you can see from the graph of vertical displacement (position), most of the error in the NIST-style models comes after 16 seconds
How did you determine T0 alignment ?
I came up with the 10.9 second offset by eyeball. I didn't take the time to minimize it properly. I don't think it's terribly critical, but I could be wrong. If I am wrong, then fixing it would improve the accuracy of NIST's model without improving the accuracy of yours (because your models were derived using your time scale).

Before 12 seconds, femr2's approximation models a substantial and sustained (although rapidly diminishing) upward acceleration, which does not actually occur within femr2's data.
That rather depends upon where you have placed T0...
http://femr2.ucoz.com/_ph/7/349864523.png
No, it doesn't have anything at all to do with T0 or with the translation between NIST's time scale and yours, because the fact I stated is a fact about your Poly(10) approximation. It has nothing to do with NIST's model.

Do you not think it would be a good idea to be looking at velocity plots ? I have a few... ;)
My second graph is a velocity plot. You may not have recognized it as such because the velocity points, obtained directly from your spreadsheet via central differencing, are completely unsmoothed.

NIST's best effort...a straight line.
No. NIST's model is nonlinear.

My best effort to date (imo)...
http://femr2.ucoz.com/_ph/7/350095033.png

Indeed NIST's effort equates almost exactly to the blue *freefall* line on my graph.
I suggest I'm providing a rather more accurate and detailed representation of acceleration over time.
The highlighted sentence is complete nonsense. You're ignoring my graph of acceleration in the three models in order to perpetuate your misunderstanding of NIST's model. NIST's model of the acceleration is not a straight line. As shown in my graph and as explained at the top of this post, NIST's models of displacement, velocity, and acceleration are all nonlinear.

How smoothed profile compares to high-degree curve profile...
Edited by LashL: 
Removed quoted repetitive oversized images




;)
You've made some kind of mistake. If you compare the black curve in that graph with the red curve in the graph below, you'll see that they're offset by about a quarter of a second. Both of those graphs claim to be displaying your Poly(50) approximation. At least one of them must be wrong.
Edited by LashL: 
Removed quoted repetitive oversized images
 
Sure. I didn't say the list was exhaustive, hence the etc...


Their choice of location and methodology is a significant problem for their data...
a) They misinterpreted initial motion as vertical rather than north-south (as they did not take account of the initial twisting motion visible from the Cam#3 viewpoint).
b) They did not perform perspective correction.
c) They did not perform static point extraction (the removal of camera movement from trace data. Even though the view may look static, it is not.)
d) They did not track a feature at all, but a horizontal position. As the building did not descend completely vertically, but included some east-west movement, their data is actually of a wandering horizontal point, not a feature on the facade.
e) In order to obtain a trace from their initial point to their stated final point they had to *splice* together two traces from completely different horizontal positions, which, without taking account of the perspective and distance shearing effects, skews the data further.
f) They did not treat the base video data correctly, using an interlaced copy of the video (the actual copy they used is available within the recent FOIA releases. I have the original)
g) They did not perform a per-frame trace, but instead skipped frames, reducing the sampling rate considerably and reducing available data redundancy for the purposes of noise reduction and derivation of velocity and acceleration profile data.
h) They applied their interpretation to the entire north face.
i) It is highly probable they used a manual process to record the trace data, rather than the sub-pixel accurate automated feature tracing methods I employ.

These are some of the reasons their data is shoddy and their method sloppy.


Let us keep these factors in mind when comparing data. Have any of these problems been addressed yet?


WD Clinger, major screw-ups here over the very important "stage 1". The NIST early acceleration is beyond hope for ignoring the horizontal vs vertical components of motion.
 
Thank you.
You're welcome.

In return, I will try to explain NIST's model to you. I believe your misunderstanding of NCSTAR 1-9 section 12.5.3, and that model in particular, has been a major obstacle to productive conversation since 20 October 2010 or earlier.
Assuming misunderstanding is probably not the best way to begin...(assumption is the mother of all...)

As I explained in my most recent post, NIST section 12.5.3 uses a nonlinear model with 3 parameters to describe the downward displacement of the north wall:
y(t) = A{1 – exp[–(t/λ)^k]}
(NIST writes the left hand side as z(t), but I have changed that to y(t) to be consistent with your spreadsheet data.)
Have you tried plotting the derivative for acceleration ? (which NIST don't perform, and instead apply a linear regression)

795385257.png

...
34gllzs.gif

Now that point has been emphasised, I'll spend some time going over the rest of your post.
 
And let us not forget how massive the early horizontal component of motion really was:

kinknorth.gif




That kink is mostly horizontal motion, not visible from other viewpoints.

That is a huge omission to early measurements. If it is screwed up like the NIST did, one will experience an artificial smoothing of the earliest vertical acceleration.

That changes the whole picture and shows why the femr data is the hands-down winner.
 
fire, ready, aim

Although NIST's formula models the downward displacement, it can be converted to femr2's convention by prefixing a minus sign.
Guess what's about to happen.

Have you tried plotting the derivative for acceleration ? (which NIST don't perform, and instead apply a linear regression)
Yes:
The acceleration graph below...

danA.png


http://femr2.ucoz.com/_ph/7/795385257.png
...
http://i33.tinypic.com/34gllzs.gif
Now that point has been emphasised, I'll spend some time going over the rest of your post.


The points femr2 wishes to emphasize are:
  • He forgot to insert the minus sign.
  • Although he has already posted four responses to the technical note I posted early this morning, he hasn't yet gotten to the part that shows the acceleration graph.
 
And let us not forget how massive the early horizontal component of motion really was:

http://img197.imageshack.us/img197/4773/kinknorth.gif


That kink is mostly horizontal motion, not visible from other viewpoints.



By gosh, you're right -- the building hardly moved vertically at all while that kink appeared!

On the other hand, those other two nearby buildings both jumped up about forty feet! I guess they were startled by all the horizontal-only movement (and were built on giant springs).

:boggled:

Seriously -- did you really think that animation would fool anyone into thinking you were showing "mostly horizontal motion"?

Respectfully,
Myriad
 
Sooo...

Thank you.
No worries.

In return, I will try to explain NIST's model to you.
I understand it.

As I explained in my most recent post, NIST section 12.5.3 uses a nonlinear model with 3 parameters to describe the downward displacement of the north wall
Indeed, but a linear regression for acceleration.

I think the reason they didn't derive again is clear once you superimpose the resultant NIST acceleration curve atop one of mine. The inference of what you are saying is that the NIST acceleration curve is more accurate. As you can imagine, I don't agree.

As will be shown below, you have consistently referred to that as a linear model, but it is not linear.
I am referring to the linear regression for acceleration, as we are talking about acceleration.

NIST's velocity and acceleration models are immediate consequences (via freshman calculus) of that nonlinear model.
Incorrect. You've gone a step too far. NIST only derived to velocity. The only *acceleration model* is the linear regression...
http://femr2.ucoz.com/_ph/7/2/659040095.jpg


If we define B = 1/λ, then the vertical velocity is
v(t) = (dy/dt)(t) = A B k (Bt)^(k–1) exp[–(t/λ)^k]
and the acceleration is
a(t) = (dv/dt)(t) = (d²y/dt²)(t) = A B² k [(k–1)(Bt)^(k–2) – k(Bt)^(2k–2)] exp[–(t/λ)^k]
Please notice that v(t) and a(t) are nonlinear also.
Please note that NIST did not derive for acceleration, but instead stopped at velocity, and calculated a linear regression.

It is: v(t) = -44.773 + 32.196t

The only acceleration data there being: 32.196.

I have included the specific derivation for the NIST parameters above, but here it is again...
http://femr2.ucoz.com/_ph/7/795385257.png


From those 4 numbers (A, λ, k, 10.9), combined with your data for vertical displacement, anyone in the world can duplicate my calculations of the residual sum of squares for the NIST-like models.
Which is fine. I haven't challenged your residual sum of squares calculations.

It is considerably more difficult to calculate the residual sum of squares for your models, because you have not revealed the numerical values of your models' parameters.
Agreed, though my subjective points still stand.

Keeping those numbers to yourself is okay so long as your only purpose is to discuss vague trends, but you should not make any claims concerning the accuracy of your models without stating or publishing the numbers necessary to evaluate your claims properly.
I'm not *keeping those numbers to myself*. As I said, the tools I use will only output coefficients up to degree 16. I also said I will see what I can do about providing such, though I think I have made it clear that I prefer smoothing to curve fitting, and DO make it clear that the curve fits are intended to highlight trend not specific instantaneous acceleration.

What I have written above may seem elementary and pedantic, but it will not be possible for you to discuss NIST's section 12.5.3 intelligently until you understand those fundamentals. You have been denying them, and you denied them again just this morning.
Again, NIST have not derived to acceleration.

Their published acceleration data is based upon the following linear regression:

v(t) = -44.773 + 32.196t
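To spell out what that regression contains: the slope, 32.196, is the one and only acceleration estimate, and it is where NIST's "approximately g" comes from (g ≈ 32.174 ft/s²). A minimal Python sketch of such a fit, with hypothetical stand-in velocity points:

[code]
import numpy as np

rng = np.random.default_rng(0)
tc = np.linspace(1.75, 4.0, 20)                        # stand-in times (s)
vc = -44.773 + 32.196 * tc + rng.normal(0.0, 2.0, 20)  # stand-in velocities (ft/s)

slope, intercept = np.polyfit(tc, vc, 1)   # fits v(t) = intercept + slope * t
print(slope)                               # the constant-acceleration estimate
[/code]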

The image you displayed immediately under your question contained the equation for NIST's nonlinear curve fit.
Looks like a cross-purpose post. I'm referring to acceleration.

That was just yesterday. You were telling us that your polynomials must be better than NIST's approximations because (you thought) NIST's approximations were linear.
Which I hold to.

You ignored the black line and its formula in the boxed legend in the upper left.
No, I didn't. The black line is the formula for velocity, not acceleration.

The red line is NIST's linear approximation to part of NIST's nonlinear approximation.
It is very unlikely to have been derived from the same equation. Instead I imagine it was determined directly from the data indicated, namely the data-points circled in red.

No, I was comparing your curve fit to NIST's nonlinear curve fit, using your data.
Fair enough.

Near the beginning of the collapse, where the "early gradient" makes a difference, your Poly(50) model would have performed even worse than your Poly(10) model because its slope is steeper and it intersects the zero-acceleration line at a later time than your Poly(10) model.
Which would have required you to reevaluate your personal choice of T0.

I have provided you with a comparison between a smoothed and high-order curve fit above, but here it is again...
http://femr2.ucoz.com/_ph/7/449207989.png



I find it eyebrow raising that you suggest the curve resulting from deriving the NIST velocity equation is a closer fit to ACTUAL motion.

I invented one of the (less important) numerical algorithms your software is using.
Interesting. Which one ? What software do you mean ?

It is entirely reasonable for you to assume that I understand the consequences of increasing a fitting polynomial's degree.
Good. Then I suggest the Poly(50) curve would have been the better choice for your analysis.

Or you could provide the necessary numbers, and let me recalculate it for you.
Can do. Later.

I have already explained why your Poly(50) model would have even more error near the beginning of the collapse than your Poly(10) model
Again, rather, you would have to rethink your T0.

(and have acknowledged the possibility that you may have graphed your Poly(50) model incorrectly)
:confused: Where ? (I acknowledged I may have a 0.5/59.94s shift on the Savitzky-Golay smoothed curve)

You appear to be assuming that increasing the degree of a polynomial curve fit can't do any harm. That's false.
I know that. That's why I've looked at the behaviour with a range of degrees, and not fixed T0 in stone, and performed all manner of other methods to determine acceleration profile and compared the results and...

If you care more about one of the endpoints than about the rest of the interval, as might happen if you care more about the beginning of the collapse than about its middle or end, then curve-fitting with unnecessarily high-degree polynomials can really hurt you.
That rather depends upon the complexity of the underlying data. We're not dealing with data that actually follows a simple low order curve. It's a wobbly line ;) Pretty similar to...
http://femr2.ucoz.com/_ph/7/350095033.png



No, that's not what I am saying. What you are saying is that you still don't understand the nonlinear model used in NIST's section 12.5.3.
NIST do not use that model when determining acceleration. (But I concede we are talking cross-purpose in these areas of discussion)

The highlighted term is new to me, and a Google search shows your post as its only match. I suspect your statement has something to do with your failure to realize that NIST's model is nonlinear.
No, that you refer to NIST models.

Run the numbers and find out, or tell me its numeric parameters and I'll find out for you. I've already told you what I think is likely to happen, but I could be wrong.
Am limited to output data at degree 16 with current software :mad:

If I am wrong, then fixing it would improve the accuracy of NIST's model without improving the accuracy of yours (because your models were derived using your time scale).
Rather bizarre thing to say :confused:

Change the relative accuracy of NIST, sure. IMPROVE it ? Hmmm.

No. NIST's model is nonlinear.
NIST apply their model to displacement and velocity, not acceleration. They perform a linear regression based on symmetric difference datapoints (which they specify).

NIST's model of the acceleration is not a straight line.
Again, see above.

You've made some kind of mistake. If you compare the black curve in that graph with the red curve in the graph below, you'll see that they're offset by about a quarter of a second. Both of those graphs claim to be displaying your Poly(50) approximation. At least one of them must be wrong.
So it seems :confused: How bizarre. Will have a look at the axis labels.
 
And let us not forget how massive the early horizontal component of motion really was:

http://img197.imageshack.us/img197/4773/kinknorth.gif



That kink is mostly horizontal motion, not visible from other viewpoints.

That is a huge omission to early measurements. If it is screwed up like the NIST did, one will experience an artificial smoothing of the earliest vertical acceleration.

That changes the whole picture and shows why the femr data is the hands-down winner.
Could you explain why the buildings in the foreground move and WTC7 does not? How does this affect the perspective (obviously different viewpoints were used)? What was done to compensate for this (so as not to exaggerate the effect)?
 
