NIST's model is not linear
The most recent one by W. D. Clinger is much, much better than most,
Thank you.
In return, I will try to explain NIST's model to you. I believe your misunderstanding of NCSTAR 1-9 section 12.5.3, and that model in particular, has been a major obstacle to productive conversation since 20 October 2010 or earlier.
As I explained in my most recent post, NIST section 12.5.3 uses a nonlinear model with 3 parameters to describe the downward displacement of the north wall:
y(t) = A {1 – exp[–(t/λ)^k]}
(NIST writes the left hand side as z(t), but I have changed that to y(t) to be consistent with your spreadsheet data.)
As will be shown below, you have consistently referred to that as a linear model, but it is not linear. It isn't even a polynomial. It involves an exponential. The argument to exp isn't a polynomial either, because k is generally not an integer.
NIST's velocity and acceleration models are immediate consequences (via freshman calculus) of that nonlinear model. If we define B=1/λ, then the vertical velocity is
v(t) = (dy/dt)(t) = A B k (B t)^(k–1) exp[–(t/λ)^k]
and the acceleration is
a(t) = (dv/dt)(t) = (d²y/dt²)(t) = A B² k [(k–1) (B t)^(k–2) – k (B t)^(2k–2)] exp[–(t/λ)^k]
Please notice that v(t) and a(t) are nonlinear also.
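For readers who want to check the calculus themselves, here is a minimal sketch of the model and its first derivative. The parameter values are the re-estimated ones quoted later in this post (A = 379.69, λ = 1/0.1897, k = 3.796); any positive values would illustrate the same point. The sanity check confirms numerically that v(t) really is the derivative of y(t).

```python
import math

# NIST-style displacement model from NCSTAR 1-9 section 12.5.3:
#   y(t) = A * (1 - exp(-(t/lam)**k)),  with B = 1/lam.
# Parameter values are the re-estimates quoted later in this post.
A, lam, k = 379.69, 1 / 0.1897, 3.796
B = 1 / lam

def y(t):
    return A * (1 - math.exp(-(B * t) ** k))

def v(t):
    # dy/dt = A B k (B t)^(k-1) exp(-(B t)^k)
    return A * B * k * (B * t) ** (k - 1) * math.exp(-(B * t) ** k)

def a(t):
    # d2y/dt2 = A B^2 k [(k-1)(B t)^(k-2) - k (B t)^(2k-2)] exp(-(B t)^k)
    u = B * t
    return (A * B ** 2 * k
            * ((k - 1) * u ** (k - 2) - k * u ** (2 * k - 2))
            * math.exp(-(u ** k)))

# Sanity check: a central difference of y matches the analytic velocity.
h = 1e-6
t = 2.0
print(abs((y(t + h) - y(t - h)) / (2 * h) - v(t)))  # tiny
```

Note that y(2t) is nowhere near 2·y(t); that is the nonlinearity in one line.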
In my previous post, I stated NIST's values for A, λ, and k. Those values come from the boxed legend in the upper left of NIST's Figure 12-76, which you displayed in post #864, post #991, post #994, and post #1105. When you plug those values into the formula for v(t) above, which is derived from the formula for y(t), and carry out all the subtractions and multiplications you can to simplify the resulting formula, you get the formula for v(t) that's inside the boxed legend in the upper left of NIST's Figure 12-77, which you displayed in post #864, post #991, post #994, post #1105, and post #1147.
To use NIST's model with your data from the Dan Rather video, it is necessary to convert your time scale to NIST's by subtracting about 10.9 seconds, or to convert from NIST's scale to yours by adding 10.9 seconds.
From those 4 numbers (A, λ, k, 10.9), combined with your data for vertical displacement, anyone in the world can duplicate my calculations of the residual sum of squares for the NIST-like models.
It is considerably more difficult to calculate the residual sum of squares for your models, because you have not revealed the numerical values of your models' parameters. Keeping those numbers to yourself is okay so long as your only purpose is to discuss vague trends, but you should not make any claims concerning the accuracy of your models without stating or publishing the numbers necessary to evaluate your claims properly.
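To make "duplicate my calculations" concrete, here is a hedged sketch of the residual-sum-of-squares computation. The (t, y) pairs are made-up stand-ins for the spreadsheet data, and the 10.9 s offset is the eyeballed time-scale conversion described above; everything else follows directly from the model.

```python
import math

# Residual sum of squares for the NIST-style model against measured
# displacements.  A, lam, k are the model parameters; 10.9 s is the
# approximate offset between the two time scales.
A, lam, k = 379.69, 1 / 0.1897, 3.796
OFFSET = 10.9  # seconds: femr2's time scale minus NIST's

def model_y(t_femr2):
    t = t_femr2 - OFFSET          # convert to NIST's time scale
    if t <= 0:
        return 0.0                # model predicts no displacement yet
    return A * (1 - math.exp(-(t / lam) ** k))

def rss(data):
    return sum((y_meas - model_y(t)) ** 2 for t, y_meas in data)

# Hypothetical measurements (time on femr2's scale, displacement):
data = [(11.0, 0.1), (12.0, 2.3), (13.0, 25.0), (14.0, 110.0)]
print(rss(data))
```

Anyone with the real spreadsheet columns can substitute them for `data` and reproduce the comparison.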
What I have written above may seem elementary and pedantic, but it will not be possible for you to discuss NIST's section 12.5.3 intelligently until you understand those fundamentals. You have been denying them, and you denied them again just this morning. Here's an example from six months ago:
Could you show me where NIST provided anything other than a linear fit...?
http://femr2.ucoz.com/_ph/7/2/155958691.jpg
(full report section...
http://femr2.ucoz.com/_ph/7/563913536.png)
(Might have missed it, but only see a linear regression.)
You did miss it. The image you displayed immediately under your question contained the equation for NIST's nonlinear curve fit.
The result of a curve fit is better than a linear approximation,
That was just yesterday. You were telling us that your polynomials must be better than NIST's approximations because (you thought) NIST's approximations were linear.
You are also carrying forward your personal opinion that a linear fit is superior to a curve fit. I'm afraid I don't agree.
You misrepresented my opinion because you didn't realize that NIST's approximations were nonlinear.
Come now. Two totally differing methods, one employing curve fitting, the other employing Savitzky-Golay smoothing, with significant similarity in profile trend...
http://femr2.ucoz.com/_ph/7/350095033.png
http://femr2.ucoz.com/_ph/7/628055186.png
...one of...
http://femr2.ucoz.com/_ph/7/408829093.gif
...as opposed to...
Edited by LashL:
Removed quoted repetitive oversized images
..the red line.
You ignored the black line and its formula in the boxed legend in the upper left. The red line is NIST's linear approximation to part of NIST's nonlinear approximation. The small but obvious differences between the red line and the nonlinear black line indicate that even in Stage 2, NIST knew the acceleration was not constant during that stage, and NIST did not even approximate the acceleration by a constant in their primary (black line) approximation.
Let's keep some perspective eh.
Again, I suggest that, rather than apply your time to reverse engineering pre-existing graphs (as you apparently cannot find enough detail in the thread to replicate), you simply use the base data to produce your own.
It would not have been necessary for me to reverse engineer your graphs had you provided the numerical details necessary to evaluate them. With what little you have told us, we don't even know the inputs you gave to your Excel plug-in (such as the time interval over which you asked it to minimize the residual sum of squares).
I now turn to what you wrote this morning, with some things highlighted in yellow.
For this preliminary report, I used just one feature: the NW corner of WTC 7. Because that's the feature femr2 used to derive his approximation, my choice of that feature should give an advantage to femr2's model.
So you are not using NIST's data at all, but comparing a relatively low order (for me) curve fit to a linear fit over certain periods of time, using my data?
No, I was comparing your curve fit to NIST's nonlinear curve fit, using your data.
To assess the magnitude of that advantage, I estimated new values for NIST's parameters from femr2's data for the NW corner, and computed the residual sum of squares for those new estimates as well as for NIST's original model.
Then, again, you are not comparing NIST data to my own, but instead a linear fit over certain periods of time of your choosing to your own lower-degree representation of one of my low-degree curve fit graphs.
No, I was using your data to compare NIST's nonlinear 3-parameter model to an 8-parameter model whose accuracy is very close to that of your 13-parameter Poly(10) model.
Needless to say, I'm becoming less impressed as I go along.
Why not use the high degree curve I was using earlier...
Edited by LashL:
Removed quoted repetitive oversized images
...rather than the lower degree curve produced for clarification of trend...
Edited by LashL:
Removed quoted repetitive oversized images
You can see the differences in trend, including early gradient, time of peak, and post-peak oscillation differences.
Near the beginning of the collapse, where the "early gradient" makes a difference, your Poly(50) model would have performed even worse than your Poly(10) model because its slope is steeper and it intersects the zero-acceleration line at a later time than your Poly(10) model. The "time of peak and post peak oscillation differences" can't make much difference because your Poly(10) model was already pretty accurate in that region.
(As noted below, it is possible that you graphed your Poly(50) curve incorrectly.)
It's not like you don't already know how the profile changes as poly degree is increased. You've seen this...
http://femr2.ucoz.com/_ph/7/408829093.gif
...more than enough times.
More than enough, yes. As an undergraduate, I took two semester courses in numerical analysis. I invented one of the (less important) numerical algorithms your software is using. It is entirely reasonable for you to assume that I understand the consequences of increasing a fitting polynomial's degree.
If I get around to it, I may be able to compare these mathematical models using features of the north wall for which none of them were tuned.
I don't tune the poly fits to a particular T0 btw, which is one reason I use higher degree curves.
Your statement makes no sense, and has nothing to do with what I wrote.
When I recalculated those values using femr2's data for the NW corner, I got
A = 379.69
λ = 1/0.1897
k = 3.796
Meaning you chose the velocity fit.
No. I got those values by minimizing the residual sum of squares, which is exactly what your Excel plug-in does. The difference is that you have been telling your Excel plug-in to assume a polynomial model, while I was assuming NIST's nonlinear model described in section 12.5.3 and explained in even more detail at the top of this post.
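The objective being minimized can be shown with a deliberately crude grid search. This is not the method actually used (that minimization was done partly by hand); it is only a sketch of "pick the A, λ, k that minimize the residual sum of squares." The data below are synthetic, generated from known parameters so the search can be checked against them.

```python
import math
from itertools import product

def model(t, A, lam, k):
    # y(t) = A(1 - exp(-(t/lam)^k)), the 3-parameter NIST-style model
    return A * (1 - math.exp(-(t / lam) ** k))

# Synthetic data from known (illustrative) parameters.
true = (380.0, 5.3, 3.8)
data = [(t / 4, model(t / 4, *true)) for t in range(1, 30)]

def rss(params):
    A, lam, k = params
    return sum((y - model(t, A, lam, k)) ** 2 for t, y in data)

# Exhaustive search over a small candidate grid for (A, lam, k).
grid = product([370.0, 380.0, 390.0],
               [5.0, 5.3, 5.6],
               [3.6, 3.8, 4.0])
best = min(grid, key=rss)
print(best)  # the true parameters lie on the grid, so they win
```

A real fit would replace the grid with a proper nonlinear least-squares routine, but the quantity being minimized is the same.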
For this comparison, I have used his Poly(10) model, which is a polynomial of degree 10. By the time that polynomial has been integrated twice to obtain a model for the displacements, it will have 13 different parameters.
I'm afraid you have made an assumption about the method by which the graph is produced.
Had you described your methods as well as NIST described theirs, I wouldn't have to make assumptions.
So far as I know, femr2 has not stated the numerical coefficients for his polynomials.
Correct.
Which is why I had to reverse-engineer your Poly(10) model.
...and thus introducing an amount of error.
Yes, but not very much. You could have calculated the residual sum of squares before you wrote the above, in which case you would have known exactly how much (or how little) difference it would have made.
Or you could provide the necessary numbers, and let me recalculate it for you.
When I did so, I found that a polynomial of degree 5 would work as well as his Poly(10) approximation, and I used that polynomial for this comparison.
It's not an exact match. (I'm looking at both curves overlaid.) Your version has a *lower* peak, affecting early data *gradient* of course (as we can see when viewing the animation provided, the steepness of the early period of the profile increases as degree is increased, indicating that the motion tends towards higher gradient in *the real world*).
You could have calculated the residual sum of squares before you wrote the above, in which case you would have known exactly how much (or how little) difference it would have made.
Or you could provide the necessary numbers, and let me recalculate it for you.
NIST's models are considerably more accurate than femr's near the beginning of the collapse (from 11 to 13 seconds on femr2's time scale), but are considerably less accurate near the end of femr2's data (at 17.2 seconds on femr2's time scale).
Whilst I have no issue with the actual words there, I think it prudent to point out a few items which might escape the casual reader...
a) You are comparing NIST's velocity equation to your own reverse engineered version of my acceleration profile with a relatively low order poly fit. Your version being even lower degree.
b) You are not comparing NIST's data to my data at all.
c) You have chosen the graph with the highest likelihood for early motion variance from true (see comments about behaviour of curve as degree is increased)
With respect to b):
I have stated several times that I regard your data as superior to NIST's. For the calculations I performed, the idea of comparing your data to NIST's doesn't even make sense. Had I performed my calculations using NIST's data, it would have given NIST's model an advantage because it was fit to NIST's data. I used your data instead, which gave your model the advantage.
With respect to c):
I have already explained why your Poly(50) model would have even more error near the beginning of the collapse than your Poly(10) model (and have acknowledged the possibility that you may have graphed your Poly(50) model incorrectly). You appear to be assuming that increasing the degree of a polynomial curve fit can't do any harm. That's false. The problem of minimizing a residual sum of squares tends to become ill-conditioned as the degree is raised. At some point, the benefit of increasing the degree is likely to become less than the harm done by round-off error in that ill-conditioned problem.
That harm is especially likely to become important at the end-points of the interval over which you are minimizing the residual sum of squares. If you care more about one of the endpoints than about the rest of the interval, as might happen if you care more about the beginning of the collapse than about its middle or end, then curve-fitting with unnecessarily high-degree polynomials can really hurt you.
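The ill-conditioning claim is easy to demonstrate: the condition number of the Vandermonde matrix underlying a polynomial least-squares fit grows rapidly with degree, so round-off error is amplified more and more as the degree rises. The sample times below (roughly the 11 to 17.2 second collapse window mentioned in this post) are assumed for illustration.

```python
import numpy as np

# Condition number of the least-squares design matrix (Vandermonde)
# for polynomial fits of increasing degree over the collapse window.
t = np.linspace(11.0, 17.2, 100)

for degree in (3, 10, 20):
    V = np.vander(t, degree + 1)   # columns t^degree, ..., t^1, 1
    print(degree, np.linalg.cond(V))
```

The printed condition numbers grow by many orders of magnitude; past a certain degree the fitted coefficients carry little reliable precision, which is exactly the harm described above.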
To make the end result of what you are saying clearer to others...
During the early motion, a straight line is a closer fit than a low degree curve fit.
No, that's not what I am saying. What you are saying is that you still don't understand the nonlinear model used in NIST's section 12.5.3.
It is true, however, that NIST's nonlinear model has the virtue of approximating a horizontal line at both t=0 and at infinity. That may be one of the reasons why NIST decided to use that model. NIST's choice of model reveals considerable sophistication.
A rank amateur would be more likely to choose polynomial approximations, not realizing that their accuracy often degrades near the endpoints of the interval you're fitting.
Given that you will clearly already know this, I have to raise my eyebrows somewhat at the effort you're going to, to prove the obvious (whilst at the same time knowing many will wrongly interpret your words as some kind of refutation).
If I may be so bold...tsk
I made that effort because I'm genuinely interested in numerical methods, and because I had never before used the method of least squares to fit an exponential model. Doing that minimization partly by hand was an educational experience, and it gave me an excuse to read several chapters of Hamming's Numerical Methods for Scientists and Engineers.
If you're more interested in learning what happened near the beginning of the collapse than in what happened several seconds later, then the NIST-style models are more accurate for that purpose.
A misleading assertion, I'm afraid.
You are comparing a singular velocity equation (which you call NIST-style models) to a single instance of a curve fit with a low degree.
The highlighted term is new to me, and a Google search shows your post as its only match. I suspect your statement has something to do with your failure to realize that NIST's model is nonlinear.
What happens if you re-do your hitpiece analysis using the degree 50 profile ?
Run the numbers and find out, or tell me its numeric parameters and I'll find out for you. I've already told you what I think is likely to happen, but I could be wrong.
That may be related to a controversy concerning the time at which the collapse begins. If femr2 believes the collapse began later than NIST says, then femr2 may have tuned his polynomials to model the data starting around 12 seconds instead of 11.
Again, I don't force a T0, but I am sure, given the extremely low degree, that your choice of T0 will have had some effect in your, er, comparison.
It is certainly necessary to translate between NIST's time scale and yours, but your highlighted phrase doesn't have anything to do with that.
As you can see from the graph of vertical displacement (position), most of the error in the NIST-style models comes after 16 seconds.
How did you determine T0 alignment?
I came up with the 10.9 second offset by eyeball. I didn't take the time to minimize it properly. I don't think it's terribly critical, but I could be wrong. If I am wrong, then fixing it would improve the accuracy of NIST's model without improving the accuracy of yours (because your models were derived using your time scale).
Before 12 seconds, femr2's approximation models a substantial and sustained (although rapidly diminishing) upward acceleration, which does not actually occur within femr2's data.
That rather depends upon where you have placed T0...
http://femr2.ucoz.com/_ph/7/349864523.png
No, it doesn't have anything at all to do with T0 or with the translation between NIST's time scale and yours, because the fact I stated is a fact about your Poly(10) approximation. It has nothing to do with NIST's model.
Do you not think it would be a good idea to be looking at velocity plots? I have a few...
My second graph is a velocity plot. You may not have recognized it as such because the velocity points, obtained directly from your spreadsheet via central differencing, are completely unsmoothed.
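Central differencing is a one-line computation, sketched here on a made-up displacement series sampled at a uniform interval. The point is that these velocity estimates come straight from the data, with no smoothing and no model.

```python
# Central differencing: v[i] = (y[i+1] - y[i-1]) / (2*dt).
# The displacement values and dt below are hypothetical.
dt = 0.2
y = [0.0, 0.1, 0.5, 1.4, 3.0, 5.5]   # made-up displacement samples

v = [(y[i + 1] - y[i - 1]) / (2 * dt) for i in range(1, len(y) - 1)]
print(v)  # one velocity estimate per interior sample
```

A pleasant property of central differences: on a quadratic (constant-acceleration) displacement they reproduce the velocity exactly, not just approximately.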
NIST's best effort... a straight line.
No. NIST's model is nonlinear.
My best effort to date (imo)...
http://femr2.ucoz.com/_ph/7/350095033.png
Indeed NIST's effort equates almost exactly to the blue *freefall* line on my graph.
I suggest I'm providing a rather more accurate and detailed representation of acceleration over time.
The highlighted sentence is complete nonsense. You're ignoring my graph of acceleration in the three models in order to perpetuate your misunderstanding of NIST's model. NIST's model of the acceleration is not a straight line. As shown in my graph and as explained at the top of this post, NIST's models of displacement, velocity, and acceleration are all nonlinear.
How smoothed profile compares to high-degree curve profile...
Edited by LashL:
Removed quoted repetitive oversized images
You've made some kind of mistake. If you compare the black curve in that graph with the red curve in the graph below, you'll see that they're offset by about a quarter of a second. Both of those graphs claim to be displaying your Poly(50) approximation. At least one of them must be wrong.
Edited by LashL:
Removed quoted repetitive oversized images