
Merged Discussion of femr's video data analysis

You also seem to be questioning the validity of an equation describing displacement/time,



No, just one that uses sums of displacements with individual magnitudes ranging into the hundreds of miles, to derive actual displacements of up to about 300 feet.

which is a bit odd bearing in mind that NIST calculated one...
z(t) = 379.62{1 - exp[-(0.18562t)^3.5126]}

Which Norse Gods is that one a battle between ? :eye-poppi


Glad you asked.

The argument of the exp function will range from zero, at t = 0, to a very large negative number at very large values of t.

exp(0) = 1, and
exp(a very large negative number) -> 0

and the function remains inside that interval for all values in between.

So, {1 - the exponential part} is 0 at t = 0, and 1 at a large t.

This makes it obvious right away what that first parameter 379.62 represents: the maximum displacement the model can generate, while the rest represents the fraction of that maximum displacement that has occurred as a function of time t. In other words, the displacement scale. We can regard the displacement units as being attached to that coefficient.

Now consider the .18562. We can see right away that it's multiplied by t. So if the number were doubled, it would have the same effect as doubling all values of t in the data. So that parameter establishes our time scale. Changing it would be like stretching or squeezing the x (time) axis of the resulting graph.

But what does the value actually mean? One to any power is one, so when (.18652 * t) = 1, the exponential is exp(-1), regardless of the value of the third parameter. Exp(-1) is 1/e or 0.3679. So after 1/.18652 = 5.36 seconds, the displacement will have reached about (1 - 0.3679) = .6321 of the maximum. This establishes the time it takes the model to displace to 63% of the maximum (a standard benchmark, closely related to the "time constant" of decay functions): in this case about 5 seconds.

Since those two parameters establish the x and y (time and displacement) scales of the model, only the third remains to refine the shape of the curve. The higher that value, the slower the displacement increases initially and the faster it increases near and after that 63% benchmark. That value is dimensionless (time and displacement already being accounted for in the other two parameters) but is proportional to the slope of the curve where t = the "time constant." If it were zero, the whole displacement curve would be a horizontal line at 63% of maximum displacement (never dropping off at all); if it were a large positive value, the displacement would stay very close to 0 up until t reached the "time constant" and then increase suddenly to the maximum.

So to sum up, the model's three parameters represent displacement scale, time scale, and the general distribution of movement over the time period.
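To put numbers on that, here is a minimal Python sketch (my own illustration; A, b and k are just labels for the three printed constants) that evaluates the model and checks the 63% benchmark:

```python
import numpy as np

# NIST's fitted displacement model: z(t) = A * (1 - exp(-(b*t)**k))
A, b, k = 379.62, 0.18562, 3.5126   # feet, 1/second, dimensionless

def z(t):
    """Modelled downward displacement (feet) at time t (seconds)."""
    return A * (1.0 - np.exp(-(b * t) ** k))

t_c = 1.0 / b                 # the "time constant", about 5.4 s
print(z(0.0))                 # 0.0: no displacement at t = 0
print(z(t_c) / A)             # ~0.632 = 1 - 1/e, independent of k
print(z(60.0))                # approaches A = 379.62 for large t
```

Changing k does not move the first two printed values; it only reshapes the curve in between.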

Of course, their linear model is better, for the time period for which it was derived, because it represents not only the characteristics of the displacement curve but the main physical mechanism causing it.

----------

Okay, your turn. What does the subtraction of 162.215785114699 * t^3 from the total displacement mean in your model?

Respectfully,
Myriad
 
the purpose of computing is insight, not numbers

Fifty years after the publication of Richard W Hamming's Numerical Methods for Scientists and Engineers, many of those methods are available to anyone who can manipulate a spreadsheet.

Does that mean Hamming's slogan is obsolete? No. It means his slogan is more important than ever.

At the beginning of Part II, on polynomial approximation, Hamming outlined the approach to approximation that would guide the remainder of his book:
Richard W Hamming said:
Before we begin a computation we must also decide what accuracy we want in the answer and what criterion we shall adopt for measuring this accuracy. This approach may be summarized in the form of four questions:
  1. What samples shall we use?
  2. What class of approximating functions shall we use?
  3. What criterion of goodness of fit shall we use?
  4. What accuracy do we want?
To conclude that introductory chapter on approximation, Hamming writes:
Richard W Hamming said:
Thus we see that the way in which we answer the four questions...can greatly influence the answers that we obtain from the computation. It should also be clear that the answers to these questions must be found in the original problem and not in mathematical treatises or even in books on numerical analysis. Sound computing practice requires constant examination of the problem being studied, not only before the computing is organized but also as it progresses and especially during the stage when the numbers obtained are being translated back to, and interpreted in terms of, the original problem.


Chapter 17, on the theory of least squares approximation, begins by repeating Hamming's four questions. Section 17.13 uses an example to warn against naive use of polynomial curve fitting:
Richard W Hamming said:
With no further thought, a least-squares polynomial was computed, and the results were what one would expect---stupid, thoughtless computing produced foolish results!


Some (but not all!) of the participants in this thread have learned Hamming's lesson.

In other words, you understand that your model has no predictive value, not even within the time interval that it was fit to. Overfitting.

Again the purpose is to get accurate data about the velocity and acceleration behaviour over time, and for that purpose the data, derived curves and additional smoothed datasets are accurate and insightful.
(emphasis added)


Unfortunately, there is no accuracy or insight gained using a model with no predictive value. You can only lose accuracy with each step you depart from the raw data.

Predictive value indicates that the model behaves similarly to ("predicts") the physical system that generated the data. So you can study the model for insight about the system, where additional directly measured data might not be available.

For example, NIST's linear fit of velocity for the WTC 7 collapse data reflects the way mass accelerates linearly in a gravitational field, which was without a reasonable doubt the dominant physical process that was generating that data during that period. The quality of the fit then tells you how closely that model comes to completely explaining the data, and what effects are left over to account for (e.g. by measurement error, or possible additional processes not represented by the model such as interaction of the curtain wall with the core). That is insightful, even though not terribly dramatic or surprising.

You don't really think there were any dynamics describable by tenth-order or fiftieth-order equations going on in the collapse, I hope. So besides unreliable false certainty about details the data is simply not sufficient to establish, such as how long to the hundredth of a second the fall was "above g," what insights about the actual collapse of the actual building are you gaining?


Just to amplify this point, if one makes no reasonable restriction on the number of fit parameters or the analytical expression of the fit equation, it is trivial to obtain an absolutely perfect fit (residuals = 0) to any data set. However, this will tell you absolutely nothing about the system or about your data.
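A throwaway Python illustration of the point (synthetic data, nothing to do with the WTC traces): fit n noisy points with a degree-(n-1) polynomial and the residuals vanish, yet nothing has been learned.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 12)                         # 12 sample times (s)
y = 0.5 * 32.2 * t**2 + rng.normal(0.0, 2.0, t.size)  # free fall + noise (ft)

# A degree-11 polynomial through 12 points is an exact interpolant.
coeffs = np.polyfit(t, y, deg=11)
print(np.max(np.abs(y - np.polyval(coeffs, t))))      # ~0: a "perfect" fit

# But it tells us nothing about the system: step just past the data...
print(np.polyval(coeffs, 5.5))       # compare with 0.5*32.2*5.5**2 ~ 487 ft
```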

The NIST fit is superior where it matters. [....]

This is the kind of thing hammered into students in undergraduate physics laboratory classes. It's very important.


femr2 asked an excellent (albeit oddly phrased) question:

You also seem to be questioning the validity of an equation describing displacement/time, which is a bit odd bearing in mind that NIST calculated one...
z(t) = 379.62{1 - exp[-(0.18562t)^3.5126]}

Which Norse Gods is that one a battle between ? :eye-poppi


All three parameters of NIST's nonlinear model have clear physical significance. The equation
z(t) = A{1 - exp[-(t/λ)^k]}
models a collapse that begins and ends with zero velocity and zero acceleration, as would be appropriate for a structure that fails gradually and eventually comes to rest as its impact is gradually absorbed by a lower structure. A fourth parameter could have been used to distinguish the failing part of the collapse from the impacting part of the collapse, but NIST must have decided that parameter was unnecessary for their purposes. (See questions 2 and 4.)

The parameter λ = 1 / 0.18562 = 5.39 is the characteristic duration (in seconds) of the modelled collapse: the time at which the modelled displacement reaches 1 - 1/e, or about 63%, of its maximum.

The parameter A = 379.62 is the height (in feet) of the modelled collapse. This is not the height of the building, but it's the height of the upper portion of the building that's being modelled by the collapse; one would expect the lower part of the building to collapse as well when the upper part falls on it, but the collapse of that lower part is not modelled by the equation shown above. NIST could have added another equation to model the rest of the collapse, but I presume the available data were insufficient to support a more complex model. (See question 1.)

The parameter k = 3.5126 determines the suddenness of transition between the failing and arresting stages of the collapse. When combined with parameters A and λ, the suddenness of that transition determines the durations of NIST's Stages 1, 2, and 3 in NCSTAR 1-9 Figure 12-77, as well as the peak acceleration during Stage 2.

So all three parameters of NIST's nonlinear model have an intuitive physical interpretation.
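As a rough numerical check of that description (my own sketch, not NIST's code; the grid and variable names are mine), differentiating the fitted curve shows that it begins and ends at rest:

```python
import numpy as np

A, b, k = 379.62, 0.18562, 3.5126      # NIST's three fitted constants

t = np.linspace(0.0, 12.0, 4801)       # a fine grid over the fitted window
z = A * (1.0 - np.exp(-(b * t) ** k))  # displacement (ft)
v = np.gradient(z, t)                  # velocity (ft/s), central differences
a = np.gradient(v, t)                  # acceleration (ft/s^2)

print(v[0], a[0])      # both ~0: the modelled collapse starts from rest
print(v[-1], a[-1])    # both ~0: ...and is brought back to rest
print(a.max() / 32.2)  # peak modelled acceleration, as a fraction of g
```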

The 11 parameters of femr2's Poly(10) model do not correspond to any physical interpretation. They're just numbers, and provide no insight.

And I say, what the poly(10) model shows is that at t = 11.8785, the displacement was following separate superimposed position trends displacing it downward 603270, 271880, 15038, 30404, and 196 feet (the t, t^3, t^4, t^8, and t^10 terms respectively) while also following other position trends displacing it upward 161995, 766831, 65340, 47226, 5306, and 9409 feet (the constant, t^2, t^4, t^5, t^6, and t^9 terms respectively), with a net result of about 1 foot of downward displacement from its zero position.

Yes, that's exactly what your poly(10) model is saying. Don't blame me.

So now we know, the collapse was a result of a vertical tug-o-war between five of the major gods of the Greek pantheon at the bottom, and six Norse gods at the top. (The one pulling as t^4 was probably mighty Thor, especially when his team got beat despite outnumbering the opponents.)

Or, the model is a ludicrous mathematical fantasy resulting from blindly misapplied polynomial over-fitting, and adds nothing to anyone's understanding of the data or the actual event, instead providing a false pretense of accuracy and "truth."

Crap statistics, or the plot of next summer's CGI action fantasy thriller? I'll let concerned readers decide.

Respectfully,
Myriad
 
No, just one that uses sums of displacements with individual magnitudes ranging into the hundreds of miles, to derive actual displacements of up to about 300 feet.
Resultant R² = 0.999994204 speaks for itself.
Correlation with simple and complex smoothing methods for derived profiles speaks for itself.

Glad you asked.
Not useful for the purpose in hand.

Not enough degrees of freedom to describe the nuances of the real world data.

Doesn't match the raw data so well.

Is based upon errant T0 and sloppy data.

Cannot describe accurate acceleration profile.

No mention of Nordic Gods.

So to sum up, the model's three parameters represent displacement scale, time scale, and the general distribution of movement over the time period.
Great, but no good for my purposes, as demonstrated by the velocity and acceleration curves I have produced.

Okay for NIST's purposes, I suppose, bearing in mind they really didn't care about that section. See my list of NIST data issues.

Of course, their linear model is better, for the time period for which it was derived, because it represents not only the characteristics of the displacement curve but the main physical mechanism causing it.
Check out the Savitzky-Golay curve. Nice ;) Consider the implications for what you are saying about the Poly(10) derived curves.

Check out my more accurate acceleration profile during that period of time.
 
All three parameters of NIST's nonlinear model have clear physical significance.
See response to Myriad above.

You are wandering into spherical cow territory.

What is my purpose ?

It's certainly not to generate a parameterised and generalised model to describe how a building falls down. FAR from it.

I've stated it numerous times now.

It's a data extraction and visualisation exercise.

There's nothing to predict. It already happened.

Enabling the derivation of accurate velocity/time and acceleration/time profiles for the NW corner from the actual trace data.

The Poly(10) curve is void for any other dataset, and any other region of interest.

Any other dataset would require generation of a specific *Poly(10)* fit.

Its sole purpose is to capture the behaviour of the position/time data such that velocity and acceleration graphs with useful time domain detail can be produced.

Nothing to do with the context of the data, simply extraction of detail from the data. Numbers.

The 11 parameters of femr2's Poly(10) model do not correspond to any physical interpretation. They're just numbers, and provide no insight.
They enable what they are there to enable, and do a fine job of it.
 
Ahem...


No, a 60*1000/1001 = ~59.94 frame per second video.


The value is aligned with a frame timestamp, so yes. It's actually a rounded value...

11.8785333333333 = 712 * 1/(60*1000/1001)
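For anyone checking the arithmetic, that is simply (a two-line sketch, taking the stated ~59.94 Hz sample rate at face value):

```python
sample_rate = 60 * 1000 / 1001    # ~59.9400599 samples per second
print(712 / sample_rate)          # 11.878533333... s, the quoted timestamp
print(1 / sample_rate)            # ~0.0166833 s between consecutive samples
```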

LMFAO…

"frames ..."
"fields ..."

and now, fractional picosecond accuracy claims.

You'll notice, Carlos, that femr is as anxious to score imaginary points by using his elastic definition of "frame" as he is determined to avoid the substance of your comment: "what time base accuracy is defensible in measurements taken from NTSC video?"

"plus ca change..."

[I'm just wondering why he stopped typing "3"s? Did his finger get a cramp?]

I hope you guys are catching on.

In femr's world, whoever types most ... wins.
In his world, whoever types last ... wins.

As much as I admire your tenacity, WD, Myriad, et al, if you have either a job or a life, you are never going to be able to out-type him.

I'm just ecstatic to have put in my time, & now I get to sit on the sidelines & watch the show.

If you're going to attempt to make a witty comeback after being corrected on the simplest of items, make sure you get it right

Psst, femr, everyone here is on to your shenanigans.
Carlitos DID get it right.
NTSC is a 30 frame per second system.
HE used a consistent definition of the word.
You don't.
You completely, intentionally obfuscate "the simplest of items".

It's what you do.
 
fractional picosecond accuracy claims
Nope. Note the accuracy I have been stating repeatedly...11.8785s :rolleyes:
Provided the calc to enable carlitos to understand where the number comes from.

elastic definition of "frame"
Nope. Correct usage of the term.

NTSC is a 30 frame per second system.
Incorrect. NTSC is a ~29.97 interlaced frame per second standard.

Rest of your post is *addressing the arguer*

All of it is off topic.
 
There are many reasons to suspect the NIST data for early downward movement from camera #3. They have been discussed in more depth in other threads in the 9/11 forum.

This is why it is a bit strange to support the early movement data over that of femr, who uses camera 5 data instead.


precolldeform.gif



There are excellent reasons for preferring camera 5 data for early motion and they are pretty easy to see.

If you are looking up at the building, choose a point near the middle of the roof-line and think it moves only vertically, your earliest motion is going to be off.


You cannot just pretend the building has no early horizontal motion near the middle of the building.

I do not see anyone defending the NIST early acceleration numbers accounting for any of this.

And there are other problems, too, like the point they chose to measure, shown here
 
femr2 said:
The NIST data suffers from the early motion it captures being primarily *non-vertical*, by way of them performing their motion trace using the Cam#3 viewpoint, which due to perspective effects exhibits up-down motion in the image frame for motion primarily North-South in the real world.

The early motion present in the NIST data is not valid for use in determining vertical displacement, vertical velocity and vertical acceleration.

That's just your opinion. I see zero data to back up your opinion.

What's a little alarming is that you've been repeating this line as if it were a fact, yet never once presented any videographic analysis. Perhaps you would be kind enough to refer us to your data on the subject.
We might be able to evaluate it then.
 
You've seen early deformation of the perimeter.

bowingnorthface2.gif



The NIST is measuring using a camera looking upward. They measure near the center of the wall.


2 gifs. One showing the flexing of the south wall and the other showing that the building falls slightly to the north during the earliest downward motion.


Neither of these actions is considered in the NIST data.

If you are looking up at the building and measuring a point at the top of the south wall near the middle of the building, both flexing of the perimeter and northward motion of the building will be incorrectly measured as downward motion during the earliest movement.


Hopefully now people will begin to understand why femr does not use camera 3 data for early motion. It would be pretty naive to do so.

There is no way the NIST can get good measurements of early downward motion from that angle. Some of us have learned that a while ago.
 
That's just your opinion. I see zero data to back up your opinion.
I assume you are referring to the early motion direction within the NIST Cam#3 trace data ? (as your post is malformed)

In which case, I've shown it in several different ways, the comparison between the NIST velocity profile and mine not the least of them.

However, the simplest way to back up the assertion was presented in this post, within which I draw your attention to the roofline, highlighting the distinct absence of the poorly named *kink*...


Indeed, that the behaviour has been tagged with the word *kink* is further indication of the widespread misinterpretation of the actual behaviour.

Ask yourself...if the distortion of the roofline in this image...

...is vertical, why can you not see it in the Dan Rather image above ? ;)

Indeed, if it WAS vertical, can you imagine the required effect upon the structure of the facade ?

The external appearance of the facade strongly supports the suggestion of North-South movement, rather than vertical.

That the very early motion was similar, but in an oscillatory motion to-and-fro also strongly supports the notion of N-S flexure, as anything else would mean vertical flexure, both down, and UP ! ;)

Vertical *kink* without breaking windows along that fault line ? Really ?

What's a little alarming is that you've been repeating this line as if it were a fact, yet never once presented any videographic analysis.
It's absurdly simple, and should not require more detail than presented above to grasp the basic premise.

Perhaps you would be kind enough to refer us to your data on the subject.
We might be able to evaluate it then.
Who is this *we* ? You mean you ? What say you to the above ?

Also consider this (rather odd) statement within the NIST report...
NIST said:
There was another observable feature that occurred after the global collapse was underway and thus could not be captured accurately in the simulation. After the exterior facade began to fall downward at 6.9 s, the north face developed a line or “kink” near the end of the core at Column 76. As shown in Figure 5-205, the northeast corner began to displace to the north at about 8.8 s, and the kink was visible at 9.3 s. The kink and rotation of the northeast façade occurred 2 s to 3 s after the exterior façade had begun to move downward, as a result of the global collapse.
Rather telling. Quite a damning example of the NIST interpretation of motion wouldn't you say.
 
Three points and then some conclusions:

1. A typo in my previous post has been pointed out to me. I transposed two digits in one of the NIST exponential model parameters, and as a result also slightly miscalculated the time value (the reciprocal of that parameter).

2. femr2 challenged me to explain the meaning of NIST's exponential model, and I provided the explanation. Why do you think he did not address my own challenge in turn? Fair is fair, and I didn't ask him to explain the whole thing, just one of its eleven terms.

Okay, your turn. What does the subtraction of 162.215785114699 * t^3 from the total displacement mean in your model?


3. The claim that "R² speaks for itself" is not supported by the data analysis literature. Even the Wikipedia article (Coefficient of determination) discusses "inflation of R²" and the appropriate use of "adjusted R²" in such cases.
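For anyone unfamiliar with the adjustment, here is a minimal Python sketch of the standard formula (synthetic data and arbitrary polynomial degrees, purely to illustrate the formula):

```python
import numpy as np

def r2_and_adjusted_r2(y, y_fit, n_params):
    """Raw R^2, and R^2 penalised for the number of fitted parameters."""
    n = len(y)
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)
    return r2, adj

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 40)                      # normalised time
y = 100.0 * t**2 + rng.normal(0.0, 1.0, t.size)    # quadratic + noise

# Raw R^2 can only go up as parameters are added; adjusted R^2 need not.
for deg in (2, 10):                                # 3 vs 11 parameters
    y_fit = np.polyval(np.polyfit(t, y, deg), t)
    print(deg, r2_and_adjusted_r2(y, y_fit, deg + 1))
```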

As W.D.Clinger cited, Hamming sums it up:

With no further thought, a least-squares polynomial was computed, and the results were what one would expect---stupid, thoughtless computing produced foolish results!


I don't think there's anything terribly wrong with wasting one's time thoughtlessly computing foolish results. But claiming one's own work is superior to expert analysis because the experts did not also waste their time foolishly chasing the largest possible R² themselves, is Dunning-Kruger writ large.

Now, a brief summary, keeping in mind that we're talking about several different things here.

1. Collapse data observations.

I think femr2 has made a good case that his raw data on the north wall collapse is better quality than NIST's. Some (in my opinion) unfair griping has been directed femr2's way that he "only" applied packaged software to the problem, but there's nothing wrong with that. Many science fairs and probably even a few Nobel prizes have been won by people who got their hands on the right tool at the right time.

2. The value of NIST's observations.

Nonetheless, the possibility that better raw data acquisition could have been done with better software tools in no way diminishes the quality or value of the data that NIST did collect and use. That NIST "should have" done differently (used more frames, different video, measured more or different points, etc.) is entirely a matter of opinion, and is so far unsupported by any evidence (or even any rationale for expecting) that the conclusions of the study vis a vis building performance would have been affected.

3. Models and data analysis.

Recent discussion here has shown that NIST's models and analysis are clearly superior to femr2's, and why. His argument to the contrary has come down to "mine's bigger!" (referring to R² values), which is not only a foolish argument, it's a well-known foolish argument that arises in cases of this type when certain methods are applied naively.

Hence, the best analysis available at present can be achieved by fitting NIST's models to femr2's data, as W.D.Clinger has been doing.

Respectfully,
Myriad
 
Thanks Myriad and WDC! Impressive posts!
Some here are doing mathematics, and some are crunching numbers.

I guess when we talked about the "relevance" of femr's number-crunching, the issue behind that really was "which insights can be gained".

I guess we all appreciate more accurate raw data. But it doesn't really seem like anyone has yet drawn any insights from it, or could at least outline in what direction insights are to be expected.

Certainly, a Poly(10) is not the way to go (anywhere). Neither is maximizing R². Smoothing and curve-fitting the data without first estimating the inherent margin of error and stating a purpose is pretty moot.
 
femr2 challenged me to explain the meaning of NIST's exponential model
Incorrect. Based upon your Norse God description of the Poly(10) terms, I asked...
Which Norse Gods is that one a battle between ?
Your answer didn't mention Norse Gods at all :(

I didn't ask him to explain the whole thing, just one of its eleven terms.
I have highlighted a particularly relevant point several times now...

I am not building a generalised model to describe building motion. I am using curve fitting techniques solely to capture the subtle variations in the displacement/time data in a form suitable for deriving velocity and acceleration profiles with increased trace-specific detail purely within the specified ROI.

The repeated attempts to emphasise the relevance of the parameters in the simplistic NIST displacement equation in terms of NIST's aims are completely irrelevant.

Interesting and informative, sure, but utterly beside the point.

The proof is in the pudding, as they say, and the resulting acceleration profile graphs are just that.

By its very nature, the NIST equation cannot reproduce the subtle changes in acceleration over time, as it simply does not have enough degrees of freedom.

The S-G graph is produced via an entirely different method, and results in a distinctive trend shape.

The Poly(10) (and Poly(50)) acceleration graphs match that trend shape very well.

The derived NIST acceleration graph does not, and cannot, reveal that additional detail.

Revealing that detail is the entire point of what I'm doing with the curve fitting method used to generate the Poly(10) equation, and it is, as W.D.Clinger indicated "spectacularly accurate on the evaluation data that overlap with its training set".

It contains enough degrees of freedom that when derived for velocity, it is capable of retaining detail within the velocity profile as-yet unseen. Similarly for acceleration. Feel free to point me to more detailed acceleration profile graphs for WTC7 by anyone on t'planet though...

The claim that "R² speaks for itself" is not supported by the data analysis literature.
It is, however, supported by the results, namely the velocity and acceleration graphs, which, again, match the S-G curve very well indeed.

But claiming one's own work is superior to expert analysis because the experts did not also waste their time foolishly chasing the largest possible R² themselves, is Dunning-Kruger writ large.
The level of detail within the derived velocity and acceleration graphs is clearly superior to that of NIST's graphs.

Nonetheless, the possibility that better raw data acquisition could have been done with better software tools in no way diminishes the quality or value of the data that NIST did collect and use.
However, the extensive list of issues I have highlighted does "diminish the quality or value of the data that NIST did collect and use.".

That NIST "should have" done differently (used more frames, different video, measured more or different points, etc.) is entirely a matter of opinion
Sure, but my purpose is to show the lower level detail they could not, as they didn't have data of a high enough quality or resolution to do so. As a consequence it is possible to provide more accurate information about certain specific NIST assertions they based upon their lower accuracy data and derivations.

Recent discussion here has shown that NIST's models and analysis are clearly superior to femr2's, and why.
Incomparable purposes.

To simplify...

The Poly(10), Poly(50) and Savitzky-Golay acceleration profiles within the graph above are clearly superior and significantly more detailed than the rudimentary NIST curve in green.

My purpose is to reveal that detail.

Within this context and purpose you are clearly incorrect.

His argument to the contrary has come down to "mine's bigger!"
See the vastly more detailed acceleration profile than NIST's (via two entirely different methods) above ;)

It is not difficult to reveal the acceleration trend using simple symmetric differencing...
82136974.jpg

...but such methods amplify the inherent noise in the raw data. Thus various techniques have been applied to filter out such noise and reveal the underlying trend with more clarity. My preference is the Savitzky-Golay profile. Poly(10) and Poly(50) simply support its validity (and vice-versa)
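For the record, the two approaches look roughly like this in Python (an illustrative stand-in trace; the window length and polynomial order are not the settings used for the graphs above):

```python
import numpy as np
from scipy.signal import savgol_filter

# Stand-in displacement trace sampled at ~59.94 Hz (illustrative, not the
# actual NW-corner data): constant-acceleration fall plus measurement noise.
dt = 1001.0 / 60000.0
t = np.arange(0.0, 8.0, dt)
z = 0.5 * 32.2 * t**2 + np.random.default_rng(2).normal(0.0, 0.05, t.size)

# Simple symmetric (central) differencing: amplifies the noise enormously.
a_diff = np.gradient(np.gradient(z, dt), dt)

# Savitzky-Golay: fit a local polynomial in a sliding window and take its
# second derivative analytically.
a_sg = savgol_filter(z, window_length=51, polyorder=3, deriv=2, delta=dt)

print(a_diff.std(), a_sg.std())   # scatter about the true 32.2 ft/s^2
```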

Hence, the best analysis available at present can be achieved by fitting NIST's models to femr2's data
To achieve what ? Doing so directly PRECLUDES being able to reveal additional detail within the acceleration/time data, by the very definition of the NIST function.

You are ignoring the purpose.
 
Incorrect. Based upon your Norse God description of the Poly(10) terms, I asked...

Your answer didn't mention Norse Gods at all :(
...

Which highlights the fact that your Poly(10) model makes no sense in the real world. It is, as Myriad so aptly explained, not a model for the data, but only a stupid, purpose-free compression algorithm. You have 10 Gods in your Poly(10) model, pulling on the poor north face.
The simpler, elegant NIST model has three parameters that make actual physical sense.

What could possibly be gained from the application of your 10 parameters (Norse Gods) - in any frame of reference? The accuracy with which you state these parameters (many, many trailing digits) far exceeds the accuracy of the raw data. What good could this possibly do?
 
compression algorithm.
If you want to look at it that way... Bingo ;)

NIST model has three parameters that make actual physical sense.
And is incapable of revealing the fine velocity and acceleration profile detail over the ROI.

What could possibly be gained from the application of your 10 parameters (Norse Gods) - in any frame of reference?
Ye Gads man....

The resultant profile graph detail.

How many times do I have to say it ?
 
http://img836.imageshack.us/img836/949/precolldeform.gif
Interesting, in that the southwest corner appears to move downwards as the northwest one moves backwards.

What does that tell you?

Specifically, what does that tell you about the center of gravity and the time of release?
 
If you want to look at it that way... Bingo ;)

And is incapable of revealing the fine velocity and acceleration profile detail over the ROI.

What can the Poly(10) reveal that the raw data doesn't? To the extent that the Poly(10) deviates from the data, it gives us compression artefacts, not real details.

Ye Gads man....

The resultant profile graph detail.

How many times do I have to say it ?

And what could possibly be gained from the application of your resultant profile graph detail - in any frame of reference? The graph shows compression artefacts, not reality. What good could this possibly do?
 
It is not difficult to reveal the acceleration trend using simple symmetric differencing...
http://femr2.ucoz.com/_ph/7/2/82136974.jpg
...but such methods amplify the inherent noise in the raw data.


And your methods instead fit to that noise and error and repeat it back to you (with great "accuracy") which you then call "additional revealed detail." Amazing, noise and error are not only eliminated but magically converted into signal!



Why not just use the s-g smoothing and move on? That lets you derive acceleration, reasonably reflects the limitations of the available data, and still shows in the data what seems to be your major points (if you ever go anywhere with them) about above-g acceleration and the timing of the phases. That's what you need, unless your need is to make dick-wagging claims about "a model superior to NIST's"

Respectfully,
Myriad
 
I assume you are referring to the early motion direction within the NIST Cam#3 trace data ? (as your post is malformed)

Cut the extraneous deflections out of that sentence and it means nothing. I haven't seen any data to quantify your assertions. We've all seen these video clips; we can all see downward motion, which you and MT are denying the existence of. :rolleyes:


Indeed, that the behaviour has been tagged with the word *kink* is further indication of the widespread misinterpretation of the actual behaviour.

Femr2 - using words like 'malformed', 'misinterpretation', etc. does not constitute any evidence, just a lousy attitude toward the question on your part.

So drop the loaded terminology please and just present facts. ie: what is the vertical/horizontal displacement in metres or degrees per frame that you are referring to?

It seems you haven't measured those, so are just making wild guesses about them. Prove me wrong and show me some hard data.


Indeed, if it WAS vertical, can you imagine the required effect upon the structure of the facade ?

The external appearance of the facade strongly supports the suggestion of North-South movement, rather than vertical.

The burden of proof is not on me, it is your argument, so you make it and state your assumptions as your own, please. The rhetorical questions are not helpful. :(

That the very early motion was similar, but in an oscillatory motion to-and-fro also strongly supports the notion of N-S flexure, as anything else would mean vertical flexure, both down, and UP ! ;)

This is pretty vague, I'm sorry. It's not meaningful enough to rebut. Can you provide some hard data?

Vertical *kink* without breaking windows along that fault line ? Really ?

ie: you don't understand. Fine. Just as I thought.


It's absurdly simple, and should not require more detail than presented above to grasp the basic premise.

LMAO coming from the master of agonizing detail. Suddenly you want to get all oversimplified and vague...:rolleyes:



Rather telling. Quite a damning example of the NIST interpretation of motion wouldn't you say.

No, not at all. Just reveals your out-of-control hostility towards anything that NIST did..... You're being irrational, IMO.
 
Interesting, in that the southwest corner appears to move downwards as the northwest one moves backwards.

What does that tell you?


That should tell you that the motion has a horizontal component and is complex.

That should tell you there is no way in hell that camera 3 data by the NIST can differentiate between vertical motion and other kinds during the earliest movement.

That should tell you that the NIST t=0 is probably screwed up, because the earliest flex or twist motion will be measured as early downward motion, which it is not.


That should tell you why the Femr approach to detecting the earliest motion is far superior to that of the NIST camera 3 efforts. Femr is smart enough to understand the weaknesses of the Camera 3 data and account for them.


This should wake you up to the fact that it isn't so easy to detect the earliest motion, you must be careful.

Note that WD Clinger hasn't even considered these factors yet.

I am looking forward to hearing how he bypasses these difficulties.
 
