Re: My misreading of your chart explanation: My apologies. It demonstrated my failure to read to the end of your post.
Acceleration makes more sense, but the results are still way off. And this, ultimately, is going to be the cornerstone of my point in this thread: Obtaining a reasonable, plausible result in velocity & acceleration is one of the few objective checks that one can employ to get a sense of accuracy & error in the position vs. time graph.
The extended +30g values at the beginning & end of your acceleration graph don't make any sense whatsoever.
And now, I'm going to jump to the end and may lose you. It's better to go step by step, but I'll post a peek at where we're going.
There are folks who are "leveraging" the calculation of acceleration from position data to sell "controlled demolition".
The motivations are irrelevant to this conversation. The relevant question is not "What results can I get from the data and the analysis?" but rather "What results are really justifiable & defensible from the data & the analysis?"
In order to do this right, it is imperative to have a sense of the "sensitivity" of the analysis. That is, how sensitive the answer is to different variables.
When one is taking few data points, then the issue of "smoothing the data" (i.e., frequency filtering) is irrelevant. As soon as one starts gathering data at higher rates, there is a metric that is going to be of the form ∂y * ƒ (where ∂y = position error & ƒ = sampling frequency) which sets a limit on the known accuracy of your results.
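To make that metric concrete, here is a minimal sketch (Python/NumPy rather than my Mathematica setup, with made-up numbers for the position error and the sample rates) of how a fixed read-off error ∂y turns into velocity noise on the order of ∂y * ƒ, and acceleration noise on the order of ∂y * ƒ², once you start differencing the data:

[code]
import numpy as np

dy = 0.5                       # assumed position read-off error per sample (ft)
g = 32.2                       # ft/s^2, the "right answer" for free fall

for f in (5.0, 30.0):          # two illustrative sample rates (samples/s)
    dt = 1.0 / f
    t = np.arange(0.0, 5.0, dt)
    y_true = 0.5 * g * t**2                                        # ideal drop
    y_meas = y_true + np.random.default_rng(0).normal(0.0, dy, t.size)

    v = np.gradient(y_meas, dt)    # first difference  ~ velocity
    a = np.gradient(v, dt)         # second difference ~ acceleration

    print(f"f = {f:5.1f} Hz: accel noise ~ {np.std(a - g):7.1f} ft/s^2 "
          f"(order of dy*f^2 = {dy * f**2:7.1f})")
[/code]

The same ∂y that is invisible in the position curve gets multiplied by roughly ƒ for velocity and ƒ² for acceleration, which is exactly why fast-sampled acceleration blows up without filtering.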
___
[Aside]
It's fascinating that this is virtually identical to a statement of the Heisenberg Uncertainty Principle, which of course applies only to atomic scale objects. But the math may well apply in both areas...
WD, I'm certain that there is some fundamental theorem in Data Acquisition that addresses this issue directly. I suspect that your reference in Numerical Methods probably addresses it. Can you put a name to it?
[/Aside]
___
The end result of all of this is, right now, "art". As soon as someone can point us to the theory, it'll become engineering, where we can quantify the relationship between DAQ ("Data Acquisition") rate and measurement error.
The engineering says that there is error in all measurements and therefore in all calculations. The art says that there are acceptable & unacceptable levels of error.
The acceleration graph that you produced has a clearly unacceptable level of error. It shows the building having an average of about +10g over the first 1.2 seconds or so, which, starting from zero initial velocity, means that the building would have jumped upwards about 180 feet.
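As a rough cross-check on that number: from rest, d = 1/2 * a * t². With a ≈ 10g ≈ 322 ft/s² sustained for 1 to 1.2 seconds, that works out to somewhere between roughly 160 and 230 feet of travel, so an apparent upward excursion on that scale can only be an artifact of the differentiation.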
I trust that you agree that this is an unacceptable level of error...
I've found out, by playing with these equations, that it is pointless to try to fit any single polynomial to the position data if you are going to do the differentiating for velocity & acceleration. If you set the degree of the polynomial high enough to capture the various characteristics of the position over the whole time interval of the fall, you will inevitably be introducing pure artifacts that will blow up when you calculate the velocity and (especially) acceleration.
Instead you have to use "interpolating functions". Both Mathematica (the program that I use) & Maple have them. These don't try to fit the same polynomial over the whole range, but string a bunch of them together, making sure to match values and derivatives at all "connections" between the curve segments.
You can choose polynomial interpolation (for which you can specify the polynomial order) or spline interpolation.
The spline interpolation is designed to give you continuous 2nd derivatives (no kinks). While splines seem like a natural choice, Mathematica will not integrate spline-based interpolation functions, and you want to be able to integrate, because that is a good reality check on your results: integrate twice back up to get total drop, and make sure that it agrees with the original data.
Getting the right drop is a necessary - but not sufficient - check on your acceleration results. You can have higher accelerations for shorter periods of time or lower accelerations for longer periods of time, and achieve the same displacement.
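For anyone who wants to poke at this workflow without Mathematica, here is a rough sketch of the same idea in Python/SciPy. The data below is a made-up free-fall drop with a little noise (not the actual measured trace), and the single 7th-order fit is just a stand-in for "one big polynomial": build a piecewise cubic interpolation, take its 2nd derivative for acceleration, then integrate twice back up and compare the recovered drop with the original.

[code]
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import cumulative_trapezoid

g = 32.2                                    # ft/s^2

# Placeholder position data: free-fall drop plus a little measurement noise.
t = np.linspace(0.0, 5.0, 36)               # roughly 7 points/s after decimation
drop = 0.5 * g * t**2 + np.random.default_rng(1).normal(0.0, 0.3, t.size)

# Piecewise fit (the "interpolating function" idea), not one big polynomial.
spline = CubicSpline(t, drop)
accel = spline(t, 2)                        # 2nd derivative = acceleration

# For contrast: a single 7th-order polynomial over the whole interval.
poly = np.polyfit(t, drop, 7)
accel_poly = np.polyval(np.polyder(poly, 2), t)

# Reality check: integrate the acceleration twice back up to total drop
# (assuming zero initial velocity) and compare with the original data.
vel_back = cumulative_trapezoid(accel, t, initial=0.0)
drop_back = cumulative_trapezoid(vel_back, t, initial=0.0)

print("original total drop :", drop[-1] - drop[0])
print("recovered total drop:", drop_back[-1])
print("spline accel min/max:", accel.min(), accel.max())
print("poly accel min/max  :", accel_poly.min(), accel_poly.max())
[/code]

If the recovered drop doesn't come back to within a few feet of the original, the acceleration curve is telling you about your fitting artifacts, not about the building.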
Here are the curves that I got with your old Camera 2 (NIST ID, the Dan Rather) video. You said that you couldn't see them before.
This curve is your data (smoothed & decimated) vs. NIST's data, after I've adjusted the time offset and applied a 0.93 scale factor to your descent. It matches quite well, with small differences at the start of the collapse - not surprising, because you & NIST used different points on the roof line.
I got this scale factor by comparing NIST's drop to your drop over a central (i.e., "constant acceleration") section of the curve, from about 25' of drop to about 170' of drop. By staying in the middle of the range, I was able to avoid the "where is t0?" question & various dynamics at the end of the drop too.
This is puzzling, and deserves examination. There is no reason that I can see that your scale factor should be off. Could you re-check it against known points on the building, please?
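If it helps with the re-check, this is the sort of comparison I mean, sketched in Python. The two traces below are synthetic stand-ins (one deliberately built to need a 0.93 rescale), not the real NIST or FEMR data, and a least-squares scale over the middle window is just one reasonable way to do it - a simple ratio of drops over the same window gives essentially the same thing.

[code]
import numpy as np

g = 32.2
t = np.linspace(0.0, 4.0, 100)
nist_drop = 0.5 * g * t**2              # stand-in "NIST" trace (ft)
femr_drop = nist_drop / 0.93            # stand-in trace that needs a 0.93 rescale

# Stay in the middle of the range (~25' to ~170' of drop) to dodge the
# "where is t0?" question and the dynamics at the end of the drop.
mid = (nist_drop > 25.0) & (nist_drop < 170.0)

# Least-squares scale factor mapping one trace onto the other over that window.
scale = np.dot(nist_drop[mid], femr_drop[mid]) / np.dot(femr_drop[mid], femr_drop[mid])
print("scale factor:", scale)           # recovers 0.93 here by construction
[/code]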
[Figure: FEMR's drop vs. time (smoothed & decimated)]
This curve shows just your data, 21-sample smoothed & decimated (every 7th point), along with the interpolation curve that fits the data. You can see that the fit is excellent - much better than any single polynomial will give you.
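In sketch form (Python here - the actual work was done in Mathematica, and the plain moving average below is only a stand-in for the smoothing kernel actually used), "21-sample smoothed & decimated (every 7th point)" amounts to this:

[code]
import numpy as np

# Illustrative raw trace at 60 samples/s; the real input is the tracked drop data.
t_raw = np.arange(0.0, 5.0, 1.0 / 60.0)
raw = 0.5 * 32.2 * t_raw**2 + np.random.default_rng(2).normal(0.0, 0.3, t_raw.size)

window = 21                                    # 21-sample smoothing window
smoothed = np.convolve(raw, np.ones(window) / window, mode="same")

decimated = smoothed[::7]                      # then keep every 7th point
print(raw.size, "raw samples ->", decimated.size, "points fed to the interpolation")
[/code]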
[Figure: Velocity (smoothed) vs. time, from smoothed drop data]
[Figure: Acceleration (smoothed) vs. time, from smoothed velocity data]
At the end of the day, we're gonna find out that the acceleration results are very sensitive to the filtering & decimation values used. This makes one nervous that your results might not be objective, but merely a reflection of your prejudices regarding "acceptable values".
Which is precisely why I started this thread: to get some feedback from some of the EEs with background in DAQ and/or signal processing, and to see if there is a more objective way to determine proper filtering of the data.
Nonetheless, I believe this acceleration data to be "reasonable", which gives a "sanity check" on the filtering.
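Here is the kind of sensitivity test I have in mind, again as a Python sketch on placeholder data: rerun the smooth-decimate-interpolate-differentiate chain with a few different window sizes and decimation factors and watch how much the peak acceleration moves. (The window/step pairs below are arbitrary illustrations, not the values used for the plots above.)

[code]
import numpy as np
from scipy.interpolate import CubicSpline

g = 32.2
t_raw = np.arange(0.0, 5.0, 1.0 / 60.0)
raw = 0.5 * g * t_raw**2 + np.random.default_rng(3).normal(0.0, 0.3, t_raw.size)

for window, step in [(11, 5), (21, 7), (41, 10)]:
    smoothed = np.convolve(raw, np.ones(window) / window, mode="same")
    t, y = t_raw[::step], smoothed[::step]
    accel = CubicSpline(t, y)(t, 2)            # 2nd derivative of the interpolation
    core = accel[3:-3]                         # skip edge points, where the
                                               # moving-average roll-off dominates
    print(f"window={window:2d}, every {step:2d}th point: "
          f"peak |a| = {np.abs(core).max():6.1f} ft/s^2")
[/code]

If numbers like the peak acceleration swing around a lot under that kind of sweep, then the "result" is really a property of the filter choices, not of the data.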
Final point: You'll see that the curve I've posted only covers 5 seconds, whereas the one you posted covered 7.
BTW, small point: you'll find that (being the 2nd derivative of a 5th order eqn) the curve you plotted is a 3rd order poly. (Not particularly important.)
That's enough for now.
More later.
tom