Richard Gage Blueprint for Truth Rebuttals on YouTube by Chris Mohr

What worries me is that the incompetence femr2 has displayed in his last few posts calls into question all of the acceleration curves he's posted in this subforum.
LOL. Classic.

I'm not going to respond to femr2's continued trolling.
Saves you admitting your gross distortions.

If he truly does not understand that an acceleration curve must be the derivative of the corresponding velocity curve, then there's no real hope of explaining his error(s) to him.
And yet again you base your assertion that "it's not" on an incomparable sample window.

Interesting to see what you will do to attempt to criticise the data though.

I'm going to digitize his velocity curves
Why not derive the original raw data ? You have it.

calculate the derivatives
To make any valid comparison, you must use a 0.83s window.

If you don't, all you're moaning about is that you might have chosen a different width.

You need days ? lol.

How about...

[attached graph: 62994846.jpg]


Awesome. Useful as :rolleyes:

and then compare those competently calculated acceleration curves to the ones he's been feeding us
LOL. Oh what fun. I'm sure you can choose a number of piecemeal fits to suit your purpose.

At the very least, that will provide a dramatic visual demonstration of the true uncertainty in femr2's acceleration graphs.
Or, alternatively, simply that you choose to reduce the velocity plot into a number of your personally chosen sections, regardless of the true behaviour you obliterate by reducing it to a small number of linear fits.

Why am I giving femr2 10 days? Because the digitization, calculations, and construction of overlay graphs will take several hours of my time, and I'm too busy right now to waste that much time on femr2's obvious mistakes.
Take as long as you like Will.

If you don't like SG smoothing, you really shouldn't be using the velocity data.

Start with the raw data, and most of all, have fun :)

We just went back 2 years. Awesome. Got to love you guys ;)
 
Of course it is. Derived with a ~0.83s window.


Perhaps you need to see the result of narrowing the window.

Perhaps a really old acceleration graph using symmetric differencing rather than SG will help you see why I employ it...
[image: http://femr2.ucoz.com/_ph/7/2/82136974.jpg]

Again, we're looking at similar trend, but a lot more noise.

I'm not interested in the noise. I'm after the trend and magnitude, both of which SG smoothing is very well suited to.
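
(For anyone who wants to see the difference for themselves: a minimal sketch, not the OriginLab workflow behind the graphs in this thread, comparing symmetric differencing with an SG derivative on noisy position data. The frame rate, noise level and window size are illustrative assumptions only.)

[code]
# Minimal sketch, NOT the OriginLab workflow behind the graphs in this thread:
# compare a symmetric (central) difference derivative with a Savitzky-Golay
# derivative on noisy position data.  Frame rate, noise level and window size
# are illustrative assumptions only.
import numpy as np
from scipy.signal import savgol_filter

fps = 59.94                                    # assumed video frame rate
t = np.arange(0, 6, 1.0 / fps)
true_pos = -0.5 * 32.17 * np.clip(t - 1.0, 0, None) ** 2   # toy free-fall (ft)
pos = true_pos + np.random.normal(0, 0.2, t.size)          # add tracking noise

# Symmetric differencing: every bit of noise is amplified in the derivative
v_symmetric = np.gradient(pos, 1.0 / fps)

# SG derivative: local polynomial fit per point, evaluated at the window centre
# (51 samples is roughly the ~0.83 s window discussed in this thread)
v_sg = savgol_filter(pos, window_length=51, polyorder=3, deriv=1, delta=1.0 / fps)
[/code]

The symmetric-difference output shows the same trend buried in much larger point-to-point swings, which is the contrast the graph above is meant to illustrate.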
Your velocity curve is already smoothed.

There is hardly any feature in the acceleration curve that corresponds to what the velocity curve does at the same time.
Instead, at 12.5 seconds, your acceleration curve shows features derived from the velocity curve at both the 12.1 s mark and the 12.9 s mark (and everything in between). If forces start to kick in or stop to act during that interval (drop distance: great), you'll distort their effect.

Ok.


No. The Savitzky-Golay smoothing method performs a local polynomial regression around each point to find the derivative at each point.
Derivative of what? Forget about polynomial, forget regression... In step 1, what is your input data (please please say it's the 60 t|s discrete data couples per second that you grabbed from video), and what comes out in the first step? Surely you get a distance curve before you get a velocity curve, no?

No. The first derivative is obtained by deriving the local point polynomial function during SG smoothing.
Aha! So you DO have local polynomial functions - for the distance curve, right? Those are NOT yet the result of SG smoothing? But ... polynomials are already a smoothed curve to fit the 60/s data points?

Being verbose about it...

For each derived point OriginLab fits a curve to the surrounding 50 samples, derives that curve, and outputs the value of the derived function for the point in question.
Your software gives you the (first) derivative of the fit, but not the function itself??

It repeats that process for every sample.

There are therefore ~60 curves fitted and derived for every second of data.
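
(A reader's sketch of the per-point procedure just described: fit a polynomial to the ~50 surrounding samples, differentiate that polynomial analytically, and take its value at the window centre. The function name and parameters below are illustrative, not OriginLab's.)

[code]
# Reader's sketch of the per-point procedure described above; names and
# parameters are illustrative, not OriginLab's.
import numpy as np

def local_poly_derivative(t, y, half_width=25, order=3):
    """Estimate dy/dt at each sample from a polynomial fit to the surrounding window."""
    dydt = np.full(len(y), np.nan)
    for i in range(half_width, len(y) - half_width):
        window = slice(i - half_width, i + half_width + 1)
        coeffs = np.polyfit(t[window], y[window], order)   # local curve fit
        dcoeffs = np.polyder(coeffs)                       # its mathematical derivative
        dydt[i] = np.polyval(dcoeffs, t[i])                # value at the centre point
    return dydt
[/code]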
Curves fitted to - displacement data points. You have nothing else to fit to.

Trend. Magnitude.

Again, you're welcome to process the raw data in any way you please. I'm confident that you'll tend towards an extremely similar trend and magnitude. I am fully confident that the current graphs represent the "true" behaviour in as much detail as can be extracted from the available data (which I in turn am fully confident is about as accurate as it's technically possible to extract from the available Dan Rather video footage).
I have agreed with that statement about confidence that the trend is there repeatedly.

What's missing is how, after you went from curve fit to velocity (software fits polynomial to data (i.e. discrete displacement|time couples), which, as you say, gets derived instantaneously) to acceleration smoothing. SG fits polynomials to discrete data points. Your velocity curve is a continuous, smooth curve (infinitely many "data" points).

Going all "C7" and performing near-instantaneous value calculations is a huge backwards step for you all, imo.
I wasn't going to do that.

I don't think you fully understand the implications.
I suspect more and more that you don't understand the implication that your acceleration curve is mostly bunk. It doesn't show the interesting features of the real movement where they are in time, and definitely gets their y-values wrong. In other words: It is no more useful than sticking with the raw data. Your method apparently creates artefacts which distract more than enlighten.

I think you can't get around the necessity to find numerical estimates for the uncertainty / margin of error of your measured data, and then intelligently play with that. I think it should be possible to find 3 data points within an interval of 0.5, 0.8, 1.0 seconds or so with which you can prove that the average acceleration was >g over that interval, even if you apply the margins of error fully with a view to minimizing apparent acceleration.
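
(A sketch of the kind of worst-case check meant here: take velocity estimates at the ends of an interval, push the assumed measurement errors in the direction that minimises the apparent acceleration, and see whether the average over the interval still exceeds g. Every number below is a made-up placeholder, not a measurement from the trace data.)

[code]
# Sketch of the worst-case check described above.  Every number is a made-up
# placeholder, not a measurement from the trace data.
g = 32.17                       # ft/s^2
t1, t2 = 12.3, 13.1             # s, hypothetical interval endpoints
v1, v2 = -20.0, -55.0           # ft/s, hypothetical velocity estimates
err_v = 2.0                     # assumed +/- error on each velocity estimate

# Apply the errors so as to MINIMISE the apparent downward acceleration
a_worst = ((v2 + err_v) - (v1 - err_v)) / (t2 - t1)
print(f"worst-case average acceleration: {a_worst:.1f} ft/s^2")
print("exceeds g" if abs(a_worst) > g else "does not exceed g")
[/code]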
 
Excellent questions. It seems to me that without some explicit error analysis, there is no way to distinguish between filtering out "noise" and suppressing "signal."
Agreed. Any smoothing will of course affect both. The general trend emerges with every method I've applied (which is quite a few).

Right. But a question of some interest is: to what extent did those wild fluctuations occur in real life?
I'm sure that the NW corner was experiencing constant fluctuation in its acceleration, being attached to the rest of the descending building.

It's unfortunate that those with a personal agenda against me "suddenly" decided they have "an angle". I assume you can easily see why the SG acceleration curve (with a 0.83s window) and WDC's chosen 0.1s window manual average acceleration calc do not "match", and also how manipulative it is of him to go down this road ? I am more than forthcoming about the procedures by which the data is generated, what its limitations are, ...

tfk and WDC have now effectively joined "forces" with C7 (though they both really do understand the effect of window size). There's progress eh.
 
...It's unfortunate that those with a personal agenda against me "suddenly" decided they have "an angle". ...
You are holding your own under four or five concurrent discussions - serious questioning from friends, attacks from foes and some scattered across middle ground. Keep it up.

...tfk and WDC have now effectively joined "forces" with C7 (though they both really do understand the effect of window size). There's progress eh.
Very significant progress when viewed in an 18 month or so time frame.
 
Your velocity curve is already smoothed.
I know. See graph above with simply reduced acceleration derivation window. Goes wild :)

There is hardly any feature in the acceleration curve that corresponds to what the velocity curve does at the same time.
Make sure you retain how the derivation process works. They both use a wide window.

Instead, at 12.5 seconds, your acceleration curve shows features derived from the velocity curve at both the 12.1 s mark and the 12.9 s mark (and everything in between).
Absolutely. The velocity curve does similar with displacement.

If forces start to kick in or stop to act during that interval (drop distance: great), you'll distort their effect.
Of course. How much fidelity do you think is in the raw data ? It's good, but be realistic. I'm not looking for "jolts" here, and there's no way the Dan Rather data has the fidelity to reveal them. Was hard enough for WTC1 with the much superior Sauret footage. Dan Rather image quality is really not great.

In step 1, what is your input data (please please say it's the 60 t|s discrete data couples per second that you grabbed from video), and what comes out in the first step? Surely you get a distance curve before you get a velocity curve, no?
Of course.

Aha! So you DO have local polynomial functions - for the distance curve, right? Those are NOT yet the result of SG smoothing? But ... polynomials are already a smoothed curve to fit the 60/s data points?
I think you're getting confused with how SG is employed.

I suggest if you're having trouble with my words, that you have a surf, but again in maybe slightly different words...


input(displacement/time)
V
Apply SG filter, taking the value of the first derivative function of each 50 sample curve fit at the center-point of the window as the output (derived) value.
V
output(velocity/time)

Repeat from velocity to acceleration.
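
(A minimal sketch of that pipeline using SciPy's savgol_filter rather than OriginLab; the window length and polynomial order are assumptions chosen to approximate the ~0.83 s / ~50-sample window discussed here, not the exact settings used for the posted graphs.)

[code]
# Minimal sketch of the pipeline described above, using SciPy rather than
# OriginLab.  Window length and polynomial order are assumptions, chosen to
# approximate the ~0.83 s / ~50-sample window discussed in this thread.
import numpy as np
from scipy.signal import savgol_filter

fps = 59.94
dt = 1.0 / fps
# displacement = np.loadtxt("trace.csv")          # the raw t|s couples would go here
displacement = np.cumsum(np.random.normal(-0.1, 0.02, 400))   # placeholder data

# Step 1: SG filter with deriv=1 -> velocity
velocity = savgol_filter(displacement, window_length=51, polyorder=3, deriv=1, delta=dt)

# Step 2: repeat on the velocity -> acceleration
acceleration = savgol_filter(velocity, window_length=51, polyorder=3, deriv=1, delta=dt)
[/code]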

Your software gives you the (first) derivative of the fit, but not the function itself??
There would be 60 per second of data. Say, 360 functions for the normal duration of the graphs you've been looking at for the last year or so.

I have agreed with that statement about confidence that the trend is there repeatedly.
Splendid. What extra information do you think anyone should be able to determine ? I don't think it's at all wise to go beyond general trend and magnitude.

Your velocity curve is a continuous, smooth curve (infinitely many "data" points).
It's a curve made up of the gradient of each associated curve fit at the point in question...if that makes sense to you.

I suspect more and more that you don't understand the implication that your acceleration curve is mostly bunk.
I understand what it does and does not show.

You still appear to be trying to figure out how SG based derivation works, so (no offence, but) how can you really understand what either the velocity or acceleration plots are showing ? I had assumed that after discussing SG smoothing for so long, folk would have understood. I guess that was, well, ... my mistake I suppose.

It doesn't show the interesting features of the real movement
What interesting features ? (beyond revealing the trend in much more detail than previously available)

I just showed you the effect of reducing the window size (in my 2nd-previous post) to the already SG derived velocity data.

What interesting effects are there ?

Your method apparently creates artefacts which distract more than enlighten.
It reveals the trend.

with which you can prove that the average acceleration was >g over that interval
Why would I want to ? I've done this many, many times. Many different methods. Same trend. That you've just (re?)"discovered" that there's a price to pay for the tradeoff between wild fluctuation and trend does not mean I'm going to ditch my considerable learning curve with this particular dataset.

I'll say it now...any competent acceleration plot using my Dan Rather dataset will tend towards the shape present in the SG acceleration plot already provided ;)

I've set that one in stone now :D

Might be worth a reminder of tfk's previous acceleration profile derived from my data...
[attached image: 51514794.png (tfk's acceleration profile)]


There's something familiar there, I think.

[attached image: 446763612.jpg]


Yep, definitely something familiar goin'on. (17-08-2010)
 
No. You'll be looking at some wild fluctuations though.

Right. You'll lose any detail you were trying to find in the process though.

They are the correct results for a 0.83s window.

It means that interpretation of the data is important.
If any of these results is "not incorrect", then I doubt that much of the detail I find at any level is the detail I was trying to find. No level of detail is then inherently better than any other. The more you smooth everything together, the more you are guaranteed that the details you find are a function of the algorithm, not of the reality that gave rise to the original data.

I have "chosen" the window size based on a pretty ridiculous familiarity with the dataset in question. I've derived to acceleration with numerous different differentiation methods, and numerous smoothing methods. Similar trend emerges. Technically SG is by far the most appropriate way to treat this data.

To a certain extent, absolutely. Not keen on the "look nice" bit, but definitely okay with the tradeoff between "wild amplitude" and "detail".
Actually, I can relate to that sentiment. My own area of expertise here is the study of the infamous red-gray chips. The more I look into the data, the more I "see". I have become somewhat adept at manipulating the data such that interesting details pop out - but actually I am often pretty unsure to what extent that detail has not been produced by the manipulation, by amplifying noise or suppressing signal.

Ye gads. Trend. Magnitude.
Here's my impression from tonight's debate:
1. Trend: yes; but I can see the trend easily among the wilder amplitudes of narrower sampling as well as in the smooth gentle curves of the Poly10.
2. Magnitude: No. I have shown that your acceleration curve gets some magnitudes more wrong than I am comfortable with.

Again, trend and magnitude. Acceleration detail impossible to reveal with the other datasets...which I've detailed many a time.
Which comes from the quality of your dataset, not so much the smoothing algorithms.

Of course. You've been discussing the consequences for months.

To quote myself...

My acceleration graph shows:

a) Rapid increase in acceleration from release to somewhat over-g in approximately 1s.
At the end of this period, the NW corner had descended ~9ft
b) Slow reduction in acceleration to approximately g over approximately 1.5s.
At the end of this period, the NW corner had descended ~83ft
c) More rapid reduction in acceleration to roughly constant velocity over approximately 2s.
At the end of this period, the NW corner had descended ~270ft

If you use the velocity graph you'll obviously miss some profile shape detail, but you could say...

~1.75s at ~FFA


I am of the opinion that the additional detail determined from the acceleration data provides invaluable information about the behaviour of the location during descent of the building.
I used to believe these observations, but tonight have learned to be more skeptical.
The big one - ~1.75 s at ~FFA - yes, I still believe you can say that - being aware that "~" implies numerically unspecified imprecision, some +/-, that could be large or small or... so really that is somewhat trivial.

I am also quite confident that during some "significant" interval downward acceleration was significantly larger than FFA. However, if your velocity function is any good (doubts about whether we can put trust in that are of course non-zero now), then there must be an extreme value beyond 42 ft/s^2, which your acceleration data doesn't have; so I suspect, since you underestimate the magnitude of the extreme value, that you overestimate the duration of the >FFA interval.
 
The more you smooth everything together, the more you are guaranteed that the details you find are a function of the algorithm, not of the reality that gave rise to the original data.
No, simply more "averaged".

but actually I am often pretty unsure to what extent that detail has not been produced by the manipulation, by amplifying noise or suppressing signal.
Wise. I don't take any real notice of the small fluctuations in the acceleration profile. Only the shape/trend/magnitude.

1. Trend: yes; but I can see the trend easily among the wilder amplitudes of narrower sampling as well as in the smooth gentle curves of the Poly10.
Fine, though that SG matches (given how it works) is quite a result. It's a little more "refined" than Poly(10)/(50), and I'm inclined to trust "some" of that refinement, such as more rapid transition between release and max acceleration.

Magnitude: No. I have shown that your acceleration curve gets some magnitudes more wrong than I am comfortable with.
One thing to think about...the more smoothed...the "lower" the maximum amplitude will be. Yes ?

But we can look at magnitude. One concern of mine has always been the reliance upon the scaling metrics available (which are scant).

Which comes from the quality of your dataset, not so much the smoothing algorithms.
Sure. SG really is exceptionally good though.

I used to believe these observations, but tonight have learned to be more skeptical.
Not sure that's a great idea. Learned from what ? There's a limit to what I'm prepared to repeat to get back here again. A tedious read through the "femr data analysis" thread might be something you should consider. The progression from a->b has been done before.

However, if your velocity function is any good
Did you use an appropriate window ?
 
Agreed. Any smoothing will of course affect both. The general trend emerges with every method I've applied (which is quite a few).
That's an important statement! I tend to agree with it, and I am fairly sure that tfk and WDC can also agree with it!

I'm sure that the NW corner was experiencing constant fluctuation in its acceleration, being attached to the rest of the descending building.
No doubt from anybody here - except C7

It's unfortunate that those with a personal agenda against me "suddenly" decided they have "an angle". I assume you can easily see why the SG acceleration curve (with a 0.83s window) and WDC's chosen 0.1s window manual average acceleration calc do not "match", and also how manipulative it is of him to go down this road ? I am more than forthcoming about the procedures by which the data is generated, what its limitations are, ...
I think your problem is one of practicability and accessibility: the algorithms you use until your preferred acceleration graph emerges are far from transparent, and difficult to explain - AND the result can't be demonstrated to be better in the sense that it is "truer" than the results from simpler, more straightforward methods.

Your curve looks a lot better than it is, and that can easily mislead and give rise to invalid interpretations.

tfk and WDC have now effectively joined "forces" with C7 (though they both really do understand the effect of window size). There's progress eh.
I disagree here.
C7 has managed to spot a rather obvious inconsistency: You violate the definition of "derivative" when you pretend that your a is "derived" from your v. You speak too loosely; not mathematically. "Derivative" and "to derive" have a specific meaning in math, but you mean something else.
C7 thinks because your a curve is wrong (it is in most of its detail, sorry) that the trend is also wrong.
tfk and WDC are merely pedants who point out that you are wrong about detail and semantics (you are, sorry), without concluding anything about the real motion.
 
...
Fine, though that SG matches (given how it works) is quite a result. It's a little more "refined" than Poly(10)/(50), and I'm inclined to trust "some" of that refinement, such as more rapid transition between release and max acceleration.
As a matter of fact, it looks as if the transition is even more abrupt than your a-curve has it :D
BUT whether this look is not deceiving can only be decided by a more solid analysis of measurement error.

One thing to think about...the more smoothed...the "lower" the maximum amplitude will be. Yes ?
...
Well yes, obviously, for simple smoothing algorithms that do averages.
I was so far under the impression that it is a particular strength of SG to preserve local minima and maxima. Guess I read too much into that.
 
I think your problem is one of practicability and accessibility: the algorithms you use until your preferred acceleration graph emerges are far from transparent
SG has been used for well over a year. Hasn't changed. Was made very clear at the time. Is repeated regularly.

and difficult to explain
Anyone who didn't understand was quite free to ask.

AND the result can't be demonstrated to be better in the sense that it is "truer" than the results from simpler, more straightforward methods.
I'm sure it has (or can if you have found doubt through interest)

Your curve looks a lot better than it is, and that can easily mislead and give rise to invalid interpretations.
I don't know how much clearer I am supposed to be when repeating endlessly about trend/shape/magnitude...who is misleading who ?

Got to be careful interpreting, absolutely.

C7 has managed to spot a rather obvious inconsistency: You violate the definition of "derivative" when you pretend that your a is "derived" from your v.
Window width...

You speak too loosely; not mathematically.
~0.83s window width...

"Derivative" and "to derive" have a specific meaning in math, but you mean something else.
No, I don't.

C7 thinks because your a curve is wrong (it is in most of its detail, sorry) that the trend is also wrong.
How do I get you to let this sink in...

The only difference between WDC's "example" and mine with the same mid-point...was the window width. The period of time over which the average acceleration was computed.

Two different values...-45 ft/s^2 for WDC's 0.11s window. -27 ft/s^2 for my 0.83s window. Same source data. Both "correct".

Wrong ? No. Stop falling in the instantaneous trap.

Do you think that WDC will be using a 0.1s interval when he provides his "competent" acceleration data (from my SG derived velocity data :rolleyes:) ?

The velocity data is derived in the same way as the acceleration data.
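
(The window-width point in code form: the same velocity series gives different "average acceleration" values depending on the interval the change is taken over. The synthetic velocity below is purely illustrative; it does not reproduce the -45 / -27 ft/s^2 figures quoted above.)

[code]
# The window-width point in code form.  The synthetic velocity is illustrative
# and does not reproduce the -45 / -27 ft/s^2 figures quoted above.
import numpy as np

fps = 59.94
t = np.arange(0, 2, 1.0 / fps)
v = -30.0 * t + 3.0 * np.sin(12 * t)        # made-up velocity with a local wiggle

def avg_accel(t, v, t_mid, width):
    """Average acceleration over a window of the given width centred on t_mid."""
    i1 = np.argmin(np.abs(t - (t_mid - width / 2)))
    i2 = np.argmin(np.abs(t - (t_mid + width / 2)))
    return (v[i2] - v[i1]) / (t[i2] - t[i1])

print(avg_accel(t, v, t_mid=1.0, width=0.11))   # narrow window
print(avg_accel(t, v, t_mid=1.0, width=0.83))   # wide window: different, equally "correct"
[/code]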

tfk and WDC are merely pedants who point out that you are wrong about detail and semantics (you are, sorry), without concluding anything about the real motion.
Wrong about what ? Be specific please.
 
As a matter of fact, it looks as if the transition is even more abrupt than your a-curve has it :D
It's possible, but remember that there are two sides to a window.

Determining T0 will always affect correct interpretation of that.

BUT whether this look is not deceiving can only be decided by a more solid analysis of measurement error.
I very much doubt it.

Well yes, obviously, for simple smoothing algorithms that do averages.
Good.

I was so far under the impression that it is a particular strength of SG to preserve local minima and maxima.
Absolutely.

Guess I read too much into that.
Not at all. You really need to stop jumping to conclusions, based on...what ?

The point I was trying to get across to you is that if you're "doubting" whether an >g period is present, then remember that I'm using quite a wide sample window for the smoothing. Even though SG is great at preserving local maxima and minima, it does of course still have an effect, as all smoothing does, kinda by definition yeah. To think that SG will have "no" effect would not be correct. Curve fitting within the sample window is what helps there. Make sense ?
 
...
Window width...

~0.83s window width...

The only difference between WDC's "example" and mine with the same mid-point...was the window width. The period of time over which the average acceleration was computed.

Two different values...-45 ft/s^2 for WDC's 0.11s window. -27 ft/s^2 for my 0.83s window. Same source data. Both "correct".

Wrong ? No. Stop falling in the instantaneous trap.

Do you think that WDC will be using a 0.1s interval when he provides his "competent" acceleration data (from my SG derived velocity data :rolleyes:) ?

The velocity data is derived in the same way as the acceleration data.


Wrong about what ? Be specific please.

http://en.wikipedia.org/wiki/Derivative

Formally, the derivative of the function f at a is the limit

f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}

of the difference quotient as h approaches zero

It is wrong to say, in math, that function f' is "derived" from f if f' is not the derivative of f.
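
For reference, the relationship the two sides keep circling can be written out (a restatement, not a quote from either poster): a finite-window "average acceleration" is a difference quotient, and only its limit as the window shrinks is the derivative.

\bar{a}_w(t) = \frac{v(t + w/2) - v(t - w/2)}{w}, \qquad a(t) = \lim_{w\to 0}\bar{a}_w(t)

The -27 ft/s^2 and -45 ft/s^2 values quoted earlier are both of the first kind, evaluated with different w; neither is the instantaneous derivative.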
 
Agreed. Any smoothing will of course affect both. The general trend emerges with every method I've applied (which is quite a few).

"General trend" seems a bit vague, but I'm not really interested in disputing that. I'd just like to see as clean a representation as possible of the plausible range of "actual" acceleration values, given defensible assumptions about measurement error. That should help us to understand what approaches to smoothing are reasonable. (For instance, what makes you think that the "20 sample window" plot isn't very useful? What is the criterion of usefulness?)

I'm sure that the NW corner was experiencing constant fluctuation in its acceleration, being attached to the rest of the descending building.

OK, then I would like to know what we can and can't say about that "constant fluctuation." It's not that I think it is likely to have much bearing on Big Questions; it's just how I like to see data analysis unfold.

It's unfortunate that those with a personal agenda against me "suddenly" decided they have "an angle"....

tfk and WDC have now effectively joined "forces" with C7 (though they both really do understand the effect of window size). There's progress eh.

I'm sort of maxed out on psychodrama. But I don't think it's unreasonable of WDC to point out that if the acceleration plot is inconsistent with the velocity plot, it's damn hard to understand what we are looking at. (OK, he put it more sharply than that.) I haven't thought very hard about this, but I'm not sure it makes sense to apply the SG smoothing twice. And I think Oystein's questions about how to choose the size of the window are on point.

Tell ya what -- rather than bust your chops about these issues, could you point me to an appropriate "raw" data set to use? I found your data cache but was a bit uncertain what was what. You probably answered this question above, but I missed it.
 
SG employs the mathematical derivative of each curve fit function.

The actual mathematical derivative of a point on the curve, not an average between the two endpoints of the curve.

It is wrong to say, in math, that function f' is "derived" from f if f' is not the derivative of f.
Oh come on. I don't think there's a method using 60-samples-per-second displacement/time data that is more "true" to the pure definition. 60 functions determined per second of data. Each of them mathematically derived. A value from each derived function calculated.
 
For instance, what makes you think that the "20 sample window" plot isn't very useful?
Can't see the wood for the trees ;)

What is the criterion of usefulness?)
Purpose. Mine is to understand the motion of the building as clearly as possible.

I'm sort of maxed out on psychodrama. But I don't think it's unreasonable of WDC to point out that if the acceleration plot is inconsistent with the velocity plot, it's damn hard to understand what we are looking at.
As I hope you've seen...it's not inconsistent. Using a wider sample window to compute the acceleration data is the primary difference between what WDC is saying and what is already on the graph. That's all.

I'm not sure it makes sense to apply the SG smoothing twice.
Do you understand that derivation is part of the "smoothing" ?

And I think Oystein's questions about how to choose the size of the window are on point.
Can't recall that. But happy to even show animation between various window widths if it comes to it, though I'd rather not. Been from a->b a few times.

Tell ya what -- rather than bust your chops about these issues, could you point me to an appropriate "raw" data set to use? I found your data cache but was a bit uncertain what was what. You probably answered this question above, but I missed it.

http://femr2.ucoz.com/load/dan_rather_basic_trace_data/1-1-0-29

Think that should get you started.

I do suggest wading through the "femr analysis" thread before you go much further though.

Dan Rather data contains a lot more noise than the Cam#3 data, even though it is still far more accurate than other datasets available. You may prefer the Cam#3 data if you are going to invest much time, but then also be aware that you'll have significant perspective effects to counter. Tomayto, tomato.

As a possible footnote to this "phase" of discussion, as I've said many, many times...I fully support folk replicating this stuff themselves. I've provided full instructions on how to go about it from raw video data processing through to end results. Anyone prepared to go through the process and end up with even better results? Splendid. Not so keen on those criticising without getting their hands dirty. Quite time consuming, and dating back a couple of years of to-and-fro.

Discussion of femr's video data analysis

I'd rather not repeat all of that again.
 
Can't see the wood for the trees ;)

OK, but what counts as "wood" could depend on the question under investigation.

As I hope you've seen...it's not inconsistent. Using a wider sample window to compute the acceleration data is the primary difference between what WDC is saying and what is already on the graph. That's all.

I think I grasp that, but I'm not sure of all the implications. I'm willing to stipulate, at least provisionally, that you've done all the calculations correctly, so your results aren't inconsistent in that sense. I hope you can grant that they are inconsistent in another sense, and that it isn't irrational to be bothered by velocity and acceleration estimates that are mutually inconsistent, even if each is the correct answer to some question based on the same data.

If the data indicate that the acceleration really varies quite a bit, it would be useful to know that. We should not conflate "noise" = measurement error with "noise" = substantively uninteresting variation from trend (for some value of "substantively uninteresting").

Do you understand that derivation is part of the "smoothing" ?

In principle, I think so -- but I hesitate to say that I "understand" something when I've (1) never used the method myself and (2) never seen the code that applies the method.

http://femr2.ucoz.com/load/dan_rather_basic_trace_data/1-1-0-29

Think that should get you started.

Thanks. Don't expect miracles, but I'll try to ground any further questions in actual data.
 
OK, but what counts as "wood" could depend on the question under investigation.
I assume I've made my intention clear...and note this goes back many moons.

If the data indicate that the acceleration really varies quite a bit, it would be useful to know that.
A backtrack through the numerous and various acceleration plots of yesteryear will hopefully allow you to see that "analysis" converged on trend, for many reasons.

We should not conflate "noise" = measurement error with "noise" = substantively uninteresting variation from trend (for some value of "substantively uninteresting").
Sure, but knowledge of the dataset then comes into play.

In principle, I think so -- but I hesitate to say that I "understand" something when I've (1) never used the method myself and (2) never seen the code that applies the method.
Fullest explanation...
www.wire.tu-bs.de/OLDWEB/mameyer/cmr/savgol.pdf

Thanks. Don't expect miracles, but I'll try to ground any further questions in actual data.
Okay.
 
input(displacement/time)
V
Apply SG filter, taking the value of the first derivative function of each 50 sample curve fit at the center-point of the window as the output (derived) value.
V
output(velocity/time)

Repeat from velocity to acceleration.

:o Not quite correct. Digging out old data (this is all from over a year ago) there was probably a slightly different procedure for acceleration, with an extra smoothing step and narrower window. Will try to find the "master" sheet :) Detail is probably kicking around in the "femr data analysis" thread.

Trend and magnitude...
[attached graph: 819970289.jpg]
 
I think you're getting confused with how SG is employed.

I suggest if you're having trouble with my words, that you have a surf, but again in maybe slightly different words...
He's not the only one. You definitely have a problem with explaining. You are not a very good communicator (to put it mildly) and it's very easy for people to misinterpret your words.

input(displacement/time)
V
Apply SG filter, taking the value of the first derivative function of each 50 sample curve fit at the center-point of the window as the output (derived) value.
V
output(velocity/time)

Repeat from velocity to acceleration.
I think I've finally understood.

So, you have a discrete function with raw position data. You apply a SG filter to it and obtain a smoothed and interpolated continuous function of position data, plus its derivative which I'll call unsmoothed velocity (even if it's gone through a SG smoothing already). Then you apply SG once more to the unsmoothed velocity, and obtain a smoothed velocity function and its derivative which is the acceleration plot.

Am I correct? Is the SG window ~0.83s in both smoothing steps?

I guess that then the obvious mismatch between velocity and acceleration comes from the fact that you're showing an unsmoothed velocity profile and the acceleration for the smoothed one. It's the only explanation that makes sense to me. And therefore Oystein is right in pointing out that you are not showing the derivative of the velocity curve you posted. That is plain wrong. I know it's pretty obvious to say this, but the unsmoothed velocity function and the smoothed velocity function are different functions.

If I am correct, posting the smoothed velocity curve instead might help reduce the noise (I mean in the thread, not in the plots).

It may be time to try to do perspective correction on the Camera #3 video. The noise in Dan Rather makes it highly unusable and that noise is leaking to the thread.

I can't help with that for now, but for future reference, do you have 3D location information for the cameras in Dan Rather and Camera #3, and the NW corner of WTC7? The latter should be pretty easy.
 

Ah - thanks!

"As a rough guideline, best results are obtained when the full width of the degree 4 Savitzky-Golay filter is between 1 and 2 times the FWHM of desired features in the data. ... One sees that, when the bumps are too narrow with respect to the filter size, then even the Savitzky-Golay filter must at some point give out. The higher order filter manages to track narrower features, but at the cost of less smoothing on broad features."​

Since apparently your acceleration curve underestimates even the average (downward) acceleration between (roughly) 12.6 and 13.2 s, as can be demonstrated on the velocity graph, I would suspect that at that location your S-G window was too wide; and as a consequence, the width of that feature (the >FFA trough) will be overestimated - as I said already.

A pretty basic problem is: If/when the features of the position data are relatively wide (conversely: your S-G window is relatively narrow), it won't be smoothed much, and the resulting velocity curve will have lots of noise; to smooth out that noise, you must choose a wide window in the next step, and thus underperform on the narrow features. And if you choose your first window too wide, or the position data has relatively narrow features, you will smooth the data too much, and signal gets lost and won't be recovered when going on to the acceleration filter.

Ideally, we'd want to have an idea first what the width of the features is that we are after, and then choose the window accordingly. And perhaps choose different window sizes in different sections of the timeline.
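
(A sketch of that guideline: vary the SG window relative to the width of a single feature and watch how much of its peak survives the smoothing. Purely synthetic data; the window sizes are illustrative.)

[code]
# Sketch of the guideline quoted above: vary the SG window relative to the
# width of a single feature and watch how much of its peak survives.
# Purely synthetic data; window sizes are illustrative.
import numpy as np
from scipy.signal import savgol_filter

fps = 59.94
t = np.arange(0, 4, 1.0 / fps)
feature = np.exp(-0.5 * ((t - 2.0) / 0.21) ** 2)      # one bump, FWHM ~ 0.5 s
noisy = feature + np.random.normal(0, 0.05, t.size)

for window in (31, 51, 101):                          # ~0.5 s, ~0.85 s, ~1.7 s
    smoothed = savgol_filter(noisy, window_length=window, polyorder=4)
    print(window, round(smoothed.max(), 3))           # peak height drops as window grows
[/code]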
 