Richard Gage Blueprint for Truth Rebuttals on YouTube by Chris Mohr

I must agree here with C7 that the correlation isn't really obvious.
It's not obvious. That's the point I'm trying to get across. Making judgements about subtle acceleration behaviour from a velocity plot is "fraught with danger". Similarly, making literal or instantaneous interpretations about acceleration from the acceleration graph itself is also fraught with danger. Again, look at it in terms of general trend and magnitude.

Between 12.56 s and 12.59 s, the red curve is flat.
I bet it isn't.

So obviously, the cyan curve in that area is not a mathematical derivation of the slope of the red curve in that interval.
Looks like it is affected by that period of "levelling out".

Remember, you're looking at smoothed data.

Savitzky-Golay smoothing essentially performs a local polynomial regression on a series of values to determine the smoothed value for each point. As a "bonus", derivation of each polynomial function can be easily performed to provide derivative data.
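
(If you want to play with the idea, SciPy's savgol_filter exposes the same smoothed-derivative trick directly. A minimal sketch, not the OriginPro workflow used for the actual graphs, and the displacement trace below is made up:)

```python
import numpy as np
from scipy.signal import savgol_filter

# Assumed sample timing (~59.94 fps NTSC video, as discussed later in the thread)
fps = 60 * 1000 / 1001
dt = 1.0 / fps

# Made-up displacement trace (ft): at rest, then free fall from t = 1 s
t = np.arange(0.0, 5.0, dt)
disp = -0.5 * 32.17 * np.clip(t - 1.0, 0.0, None) ** 2

window, order = 51, 3   # savgol_filter wants an odd window; ~50 samples is ~0.83 s here

disp_smooth = savgol_filter(disp, window, order)              # smoothed displacement
vel = savgol_filter(disp, window, order, deriv=1, delta=dt)   # derivative of local fits: velocity
acc = savgol_filter(vel, window, order, deriv=1, delta=dt)    # repeat on velocity: acceleration
```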

Your interpretation may be based upon how you would expect a symmetric difference derivation to behave?

So if you smooth over 20 samples, and if each sample is a frame, we are talking about 0.667 s.
I'll have to check. And a poly-fit, which is then derived, for each point.

And while the red curve definitely exhibits a major deviation above g for a significant period of time, your smoothing whatever blurs that, and makes the cyan line almost useless, in my opinion.
I again suggest ensuring that all three displacement, velocity and acceleration...are viewed together to correctly interpret what's going on.

There's compromise on how wide a sample region to use...between noise and general trend.

Obviously there's some "smearing" when you go from displacement to velocity. That smearing is going to increase when going from velocity to acceleration of course.

I could perhaps produce graphs with narrower sample window, but I don't really see the point.

Getting too literal with 2nd order derivative data is not a great idea.

That's why I refrain from making interpretation beyond general trend.
 
I'm thinking the building (point you chose to track) moved back and appeared to move up? (Yes, I could be totally misunderstanding what this means).

If you covered this before a simple link will do.

:)
Early motion of the NW corner has been discussed to death.

Of course the NW corner was in motion before "release", as has been shown many times with the Cam#3 viewpoint traces...

[images: Cam#3 viewpoint traces of early NW corner motion]


"some" of that motion will affect the Dan Rather trace, though of course noise is there too. T0 was selected after the last unrecoverable peak of vertical motion.
 
Why don't you look a little closer...


Red - Velocity (Scale on LHS ft/s)
Cyan - Acceleration (Scale on RHS ft/s^2)
Black - NIST Linear Fit (Scale on LHS ft/s)

I must agree here with C7 that the correlation isn't really obvious.

For example:

  • Between 12.56 s and 12.59 s, the red curve is flat. That would indicate acceleration = 0. So obviously, the cyan curve in that area is not a mathematical derivation of the slope of the red curve in that interval.
The near-flatness of the red curve doesn't imply the acceleration is zero during that interval. It implies the acceleration is nearly constant within that interval.

That's inconsistent with the cyan curve. It is therefore obvious that femr2 made some mistake(s) in that graph.

ETA: In the above, I'm assuming you were using the word "flat" to mean "straight". If you were using the word "flat" to mean horizontal, then your "acceleration = 0" was correct but your "12.56 s and 12.59 s" was wrong. Either way, there are plenty of obvious discrepancies between the red line and the cyan line, so femr2's graph cannot be correct. It isn't even close to correct.

At ~30 frames per second, each frame represents ~0.033 s, right? So if you smooth over 20 samples, and if each sample is a frame, we are talking about 0.667 s.
IIRC, femr2 has stated that his Savitzky-Golay filtering involves 50 points, not 20. If the sample rate was ~30 fps, then each of his sampled points was influenced by almost 1.7 seconds of data. If the sample rate was ~60 fps, then each of his sampled points was influenced by almost a second of data.

With that much smoothing, it should be obvious that even a correct graph would be unable to support Christopher7's bold statements about instantaneous acceleration. As you noted, the graph we're discussing now cannot be correct.
 
It's not obvious. That's the point I'm trying to get across. Making judgements about subtle acceleration behaviour from a velocity plot is "fraught with danger". Similarly, making literal or instantaneous interpretations about acceleration from the acceleration graph itself is also fraught with danger. Again, look at it in terms of general trend and magnitude.


I bet it isn't [flat].
Almost flat :rolleyes:
If there is such a short interval of an (almost) flat velocity graph, after it had a negative slope, the corresponding acceleration ought to go up; it goes down, which renders either the velocity graph, or the acceleration graph, or both, useless.

Looks like it is affected by that period of "levelling out".
Ever so slightly.

Remember, you're looking at smoothed data.

Savitzky-Golay smoothing essentially performs a local polynomial regression on a series of values to determine the smoothed value for each point. As a "bonus", derivation of each polynomial function can be easily performed to provide derivative data.
What do you smooth?

Your raw data (frames|subpixels) gets converted to raw time|distance I suppose, which gives you a wobbly line.

Then you perform S-G smoothing on that time-vs-distance data, right? The algorithm crawls over (n-1)/2 data points to the left and (n-1)/2 data points to the right of each given point, that about right? And gives you a polynomial that tends to preserve local optima and such features. Okay.

Then what - S-G gives you time-vs-distance data which can be differentiated, right? Is that what you do to get the velocity curve? Do you smooth that curve again? And then when you differentiate a second time - will that again be smoothed?

So by the time we have an acceleration curve, the data has been smoothed three times?

Your interpretation may be based upon how you would expect a symmetric difference derivation to behave?
Uhm, yes and no. Your velocity curve is "smooth", which means its first derivative has no singularities. So I'd expect you to actually do a dv/dt over infinitesimally small intervals - doesn't matter then if symmetric or what. Since S-G gives you polynomials, the derivative should be continuous. So if I see a piece on the v-curve that's "almost flat", I'd expect to see an a-curve that is "almost zero" at the same time.

I hope you don't chop the smooth v-curve into n (some other n :D) discrete (and basically arbitrary) points that you then smooth with S-G?

I'll have to check. And a poly-fit, which is then derived, for each point.
Again - at what level do you do a poly-fit? Distance data only, or velocity again, acceleration again?

I again suggest ensuring that all three displacement, velocity and acceleration...are viewed together to correctly interpret what's going on.
If the velocity graph has features that don't show in the acceleration graph that is supposedly derived from the velocity graph, such as a period of over 0.7 seconds with, on average, -41.81 ft/s^2 (which means it MUST dip even below that value - certainly below -42 ft/s^2 - for part of that interval), and the acceleration graph stays above -39 ft/s^2 all of the time (missing the actual minimum by at least 7.7%), that makes me kinda wonder what use this acceleration graph is.

And it further makes me wonder how good the velocity graph is!

There's compromise on how wide a sample region to use...between noise and general trend.
I roughly understand the nature of that problem. And I am not suggesting that you are missing an important trend, or artificially creating a trend where there is none. I am fairly convinced, as I am now playing with Chandler's data, that there was a significant period of acceleration significantly above g. I do find data triplets in his data that even the largest reasonable application of error margins couldn't make work out as =g or <g, so if Chandler's data is any good at all, he already proves >g. And I have no doubt that your data does that even more clearly.

I just find the curves to be very misleading - they appear to show a lot of detail that has to be an artifact of smoothing both signal and noise.

Obviously there's some "smearing" when you go from displacement to velocity. That smearing is going to increase when going from velocity to acceleration of course.

I could perhaps produce graphs with narrower sample window, but I don't really see the point.

Getting too literal with 2nd order derivative data is not a great idea.

That's why I refrain from making interpretation beyond general trend.
Exactly. And I am getting more and more the feeling that your acceleration graph is already too far down that rabbit hole.
 
The near-flatness of the red curve doesn't imply the acceleration is zero during that interval.
Correct...


That's inconsistent with the cyan curve.
The acceleration curve clearly responds to the change in velocity trend. Smoothed, sure.

It is therefore obvious that femr2 made some mistake(s) in that graph.
There's no scope for "mistakes" in the processing. I perform SG smoothed derivation within OriginPro to the entire dataset. No piecemeal processing. No other data manipulation.

IIRC, femr2 has stated that his Savitzky-Golay filtering involves 50 points, not 20.
I've applied various different widths at various times. Can have a check to see what I used for the graphs being discussed.

As you noted, the graph we're discussing now cannot be correct.
:) I suggest being less hasty. Could you tell me why you think that? You can see the period suggested to be "flat" above. A rethink?

As a constant reminder...of course smoothing will "smear" some details.

Piece of pie to regenerate with smaller sample window, but of course the apparent noise level is going to increase. I'm interested in the trend, as I'm fully aware of the limitations imposed by the raw data. If you folks want to get into discussions about effectively instantaneous value changes...
 
...
ETA: In the above, I'm assuming you were using the word "flat" to mean "straight". If you were using the word "flat" to mean horizontal, then your "acceleration = 0" was correct but your "12.56 s and 12.59 s" was wrong. Either way, there are plenty of obvious discrepancies between the red line and the cyan line, so femr2's graph cannot be correct. It isn't even close to correct.
...

I did mean "horizontal" - was talking about that bit of red line that is smack on -5 ft/s, and *checking again* it goes from t = 12.555 s to 12.593 s ;) (yes, one more significant digit than pixel resolution allows, so rounded that's 12.56 to 12.59 s)
 
If there is such a short interval of an (almost) flat velocity graph, after it had a negative slope, the corresponding acceleration ought to go up
Depends upon smoothing, of course.

Again, getting too literal with instantaneous values is fraught with danger.

The graphs are provided to allow you to see the trend, not overreach and specify instantaneous results.

it goes down, which renders either the velocity graph, or the acceleration graph, or both, useless.
Have a look at the close-up.

I'm aware that many would love to poke holes in the data, but as always you're very welcome to take the raw data and replicate the results.

Or you could always ignore it and use David Chandler's instead ;)

Trend.

Ever so slightly.
Why would you expect it to do more?

What do you smooth?
Say what?

Your raw data (frames|subpixels) gets converted to raw time|distance I suppose, which gives you a wobbly line.
If you zoom in far enough, yes.

Then you perform S-G smoothing on that time-vs-distance data, right? The algorithm crawls over (n-1)/2 data points to the left and (n-1)/2 data points to the right of each given point, that about right? And gives you a polynomial that tends to preserve local optima and such features. Okay.
Yes, and I take the first derivative of each polynomial.

Then what - S-G gives you time-vs-distance data which can be differentiated, right?
No. Features of the SG filter allow output of the first derivative directly by differentiation of the individual polynomials.

Is that what you do to get the velocity curve?
No.

Do you smooth that curve again?
No.

And then when you differentiate a second time - will that again be smoothed?
Yes, though as above.

So by the time we have an acceleration curve, the data has been smoothed three times?
Twice.

if I see a piece on the v-curve that's "almost flat", I'd expect to see an a-curve that is "almost zero" at the same time.
Depends upon smoothing.

I hope you don't chop the smooth v-curve into n (some other n :D) discrete (and basically arbitrary) points that you then smooth with S-G?
No. Full dataset in one go.

Again - at what level do you do a poly-fit? Distance data only, or velocity again, acceleration again?
SG to velocity. SG to acceleration. Been through this numerous times.

that makes me kinda wonder what use this acceleration graph is
Trend. Magnitude. Time.

If you want to see more instantaneous behaviour, perform a simple narrow-band symmetric difference with no additional smoothing. By the time you get to acceleration you'll have random fuzz.
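
(For reference, a narrow-band symmetric difference is just the central-difference quotient. A sketch with made-up numbers showing why the result is fuzz:)

```python
import numpy as np

def symmetric_difference(y, dt, k=1):
    """Central difference: d[i] = (y[i+k] - y[i-k]) / (2*k*dt); endpoints left as NaN."""
    d = np.full(len(y), np.nan)
    d[k:-k] = (y[2*k:] - y[:-2*k]) / (2 * k * dt)
    return d

dt = 1.0 / 59.94
t = np.arange(0.0, 3.0, dt)
vel = -32.17 * t + np.random.normal(0.0, 0.5, t.size)  # assumed 0.5 ft/s of tracking noise

acc = symmetric_difference(vel, dt)  # k=1: ~21 ft/s^2 RMS of noise on a -32 ft/s^2 trend
```

With k = 1 the jitter gets multiplied by 1/(2*dt) ≈ 30, so half a foot per second of velocity noise becomes roughly 21 ft/s^2 RMS of acceleration fuzz on top of a ~32 ft/s^2 signal - the random fuzz referred to above.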

There has to be a trade-off between filtering and noise levels.

Try the "femr2 data analysis" thread for that discussion going round in circles.

And it further makes me wonder how good the velocity graph is!
Replicate it. You have the data.

Rather than complain/doubt, just do it. It's not rocket science.

I just find the curves to be very misleading - they appear to show a lot of detail that has to be an artifact of smoothing both signal and noise.
It's overreaching interpretation that is misleading imo.

I'm continually reminding folk here to look at trend, not instantaneous behaviour. To remember that the data is smoothed. To cross-reference between displacement, velocity and acceleration to ensure a clear understanding. To cross-reference between viewpoints to interpret motion.

Be cautious. Be conservative. Don't over-reach on interpretations.

Yet again... I'm confident that the general trend is "true", and the same trend emerges from multiple different smoothing and derivation methods.

Yet again...


Couple-o-different smoothing and derivation methods there. Very similar trend.

Exactly. And I am getting more and more the feeling that your acceleration graph is already too far down that rabbit hole.
I'm sure that you'd prefer that to be the case.
 
I did mean "horizontal" - was talking about that bit of red line that is smack on -5 ft/s, and *checking again* it goes from t = 12.555 s to 12.593 s ;) (yes, one more significant digit than pixel resolution allows, so rounded that's 12.56 to 12.59 s)
Sorry, my mistake. I misread your 12.56 to 12.59 as 12.6 to 12.9.

The fact remains that femr2's graph is grossly incorrect. The cyan curve, which allegedly represents acceleration, is not even close to the derivative of the red curve that allegedly represents velocity.

That's inconsistent with the cyan curve.
The acceleration curve clearly responds to the change in velocity trend. Smoothed, sure.
Inconsistent. Not even close.

It is therefore obvious that femr2 made some mistake(s) in that graph.
There's no scope for "mistakes" in the processing. I perform SG smoothed derivation within OriginPro to the entire dataset. No piecemeal processing. No other data manipulation.
Yet the graph you posted is obviously incorrect. Either your software made a mistake, or its operator made a mistake. Given the difficulty the operator is having in acknowledging the problem, I'm inclined to suspect operator error.

As you noted, the graph we're discussing now cannot be correct.
:) I suggest being less hasty. Could you tell me why you think that?
Sure. Here's your graph, which has now been posted four times:



Red - Velocity (Scale on LHS ft/s)
Cyan - Acceleration (Scale on RHS ft/s^2)
Black - NIST Linear Fit (Scale on LHS ft/s)
Look at the interval from 12.6 to 12.9 seconds. The red curve is approximately straight during that interval, with an average slope of roughly (-20 - (-5))/0.3 = -15/0.3 = -50 ft/s^2. You said the red line represents velocity, and the cyan line represents acceleration. Since the red line is approximately straight between 12.6 and 12.9 seconds, the cyan line should be approximately constant during that interval, with a value of roughly -50 ft/s^2.

That's not at all what your graph shows. There are only two possibilities:
  • Your graph is incorrect.
  • Your explanation of the red and cyan lines was incorrect.
 
Either your software made a mistake, or its operator made a mistake.
Neither. You're clearly and obviously grabbing an opportunity to "criticise" when you know full well that the data is smoothed (with quite a wide sample window).

I'm sure you have the facilities available, so why don't you replicate?

Had a check, and, yes, 50-sample SG window, with a sample interval of 1/(60*1000/1001) s (i.e. ~59.94 samples per second).

So, let's get your "issue" put into perspective shall we...

You're looking at a brief period of partial velocity "flattening" which occurs over a period of ~0.05s.

You're complaining that it doesn't have a profound effect on derived data with a 50/(60*1000/1001) ~= 0.83s "smoothing" window.

Really?

Look at the interval from 12.6 to 12.9 seconds. The red curve is approximately straight during that interval
You're falling into the C7 trap I'm afraid. No, it's not particularly "straight" at all.

Shall we have another "zoom in"?



I don't see a straight red curve, Will. Looks somewhat wobbly to me.

A straight black line. Sure. Compare, contrast.

But I'll not spoil your fun too much...carry on, take a good 0.83s before you respond ;)

Might be an idea to have a think about a curve fit centered on, what, 12.75s, with a 0.83s width. Shift the centre-point a tenth or so, and do another curve fit...
 
Depends upon smoothing, of course.

Again, getting too literal with instantaneous values is fraught with danger.
Understood, and that's not my issue.

The graphs are provided to allow you to see the trend, not overreach and specify instantaneous results.
That would be fine if they showed a trend, rather than plenty of detail (will come back to that later)

Have a look at the close-up.
Not necessary. The close-up proved what you conceded in the meantime: that the cyan line (acceleration) is NOT the mathematical derivation of the red line (velocity).

I'm aware that many would love to poke holes in the data, but as always you're very welcome to take the raw data and replicate the results.

Or you could always ignore it and use David Chandler's instead ;)
You aren't getting my drift. Maybe I am not being clear about it. I am not going to dismiss your work, I am just critical of the usefulness of your S-G derived acceleration curve.

I have no issue with your (raw) data, I am (still) cool with S-G for the velocity curve, and I (tentatively) agree fully with the "general trend" claim that there was a significant interval of acceleration significantly over g. This is already pretty apparent from Chandler's data, although Chandler's data is of such low quality that a closer inspection of the error sources might reveal that a function with no period of >g is possible with that data set, whereas I am rather confident that your data will survive such scrutiny more easily.

[snipped a few remarks that I find unimportant now]

Yes, and I take the first derivative of each polynomial.

No. Features of the SG filter allow output of the first derivative directly by differentiation of the individual polynomials.
Ok - slowly: You S-G-smooth the displacement data, right? This gives you polynomials that, if plotted, give a smooth displacement curve that "nicely fits" the discrete displacement data points?
THEN you take the first derivative (ds/dt) of the displacement polynomials and that is your velocity function, which plots out as the red line?

I am confused now - these Noes seem to contradict what you just said about "output of the first derivative directly by differentiation of the individual polynomials" :confused:

Yes, though as above.
Yes to what? And which above? :confused:
(Question was: "And then when you differentiate a second time - will that again be smoothed?")

Twice.

Depends upon smoothing.

No. Full dataset in one go.
...

SG to velocity. SG to acceleration. Been through this numerous times.
SG is a method performed on a set of k data points - my understanding is that k is a finite integer. Thus I understand how you can use SG to smooth your original set of k data points t|s, but I don't understand how you apply SG to already smoothed, continuous functions with an infinite number of data points?

Trend. Magnitude. Time.

If you want to see more instantaneous behaviour, perform a simple narrow-band symmetric difference with no additional smoothing. By the time you get to acceleration you'll have random fuzz.
Right. And any feature of your SG'ed acceleration curve likewise is random fuzz, even if you make it look smoother. And that is misleading any way you put it. Or, if you prefer, it renders the curve pretty useless.


There has to be a trade-off between filtering and noise levels.
Sure.

Try the "femr2 data analysis" thread for that discussion going round in circles.

Replicate it. You have the data.

Rather than complain/doubt, just do it. It's not rocket science.
No need to replicate it. I don't doubt at all that I get the same (or similar) result. What I doubt is that it is of any use.

It's overreaching interpretation that is misleading imo.
Sure, you could put it that way.

I'm continually reminding folk here to look at trend, not instantaneous behaviour. To remember that the data is smoothed. To cross-reference between displacement, velocity and acceleration to ensure a clear understanding. To cross-reference between viewpoints to interpret motion.

Be cautious. Be conservative. Don't over-reach on interpretations.

Yet again... I'm confident that the general trend is "true", and the same trend emerges from multiple different smoothing and derivation methods.
Yet again, I agree with all that.

Yet again...


Couple-o-different smoothing and derivation methods there. Very similar trend.
Between 12.5 s and 15 s, the "interesting period" when "about" freefall and some >freefall occur, the red SG curve has about 9 local minima and 8 local maxima. It drops below FFA at 12.76 s and never goes below -39 ft/s^2. Yet I find that the velocity curve seems to go over FFA no later than 12.6 s, and exhibits an average acceleration of -41.81 ft/s^2 for more than 0.7 seconds between 12.59 s and 13.31 s. If the acceleration curve is in any meaningful way derived from the velocity curve, it must show a period of under -41 ft/s^2 somewhere; if not, if the trough is not deep enough, it must of necessity be wider, to preserve the area under the graph, which is the integral of the acceleration = velocity. If you underestimate the peak fall acceleration, you may overestimate the duration of that >FFA period. Etc.

At the same time I see that the poly10 and poly50 curves show the same general trend, with about the same uncertainty and imprecision.



Those acceleration curves are bitches. The more they show, the more you are bound to get wrong.

I'm sure that you'd prefer that to be the case.
Again, you misinterpret my drift ;)
 
Either your software made a mistake, or its operator made a mistake.
Neither. You're clearly and obviously grabbing an opportunity to "criticise" when you know full well that the data is smoothed (with quite a wide sample window).
I'm just pointing out the obvious fact that the cyan curves in your most recent graphs do not resemble the derivatives of the red curves in those graphs.

From that it follows that either
  • Your graphs are incorrect.
  • Your explanations of the red and cyan curves have been incorrect.

Look at the interval from 12.6 to 12.9 seconds. The red curve is approximately straight during that interval
You're falling into the C7 trap I'm afraid. No, it's not particularly "straight" at all.

Shall we have another "zoom in"?



I don't see a straight red curve, Will.

A straight black line. Sure. Compare, contrast.

But I'll not spoil your fun too much...carry on, take a good 0.83s before you respond ;)
Perhaps you should pay more attention to the labels on your axes.

According to that zoomed-in graph, the red curve shows -5 ft/s at 12.6s and shows -10 ft/s at just beyond 12.7s; let's call it 12.71s.

That means the average acceleration during that interval must be close to (-10 - (-5))/(12.71 - 12.6) = -5/0.11 ≈ -45 ft/s^2.

According to your previous explanations, the cyan curve's units are in ft/s^2 and are shown on the right hand side. In your most recent graph, you have moved the units of acceleration to the left hand side; from those units, it is apparent that the cyan curve never drops below -30 ft/s^2 during the interval [12.6 s, 12.71 s].

If the magnitude of the instantaneous acceleration were always less than 30 ft/s^2 during that interval, then the average acceleration could not be 45 ft/s^2 during that same interval.

Hence your red curve is inconsistent with your cyan curve.

So your curves are incorrect.

This isn't rocket science. It's basic calculus.

I'd prefer not to speculate about the root cause(s) of your error(s). You are in a better position than I am to figure out where you went wrong. Before you can do that, however, you must acknowledge the reality of your error(s).
 
That's not at all what your graph shows. There are only two possibilities:
  • Your graph is incorrect.
  • Your explanation of the red and cyan lines was incorrect.


He doesn't understand enough to see the problem.

He doesn't understand his tools well enough to wield them adroitly.

So his first response will be "words, words, words".

Later, it will become monosyllabic. "Nope" "No" "Wot"

But you can be sure that he will "post, post, post".

Right & wrong don't matter. He figures whoever gets in the last word, wins.

And a month from now, he'll be claiming that he kicked your butt in the discussion.

It's his pattern.
 
I'm just pointing out the obvious fact that the cyan curves in your most recent graphs do not resemble the derivatives of the red curves in those graphs.
You're rather obviously ignoring the sample window that I've made you fully aware of... ~0.83s.

I also assume you understand the Savitzky-Golay filter. Perhaps I shouldn't have made that assumption.

Using a time period significantly under 0.83s is obviously not going to give you the same results.

From that it follows that either
  • Your graphs are incorrect.
  • Your explanations of the red and cyan curves have been incorrect.
Again, neither. You're simply, and I'm sure knowingly, taking advantage of the fact that changing the period over which you derive the "average" will affect the results.

According to that zoomed-in graph, the red curve shows -5 ft/s at 12.6s and shows -10 ft/s at just beyond 12.7s; let's call it 12.71s.

That means the average acceleration during that interval must be close to (-10 - (-5))/(12.71 - 12.6) = -5/0.11 ≈ -45 ft/s^2.
Perhaps you should make sure you use the appropriate window width, namely from 12.235 s to 13.065 s, for the point at 12.65 s.

Primitive average: (-23.8 - (-1.3))/0.83 ≈ -27 ft/s^2

Now, remember that Savitzky-Golay is curve fitting, and so the difference between the value above, and the value on the graph... ~ -23 ft/s^2 ...becomes much easier to understand.
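
(That attenuation is easy to demonstrate. A sketch with synthetic data: a steady -50 ft/s^2 velocity ramp interrupted by a ~0.05 s flat spot, pushed through a 51-point SG derivative. All numbers here are made up; the point is only how little a 3-frame feature registers in an ~0.83 s window:)

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 60 * 1000 / 1001
dt = 1.0 / fps
t = np.arange(12.0, 13.5, dt)

# True acceleration: -50 ft/s^2 everywhere except a ~0.05 s window where it is 0
flat = (t >= 12.555) & (t <= 12.605)
acc_true = np.where(flat, 0.0, -50.0)
vel = np.cumsum(acc_true) * dt        # integrate to get the velocity series

acc_sg = savgol_filter(vel, 51, 3, deriv=1, delta=dt)
print(acc_sg[flat].max())             # stays down in the -40s: nowhere near the true 0 ft/s^2
```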

You can manipulate the data however you please, but you've been told how it was derived and have no excuse for the distortion.

You understand smoothing, so perhaps your opinion is "oversmoothed". Making silly assertions about the graph being incorrect, or my explanation being incorrect is...incorrect (and you know it).

Hence your red curve is inconsistent with your cyan curve.
Nope.

So your curves are incorrect.
Nope.

I'd prefer not to speculate about the root cause(s) of your error(s).
Again, I fully understand the smoothing methods, and just illustrated where you are incorrectly manipulating the data (for your own reasons of course, as I'm sure you do understand the implication of the sample window size)

You are in a better position than I am to figure out where you went wrong.
Haven't gone wrong. Selected quite a wide sample window, which you may disagree with, but that ain't "wrong".

Before you can do that, however, you must acknowledge the reality of your error(s).
0.83s window. The error you are talking about is your own misunderstanding of the data.

Again, just for clarity, the simple average acceleration within the window for the point at 12.65s is ~ -27 ft/s^2, not your alternatively derived -45 ft/s^2.

As I said earlier, I can reduce the window size, but it really doesn't take long before the acceleration data is unreadable, with wild fluctuation, as you would expect.
 
He doesn't understand enough to see the problem.
Incorrect. I recall you using adjacent differencing on the data "a while back", claiming it contained accelerations of 80g. Absolutely hilarious.

Hopefully you've learned much since then.

I also recall you suggesting the best way to derive would be piecemeal curve fitting over ~1s periods.

Well, guess what, ~0.83s period curve fitting. One per sample.

Can you create wild accelerations by using an over-narrow window? Sure. Odd that after W.D.Clinger rebuked you for doing just that, he's now doing it himself in order to appear to score "points".

How interesting. Just so happens I was reading through the posts where WDC was doing just that, just a few moments ago. I'll post the links when the pair of you jump in again ;)

He doesn't understand his tools well enough to wield them adroitly.
lol. It really isn't rocket science.
 
The close-up proved what you conceded in the meantime: that the cyan line (acceleration) is NOT the mathematical derivation of the red line (velocity).
Of course it is. Derived with a ~0.83s window.

I am just critical of the usefulness of your S-G derived acceleration curve.
Perhaps you need to see the result of narrowing the window.

Perhaps a really old acceleration graph using symmetric differencing rather than SG will help you see why I employ it...
[image: old acceleration graph derived via symmetric differencing]


Again, we're looking at similar trend, but a lot more noise.

I'm not interested in the noise. I'm after the trend and magnitude, both of which SG smoothing is very well suited to.

Ok - slowly
Ok.

You S-G-smooth the displacement data, right? This gives you polynomials that, if plotted, give a smooth displacement curve that "nicely fits" the discrete displacement data points?
No. The Savitzky-Golay smoothing method performs a local polynomial regression around each point to find the derivative at each point.

THEN you take the first derivative (ds/dt) of the displacement polynomials and that is your velocity function, which plots out as the red line?
No. The first derivative is obtained by deriving the local point polynomial function during SG smoothing.

Being verbose about it...

For each derived point OriginLab fits a curve to the surrounding 50 samples, derives that curve, and outputs the value of the derived function for the point in question.

It repeats that process for every sample.

There are therefore ~60 curves fitted and derived for every second of data.
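
(Conceptually, then, something like the following sketch - a per-point polynomial fit differentiated analytically. Hypothetical Python, not the actual OriginPro internals:)

```python
import numpy as np

def sg_style_derivative(y, dt, window=50, order=3):
    """Fit a polynomial to the ~window samples around each point, differentiate the
    fitted polynomial, and evaluate that derivative at the point itself."""
    half = window // 2
    out = np.full(len(y), np.nan)
    for i in range(half, len(y) - half):
        idx = np.arange(i - half, i + half)           # the surrounding ~50 samples
        tt = (idx - i) * dt                           # time axis centred on the point
        coeffs = np.polyfit(tt, y[idx], order)        # one local curve fit per sample...
        out[i] = np.polyval(np.polyder(coeffs), 0.0)  # ...differentiated, evaluated at centre
    return out                                        # ~60 fits per second at ~60 fps
```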

Or, if you prefer, it renders the curve pretty useless.
Trend. Magnitude.

Again, you're welcome to process the raw data in any way you please. I'm confident that you'll tend towards an extremely similar trend and magnitude. I am fully confident that the current graphs represent the "true" behaviour in as much detail as can be extracted from the available data (which I in turn am fully confident is about as accurate as it's technically possible to extract from the available Dan Rather video footage)

Those acceleration curves are bitches. The more they show, the more you are bound to get wrong.
The important point is interpreting the data correctly.

Again, I don't have a big problem showing you what happens when you narrow the window. Gets messy rapidly.

I've made it repeatedly clear that the graphs should be used for general trend and magnitude.

Going all "C7" and performing near-instantaneous values calculations is a huge backwards step for you all, imo.

Again, you misinterpret my drift ;)
I don't think you fully understand the implications.
 
...
Using a time period significantly under 0.83s is obviously not going to give you the same results.
...

So if you use a time period significantly under 0.83s, will that give you incorrect results?

How about a time period of significantly over 0.83 s? That will give yet another result, right? Will that result be incorrect?

What is it about 0.83 s that gives you results that are "not incorrect"?

Or if no matter what time period you choose, the result is always "not incorrect", what does that mean then?


I am sure you settled for 0.83 s because the result looked "nice": No over-the-top wild amplitudes, and at the same time an impressive texture of detail. Correct?



What is such an aesthetically pleasing graph "good" for? What is "real" in it, and how do you know it's real? Can anybody learn something from it, that could not be learned from what NIST did, or Chandler did? What? And who?
 
So if you use a time period significantly under 0.83s, will that give you incorrect results?
No. You'll be looking at some wild fluctuations though.

How about a time period of significantly over 0.83 s? That will give yet another result, right?
Right. You'll lose any detail you were trying to find in the process though.

Will that result be incorrect?
No.

What is it about 0.83 s that gives you results that are "not incorrect"?
They are the correct results for a 0.83s window.

Or if no matter what time period you choose, the result is always "not incorrect", what does that mean then?
It means that interpretation of the data is important.

I have "chosen" the window size based on a pretty ridiculous familiarity with the dataset in question. I've derived to acceleration with numerous different differentiation methods, and numerous smoothing methods. Similar trend emerges. Technically SG is by far the most appropriate way to treat this data.

I am sure you settled for 0.83 s because the result looked "nice": No over-the-top wild amplitudes, and at the same time an impressive texture of detail. Correct?
To a certain extent, absolutely. Not keen on the "look nice" bit, but definitely okay with the tradeoff between "wild amplitude" and "detail".

What is such an aesthetically pleasing graph "good" for?
Ye gads. Trend. Magnitude.

What is "real" in it, and how do you know it's real?
Again, trend and magnitude. Acceleration detail impossible to reveal with the other datasets...which I've detailed many a time.

Can anybody learn something from it, that could not be learned from what NIST did, or Chandler did?
Of course. You've been discussing the consequences for months.

To quote myself...

My acceleration graph shows:

a) Rapid increase in acceleration from release to somewhat over-g in approximately 1s.

At the end of this period, the NW corner had descended ~9ft

b) Slow reduction in acceleration to approximately g over approximately 1.5s.

At the end of this period, the NW corner had descended ~83ft

c) More rapid reduction in acceleration to roughly constant velocity over approximately 2s.

At the end of this period, the NW corner had descended ~270ft


If you use the velocity graph you'll obviously miss some profile shape detail, but you could say...

~1.75s at ~FFA


I am of the opinion that the additional detail determined from the acceleration data provides invaluable information about the behaviour of the location during descent of the building.
 
So if you use a time period significantly under 0.83s, will that give you incorrect results?

How about a time period of significantly over 0.83 s? That will give yet another result, right? Will that result be incorrect?

What is it about 0.83 s that gives you results that are "not incorrect"?

Or if no matter what time period you choose, the result is always "not incorrect", what does that mean then?

I am sure you settled for 0.83 s because the result looked "nice": No over-the-top wild amplitudes, and at the same time an impressive texture of detail. Correct?

Excellent questions. It seems to me that without some explicit error analysis, there is no way to distinguish between filtering out "noise" and suppressing "signal."
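
One way to make that error analysis explicit is to run the same pipeline over a synthetic displacement whose acceleration is known exactly, with a plausible amount of tracking noise added, and see how far each window size lands from the truth. A hedged sketch (the noise levels and the acceleration profile below are assumptions, and the single-pass second derivative is a simplification of the two-pass SG approach described earlier):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(42)
fps = 60 * 1000 / 1001
dt = 1.0 / fps
t = np.arange(0.0, 6.0, dt)

# Assumed "true" acceleration (ft/s^2), loosely shaped like the profile described above:
# ramp past g (~32.17 ft/s^2), ease back toward g, then decay toward constant velocity
acc_true = np.interp(t, [0.0, 1.0, 2.5, 4.5, 6.0], [0.0, -34.0, -32.2, 0.0, 0.0])
disp_true = np.cumsum(np.cumsum(acc_true) * dt) * dt   # double integration to displacement

for sigma in (0.05, 0.2):                              # assumed tracking noise (ft RMS)
    disp = disp_true + rng.normal(0.0, sigma, t.size)
    for window in (21, 51, 101):
        acc = savgol_filter(disp, window, 3, deriv=2, delta=dt)
        rms = np.sqrt(np.mean((acc - acc_true) ** 2))
        print(f"noise {sigma:.2f} ft, window {window:3d}: RMS error {rms:5.1f} ft/s^2")
```

Whichever window minimises that error on synthetic data of comparable quality is at least a defensible choice, rather than an aesthetic one.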
 
That's not at all what your graph shows. There are only two possibilities:
  • Your graph is incorrect.
  • Your explanation of the red and cyan lines was incorrect.


He doesn't understand enough to see the problem.

He doesn't understand his tools well enough to wield them adroitly.

So his first response will be "words, words, words".

Later, it will become monosyllabic. "Nope" "No" "Wot"

But you can be sure that he will "post, post, post".

Right & wrong don't matter. He figures whoever gets in the last word, wins.

And a month from now, he'll be claiming that he kicked your butt in the discussion.

It's his pattern.


He's doing his best to prove you right.

What worries me is that the incompetence femr2 has displayed in his last few posts calls into question all of the acceleration curves he's posted in this subforum.

I'm not going to respond to femr2's continued trolling. If he truly does not understand that an acceleration curve must be the derivative of the corresponding velocity curve, then there's no real hope of explaining his error(s) to him.

I'm going to give femr2 about 10 days to figure out where he went wrong. If he hasn't figured it out by then, I'm going to digitize his velocity curves directly from his graphs, calculate the derivatives, and then compare those competently calculated acceleration curves to the ones he's been feeding us. At the very least, that will provide a dramatic visual demonstration of the true uncertainty in femr2's acceleration graphs.

Why am I giving femr2 10 days? Because the digitization, calculations, and construction of overlay graphs will take several hours of my time, and I'm too busy right now to waste that much time on femr2's obvious mistakes.
 