Merged Discussion of femr's video data analysis

LOL. "Datapoint pair" and "separation (s)" are super-clear axis labels. Good luck passing a basic math class.
Can you think of any labels that would be more clear? :confused:

ETA - I was reminded of this exchange, which came after more than one label-less graph. femr2 cannot admit error, so I'll stipulate that it was my fault for not understanding his wisdom.
I've since standardised graph format and titles and labels are clear. I note you haven't acknowledged that your recent assertion...
femr2 wants his graphs to remain a mystery. I have been suggesting proper titles and axis labels for a year now. Again, intentional vagueness as a control mechanism.
...was directed at a graph which does have proper titles and axis labels. Accusation of intentional vagueness? How bizarre.
 
femr2's false anomaly

I'm fully aware of Runge's phenomenon,
Runge's phenomenon may be related to one of your mistakes, but involves interpolation (where the number of parameters in the polynomial is equal to or slightly less than the number of data points) rather than approximate curve fitting and smoothing (where the number of parameters is considerably less than the number of data points).
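
To see the distinction concretely, here is a minimal sketch (Python with NumPy, using Runge's classic test function rather than the tower data) of the difference: an interpolating polynomial whose parameter count matches the number of points oscillates badly near the ends of the interval, while a low-degree least-squares fit does not.

[code]
import numpy as np

x = np.linspace(-1.0, 1.0, 21)            # 21 equally spaced sample points
y = 1.0 / (1.0 + 25.0 * x**2)             # Runge's classic test function

# Interpolation: degree-20 polynomial through all 21 points (parameters = data points)
interp = np.polynomial.Polynomial.fit(x, y, deg=20)
# Approximate fitting/smoothing: degree-5 polynomial (far fewer parameters than points)
smooth = np.polynomial.Polynomial.fit(x, y, deg=5)

x_fine = np.linspace(-1.0, 1.0, 401)
# Near the endpoints the interpolant oscillates wildly (Runge's phenomenon);
# the low-degree fit stays tame, though it is only an approximation everywhere.
print("max |interpolant| for |x| > 0.9 :", np.abs(interp(x_fine[np.abs(x_fine) > 0.9])).max())
print("max |degree-5 fit| for |x| > 0.9:", np.abs(smooth(x_fine[np.abs(x_fine) > 0.9])).max())
[/code]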

One of your mistakes was to use a point of great inherent interest (your choice of T0) as the left endpoint for your "region of interest" instead of choosing a training set that would place T0 within its interior.

Another of your mistakes was to select a class of approximating functions whose derivatives tend to be least accurate near the endpoints of your so-called region of interest.

Another of your mistakes was to ignore boundary conditions.

Another of your mistakes was to make unsupportable claims about your model, such as no "abrupt" or "instantaneous" change in acceleration. By denying the applicability of your model to neighborhoods of T0, you cannot say whether there is an abrupt change of acceleration at T0. Indeed, a mere glance at your acceleration graph for Poly(10) would suggest that the acceleration either changes abruptly or is rapidly diminishing from a large upward acceleration at your T0.

That's exactly the kind of anomaly that people like cmatrix or Major_Tom would like to see at T0.

In your measurements, however, that large upward acceleration isn't really there. (The acceleration curve you obtained by Savitzky-Golay smoothing maxes out at about 4 ft/s^2 instead of the 12 ft/s^2 of your Poly(10) polynomial at T0 and 138 ft/s^2 at 11s.) You basically created that false anomaly by
  • ignoring boundary conditions
  • making a foolish choice of the left endpoint for your region of interest
  • making a foolish choice of approximating function
So the claimed value of your polynomial models for refuting cmatrix appears to have been yet another of your mistakes.
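
For readers who would rather see the mechanism than take anyone's word for it, here is a small synthetic sketch (Python with NumPy/SciPy; the frame rate, noise level, onset time and window sizes are invented for illustration, not femr2's measurements) of why a single high-degree polynomial fit, differentiated twice, is least trustworthy at the ends of its fit window, while a Savitzky-Golay derivative estimate behaves better there.

[code]
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
fps = 59.94                                   # assumed frame rate (illustration only)
t = np.arange(0.0, 5.0, 1.0 / fps)
g = 32.2                                      # ft/s^2

# Synthetic "roofline": at rest, then ramping smoothly toward free fall over ~0.75 s
accel_true = -g * np.clip((t - 1.0) / 0.75, 0.0, 1.0)
vel_true = np.cumsum(accel_true) / fps
disp = np.cumsum(vel_true) / fps + rng.normal(0.0, 0.1, t.size)   # ~0.1 ft of noise

# Global degree-10 least-squares fit, differentiated twice
p10 = np.polynomial.Polynomial.fit(t, disp, deg=10)
accel_poly = p10.deriv(2)(t)

# Savitzky-Golay second derivative (local cubic fits over ~1 s windows)
accel_sg = savgol_filter(disp, window_length=61, polyorder=3, deriv=2, delta=1.0 / fps)

# True acceleration at the left end of the window is zero; compare the estimates.
print("poly(10) accel at left endpoint:", round(float(accel_poly[0]), 1), "ft/s^2")
print("S-G accel at left endpoint     :", round(float(accel_sg[0]), 1), "ft/s^2")
[/code]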

I think you mean well. I also think you're out of your depth.
 
That's exactly the kind of anomaly that people like cmatrix or Major_Tom would like to see at T0.

I don't "want to see". I don't care what the results are, as long as they are accurate.

You seem to project your own motivations onto others.

Why would I care one way or another? Anyone can see t=0 problems from the "kink" and camera 3 location. It is not a theory.

Think kink! That part doesn't take a genius.
 
I don't "want to see". I don't care what the results are, as long as they are accurate.

You seem to project your own motivations onto others.

Why would I care one way or another? Anyone can see t=0 problems from the "kink" and camera 3 location. It is not a theory.

Think kink! That part doesn't take a genius.

Can you expand on the kink? Thank you in advance.
 
I have never received a nice post from you. Is there a trick in that last one I am not seeing? Anyway, kink mentioned many times over the last few pages. Can't miss it if you re-read.
 
If t=0 is of the greatest interest, he can expand around t=0 with data from both sides of t=0.

This acceleration profile would be analyzed away from the endpoints.

If femr wants the most accurate acceleration around the t=0 domain, the data would be analyzed around that point in the same way, with the endpoints placed elsewhere.

The time domain in which the acceleration is most sought would be near the center of the expansion. Relocate the endpoints. Redo.

Any acceleration could be found locally in this way.
 
If t=0 is of the greatest interest, he can expand around t=0 with data from both sides of t=0.

This acceleration profile would be analyzed away from the endpoints.

If femr wants the most accurate acceleration around the t=0 domain, the data would be analyzed around that point in the same way, with the endpoints placed elsewhere.

The time domain in which the acceleration is most sought would be near the center of the expansion. Relocate the endpoints. Redo.

Any acceleration could be found locally in this way.

My guess is that if femr does exactly what he did with acceleration, but does it around a different region with t=0 close to the center, the two acceleration profiles together will give us a very good glimpse of early acceleration through the >g hump.

His acceleration is most accurate away from the endpoints, so two profiles covering different time regions could reveal more.

Right now, he has all the acceleration data on one graph. Piecewise fits of overlapping regions should expose what problems are currently at the t=0 endpoint. A sketch of the idea follows.
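
Something like the following (Python with NumPy, using made-up window choices and synthetic data rather than femr's trace) illustrates the idea: fit the same kind of polynomial over two overlapping windows, once with t=0 at the window edge and once with t=0 near the window center, and compare the accelerations where the windows overlap.

[code]
import numpy as np

def window_accel(t, disp, t_lo, t_hi, deg=6):
    """Least-squares polynomial fit on [t_lo, t_hi]; return the window times
    and the fitted second derivative (acceleration) on that window."""
    mask = (t >= t_lo) & (t <= t_hi)
    p = np.polynomial.Polynomial.fit(t[mask], disp[mask], deg=deg)
    return t[mask], p.deriv(2)(t[mask])

# Synthetic displacement with onset of downward acceleration at t0 = 1.0 s
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 300)
disp = -0.5 * 32.2 * np.clip(t - 1.0, 0.0, None) ** 2 + rng.normal(0.0, 0.1, t.size)

# Window A: t0 sits at the left edge (the situation being criticised)
t_a, a_a = window_accel(t, disp, 1.0, 3.0)
# Window B: t0 sits near the center, using data from both sides of t0
t_b, a_b = window_accel(t, disp, 0.0, 2.0)

# Compare the two estimates at t0; a large difference flags an endpoint artefact.
i_b = np.argmin(np.abs(t_b - 1.0))
print("accel near t0, t0 at window edge:", round(float(a_a[0]), 1), "ft/s^2")
print("accel near t0, t0 near center   :", round(float(a_b[i_b]), 1), "ft/s^2")
[/code]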
 
One of your mistakes was to use a point of great inherent interest (your choice of T0) as the left endpoint for your "region of interest" instead of choosing a training set that would place T0 within its interior.
My purpose was to reveal trend detail during the descent, and as you can see...
[Graph: descent data with the Poly(10), S-G and NIST curves overlaid]

...the Poly(10) curve is a pretty good approximation.

Can I reveal more detail around T0 with an additional curve, sure, and as I said I may do so. (Not sure if it's worth the bother given I already have the S-G curve though)

When compared to the NIST curve and the S-G curve, it is clear that the Poly(10) curve does indeed reveal a lot more detail than the NIST curve, which was the intent.

Another of your mistakes was to select a class of approximating functions whose derivatives tend to be least accurate near the endpoints of your so-called region of interest.
Whilst the italicised section is technically true, bearing in mind the intent, your suggestion of *mistake* is specious. Could behaviour around T0 be made more accurate with an alternate curve? Sure, though at the expense of detail at other times (they are end-points, and the fit runs from a near-linear section into a curve).

Another of your mistakes was to ignore boundary conditions.
Not ignored, a compromise. End-point is the end of available data, as the roofline becomes obscured.

Another of your mistakes was to make unsupportable claims about your model, such as no "abrupt" or "instantaneous" change in acceleration.
The trend, as you know, matches the S-G curve very well, and so such claims are shown to be supported using an alternate method. You're talking about the first ~0.25s after T0, perhaps a bit of a stretch to be made directly from the Poly(10) curve alone, but true nonetheless.

Would you rather rely upon the NIST curve? Is that a more accurate representation of the trend in your opinion?

By denying the applicability of your model to neighborhoods of T0, you cannot say whether there is an abrupt change of acceleration at T0. Indeed, a mere glance at your acceleration graph for Poly(10) would suggest that the acceleration either changes abruptly or is rapidly diminishing from a large upward acceleration at your T0.
Getting silly now. You are performing the same kind of literal interpretation that has caused folk to latch onto such phrases as "2.25s of freefall" verbatim. The S-G curve shows that the transition is pretty sharp, but still takes a while to reach gravitational acceleration. (~0.75s)

In your measurements, however, that large upward acceleration isn't really there.
Of course it isn't :rolleyes:

(The acceleration curve you obtained by Savitzky-Golay smoothing maxes out at about 4 ft/s^2 instead of the 12 ft/s^2 of your Poly(10) polynomial at T0 and 138 ft/s^2 at 11s.)
Indeed.

I think you mean well. I also think you're out of your depth.
I think you have come around nicely to accepting that the Poly(10) curve does indeed reveal a lot more information about the actual velocity profile during descent. It's all good :)

As I said, I may well perform an additional fit with an earlier ROI, with which we'll be able to argue some more, and find that it matches the S-G curve behaviour around T0 pretty well.
 
His acceleration is most accurate away from the endpoints, so two profiles covering different time regions could reveal more.
Of course.

Right now, he has all the acceleration data on one graph. Piecewise fits of overlapping regions should expose what problems are currently at the t=0 endpoint.
As you know, I agree piece-wise fits can produce superior end results, but also think that the Poly(10) curve does a fine job of what it was intended to do.

I'll consider doing an early fit to appease the nit-picker (though it's uplifting to see that the overall increase in accuracy is finally sinking in ;))
 
It's what land surveyors (in German, Landvermesser) use. Like a tripod with a telescope on it. If it were trained on a fixed point on the building, and after time that point had moved, they would know that the building was shifting.
...

Oh, a theodolite (Theodolit)! Thanks!
Dang, I could have looked it up, but "transit" sounds so much more like a method than an instrument, that I didn't think a dictionary would have it.
(My father was a publicly licensed land surveyor. Landvermesser is okay, but Vermessungsingenieur (surveying engineer) is more precise, and better still Öffentlich bestellter Vermessungsingenieur, or ÖbVI (publicly appointed surveying engineer). So, my father was an ÖbVI, my brother is an ÖbVI, and my brother in law is an ÖbVI, and naturally, when I was a student, I helped out my dad during school recesses, and worked on the far end of surveys using transits. :D)
 
I have already published videos with my measurements. They have been available since 2009.

Femr2 has conflated several ideas and is looking for something I never claimed to have. I am not going to correct his error, but will let him continue the way he is going. For now. :)

Videos are not a good medium to convey raw data. Do the videos contain your full trace data down to the precision you achieved?
 
So...

Poly(10) revealed new information about the velocity and acceleration profiles, but is invalid before T0.

Poly(50) revealed further detail during the middle of descent, but is inaccurate towards T0.

The Savitzky-Golay profile does not suffer from the same limitations, and provides the most accurate acceleration and velocity profiles to date...



 
effectiveness of femr2's polynomial compression

I think you have come around nicely to accepting that the Poly(10) curve does indeed reveal a lot more information about the actual velocity profile during descent. It's all good :)

As you know, I agree piece-wise fits can produce superior end results, but also think that the Poly(10) curve does a fine job of what it was intended to do.
We all agree that your Poly(10) curve serves as a lossy compression of the displacements you measured for the NW corner within the interval from about 12s to 17s. It uses 11 numbers to summarize 320 numbers, achieving a compression factor of 29.

But wait...those 320 numbers have small dynamic range and limited precision. Although you reported those numbers to the nearest ten-thousandth of a foot (30 micrometres), there's no reason to believe that any digits beyond the tenths place (about 30 millimetres, just over an inch) are significant.

So your Poly(10) approximation replaces 320 12-bit numbers by 11 double precision parameters, 480 8-bit bytes by 88, for a compression factor of 5.5 and a net savings of almost 400 bytes (about 1/4 the length of this message).

Of course, you prefer your Poly(50) approximation to Poly(10). If you ever figure out how to extract the numerical values of its 51 parameters from your Excel plug-in, your preferred Poly(50) approximation will achieve a compression factor of about 1.2, replacing 480 bytes by 408 bytes for a net savings of 72 bytes.
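
For anyone who wants to check those figures, the arithmetic is simply this (a few lines of Python; the sizes are the ones stated above, nothing here is measured):

[code]
samples = 320                # measured displacement values in the ~12 s to 17 s interval
bits_per_sample = 12         # enough for the stated dynamic range and ~0.1 ft precision
raw_bytes = samples * bits_per_sample // 8        # 480 bytes

param_bytes = 8              # one double-precision parameter
for name, n_params in (("Poly(10)", 11), ("Poly(50)", 51)):
    model_bytes = n_params * param_bytes
    print(f"{name}: {raw_bytes} -> {model_bytes} bytes, "
          f"factor {raw_bytes / model_bytes:.1f}, saves {raw_bytes - model_bytes} bytes")
# Poly(10): 480 -> 88 bytes,  factor 5.5, saves 392 bytes
# Poly(50): 480 -> 408 bytes, factor 1.2, saves 72 bytes
[/code]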

I can see why you're so proud of those polynomial approximations.
 
So your Poly(10) approximation replaces 320 12-bit numbers by 11 double precision parameters, 480 8-bit bytes by 88, for a compression factor of 5.5 and a net savings of almost 400 bytes (about 1/4 the length of this message).
Awesome.

If you prefer the NIST 3 parameter model, that's your lookout. My intent was and is to extract as much detail as possible using the highest quality data possible. The NIST model doesn't reveal that detail.

My next task, now that I know the actual NIST T0 frame and position, is to replicate their trace using my methods (and also with theirs), and see how they compare. I'll have to trace a number of points above region B, but that's no problem. Doing so will clarify relative levels of accuracy and should also confirm a number of my assertions about the section in question.

I would have also performed a trace using the Dan Rather viewpoint but with the NIST T0 pixel location, to highlight any differences in T0; however, that won't be possible as the rooftop structures are still in place and obscure the roofline during that period of time. A comparison between methods using the Cam#3 viewpoint will have to do for that location.
 
So...

Poly(10) revealed new information about the velocity and acceleration profiles, but is invalid before T0.

Poly(50) revealed further detail during the middle of descent, but is inaccurate towards T0....

As Poly(10) and Poly(50) are only the results of what you agreed could be called "compression algorithms", they cannot possibly generate additional information from the input data. They certainly do not reveal any detail. If anything, they are able to conceal detail.
Anything that looks like new information is in fact a compression artefact.
 
As Poly(10) and Poly(50) are only the results of what you agreed could be called "compression algorithms", they cannot possibly generate additional information from the input data.
They serve to filter out noise, and clearly reveal the underlying trend, as can be seen by comparison with a simple symmetric-difference derivation of acceleration, which shows the same general profile but with much higher noise...
[Graph: acceleration from symmetric differences of the raw trace, showing the same general profile with much higher noise]
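
For context, the symmetric-difference acceleration referred to here is just the standard central second difference. A minimal sketch (Python with NumPy/SciPy, synthetic data and an assumed frame interval, not the actual trace) shows how much noisier it is than a Savitzky-Golay estimate of the same quantity:

[code]
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
dt = 1.0 / 59.94                               # assumed frame interval
t = np.arange(0.0, 5.0, dt)
disp = -0.5 * 32.2 * np.clip(t - 1.0, 0.0, None) ** 2 + rng.normal(0.0, 0.05, t.size)

# Symmetric (central) second difference: a(t_i) ~ (d[i+1] - 2*d[i] + d[i-1]) / dt^2
accel_diff = (disp[2:] - 2.0 * disp[1:-1] + disp[:-2]) / dt**2

# Savitzky-Golay smoothed second derivative for comparison
accel_sg = savgol_filter(disp, window_length=61, polyorder=3, deriv=2, delta=dt)

# The central difference amplifies the measurement noise enormously;
# the smoothed estimate stays close to the underlying 0 to -32.2 ft/s^2 range.
print("spread of central-difference accel:", round(float(accel_diff.std()), 1), "ft/s^2")
print("spread of S-G smoothed accel      :", round(float(accel_sg.std()), 1), "ft/s^2")
[/code]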


They certainly do not reveal any detail. If anything, they are able to conceal detail.
They reveal more detail in the profile than the NIST model, which is their purpose. They do indeed smooth the profile, yes.

Anything that looks like new information is in fact a compression artefact.
Wordplay.

My intent is clear, and the results a significant improvement upon pre-existing similar information.
 
No, they're right. All you're doing is subtracting information. Your assignment of the label "noise" to the information you're subtracting is purely arbitrary.

Dave
I'm clearly affirming that information is being smoothed. Some will be noise, some will be signal. Can't see the wood for the trees without some smoothing ;)

However, I'm also affirming that more detail is being shown than was previously available (that being the green NIST curve)...
[Graph: descent data with the Poly(10), S-G and NIST (green) curves overlaid]


If you don't trust that additional detail, or you're fine with the (remarkably smooth ;)) detail contained in the green NIST curve, that's your lookout. Fine by me.

Do you have anything to say about the Savitzky-Golay smoothed profile?
 
