
Merged Discussion of femr's video data analysis

It seems LashL is tracking through the thread changing images to links due to either repetition or oversize.

I have to object. These images are sized to fit on the screen fine (they do not even result in expansion of the natural frame sizing), need to be big enough to see what is going on, and are repeated directly as part of the ongoing technical discussion.

Removal really detracts from the usefulness of the thread, as the inline images are regularly referenced. Having to jump over to an alternate browser tab is frustrating.

They are all small in file size, being compressed PNGs.

I can understand repetitive images being an issue in other types of thread, but this one is rather technical and the graphs are an integral part of the discussion.

It seems also that the removal is indiscriminate, with images being changed to links that have been posted maybe twice.

May I suggest other members who agree that the images should stay in place contact LashL?


:(


Maybe on YOUR screen "they do not even result in expansion of the natural frame sizing", but: your screen ≠ everyone else's screen.
 
I need the 11 coefficients of your Poly(10) model
Excel truncated the precision...

1.619957920131320E+05
-5.078677045962260E+04
5.434711807298120E+03
-1.622157851146990E+02
-7.552305569894980E+00
2.762930067008900E-01
1.681178574722320E-02
1.590132710700450E-04
-7.670678350398170E-05
1.998393291720780E-06
-3.508303587424120E-09

My apologies, but as I said, careful with the laughing dogs.
(Posted about half an hour before your latest DL-ridden insight into your character and intent)

Other values provided...

R2 | The coefficient of determination | 0.999994204
se y | The standard error for the y estimate | 0.258774633
F | The F statistic, or the F-observed value | 5244946.945
df | The degrees of freedom | 304
ss reg | The regression sum of squares | 3512242.572
ss resid | The residual sum of squares | 20.35715047

Valid range of the Poly(10) curve: 11.8785s -> 17.1171s

T0=11.8785s

Raw Dan Rather data pre-smoothed with a 9-sample running average, minus max and min.

When I assumed your Poly(10) model was a model for the acceleration, and would require double integration to convert it into a model for the vertical displacement, you chastised me for making that assumption. If your Poly(10) model is a model for something other than acceleration, please tell us what you think it's supposed to model.
Displacement.

Most of your questions will be answered by the analyses I'll perform after I have your coefficients.
Okay. There are quite a few stacking up in the wings.

you're telling us that P(0s)= 161995.792 ft.
Yes.
p(0) = 161995.792013132
T0=11.8785s

p(11.8785) = -1.0226
Does anyone here know how to multiply and add/subtract?

Because femr2 sure doesn't.
:p
p(11.8785) = -1.0226, or more precisely... -1.022629547068820
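
For anyone who wants to check this arithmetic themselves, here is a minimal sketch (Python/NumPy assumed, not part of the original exchange) that evaluates the posted Poly(10) displacement coefficients at t = 0 and t = T0:

import numpy as np

# Poly(10) displacement coefficients as posted, lowest order first (c0 ... c10)
coeffs = [
     1.619957920131320e+05,
    -5.078677045962260e+04,
     5.434711807298120e+03,
    -1.622157851146990e+02,
    -7.552305569894980e+00,
     2.762930067008900e-01,
     1.681178574722320e-02,
     1.590132710700450e-04,
    -7.670678350398170e-05,
     1.998393291720780e-06,
    -3.508303587424120e-09,
]

p = np.polynomial.Polynomial(coeffs)  # p(t) = c0 + c1*t + ... + c10*t^10

print(p(0.0))      # should reproduce the posted p(0) = 161995.792013132
print(p(11.8785))  # should reproduce the posted p(T0) ≈ -1.0226

Note the heavy cancellation: individual terms run to hundreds of thousands of feet and sum to roughly -1, which is why truncated coefficients gave wildly wrong values earlier.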

I don't think it will happen, but if WD Clinger adds useful information to your data that is wonderful.
It seems from his behaviour that his intent is more focussed on discrediting the data than anything objective, but we'll see what comes out of the analyses he's said he'll perform.

I imagine an immediate disagreement will be the width of the window for central difference approximation.

NIST's central difference data was determined using data-points spaced ~0.23s apart. My data-points are spaced ~0.017s apart.

I don't think it is at all valid to use an n-1, n+1 central difference on my data, which will of course massively amplify noise.
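
To make the noise-amplification point concrete, here is a rough illustrative sketch (Python/NumPy assumed; the noise level and synthetic motion are made up for illustration) comparing an n-1, n+1 central difference at the two sample spacings:

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.2   # assumed position-noise level, arbitrary units (illustrative only)
T = 5.0       # seconds of synthetic motion

def central_diff_noise(dt):
    """Std-dev of the velocity error when an n-1, n+1 central difference
    is applied to a smooth signal plus Gaussian position noise."""
    t = np.arange(0.0, T, dt)
    x = -0.5 * 9.81 * t**2 + rng.normal(0.0, sigma, t.size)  # smooth motion + noise
    v = (x[2:] - x[:-2]) / (2.0 * dt)                        # central difference
    v_true = -9.81 * t[1:-1]                                 # exact derivative
    return np.std(v - v_true)

print(central_diff_noise(0.23))   # NIST-like spacing (~0.23 s)
print(central_diff_noise(0.017))  # ~59.94 Hz frame spacing (~0.017 s)
# The error scales as 1/(2*dt), so the same position noise is amplified
# roughly 0.23/0.017 ≈ 13x in the velocity estimate at the finer spacing.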
 
Quite right.


A view I totally support.

There should be at least 5 others within this thread capable of replicating the data and derived graphs using either the methods I have described, or their own. I looked at the NIST data and was not satisfied with it, the most blindingly simple reason being that they utterly ignore over 90% of the available motion data, sampling only around 4 out of every 60 frames. Some folk are seemingly not satisfied with my data, yet none of their own is forthcoming. Instead, a continual series of *discredit* attempts.

The most recent one by W.D.Clinger is much, much better than most, but does make the fundamental mistake of *attacking* the accuracy of instantaneous acceleration when the graph used was clearly presented to gain a clearer understanding of trend.

W.D.Clinger is fully aware of other graphs that have been provided which provide a much more accurate representation of instantaneous acceleration over time, but chose not to focus upon them.

It's a to-and-fro. Perhaps, over time, the paranoia that others have that I'm out to deceive will diminish and the focus will return to making things better.

I had hoped that one result of this thread would have been discussion of techniques for improving noise treatment, but alas, very little input has been received.

I'm quite pleased with the fairly recently applied Savitzky-Golay smoothing method. Good balance between smoothing out the noise and retaining detail. Must try a range of additional parameters and see to what extent the output differs.
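
For anyone wanting to experiment with the same kind of smoothing, a minimal sketch (Python/SciPy assumed; the window length and polynomial order are illustrative guesses, not the parameters actually used here):

import numpy as np
from scipy.signal import savgol_filter

# Illustrative stand-in for a displacement trace sampled at 59.94 Hz
fps = 59.94
t = np.arange(0.0, 6.0, 1.0 / fps)
disp = -0.3 * 9.81 * t**2 + np.random.default_rng(1).normal(0.0, 0.2, t.size)

# Savitzky-Golay: fit a low-order polynomial within a sliding window.
# window_length must be odd; polyorder < window_length.
smoothed = savgol_filter(disp, window_length=31, polyorder=3)

# The same filter can return derivatives directly (delta = sample spacing)
velocity = savgol_filter(disp, 31, 3, deriv=1, delta=1.0 / fps)
accel    = savgol_filter(disp, 31, 3, deriv=2, delta=1.0 / fps)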

And your conclusion is?
 
I don't think it is at all valid to use an n-1, n+1 central difference on my data, which will of course massively amplify noise.

Even with such, the Poly(10)velocity model is more accurate than the NIST velocity model within the stated region of interest (11.8785s -> 17.1171s).

T0 = 11.8785s for my dataset is selected via inspection of:
  • Raw displacement data
  • Poly(10)displacement model
  • Poly(50)displacement model
  • Savitzky-Golay smoothed displacement data
  • Wide-band symmetric difference smoothed displacement data
  • Poly(10)velocity derived model
  • Poly(50)velocity derived model
  • Savitzky-Golay smoothed velocity data
  • Wide-band symmetric difference smoothed velocity data
  • Poly(10)acceleration derived model
  • Poly(50)acceleration derived model
  • Savitzky-Golay smoothed acceleration data
  • Wide-band symmetric difference smoothed acceleration data

T0 is very close to the peak on the Savitzky-Golay profile (red line) immediately before the steep and sustained drop...

 
A copy of the numerical derivations for reference.

Poly(10)displacement
d :=
161995.7920131320
-50786.77045962260*t
+5434.711807298120*t^2
-162.2157851146990*t^3
-7.552305569894980*t^4
+.2762930067008900*t^5
+.1681178574722320e-1*t^6
+.1590132710700450e-3*t^7
-.7670678350398170e-4*t^8
+.1998393291720780e-5*t^9
-.3508303587424120e-8*t^10

Poly(10)velocity
v :=
-50786.77045962260
+10869.423614596240*t
-486.6473553440970*t^2
-30.209222279579920*t^3
+1.3814650335044500*t^4
+.10087071448333920*t^5
+.11130928974903150e-2*t^6
-.61365426803185360e-3*t^7
+.17985539625487020e-4*t^8
-.35083035874241200e-7*t^9

Poly(10)acceleration
a :=
10869.423614596240
-973.2947106881940*t
-90.627666838739760*t^2
+5.5258601340178000*t^3
+.50435357241669600*t^4
+.66785573849418900e-2*t^5
-.429557987622297520e-2*t^6
+.143884317003896160e-3*t^7
-.315747322868170800e-6*t^8

I don't think there are any precision issues in the derivations ;)

If you are using a math package it's usually better to use the displacement function and derive it within your package.
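
As a sketch of what "derive it within your package" can look like (SymPy assumed here, rather than the package actually used), differentiating the posted Poly(10) displacement to obtain the velocity and acceleration models:

import sympy as sp

t = sp.symbols('t')

# Poly(10) displacement coefficients as posted, c0 ... c10
coeffs = [
    '161995.7920131320', '-50786.77045962260', '5434.711807298120',
    '-162.2157851146990', '-7.552305569894980', '0.2762930067008900',
    '0.01681178574722320', '0.0001590132710700450', '-7.670678350398170e-5',
    '1.998393291720780e-6', '-3.508303587424120e-9',
]

d = sum(sp.Float(c, 20) * t**i for i, c in enumerate(coeffs))  # displacement
v = sp.diff(d, t)                                              # velocity
a = sp.diff(v, t)                                              # acceleration

print(v)  # coefficients should match the posted Poly(10)velocity model
print(a)  # coefficients should match the posted Poly(10)acceleration model
print(d.subs(t, sp.Float('11.8785', 20)))  # displacement at T0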
 
a question

R2 = 0.999994204 ;)


Just out of curiosity - a question for the resident math whizzes. Is this level of claimed precision typical of this type of equation? I mostly work with R2 in terms of regression analysis for statistics. I have never seen an R2 that starts with five nines. Is that extraordinary or typical?

thx

(and I'm glad that someone got the T0 joke)
 
Is this level of claimed precision typical of this type of equation? I mostly work with R2 in terms of regression analysis for statistics. I have never seen an R2 that starts with five nines. Is that extraordinary or typical?
R2 = 0.999994204 is pretty darn good, yes.

ss reg | The regression sum of squares | 3512242.572
ss resid | The residual sum of squares | 20.35715047

It indicates an exceedingly close match between the dataset values and the modelled/predicted values within the region of interest (11.8785s -> 17.1171s).

As a comparison...
NIST model function for displacement, using my dataset and the same ROI... R2 = 0.998434788
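
As a sanity check, the quoted R2 follows directly from the two sums of squares given above (simple arithmetic, shown here as a Python snippet):

ss_reg = 3512242.572     # regression sum of squares, as posted
ss_resid = 20.35715047   # residual sum of squares, as posted

# For an ordinary least-squares fit, SS_total = SS_reg + SS_resid,
# and R^2 = SS_reg / SS_total = 1 - SS_resid / SS_total.
r_squared = ss_reg / (ss_reg + ss_resid)
print(r_squared)         # ≈ 0.999994204, matching the quoted value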
 
Just out of curiosity - a question for the resident math whizzes. Is this level of claimed precision typical of this type of equation? I mostly work with R2 in terms of regression analysis for statistics. I have never seen an R2 that starts with five nines. Is that extraordinary or typical?

thx

(and I'm glad that someone got the T0 joke)


It is unusual, but the reason for it is easily understood.

A polynomial model of a short time series with eleven coefficients is essentially a lossy compression of the data itself. The curve fitting procedure acts as the compression algorithm. And in this case "the data itself" is a time series already heavily smoothed from the original raw data.

It's a bit like taking a picture, applying a large Gaussian blur, compressing the result with jpeg, writing out the jpeg image data as a list of coefficients, and then noting how similar the reconstructed jpeg image is to the uncompressed (but, remember, blurred) image. That you can do so doesn't tell you anything useful about the original picture itself.

Respectfully,
Myriad
 
And in this case "the data itself" is a time series already heavily smoothed from the original raw data.
The raw data has a simple 9-sample running average (minus min and max, so effectively a 7-sample running average) applied, hardly heavy for data sampled at 59.94Hz.

It's a bit like taking a picture, applying a large Gaussian blur, compressing the result with jpeg, writing out the jpeg image data as a list of coefficients, and then noting how similar the reconstructed jpeg image is to the uncompressed (but, remember, blurred) image. That you can do so doesn't tell you anything useful about the original picture itself.
No, it's not. You are overstating the smoothing applied for some reason.

Removal of the smoothing (which I'll not be doing) only changes R2 to 0.9999894731242370
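
A minimal sketch of the trimmed running average described above (Python/NumPy assumed; an illustration of the description, not the actual code used):

import numpy as np

def trimmed_running_average(x, window=9):
    """9-sample running average with the min and max of each window
    discarded, i.e. effectively a 7-sample mean."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    out = x.copy()                    # leave the edges untouched
    for i in range(half, len(x) - half):
        w = np.sort(x[i - half:i + half + 1])
        out[i] = w[1:-1].mean()       # drop one min and one max, average the rest
    return out

# Usage: smoothed = trimmed_running_average(raw_displacement)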
 
a chronological outline of numerical rubbish

Reviewing yesterday's six-hour correspondence with femr2, I now understand why he is celebrating yet another in his series of victories over the forces of incivility and sloppy nonsense.

At 4pm on my side of the pond, femr2 asked "what exactly do you require?". For the tenth time, I requested his polynomials' numerical coefficients.

At 5:53pm, femr2 said he was having trouble getting those numbers out of his Excel plug-in. I said I'd wait. At some time before 6:36pm, femr2 edited one of his previous posts to add the numbers, but those numbers were wrong.

At 8:23pm, femr2 told me the numbers were correct. At 8:42pm, I responded with a laughing dog. At 8:51pm, femr2 suggested I be "careful with those laughing dogs". At 9:15pm, I wrote out an entire calculation showing that his numbers were wrong. At 9:23pm, femr2 asserted the correctness of his numbers once again in post #1272.

At 9:29pm, I asked whether anyone here knows how to multiply and add/subtract. At 9:34pm, femr2 edited his post #1272 to correct his numbers and to express his sincere apology:
femr2 said:
My apologies, but as I said, careful with the laughing dogs.
In a separate message one minute later, he wrote:
Does anyone here know how to multiply and add/subtract?

Because femr2 sure doesn't.
:p
See above.

p(11.8785) = -1.0226
p(0) = 161995.7920

Showing your colours.


I didn't see anything worth noting in what he had quoted above, and scrolling up without refreshing my browser revealed nothing but error in femr2's previous posts. Having spent three hours trying to convince femr2 that his numbers were wildly incorrect, I wrote a little Java program to demonstrate his error.

Now he's really upset. Look what I made him do.

oh, wow. That hurts.

This is the ultimate tragedy of Truthers. They have so much to learn, and they even seem interested. But because they hitch themselves to a cause, they distrust anyone who disagrees, and they make it their life's work to "pwn" people. Hence, they never learn.

Here's hoping this one is the exception.


As anyone can see, I have only myself to blame for femr2's decision to write "See above." instead of "Sorry about that. I've edited my post #1272 to correct the numbers."
 
You would never know how badly the JREF forum had screwed up WTC1 for years.

A little amnesia and the confidence goes on.
 
The raw data has a simple 9-sample running average (minus min and max, so effectively a 7-sample running average) applied, hardly heavy for data sampled at 59.94Hz.


No, it's not. You are overstating the smoothing applied for some reason.

Removal of the smoothing (which I'll not be doing) only changes R2 to 0.9999894731242370


Ah, you're right about that part. I was thinking of the raw versus modeled curves from the velocity plots rather than the displacement. So let's say smoothed, instead of "heavily" smoothed.

The rest of what I said still applies. Your tenth order model is restating (compressing) your data, not explaining it. Which is fine, but it serves no purpose. A model consisting of a copy of the data has an R2 of 1.0!

Try fitting a polynomial curve to the first half of your data, then seeing how well it fits the second half, and vice versa. That will give you a clearer idea of how predictive your model actually is, and how many of the polynomial terms are really providing any benefit.

Respectfully,
Myriad
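
A sketch of the split-half check Myriad is suggesting (Python/NumPy assumed; t_data and y_data are placeholders for whatever displacement series is being fitted):

import numpy as np

def split_half_r2(t, y, degree=10):
    """Fit a polynomial of the given degree to one half of the series and
    report R^2 on the held-out half, for both orderings."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    mid = len(t) // 2
    halves = [(slice(None, mid), slice(mid, None)),
              (slice(mid, None), slice(None, mid))]
    scores = []
    for train, test in halves:
        coeffs = np.polyfit(t[train], y[train], degree)
        resid = y[test] - np.polyval(coeffs, t[test])
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return scores

# Usage: print(split_half_r2(t_data, y_data, degree=10))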
 
You would never know how badly the JREF forum had screwed up WTC1 for years.

A little amnesia and the confidence goes on.
You are upset, and off topic. Cool. femr's data only backs up NIST's work in a funny, failed way, and has nothing to help your CD theory move closer to reality; the fantasy of CD you have will never be reality.

Why have you failed to prove your CD theory? Where are the bad guys? Was your Satan Bush?

an intentional act so barbaric, so inhumane and morally impoverished that the fabled characteristics of Satan come to mind.
Who did it? (hint, 19 terrorists)

Please explain how femr's work helps your claims of CD. Please, help explain the connection. Please.


You would never know how badly the JREF forum had screwed up WTC1 for years.

A little amnesia and the confidence goes on.
Years? More than enough time to publish your findings in a journal of engineering. Why have you and femr failed to publish this important work? You must have fellow engineers from your engineering school to help, or the many Gage has who back his scam. Why can't you publish and show JREF you are doing, not just talking?

How does WTC 1 fit with femr's study of WTC 7? Can you help femr with his errors?
 
Ah, you're right about that part. I was thinking of the raw versus modeled curves from the velocity plots rather than the displacement.
No worries. The R2 values for the velocity plots are also particularly good.

So let's say smoothed, instead of "heavily" smoothed.
I'd say lightly smoothed myself, but whatever. Here's a plot of the difference between the raw and smoothed data in pixel terms...

...maxing out around 0.15 pixels.

The rest of what I said still applies. Your tenth order model is restating (compressing) your data, not explaining it. Which is fine, but it serves no purpose.
It served the purpose of enabling derived velocity and acceleration profiles to be produced, which themselves serve to better understand the velocity and acceleration behaviour over time. A wealth of purpose.

A model consisting of a copy of the data has an R2 of 1.0!
And if I could get a curve there which could be derived I'd be on it :) I have a Poly(50) model also, but cannot extract the coefficients above degree 16. It's pretty darn close to 1.0 for displacement.

Try fitting a polynomial curve to the first half of your data, then seeing how well it fits the second half, and vice versa. That will give you a clearer idea of how predictive your model actually is, and how many of the polynomial terms are really providing any benefit.
I'm mapping to data which follows a particularly chaotic behaviour. There's no reason to think behaviour in the second half of the data could possibly be predicted by a model determined from inspection of the first half. Good to not forget what the data actually relates to.

Again the purpose is to get accurate data about the velocity and acceleration behaviour over time, and for that purpose the data, derived curves and additional smoothed datasets are accurate and insightful.
 
It's very insightful about individual position. Showing their colours as I put it.

Some values pasted from Excel lost precision. Oops. Not a big deal. Found and posted updated values about half an hour before WDC's latest DL-ridden insight into his character and intent.

I think I'll live, and as always I try to remain as pleasant as possible.

I know which end of this discussion I'd rather be on.

R2 = 0.999994204 ;)

The one where 1 + 1 = 2?
 
R2 = 0.999994204 is pretty darn good, yes.

ss reg | The regression sum of squares | 3512242.572
ss resid | The residual sum of squares | 20.35715047

It indicates an exceedingly close match between the dataset values and the modelled/predicted values within the region of interest (11.8785s -> 17.1171s).

As a comparison...
NIST model function for displacement, using my dataset and the same ROI... R2 = 0.998434788

Mine is 0.99999765554433344444443333222221111111111111111111444444441413

I win!!
 
