• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Merged Discussion of femr's video data analysis

Trying to make sense of your sentence here. Please correct this paraphrase as you see fit:
"I--pgimeno--don't see NIST as in error, having used a methodology that was good enough to make their point; rather, posters here (femr2) are misinterpreting NIST's (accurate enough for their purposes) data analysis to make it appear worthless, because they have a personal beef with NIST--not a legitimate, technical one."
Corrections above.


If this is accurate, then you're just saying that you're fine with NIST's level of accuracy. O.k. What's the issue with an attempt at refining their data? Why would anyone have a problem with that--given that it's more accurate than the original data?
I don't have a problem with anyone refining the data.


You're commenting on his rather terse writing style? Then accusing him of injecting "noise" into the debate? Pot to Kettle: "you black!"

Why do you and others insist on evaluating femr2's arguments on terms other than what femr2 has made explicit? If you don't like what he's doing with the data, that's fine, but if you can't find a reason to discard his data--and again, I'm not even sure why you'd want to--then just don't participate in the discussion. Let it unfold. I'm not the only one curious about what can be made of MORE ACCURATE DATA.

Questions of relevance are frankly, irrelevant. He's refining the starting point of the collapse. That's the relevance. Are you looking for some kind of cosmic relevance? Don't hold your breath. You want him to say "AHA! Now you can't deny that xyz..."? It seems this is a real fear for you. But I have no doubt that whatever the end result of his findings, both sides and all points in between will find a way to fit the more accurate data into their favourite explanation of what happened.
Summary of the last part of the thread:

femr2: "cmatrix is wrong because he uses the NIST data which is crap and their report is wrong. Here is my data, it is better and thanks to it I have rebutted some of their conclusions."

Other posters including me: "I have no issues with your data, but your interpretation of where the problem with cmatrix lies is wrong, the NIST report is correct, your conclusions are baseless unless you can prove them through publication, and your attack on NIST uses arguments whose quality standards (measured through relevance) can't compare with theirs."

Now, as I already commented and you probably missed, if femr2 just limited himself to discussing his data, I would probably not have seen a need to join the discussion to tell him off.

Thus my sentence:
Plus, the unjustified adornments with which femr2 accompanies his discussion also introduce quite some noise and have brought up the topic of irrelevance.



I'm learning a lot in this discussion, and I'm not the only one, so please, let's have a real, technical discussion and leave the sophistic distractions about relevance and the territorial pissings about NIST in another thread.
If femr2 says "my data is this" everything is OK. If femr2 says "my data is this and NIST don't know how to do their work" that is a noise generator.
 
Last edited:
Maybe I missed it, but is there any way for femr to publish his results anywhere? I don't know if he plans to, I just want to know if it's possible.
He could publish his results on his own web site, and he has already done some of that.

So far as I know, he has not gathered his results into anything resembling a scholarly article, not even a self-published article on his web site.

To go beyond self-publishing, his options are limited because:
  1. He does not want to reveal his identity.
  2. His results are of limited interest.
To present his results at a conference, he'd have to show up in person, so that's out.

Some journals are willing to publish pseudonymous papers, but the editor(s) would insist on knowing his real identity (if only for legal reasons pertaining to copyright). So long as he's unwilling to reveal his true identity to the editor(s) of a pseudonymous paper, he can't publish in a legitimate journal.

Besides, few journals would want to publish a minor improvement on already-published data for a one-time event unless that data were accompanied by a coherent explanation of how and why the improved data should lead to significant alteration of the consensus surrounding the event.
 
Responding to cmatrix in the way femr2 intended requires that the relative merit of femr2's data and NIST's be established one way or the other. The need for that determination is an objective requirement of the topic 'rebut cmatrix' and not, as has been falsely claimed, merely a femr2 personal preference.
Quite right. I find it rather mind-boggling that there can be so much resistance to attempts to make progress (with specific individuals)* with such obvious bottle-neck issues.

* a right pain to have to include such implied context to try and keep the whining and misinterpretation from others down a little...meh...

The whole *literal interpretation 2.25s of freefall* issue is a focal point for many, and the *freefall or not* argument has been the topic of a ridiculously large volume of discussion between thousands of individuals.

Telling folk such as cmatrix (seems slightly unfair to single him out so often. He's simply one of many with rigid views based upon their own personal interpretation of the NIST texts) that a significant portion of his *technical viewpoint* in relation to WTC7 is WRONG because *he didn't read the text properly*, or *doesn't understand the meaning properly* or *duuur, look there's an approximate somewhere near the phrase you've latched onto* is not going to work. I imagine all of the above have been said at some point, and it may well be that nothing will alter that viewpoint.

However, my approach is that of attempting to show why the interpretation is wrong. That the words that have been latched onto do not actually fit the real world behaviour. That they are, when interpreted literally, wrong. That the data they are based upon is inaccurate (for many listed reasons) to the extent that it also can be termed as wrong. That the primitive phasing summary was ill-advised at best, or in other words wrong. That David Chandler was also wrong. That NIST never should have turned around and agreed. etc. Reminds me of a dialogue I plonked in here recently...

*Truther* - WTC7 Dropped at freefall!
*Debunker* - No it didn't at any point at all !
Chandler - Yes it did, lookie see !
*Debunker* - NO, it DIDN'T nut-job (snigger) !!1!1!
Chandler - OI, NIST lookie see...freefall !
NIST - Hmm, oh yeah, you're right 2.25s worth !
*Truther* - Look ! We were right !!1!!! Impossible !1!1
*Debunker* - Pfft. It doesn't matter, never did. We all knew anyway !
*Truther* - Pish. NIST admitted simultaneous failure over 8 storeys.

...

Femr2 - NIST was wrong. Chandler was wrong. cmatrix is wrong. Here's the acceleration profile. No 2.25s period of freefall. No instantaneous rate changes. No indication of instantaneous global building structure changes...

*Debunker* - What is the POINT of your data ??!1!?
*Debunker* - How dare you criticise NIST !!11!eleventy

Repeat ad-nauseum.

It's necessary to be critical of the source of such *catch-phrases* if you will, and that lies firmly at NIST's doorstep.

Would it have sufficed to prefix *gravitational acceleration* with *approximately* ? I very much doubt it. Those without the inclination or understanding to interpret what that word addition implies would still interpret the words in a particular way.

Folk don't seem to *like* me saying *NIST was wrong*, but consider simply, in isolation, the words...

the north face descended at gravitational acceleration

Now, is NIST right, or wrong ?


*Part* of my recent discussion is an attempt to show why my higher resolution data shows the premise of such things as *instantaneous structural resistance removal over a height of 8 floors* to be false.

Of course, as I have repeatedly highlighted, I also have a number of lower level technical issues with the data (although folk don't seem to want to discuss more than about one element on the list for some reason. Perhaps because the others result in not only the quality but the actual meaning and relevancy of the data coming into question). The discussion is mingled between the two, with regular contextual misinterpretations being performed by my, er, critics.


So, yes, people need to stop having hissy fits because I am criticising NIST, or using black and white right/wrong qualifiers where they may have a grey opinion based on a different interpretation of their own personal bigger picture, or because they are incapable of retaining context within lengthy and noisy (and so disjointed) discourse.


If folk can get a grip on context and want to discuss the cmatrix issue a bit more rationally and productively, fine, though I suggest this thread may not be the right place.
 
I'm sorry, but cmatrix's claims would be ludicrous even if the acceleration had been exactly 1g for exactly 2.25s. That much has already been established in posts by dozens of different people.
Then a new approach is all that can be attempted.

If our goal here were to point out that cmatrix misinterprets the NIST report when he claims that the acceleration was exactly 1g, we would deal with that by quoting the NIST report accurately and by highlighting the explicitly approximate nature of its claims.
No, that clearly doesn't work. Part of the problem is understanding. A stance of superiority without teaching is a stalemate.

You, however, appear to be arguing that the only way we can rebut cmatrix's misinterpretation of NIST's report is to agree with cmatrix's misinterpretation of NIST's report.
No, but understanding why the misinterpretation arose and laying blame at ALL doorsteps.

That does not mean NIST's values are "wrong". It means they are less accurate. As an engineer, you should know better than to parrot femr2's misleading choice of pejorative.
I have posted quite a lengthy list of issues with the data, which you don't seem to have much issue with.

How inaccurate does something need to be for you to classify it as wrong ? Just curious.

I have highlighted at times why I am using certain amounts of black/white assertion.

If you'll take the trouble to read what I wrote above, you'll see that I have outlined an alternative response that does not require establishing the relative merit of femr2's data and NIST's.
Which would clearly not actually work, no matter how much you think it should.

his misinterpretations and tendentious criticisms of NIST's NCSTAR 1-9 Volume 2 section 12.5.3.
Such as ? Please make sure you don't make contextual slip-ups in your response.

There are a couple of lists of my technical criticism of the data...
 
...So, yes, people need to stop having hissy fits because I am criticising NIST, or using black and white right/wrong qualifiers where they may have a grey opinion based on a different interpretation of their own personal bigger picture, or because they are incapable of retaining context within lengthy and noisy (and so disjointed) discourse...
Spot on.
...If folk can get a grip on context and want to discuss the [issue]...
If we subtract the complicated ad homs and the heated irrelevancies the thread could progress faster. We could even accommodate the 'lose the plots' and 'miss the points'. Still that is wishful thinking. :D
 
he has not gathered his results into anything resembling a scholarly article, not even a self-published article on his web site.
Correct, and this whole arena is still an evolving work in progress with very few conclusions.

few journals would want to publish a minor improvement on already-published data for a one-time event
With the exclusion of the *minor* I tend to agree, as I have stated numerous times. For example, I am sure that NIST themselves would have little interest.

So, yes, when people repeatedly dismiss information because it has not been *submitted to a respected peer reviewed (engineering ?) journal*, I tend to :rolleyes: these days.
 
He could publish his results on his own web site, and he has already done some of that.

So far as I know, he has not gathered his results into anything resembling a scholarly article, not even a self-published article on his web site.




To go beyond self-publishing, his options are limited because:
  1. He does not want to reveal his identity.
  2. His results are of limited interest.
To present his results at a conference, he'd have to show up in person, so that's out.

Some journals are willing to publish pseudonymous papers, but the editor(s) would insist on knowing his real identity (if only for legal reasons pertaining to copyright). So long as he's unwilling to reveal his true identity to the editor(s) of a pseudonymous paper, he can't publish in a legitimate journal.

Besides, few journals would want to publish a minor improvement on already-published data for a one-time event unless that data were accompanied by a coherent explanation of how and why the improved data should lead to significant alteration of the consensus surrounding the event.
Great closing post.
 
Last edited:
If femr2 says "my data is this" everything is OK. If femr2 says "my data is this and NIST don't know how to do their work" that is a noise generator.
Regardless of whether you think it is a *noise generator*, the NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

  • NIST did not deinterlace their source video. This has two main detrimental effects: 1) Each image they look at is actually a composite of two separate points in time, and 2) Instant halving of the number of frames available...half the available video data information. Tracing features using interlaced video is *not a good idea*. I have gone into detail on issues related to tracing of features using interlaced video data previously.
  • NIST did not sample every frame, reducing the sampling rate considerably and reducing available data redundancy for the purposes of noise reduction and derivation of velocity and acceleration profile data.
  • NIST used an inconsistent inter-sample time-step, skipping roughly every 56 out of 60 available frames. They ignored over 90% of the available frame data.
  • NIST likely used a manual (by hand-eye) tracking process using a single pixel column, rather than a tried and tested feature tracking method such as those provided in systems such as SynthEyes. Manual tracking introduces a raft of accuracy issues. Feature tracking systems such as SynthEyes employ an automated region-based system which entails upscaling of the target region, application of Lanczos3 filtering and pattern matching (with FOM) to provide a sub-pixel accurate relative location of the initial feature pattern in subsequent frames of video.
  • NIST tracked the *roofline* using a single pixel column, rather than an actual feature of the building. This means that the trace is not actually of a point of the building, as the building does not descend completely vertically. This means the tracked pixel column is actually a rather meaningless point on the roofline which wanders left and right as the building moves East and West.
  • NIST used the Cam#3 viewpoint which includes significant perspective effects (such as early motion being north-south rather than up-down and yet appearing to be vertical motion).
  • NIST did not perform perspective correction upon the resultant trace data.
  • NIST did not appear to recognise that the initial movement at their chosen pixel column was north-south movement resulting from twisting of the building before the release point of the north facade.
  • NIST did not perform static point extraction. Even when the camera appears static, there is still (at least) fine movement. Subtraction of static point movement from trace data significantly reduces camera shake noise, and so reduces track data noise.
  • NIST did not choose a track point which could actually be identified from the beginning to the end of the trace, and so they needed to splice together information from separate points. Without perspective correction the scaling metrics for these two points resulted in data skewing, especially of the early motion.
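The sampling items in the list above (skipping frames and losing redundancy for noise reduction) are easy to illustrate with a toy sketch. This is purely synthetic data under assumed numbers (a ~60 fields/s source, roughly half-pixel position noise), not NIST's or femr2's actual traces; the point is only that fitting to every frame constrains the acceleration estimate far more tightly than fitting to every 56th frame:

```python
import numpy as np

rng = np.random.default_rng(0)

g = 9.81          # true constant acceleration of the toy "drop", m/s^2
fps = 59.94       # assumed field rate after deinterlacing (NTSC)
t = np.arange(0, 4, 1 / fps)
sigma = 0.15      # assumed position noise, metres (roughly half a pixel)

# Noisy position trace of a constant-acceleration fall.
pos = 0.5 * g * t**2 + rng.normal(0, sigma, t.size)

def fitted_accel(step):
    """Least-squares quadratic fit to a decimated trace; accel = 2 * c2."""
    c2 = np.polyfit(t[::step], pos[::step], 2)[0]
    return 2 * c2

print(f"every frame: {fitted_accel(1):.2f} m/s^2")
print(f"every 56th:  {fitted_accel(56):.2f} m/s^2")
```

With ~240 samples the fitted acceleration lands within a few hundredths of g; with only five samples the statistical scatter is roughly an order of magnitude larger, which is what "reduced redundancy for noise reduction" costs.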

Perhaps the *noise* could simply be discussion (agreement) of the detrimental effects each of these issues has upon the NIST data.

Then we can move on ;)

Having performed a fair bit of video feature tracking myself, I'd say most of the issues highlighted really should have been dealt with, and though it's a phrase I have not actually used before, in terms of tracing technique I'd certainly have no shame asserting "NIST don't know how to do their work". In all fairness, and in this context, why should they ? It's not a common field. That doesn't stop it being technically shoddy, resulting in sloppy data.
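The static point extraction item can be sketched the same way. Assumed toy setup (a camera jitter component shared by every tracked point, plus small independent per-point tracking noise; not real trace data): subtracting the trace of a background point that should not move cancels the shared shake.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200
shake = rng.normal(0, 2.0, n)            # camera jitter shared by all points, px
noise = rng.normal(0, 0.3, (2, n))       # independent per-point tracking noise, px
motion = 50 * np.linspace(0, 1, n) ** 2  # smooth true motion of the feature, px

feature_trace = motion + shake + noise[0]
static_trace = shake + noise[1]          # a point that should stay still

raw_error = np.std(feature_trace - motion)
corrected_error = np.std(feature_trace - static_trace - motion)
print(round(raw_error, 2), round(corrected_error, 2))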
 
Last edited:
Regardless of whether you think it is a *noise generator*, the NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

  • NIST did not deinterlace their source video. This has two main detrimental effects: 1) Each image they look at is actually a composite of two separate points in time, and 2) Instant halving of the number of frames available...half the available video data information. Tracing features using interlaced video is *not a good idea*. I have gone into detail on issues related to tracing of features using interlaced video data previously.
  • NIST did not sample every frame, reducing the sampling rate considerably and reducing available data redundancy for the purposes of noise reduction and derivation of velocity and acceleration profile data.
  • NIST used an inconsistent inter-sample time-step, skipping roughly every 56 out of 60 available frames. They ignored over 90% of the available frame data.
  • NIST likely used a manual (by hand-eye) tracking process using a single pixel column, rather than a tried and tested feature tracking method such as those provided in systems such as SynthEyes. Manual tracking introduces a raft of accuracy issues. Feature tracking systems such as SynthEyes employ an automated region-based system which entails upscaling of the target region, application of Lanczos3 filtering and pattern matching (with FOM) to provide a sub-pixel accurate relative location of the initial feature pattern in subsequent frames of video.
  • NIST tracked the *roofline* using a single pixel column, rather than an actual feature of the building. This means that the trace is not actually of a point of the building, as the building does not descend completely vertically. This means the tracked pixel column is actually a rather meaningless point on the roofline which wanders left and right as the building moves East and West.
  • NIST used the Cam#3 viewpoint which includes significant perspective effects (such as early motion being north-south rather than up-down and yet appearing to be vertical motion).
  • NIST did not perform perspective correction upon the resultant trace data.
  • NIST did not appear to recognise that the initial movement at their chosen pixel column was north-south movement resulting from twisting of the building before the release point of the north facade.
  • NIST did not perform static point extraction. Even when the camera appears static, there is still (at least) fine movement. Subtraction of static point movement from trace data significantly reduces camera shake noise, and so reduces track data noise.
  • NIST did not choose a track point which could actually be identified from the beginning to the end of the trace, and so they needed to splice together information from separate points. Without perspective correction the scaling metrics for these two points resulted in data skewing, especially of the early motion.

Perhaps the *noise* could simply be discussion (agreement) of the detrimental effects each of these issues has upon the NIST data.

Then we can move on ;)

Funny that you should mention "exhaustive"... A good description of this whole circle jerk.

And now you have 10 angels dancing on the head of an utterly irrelevant pin.

The velocity & acceleration profiles of the collapse of the north wall facade are completely & utterly irrelevant to NIST's assignment: explain why the building collapsed. Regardless of whether that acceleration was constant, variable, sub-g, g or super-g.

NIST completed their assignment competently.

Their explanations are in their report. The collapse of the north face plays zero role in the causal factors in the collapse of the building. The velocity & accelerations of the north face are the very late results of the collapse.

They were tasked with providing explanations of causes. Not results.

When NIST was dragged into this specious issue by Mr. Chandler's incompetence, they gave it the comparatively superficial attention & explanation that it deserved.

You may as well have applied your SynthEyes to tracking a single piece of the Challenger after it blew up. And then claimed that you "out performed NASA", because they didn't do the same to the silly extent that you did.

You've outperformed nobody. You bought yourself a tool. You wield it without judgment or understanding.

To someone who only has a hammer, everything looks like a nail.
 
Regardless of whether you think it is a *noise generator*, the NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduces the quality, validity and relevance of the data in various measures...

  • NIST did not deinterlace their source video. This has two main detrimental effects: 1) Each image they look at is actually a composite of two separate points in time, and 2) Instant halving of the number of frames available...half the available video data information. Tracing features using interlaced video is *not a good idea*. I have gone into detail on issues related to tracing of features using interlaced video data previously.
  • NIST did not sample every frame, reducing the sampling rate considerably and reducing available data redundancy for the purposes of noise reduction and derivation of velocity and acceleration profile data.
  • NIST used an inconsistent inter-sample time-step, skipping roughly every 56 out of 60 available frames. They ignored over 90% of the available frame data.
  • NIST likely used a manual (by hand-eye) tracking process using a single pixel column, rather than a tried and tested feature tracking method such as those provided in systems such as SynthEyes. Manual tracking introduces a raft of accuracy issues. Feature tracking systems such as SynthEyes employ an automated region-based system which entails upscaling of the target region, application of Lanczos3 filtering and pattern matching (with FOM) to provide a sub-pixel accurate relative location of the initial feature pattern in subsequent frames of video.
  • NIST tracked the *roofline* using a single pixel column, rather than an actual feature of the building. This means that the trace is not actually of a point of the building, as the building does not descend completely vertically. This means the tracked pixel column is actually a rather meaningless point on the roofline which wanders left and right as the building moves East and West.
  • NIST used the Cam#3 viewpoint which includes significant perspective effects (such as early motion being north-south rather than up-down and yet appearing to be vertical motion).
  • NIST did not perform perspective correction upon the resultant trace data.
  • NIST did not appear to recognise that the initial movement at their chosen pixel column was north-south movement resulting from twisting of the building before the release point of the north facade.
  • NIST did not perform static point extraction. Even when the camera appears static, there is still (at least) fine movement. Subtraction of static point movement from trace data significantly reduces camera shake noise, and so reduces track data noise.
  • NIST did not choose a track point which could actually be identified from the beginning to the end of the trace, and so they needed to splice together information from separate points. Without perspective correction the scaling metrics for these two points resulted in data skewing, especially of the early motion.
Perhaps the *noise* could simply be discussion (agreement) of the detrimental effects each of these issues has upon the NIST data.

Then we can move on ;)
Is this typical for someone not out to prove anything, and someone who claims the official story is fictional? A long list making claims about NIST instead of organizing your own work into a better form?
Your acceleration graph is not the real acceleration, it is a curve fit; is that right? Poly 10 means what for the lay person?

I already pointed out how you make fun of NIST with BS, and you erased the comment, but you still hold failed papers as technical papers at your web site, or have you fixed that error? How long will it take you to drop your 911 truth side? You fell for the Flt 175 still airborne scam; have you erased that yet?

What is wrong with NIST methods for the purpose? Nothing. You don't like NIST? No one needs NIST to understand WTC 7 failed due to fires not fought, so what is your purpose, since you are not out to help NIST by publishing or organizing your work into a rational form?

Why not stick with your own work, fix it up so it can be understood? Why do you have to mention NIST?

This is femr's data thread, not a "what NIST got wrong" thread. Your work is a mess, and you are attacking NIST? Is that a cover to distract from the fact your work is not in usable form? You spent time spewing your NIST crap, and failed to make your work "better".

What is the big payoff tracking one pixel during a gravity collapse? Goal? Conclusion? Payoff? Big picture? Think of this as defending your thesis. What is your thesis? If it is NIST is wrong, you failed, and you don't care.

What frame of reference did you use to come up with your results? Where are the components for your acceleration plot (x,y,z)?

Where is the distance from the lens/camera to the building, in x,y,z, and frame of reference? Where is the data for the lens used, as in mm, some data? Who made the lens? You did have the distance from the camera to the building right? Do you have any lens data, like zoom setting, etc? Where is all the supporting data? Time to move up, not on?

What was your thesis? You did all this work for what? How will it dovetail with your view that the official theory is fictional? Did you erase that comment too?
 
Your acceleration graph is not the real acceleration, it is a curve fit; is that right?
I have numerous acceleration profile graphs. Each is representative of the actual acceleration profile for the NW corner, thus the use of the word profile.

Pretty accurate, yes (especially the latest graph). Exact, no.

Each is significantly more detailed and informative than the single straight line provided by NIST.

Some are curve fits, sure.

Poly 10 means what for the lay person?
Read the thread. Mentioned a few posts ago (bingo) :rolleyes: A polynomial line fit by the least squares method, of order 10.

What is wrong with NIST methods for the purpose
See list above.

Why do you have to mention NIST?
You just asked (kinda) "What is wrong with NIST methods for the purpose"...see list above ;)

This is femr's data thread, not a "what NIST got wrong" thread.
Sure, and as soon as we can get past the denial that the NIST data suffers from the items listed, we can move on.

What is the big payoff tracking one pixel
Again, I don't track *a pixel*. I employ region-based tracking processes.

Tracing techniques reveal copious amounts of information about subtle behaviour and movement, which I am sure I will get back to once you lot have stopped asking stupid questions.
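For readers wondering what region-based sub-pixel tracking means in principle, here is a minimal 1-D sketch. It uses a synthetic Gaussian feature (not SynthEyes, and not the actual WTC footage): match a template by cross-correlation, then refine the integer-pixel peak by parabolic interpolation to recover a fractional-pixel shift.

```python
import numpy as np

x = np.arange(64, dtype=float)

def feature(shift):
    """A synthetic Gaussian 'feature' on a pixel grid, offset by `shift` px."""
    return np.exp(-0.5 * ((x - 32 - shift) / 3.0) ** 2)

ref = feature(0.0)     # template taken from the first frame
frame = feature(0.3)   # same feature, moved 0.3 px in the next frame

# Integer-pixel match by cross-correlation...
corr = np.correlate(frame, ref, mode="full")
k = int(corr.argmax())

# ...then parabolic interpolation around the peak for sub-pixel accuracy.
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
shift_est = (k - (len(ref) - 1)) + delta
print(round(shift_est, 2))   # -> 0.3
```

Production tools do the 2-D equivalent, typically with region upscaling and Lanczos filtering before the pattern match; the parabolic peak refinement here is a crude stand-in for that machinery.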
 
Funny that you should mention "exhaustive"... A good description of this whole circle jerk.
Oh, hello tom. You're back ! :) Make sure you stay polite now eh ;)

The velocity & acceleration profiles of the collapse of the north wall facade are completely & utterly irrelevant to NIST's assignment: explain why the building collapsed. Regardless of whether that acceleration was constant, variable, sub-g, g or super-g.

...

The collapse of the north face plays zero role in the causal factors in the collapse of the building. The velocity & accelerations of the north face are the very late results of the collapse.

They were tasked with providing explanations of causes. Not results.
Surprisingly enough, I don't have a big problem with what you are saying there, though, as I am sure you already know, I don't think NIST did a particularly competent job of tracing the movement at all. Yeah, big shock eh :) However... yeah, there's a however. However, you have CLEARLY not read much of the recent thread updates or you would know that the reason a few key phrases are back on the table is, well, hint: cmatrix.

When NIST was dragged into this specious issue by Mr. Chandler's incompetence, they gave it the comparatively superficial attention & explanation that it deserved.
...the consequences of which have been NIST turning around and agreeing with elements of "Mr. Chandler's incompetence" by performing some of their own.

Also, analysis of building motion is pretty fundamental in determining accurate descent mechanism, so I don't agree with you when you say *irrelevant to NIST's assignment*.

But hey, you're out of retirement, again. Don't stress too much now eh. You're a bit late in the day to start the, what was the phrase, oh yeah *circle jerk* again. Have a nice one ;)
 
Last edited:
However, you have CLEARLY not read much of the recent thread updates or you would know that the reason a few key phrases are back on the table is, well, hint: cmatrix.

You might also notice cmatrix is getting the attention he deserves (I think his threads fell off the first page of the forum, again). He's not exactly a good reason to put much effort into something.

:rolleyes:
 
You might also notice cmatrix is getting the attention he deserves (I think his threads fell off the first page of the forum, again). He's not exactly a good reason to put much effort into something.

:rolleyes:
Sure, though there are numerous with similar viewpoint basis. The discussion has regularly returned to him as folk keep misinterpreting the context of what I've said. Hey ho. Noise level seems to be reducing.
 
He could publish his results on his own web site, and he has already done some of that.

So far as I know, he has not gathered his results into anything resembling a scholarly article, not even a self-published article on his web site.

To go beyond self-publishing, his options are limited because:
  1. He does not want to reveal his identity.
  2. His results are of limited interest.
To present his results at a conference, he'd have to show up in person, so that's out.


He doesn't have to. The point is that all results can be duplicated by anyone.

That is the whole point. The results describe real motion. The only reason anyone would rely on belief is out of laziness.

That is not the fault of femr.


Anyone can verify that crap has made it through peer review. The capacity to verify and reproduce means everything. Peer review gives you crap like BLGB.

From what I have seen the peer review process wrt 9-11-01 has served as a way to plant propaganda. How does "crush down, then crush up" survive peer review? Not an impressive process.
 
Your acceleration graph is not the real acceleration, it is a curve fit; is that right? Poly 10 means what for the lay person?
If I recall correctly, femr2 has semi-confirmed that his second graph in post #1096 was obtained by fitting a polynomial of degree 10 to some of his data, but I don't think he's said whether it was obtained by fitting the polynomial to his position data or to his derived acceleration data.

To a lay person, that means nothing but may sound impressive.

To me, it means femr2 plugged his data and the number 10 into some computer program, and he thought the resulting curve looked nice.

It turns out that femr2's curve can be approximated extremely well using a polynomial of degree 5:

[Image: curve10.jpg — a degree-5 polynomial overlaid on femr2's degree-10 curve]


That tells us femr2 is using a polynomial of unnecessarily high degree.

That's significant for at least two reasons.

First, it means femr2 is using an unnecessarily large number of parameters. A polynomial of degree n has n+1 parameters: its coefficients. As I have demonstrated above, 6 parameters can model the acceleration of the northwest corner just as well as femr2's 11. When there's a simpler model that works just as well, why use the more complicated model?
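The parameter-count point can be illustrated with a quick sketch. The data below is synthetic and invented purely for illustration (it is not femr2's trace data, which isn't reproduced in the thread); numpy is assumed:

```python
# Sketch: comparing polynomial fits of degree 5 and 10 on synthetic
# "acceleration-like" data. A degree-n polynomial has n+1 parameters
# (its coefficients), so degree 10 carries 11 parameters vs 6 for degree 5.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 200)                     # time samples (s)
true_accel = 9.81 * (1.0 - np.exp(-t))             # hypothetical smooth profile
data = true_accel + rng.normal(0.0, 0.3, t.size)   # add measurement noise

for degree in (5, 10):
    coeffs = np.polyfit(t, data, degree)           # least-squares fit
    fit = np.polyval(coeffs, t)
    rms = np.sqrt(np.mean((fit - data) ** 2))
    print(f"degree {degree:2d}: {degree + 1} parameters, RMS residual {rms:.3f}")
```

On data like this the two residuals come out nearly identical, which is the sense in which the extra five parameters of the degree-10 fit aren't earning their keep.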

To turn that around: Five of femr2's parameters aren't doing him any good. We can probably find a more appropriate class of model that puts those parameters to better use, improving the fit between data and model. That more appropriate class of model might even give us some insight into plausible physical mechanisms.

That's the second point. No one has suggested any plausible mechanisms whose physics would involve polynomials of degree 10. Although femr2 has been using that curve as his authority for several dubious arguments, it's nothing more than a nice little curve that happens to bear some resemblance to femr2's data. It does not provide much (if any) insight into the physics, and it doesn't even fit femr2's data all that well.

Look at femr2's first graph in post #1096. That's a heavily smoothed presentation of the accelerations femr2 computed from his position data. From the look of that graph, it's quite likely that a better class of model would
  1. yield a better fit to femr2's data
  2. using formulas that are more physically plausible
  3. while reducing the number of parameters.
On the other hand, that may not be apparent to a lay person.
 
If I recall correctly, femr2 has semi-confirmed that his second graph in post #1096 was obtained by fitting a polynomial of degree 10 to some of his data
Correct.

but I don't think he's said whether it was obtained by fitting the polynomial to his position data or to his derived acceleration data.
The latter.

To me, it means femr2 plugged his data and the number 10 into some computer program, and he thought the resulting curve looked nice.
I've already stated in this post what I use to perform the fitting.

I've used the same source data with up to order 50, as you know. Why is it that you make this kind of statement?

You've seen the animated GIF showing the effect of increasing poly order many times.

Note that with extreme poly order (50) the effect upon the profile is not drastically different from that at 10 (and for good reason, bearing in mind the 2-step process from position/time to acceleration/time).
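The "2-step process from position/time to acceleration/time" can be sketched with central differences on an idealised drop. The data here is synthetic (free fall at g), not the actual trace data, and the frame rate is an assumption:

```python
# Sketch: position/time -> velocity/time -> acceleration/time.
import numpy as np

dt = 1.0 / 30.0                        # assume ~30 fps video sampling
t = np.arange(0.0, 3.0, dt)
position = 0.5 * 9.81 * t ** 2         # idealised free-fall drop, metres

velocity = np.gradient(position, dt)       # first numerical derivative
acceleration = np.gradient(velocity, dt)   # second numerical derivative

# Interior points recover g closely; the endpoints use one-sided
# differences and are less reliable. Note also that differentiating
# twice amplifies position noise roughly as 1/dt^2, which is why the
# raw acceleration series needs smoothing or fitting at all.
print(acceleration[5:-5].mean())
```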

It turns out that femr2's curve can be approximated extremely well using a polynomial of degree 5
It doesn't look an awful lot different with order 50, but the profile changes slightly at every interval...
http://femr2.ucoz.com/_ph/7/408829093.gif


Are you suggesting degree 5 to be *best*?

When there's a simpler model that works just as well, why use the more complicated model?
Why not? I suggest your lower degree curve is probably missing some subtle detail. Not that it really matters.

Five of femr2's parameters aren't doing him any good.
Great. Redundant parameters. Poly order not restraining the profile shape. I have no problem with that personally.

We can probably find a more appropriate class of model that puts those parameters to better use, improving the fit between data and model. That more appropriate class of model might even give us some insight into plausible physical mechanisms.
I prefer smoothing to curve fitting...
http://femr2.ucoz.com/_ph/7/350095033.png

The curve fit graphs are really not intended to provide more than the trend...as I keep saying.

The graph above will be a lot *truer* to actual, though will of course have carried across some noise.

Follows similar trend to the higher order fit curves, which is fine and dandy.
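The smoothing-versus-fitting contrast above can be sketched as follows. femr2's actual smoothing method isn't specified in the thread, so a simple centred moving average stands in for it here, on synthetic data:

```python
# Sketch: global polynomial fit vs local smoothing of a noisy series.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 300)
true_accel = 9.81 * (1.0 - np.exp(-t))
noisy_accel = true_accel + rng.normal(0.0, 0.5, t.size)

# Global fit: one polynomial constrains the whole profile shape.
fit = np.polyval(np.polyfit(t, noisy_accel, 10), t)

# Local smoothing: each point is averaged with its neighbours, so the
# result tracks local detail but "carries across some noise".
window = 15
kernel = np.ones(window) / window
smoothed = np.convolve(noisy_accel, kernel, mode="same")
```

The smoothed series stays closer to the underlying profile than the raw one while following local structure a global fit can miss, which matches the "truer to actual, though will of course have carried across some noise" description.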

Rather than reverse engineering my graphs, why don't you produce your own from the copy of my raw trace data you have? Your posts would be better received if you didn't sprinkle them with snide comments, btw.
 
Regardless of whether you think it is a *noise generator*, the NIST data suffers from the following (non-exhaustive) series of technical issues, each of which reduce the quality, validity and relevance of the data in various measures...

[Edited to remove LIST of 10 ISSUES ]

Perhaps the *noise* could simply be discussion (agreement) of the detrimental effects each of these issue has upon the NIST data.

The need, for the purposes of the discussion which initiated these recent posts, is to establish that the data cmatrix relies on is in error. (Noting that cmatrix happens to be one recently identified person making an alleged error which others have also made.)

That in turn requires two facts to be established viz:
  • The NIST data is wrong in those factors which cmatrix et al's claims rely on; AND
  • femr2's data is correct to the accuracy needed to support the argument that relies on it.

I don't think it is necessary to agree on "the detrimental effects each of these issue [sic]". [My emphasis of 'each'] It seems self evident to me that each of those factors would be detrimental and I think that femr2 is representing them truthfully. What he does not state is the quantity of error that each could introduce. However for several of them it would appear impractical or impossible to quantify.

But surely it is sufficient to know that a number of factors introduce error. And that the error is significant in the context of the current discussion which therefore bypasses the whole topic of whether or not NIST's findings were 'good enough' for NIST's purposes. They may well have been but that is not the relevant issue here.

And we have strong evidence that femr2's methods yield more accurate results. The post by W.D.Clinger, whilst critical of excessive detail and some other matters of femr2's approach, actually confirms that his work is better than NIST on the factors in question.

So isn't it 'case proved' unless someone rebuts both femr2's claims and the support given by W.D.Clinger's post?
 
The point is that all results can be duplicated by anyone.

That is the whole point. The results describe real motion. The only reason anyone would rely on belief is out of laziness.
Absolutely. Anyone... not happy with/hasn't found/needs explicit detail about... my methods for generating acceleration profile graphs has full access to the raw position/time data and is welcome to produce their own.

Only tfk has done so to date...to silence. :eye-poppi
 
