
Merged Discussion of femr's video data analysis

Because I have professional knowledge of numerical methods, I have a special duty not to endorse questionable methods and analyses, even if those analyses would support someone's confirmation bias.
That's a duty not to endorse. It does not imply a duty to explain what's wrong with any particular methods or analyses.

femr2 has ignored and/or dismissed much good advice in this and other threads. As shown below, his recent posts continue that pattern.

Methods have been repeated many times,
You keep saying that, but you haven't cited a paper that describes your specific methods in detail.

You appear to be referring to your many posts at JREF and elsewhere. When someone points out that those posts do not contain enough detail to replicate your analyses or even to evaluate them properly, you dismiss that objection by saying something like "Methods have been repeated many times."

yet you are not actually specifying what your problem is.
Maybe I should highlight it:
I'm sorry, folks, but confirmation bias isn't a good reason to support poorly documented analyses that appear to use highly questionable numerical methods.
For further explanation of the first highlighted phrase, see above and post #1026 in this thread. For examples of the second highlighted phrase, see below and post #1089 in this thread.

Here's another ;) NW corner acceleration profile, this time using Savitzky-Golay Smoothing...
[qimg]http://femr2.ucoz.com/_ph/7/350095033.png[/qimg]
That's yet another example of your poor documentation. You specified neither the degree of the polynomial nor the number of points in the window.

You have also alluded to down-sampling and to symmetric differencing, both of which were presumably used to obtain the data on which these two graphs were based, but you have not specified the details of either process.

In your second graph below, you allude to "curve fitting" but do not tell us what kind of curve fitting. Was it polynomial? If so, what was the degree of the polynomial? Are we supposed to guess those things from the "Poly(10)" in your graph's title?

As pgimeno noted previously, you seem fond of polynomials with ridiculously high degree, such as 50. No matter what kind of curve fitting you used, you have not explained why you selected that particular method.
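To make the sensitivity to degree concrete, here is a minimal sketch (synthetic free-fall-like data, not femr2's actual trace; the noise level is an arbitrary assumption) showing how the fit residual and the implied acceleration curve change with polynomial degree:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 14.0, 300)                 # synthetic time axis, s
pos = -0.5 * 32.2 * t**2                        # idealised free-fall drop, ft
noisy = pos + rng.normal(0.0, 5.0, t.size)      # arbitrary noise level

results = {}
for degree in (2, 10, 50):   # degree 50 is numerically ill-conditioned (NumPy may warn)
    coeffs = np.polyfit(t, noisy, degree)
    resid = float(np.sqrt(np.mean((np.polyval(coeffs, t) - noisy) ** 2)))
    accel = np.polyval(np.polyder(coeffs, 2), t)   # implied acceleration curve
    results[degree] = (resid, float(np.std(accel)))
    print(f"degree {degree:2d}: rms residual {resid:6.2f} ft, "
          f"acceleration spread {np.std(accel):10.2f} ft/s^2")
```

The residual always shrinks as the degree grows, while the implied acceleration becomes increasingly erratic. That is why the degree (and the window, for a smoother) must be reported before anyone can evaluate the resulting acceleration graph.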
Does it significantly change any of the assertions I have made about the alternate view of the same data resulting from curve fitting ? ...
[qimg]http://femr2.ucoz.com/_ph/7/628055186.png[/qimg]
Yes.

Those two graphs reveal a lower bound for the range of uncertainty you have consistently refused to specify and have often refused to acknowledge. At t=12s, the difference between the two graphs is about 8 ft/s². That difference drops to about 3 ft/s² at 12.5s, rises to about 5 ft/s² at 13.5s, drops to near 0 at 14s, and so on. A proper estimate of the uncertainty in your graphs might be larger than the uncertainty implied by those particular differences.

In short, your two graphs above simply confirm pgimeno's point:
You don't have a valid justification to claim this is a mistake. Or to assert that their wording is sloppy when they talk about free fall for about 2.25 seconds. As a reminder, your graph shows that the acceleration was within 32.5±7.5 ft/s² for about 2.25 seconds, therefore in the absence of any error analysis you can't validly claim NIST is wrong.



Does the specific numerical method change the assertion that the NIST 2.25s period of freefall is inaccurate ?
According to your graphs, the northwest corner accelerated at approximately 1g or greater for about 1.5s. Integrating your first graph by eyeball, the average acceleration between 12.25s and 14.5s appears to be about 1g, which is consistent with NIST's conclusion. For reasons already stated, however, I am not willing to interpret your graphs as support for NIST's conclusion.

Does it change the assertion that *freefall* occurred for almost zero time ?
That's your assertion. As I and others have already explained, your assertion that NIST was claiming an acceleration of exactly 1g is absurd and tendentious.

Does it change the assertion of an over-g period ?
No. I accept your conclusion that the northwest corner probably accelerated at greater than 1g for some period, mainly because I did my own analysis of an early version of your data.

I do not know of any reason why I or others should accept your generalization of over-1g acceleration to the entire face.

Did the north face descend with gravitational acceleration for 2.25s ? No.
As explained above, I regard your "No" as unsupported and tendentious.

Did the descent of the first 18 floors take 40% longer than freefall ? No.

I can say *No* to those questions (regardless of any argument about interpretation of NIST statements), as I have data with which I can confirm such.

Are those questions my primary focus ? No.
You have data and analyses that confirm your "No" within your own mind. You have not yet explained your analyses with enough detail to allow competent evaluation of your conclusions.

I'm more interested in the motion over 100s earlier, but as folk here are repeatedly adamant about supporting the lower quality data and very inaccurate assertions from NIST, the *discussion* ensues.
I believe your data are of higher quality than the data NIST used.

I know NIST described their methods and stated their conclusions with greater professionalism.
 
You keep saying that, but you haven't cited a paper that describes your specific methods in detail.
Back to a *paper* again :rolleyes:

You appear to be referring to your many posts at JREF and elsewhere.
That's right.

When someone points out that those posts do not contain enough detail to replicate your analyses or even to evaluate them properly, you dismiss that objection by saying something like "Methods have been repeated many times."
That does not mean that the posts do not contain enough detail. It means you either haven't looked at the right posts, or can't follow very simple instructions.

It's really not rocket science.

Here...

1) Get one of any number of available feature tracking programs (even *Tracker*, which David Chandler uses, would do if you're not after quite the quality of my data). As I've said on many, many occasions, I use SynthEyes...or you can even do it by hand.
2) Track the NW corner on EVERY frame of the Dan Rather footage. You should deinterlace it first. LOTS of discussion about correct deinterlacing procedure has been presented in the past.
3) Open the position/time data in Excel.
4) Perform a simple 2-sample running average to resolve jitter (from deinterlacing).
5) Use any method you please to derive velocity and acceleration curves.

That's about it. I've used various methods, but...there y'are.
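For what it's worth, steps 4 and 5 of that recipe can be sketched in a few lines. This is a generic central-difference implementation, not necessarily the exact stencil femr2 used:

```python
import numpy as np

def smooth_and_differentiate(pos, fps=59.94):
    """Steps 4 and 5 above: a 2-sample running average to suppress
    deinterlacing jitter, then central (symmetric) differences to get
    velocity and acceleration from the position/time samples."""
    dt = 1.0 / fps
    smoothed = (pos[:-1] + pos[1:]) / 2.0                # step 4 (shifts time by dt/2)
    vel = (smoothed[2:] - smoothed[:-2]) / (2.0 * dt)    # step 5: velocity
    acc = (vel[2:] - vel[:-2]) / (2.0 * dt)              # step 5: acceleration
    return smoothed, vel, acc

# sanity check on ideal free fall: position 0.5*g*t^2 should give acc = g
t = np.arange(0, 2, 1 / 59.94)
pos = 0.5 * 32.2 * t**2
_, _, acc = smooth_and_differentiate(pos)
print(round(acc[0], 6))   # 32.2 (central differences are exact on a quadratic)
```

Note the caveat baked into step 4: a 2-sample average shifts the effective time axis by half a sample, which matters when comparing curves produced by different pipelines.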

That's about all you should need to replicate the feature trace data that you have downloaded in the past. As you attach some special importance to the procedures used to derive velocity and acceleration data, and don't *trust* mine (regardless of how much detail I've posted), do it as best you can.

I have no problem repeating procedures for specific graphs AGAIN, but I'm not going to be doing it AGAIN and AGAIN.

For further explanation of the first highlighted phrase, see above and post #1026 in this thread.
You are expecting a full explanation of every detail of the production of a graph to be repeated with every single instance in which it is posted.

That's yet another example of your poor documentation. You specified neither the degree of the polynomial nor the number of points in the window.
So ? 2, 50.

You have also alluded to down-sampling and to symmetric differencing, both of which were presumably used to obtain the data on which these two graphs were based, but you have not specified the details of either process.
Not every time I post a copy of the graph, no.

No down-sampling.

Symmetric differencing to perform the derivation, yes. 5-sample.

Are you planning on replicating the details from the raw data I have supplied you with ? If not, then why complain ?

In your second graph below, you allude to "curve fitting" but do not tell us what kind of curve fitting. Was it polynomial? If so, what was the degree of the polynomial? Are we supposed to guess those things from the "Poly(10)" in your graph's title?
Bingo :)

As pgimeno noted previously, you seem fond of polynomials with ridiculously high degree, such as 50.
Kind of an ironic thing to say when the animation being discussed ranges from 2 to 50, and the graph you just mentioned above has a degree of 10.

No matter what kind of curve fitting you used, you have not explained why you selected that particular method.
Perhaps not in the way you would personally like me to, but otherwise I have explained on many occasions :)

Those two graphs reveal a lower bound for the range of uncertainty you have consistently refused to specify and have often refused to acknowledge.
My assertions relate to the trend, not specific values.

Derivation of acceleration data from noisy position/time data inherently contains an amount of uncertainty, which should not be determined from the end-result acceleration data in my book, as every single method will have a slightly differing effect. If I recall you posted a handy equation for determining such from position/time data quite a while ago.

If I was making assertions of the form: At 12.2s the acceleration of the NW corner was (x) ft/s^2... then quantifying uncertainty would be more relevant.

As that is NOT what I'm doing, then I am not personally over concerned.

According to your graphs, the northwest corner accelerated at approximately 1g or greater for about 1.5s.
Looks about right, regardless of which graph you choose.

As I and others have already explained, your assertion that NIST was claiming an acceleration of exactly 1g is absurd and tendentious.
I have made NO SUCH assertion. I have highlighted that OTHERS make that literal interpretation.

You appear to be forgetting or manipulating context.

I accept your conclusion that the northwest corner probably accelerated at greater than 1g for some period, mainly because I did my own analysis of an early version of your data.
Then I don't quite know why you make such a fuss.

If you asked nicely I may even repeat details about a specific graph, but with the ongoing derision and distortions that are going on I'm rather disinclined to acquiesce to your requests.

I do not know of any reason why I or others should accept your generalization of over-1g acceleration to the entire face.
I have made no such generalisation. Ridiculous thing to suggest given the number of times I highlight that the behaviour of one point does not apply to the entire facade.
 
Even better - think about the geometry of the cuts and the falls and see if you can get a bit of 'over G'. Leaves and air resistance could be a problem.....

...now if it was a bare trunk :)

A white dot up top would be pretty handy (as would a series of them down the trunk), but anyway.

Maybe next weekend. I was too busy plotting the trajectory of all the saw dust. (I thought it could be important)

:rolleyes:
 
I'm getting pretty fed up with femr2, so this is likely to be my last response to femr2 for some time.

You keep saying that, but you haven't cited a paper that describes your specific methods in detail.
Back to a *paper* again :rolleyes:
femr2 rolls his eyes whenever someone suggests it would be a good idea to collect his engineering details into a single article.

You appear to be referring to your many posts at JREF and elsewhere.
That's right.
He'd rather make everybody search the World-Wide Web for clues. It's the femr2 scavenger hunt.

That's yet another example of your poor documentation. You specified neither the degree of the polynomial nor the number of points in the window.
So ? 2, 50.
50? I'm no expert, but Savitzky-Golay filtering with an even number of points in each window sounds pretty unusual. If results computed using an even number of points weren't reported using a time scale that's shifted by half an interval, they'd be biased.

Is that why the time scales on femr2's two graphs did not line up? We can only guess.
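The parity issue is easiest to see with an explicit sliding `polyfit` (a generic Savitzky-Golay-style sketch, not femr2's implementation):

```python
import numpy as np

def sg_smooth(y, window, degree, dt=1.0):
    """Sliding-window Savitzky-Golay-style smoother: fit a polynomial of
    the given degree to each window and evaluate it at the window's
    midpoint.  For an odd window the midpoint is a sample instant; for an
    even window it falls dt/2 *between* samples, so the output time axis
    is shifted by half an interval."""
    half = (window - 1) / 2.0          # fractional midpoint for even windows
    x = np.arange(window) * dt
    times, values = [], []
    for i in range(len(y) - window + 1):
        coeffs = np.polyfit(x, y[i:i + window], degree)
        times.append((i + half) * dt)
        values.append(np.polyval(coeffs, half * dt))
    return np.array(times), np.array(values)

tt = np.arange(20.0)
y = 3.0 * tt**2 - 2.0 * tt + 1.0       # an exact quadratic is reproduced exactly
t_odd, y_odd = sg_smooth(y, 5, 2)      # odd window: output times are integers
t_even, y_even = sg_smooth(y, 4, 2)    # even window: times land halfway between samples
```

If an even-window result is plotted against the original (unshifted) time axis instead of the midpoint times, every value is effectively displaced by half a sample interval, which is the bias described above.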

Derivation of acceleration data from noisy position/time data inherently contains an amount of uncertainty, which should not be determined from the end-result acceleration data in my book, as every single method will have a slightly differing effect.
The part before the first comma is true. The part after the first comma is a lame attempt to excuse femr2's refusal to estimate the uncertainty of his graphs and conclusions.

If I recall you posted a handy equation for determining such from position/time data quite a while ago.
The equation I posted shows the worst-case error obtained by forward error analysis, which is a pessimistic error bound that's likely to be far too large for femr2's purposes. To support his conclusions, femr2 needs something like a 95% confidence interval.
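The gap between the two kinds of estimate is easy to demonstrate. For a central second difference a = (x[i+1] - 2x[i] + x[i-1])/Δt², position noise bounded by ε gives a worst-case acceleration error of 4ε/Δt², while the typical (95th-percentile) error is smaller. A sketch with illustrative numbers (the stencil and the noise model are assumptions, not femr2's actual pipeline):

```python
import numpy as np

eps = 0.2            # assumed position noise bound, ft (illustrative only)
dt = 1.0 / 59.94     # field interval of deinterlaced NTSC video, s

# worst case for a = (x[i+1] - 2*x[i] + x[i-1]) / dt^2 with |noise| <= eps
worst_case = 4.0 * eps / dt**2

# Monte Carlo: typical size of the same error under uniform noise
rng = np.random.default_rng(1)
noise = rng.uniform(-eps, eps, size=(100_000, 3))
acc_err = (noise[:, 2] - 2.0 * noise[:, 1] + noise[:, 0]) / dt**2
mc_95 = float(np.percentile(np.abs(acc_err), 95))

print(f"worst case: {worst_case:.0f} ft/s^2, 95th percentile: {mc_95:.0f} ft/s^2")
```

Both numbers are enormous because raw sample-to-sample second differences amplify noise by 1/Δt². That is exactly why smoothing is applied first, and why the smoothed result needs its own uncertainty estimate (something like a confidence interval) rather than the raw worst-case bound.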

If I was making assertions of the form: At 12.2s the acceleration of the NW corner was (x) ft/s^2... then quantifying uncertainty would be more relevant.
Actually, femr2's assertions have been considerably bolder than the example he gave above, making the uncertainty even more relevant than for his example. femr2 has been telling us that, for all values of t within a certain interval, the acceleration of the NW corner was not (x) ft/s^2.

As I and others have already explained, your assertion that NIST was claiming an acceleration of exactly 1g is absurd and tendentious.
I have made NO SUCH assertion. I have highlighted that OTHERS make that literal interpretation.

You appear to be forgetting or manipulating context.
Now femr2 is telling a bald-faced lie. Here's the context femr2 wants us to forget. Note the italics, which femr2 put there himself:
It's usually because you don't read the context of the discussion, which is the ~2.25s period of gravitational acceleration stated by NIST.

NIST are not stating approximately gravitational acceleration. They are stating gravitational acceleration.

Their approximate in that context is the length of time which gravitational acceleration continues for.

Sloppy. Nonsense.
femr2 would have us believe that his "Sloppy. Nonsense." was directed at those who tendentiously interpret NIST's reference to "gravitational acceleration" as though it were "exactly gravitational acceleration" instead of "approximately gravitational acceleration". As is clear from the above, however, femr2 is the one who's been insisting upon that misinterpretation.

As pgimeno has pointed out, femr2 is pretty much the only person who's been using the word "exactly" in front of "gravitational acceleration", even while he's been trying to pretend that NIST and I believe the acceleration was exactly 1g:
Nonsense. You are, by direct implication, suggesting that the north face experienced exactly gravitational acceleration for approximately 2.25s. Nonsense.
That was a dishonest accusation when femr2 made it, and it's even more dishonest now after pgimeno and I have pointed out that I said no such thing, and had in fact said very nearly the opposite of what femr2 is (still) accusing me of saying.
 
I'm getting pretty fed up with femr2, so this is likely to be my last response to femr2 for some time.
Your rather annoying habit of speaking to *the audience* rather than respond to me directly is noted.

femr2 rolls his eyes whenever someone suggests it would be a good idea to collect his engineering details into a single article.
After the first 100 instances of the suggestion, it wears a little thin :rolleyes:

He'd rather make everybody search the World-Wide Web for clues. It's the femr2 scavenger hunt.
If you expect me to reference all background material posted in the past every time a graph is posted, you'll be rather disappointed. If you suggest the information doesn't exist, you'll get a... :rolleyes:

50? I'm no expert, but Savitzky-Golay filtering with an even number of points in each window sounds pretty unusual. If results computed using an even number of points weren't reported using a time scale that's shifted by half an interval, they'd be biased.
Possibly a fair point. As far as I'm aware, you get back the regression value AT the same point, but I suppose it may depend upon the implementation. Could result in a 0.5/59.94 s shift (~0.008 s).

Is that why the time scales on femr2's two graphs did not line up? We can only guess.
No. Axis start and end points are different, which is irrelevant.

The part before the first comma is true.
Of course it is.

The part after the first comma is a lame attempt to excuse femr2's refusal to estimate the uncertainty of his graphs and conclusions.
You'd expect 25 separate uncertainty analyses to accompany the animated graph we were discussing earlier with increasing poly order, before you'd give it the Clinger Certified stamp of approval ? Oh well, no worries :)

The equation I posted shows the worst-case error obtained by forward error analysis
Who would believe it eh. Referencing previously posted information, rather than request you post it over and over again.

which is a pessimistic error bound that's likely to be far too large for femr2's purposes.
It's not for my purposes, it's yours. You'd like an uncertainty analysis, whilst I'm only really interested in the trends. Any such uncertainty analysis would itself be subject to uncertainty, and can you IMAGINE the level of local unrest and argument about individuals' preferred methods of performing such if *I* were to present such (over and over again). There is some uncertainty in the data, of course. There we go ! :)

The likes of tfk would accept nothing LESS than the pessimistic maximum.

It would simply add another layer of argument for those who wish to argue with an individual they have branded as *a twoofer*.

By the way, where is the uncertainty analysis in the NIST study...
http://femr2.ucoz.com/_ph/7/563913536.png


:confused: :rolleyes:

femr2 has been telling us that, for all values of t within a certain interval, the acceleration of the NW corner was not (x) ft/s^2.
Incorrect. I imagine you are playing with context again. Rather pedantic if so.

Now femr2 is telling a bald-faced lie. Here's the context femr2 wants us to forget. Note the italics, which femr2 put there himself:
And YET AGAIN you are misinterpreting the context, which is about how the statement has been interpreted. That you take such a statement in isolation as being my personal viewpoint/argument/whatever simply shows that you are indeed blind to the context of the discussion. The recent post from ozeco41 comes to mind.

In the process you accuse me of lying. Tsk.

I don't like such accusations, and am more than likely going to waste some of my time back-tracking through the discussion to piece it together and show you why you are wrong. I will expect an apology afterwards, and no further silly interpretations of context, especially when I have specifically told you what the context is.

femr2 would have us believe that his "Sloppy. Nonsense." was directed at those who tendentiously interpret NIST's reference to "gravitational acceleration" as though it were "exactly gravitational acceleration" instead of "approximately gravitational acceleration". As is clear from the above, however, femr2 is the one who's been insisting upon that misinterpretation.
Ye gads. How many times have I posted this quote recently...
cmatrix said:
However, if their theory is to be believed, the 2.25 seconds of free fall must have resulted from near-simultaneous buckling and breaking of the 58 perimeter columns and most of the 25 core columns over eight stories.

What is wrong with you ?

As pgimeno has pointed out, femr2 is pretty much the only person who's been using the word "exactly" in front of "gravitational acceleration"
How about every single *truther* who has used the statements in a similar manner to the quote above ?

Is your sight really so limited and seemingly blind to context ?

even while he's been trying to pretend that NIST and I believe the acceleration was exactly 1g
More of the same nonsense. I assert nothing of the sort.

Most of your post is nonsense, due to you not being able to track the context of the discussion.
 
Maybe I missed it, but is there any way for femr to publish his results anywhere? I don't know if he plans to, I just want to know if it's possible.
 
I don't see such errors - rather misinterpretations of the accuracy of the NIST data and of their interpretation of such data, and baseless assumptions made over a personal assessment of what they mean.

Trying to make sense of your sentence here. Please correct this paraphrase as you see fit:
"I--pgimeno--don't see NIST in error, having used a methodology that was good enough to make their point - rather, posters here (femr2), are misinterpreting NIST's (accurate) data to make it appear inaccurate because they have a personal beef with NIST--not a legitimate, technical one."

If this is accurate, then you're just saying that you're fine with NIST's level of accuracy. O.k. What's the issue with an attempt at refining their data? Why would anyone have a problem with that--given that it's more accurate than the original data?

Plus, the unjustified adornments with which femr2 accompanies his discussion also introduce quite some noise and have brought up the topic of irrelevance.

You're commenting on his rather terse writing style? Then accusing him of injecting "noise" into the debate? Pot to Kettle: "you black!"

Why do you and others insist on evaluating femr2's arguments on terms other than what femr2 has made explicit? If you don't like what he's doing with the data, that's fine, but if you can't find a reason to discard his data--and again, I'm not even sure why you'd want to--then just don't participate in the discussion. Let it unfold. I'm not the only one curious about what can be made of MORE ACCURATE DATA.

Questions of relevance are frankly, irrelevant. He's refining the starting point of the collapse. That's the relevance. Are you looking for some kind of cosmic relevance? Don't hold your breath. You want him to say "AHA! Now you can't deny that xyz..."? It seems this is a real fear for you. But I have no doubt that whatever the end result of his findings, both sides and all points in between will find a way to fit the more accurate data into their favourite explanation of what happened.

I'm learning a lot in this discussion, and I'm not the only one, so please, let's have a real, technical discussion and leave the sophistic distractions about relevance and the territorial pissings about NIST in another thread.
 
... Why do you and others insist on evaluating femr2's arguments on terms other than what femr2 has made explicit? ...
Please point out explicitly what femr2 has made explicit. You can't, or you will not? Go ahead, in a nutshell, explain how you get better data from low resolution, low frame rate video. Go ahead, show us some error models you made to support the nonsense femr2 correctly labels as not "rocket science". He should have used Rocket Science. Go ahead, make it clear what femr2's goal is, and explain in detail his conclusion and how it fits with femr2's claim that the Official Theory is Fictional. Give it a try. What do you have for us?

Do you do math? What do you think about the methods used by femr2, and can you explain them?

Dig out your Metatheory stuff and tell us what it is.
 
Go ahead, in a nutshell explain how you get better data from low resolution, low frame rate video.
Hmmm. Let me see...

a) Deinterlace the video. NIST didn't bother. Tracing features using interlaced video is *not a good idea*.

b) Trace the feature using every available frame. NIST used an inconsistent time-step, skipping roughly every 56 out of 60 available frames. Yep, they ignored over 90% of the frame data.

c) Use a tried and tested feature tracking method, such as those provided in systems such as SynthEyes, rather than track the behaviour of a single pixel column in video using home-grown manual methods. Doing the latter means that the trace is not actually of a point of the building, as the building does not descend completely vertically. This means the tracked pixel column is actually a rather meaningless point on the roofline which wanders left and right as the building moves East and West.

d) The inconsistent inter-sample period in the NIST data tends to indicate they actually recorded data manually. Feature tracking systems such as SynthEyes employ an automated region-based system which entails upscaling of the target region, application of Lanczos3 filtering, and pattern matching (with FOM) to provide a sub-pixel accurate relative location of the initial feature pattern in subsequent frames of the video.

e) Either use a viewpoint which does not suffer from significant perspective effects (such as early motion being north-south rather than up-down), such as the Dan Rather footage, or perform appropriate perspective correction. NIST used the skewed perspective Cam#3 footage without perspective correction, and did not take account of the initial movement type in their chosen trace pixel column.

f) Perform static point extraction. Even when the camera appears static, there is still fine movement. Subtraction of static point movement from trace data significantly reduces camera shake noise. NIST didn't bother.

g) Track a point/feature which can actually be identified from the beginning to the end of the trace, thus eliminating the need to splice together information from separate points.

Your inclusion of *low resolution* is partially valid, as the effective vertical span of movement in the Cam#3 footage is higher than that in the Dan Rather footage. However, all of the other negative factors make it a moot point, especially if those factors are not addressed (which NIST did not bother to do). I have a whole series of Cam#3 data too if you feel like wading through it ;)

Your inclusion of *low framerate* is nonsense, for just the reasons highlighted above. Note also that we are all using video with the same base framerate.
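For the curious, the region-based sub-pixel matching described in (d) can be approximated generically: cross-correlate, find the integer peak, then refine it with parabolic interpolation. This 1-D sketch is illustrative only; it is not SynthEyes' actual algorithm:

```python
import numpy as np

def subpixel_shift(ref, moved):
    """Estimate the displacement of `moved` relative to `ref` with
    sub-sample precision: take the integer peak of the cross-correlation,
    then refine it by fitting a parabola through the peak and its two
    neighbours."""
    n = len(ref)
    corr = np.correlate(moved - moved.mean(), ref - ref.mean(), mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabolic peak offset
    return (k - (n - 1)) + frac

# demo: a smooth feature displaced by a known sub-sample amount
x = np.arange(200, dtype=float)
true_shift = 2.3
ref = np.exp(-((x - 100.0) ** 2) / 50.0)
moved = np.exp(-((x - 100.0 - true_shift) ** 2) / 50.0)
est = subpixel_shift(ref, moved)
print(est)   # close to 2.3 (parabolic interpolation carries a small bias)
```

This only works because the feature spans several samples; it is the smooth spatial profile, not any single pixel, that carries the sub-pixel information.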
 
By all means show me the uncertainty analysis NIST performed in 12.5.3
Yes, easy, you posted it yourself. Where you say NIST is wrong, and wave your hands and present smoothed, actually fake acceleration graphs, NIST give hints of their error analysis, which you missed. It is essentially assumed, as with low resolution video, and needs no explanation past... get ready... the word "approximately", used 8 times in the passage you posted from NIST.

..."approximately", 8 times in the pages you posted. They know the data is "approximate" because it comes from low resolution, low frame rate video, etc. This was easy: you make up stuff and say NIST was wrong, and NIST already covered it with "approximately", 8 times on the pages from the NIST report you posted.

NIST makes no grand claims of getting down to .2 pixel resolution (which does not exist, unless you do a lot more than you did), nor do they post graphs with resolution like you do down to 0.01 pixels, and make up silly smoothing methods you can't reference or explain. You are right, your efforts are not rocket science; your efforts are ad-hoc hand-waving, and there is no way to comment on your disjointed effort because you can't explain it. You fake it, and you don't know what "it" is.

Are you Jay Howard too?

Your answer for him was nonsense you can't support with proof; you are using the "Fetzer says so" method of proof. What was that called?
 
Please point out explicitly what femr2 has made explicit. You can't, or you will not? Go ahead, in a nutshell, explain how you get better data from low resolution, low frame rate video. Go ahead, show us some error models you made to support the nonsense femr2 correctly labels as not "rocket science". He should have used Rocket Science. Go ahead, make it clear what femr2's goal is, and explain in detail his conclusion and how it fits with femr2's claim that the Official Theory is Fictional. Give it a try. What do you have for us?

Do you do math? What do you think about the methods used by femr2, and can you explain them?

Dig out your Metatheory stuff and tell us what it is.

If there is a record for the most posts that contain the least substance, you my friend, would be the clear winner.

Back to the discussion at hand...
 
If there is a record for the most posts that contain the least substance, you my friend, would be the clear winner.

Back to the discussion at hand...
Meaning, you will not be explaining femr's work. Thank you very much.


You can't explain what you defend blindly. This will be the most you can do to defend femr's "paper". Can you explain any of his hand waving, and tie it to his claim the Official Theory is Fictional? You can do it, hang in there, let some of your Metatheory come out and clear up 911 issues.

Why not organize femr's work and publish it for him.
 
Yes, easy, you posted it yourself.
Really ? Interesting.

Where you say NIST is wrong
I've repeatedly stated that the NIST summaries are inaccurate and misleading if interpreted literally. I have also stated that specific values are inaccurate to an extent that they can validly be called wrong, sure.

and present smoothed, actually fake acceleration graphs
The accusation of *fake* is ludicrous beachnut.

Smoothed, absolutely, and at no point in time have I said they are exact, in fact I have repeatedly categorised them as more accurate, representative of, better than...

NIST give hints of their error analysis
Hints is it ? But no actual uncertainty analysis. Funny.

It is essentially assumed, as with low resolution video, and needs no explanation past... get ready... the word "approximately", used 8 times in the passage you posted from NIST.
Oh well then, my data is much less approximate than NIST's. There y'are, beachnut-certified uncertainty analysis :)

NIST makes no grand claims of getting down to .2 pixel resolution
Correct, they state no form of positional accuracy at all.

I base my positional variance accuracy on plots of the raw data such as...
http://femr2.ucoz.com/_ph/7/349864523.png

...where you're looking at the vertical component.

nor do they post graphs with resolution like you do down to 0.01 pixels
Correct, the very primitive *pixel brightness* data is the only thing they have which could be construed as being sub-pixel at all.

Static point data variance with my methods is in the range you indicate (though you are confusing Dan Rather data with Cam#3 data, and feature trace data with static point trace data)...
http://femr2.ucoz.com/_ph/7/943943983.png


and make up silly smoothing methods you can't reference or explain.
Nonsense. Which graph are you confused about b ? ;)

You are right, your efforts are not rocket science
Correct !
 
Really ? Interesting.
...
Correct !
You are correct in your mind, and you are obsessed with NIST. Good luck.

Where is your paper? When will it be published? Nuff said. Good luck organizing your mess of nonsense into a single integrated whole and explaining your stuff and your steps. Good luck again; so far you refuse to take advice. I am only an engineer; my advice is based on a big-picture approach to your work. As I dig into your work it gets more nonsensical than I thought it was. Better get your act together.
 
If there is a record for the most posts that contain the least substance, you my friend, would be the clear winner.

Back to the discussion at hand...
Says the guy who isn't actually answering any of the questions.
 
By all means show me the uncertainty analysis NIST performed in 12.5.3
Yes, easy: you posted it yourself. Where you say NIST is wrong, wave your hands, and present smoothed, actually fake acceleration graphs, NIST gives hints of its error analysis, which you missed.
NIST gave more than hints.

NIST performed a routine error analysis on its model for the interval of approximate free fall, calculating and stating the coefficient of regression (aka coefficient of determination, aka (in this context) square of the correlation coefficient):
NIST said:
Velocity data points (solid circles) were also determined from the displacement data using a central difference approximation. The slope of the velocity curve is approximately constant between about 1.75 s and 4.0 s. To estimate the downward acceleration during this stage, a straight line was fit to the open-circled velocity data points using linear regression (shown as a straight line in Figure 12-77). The slope of the straight line, which represents a constant acceleration, was found to be 32.2 ft/s2 (with a coefficient of regression R2=0.991), equivalent to the acceleration of gravity g. Note that this line closely matches the velocity curve between about 1.75 s and 4.0 s.
The highlighting is mine, but the words are NIST's. Considering all the qualifiers that NIST placed within the passage above, it would be absurd/tendentious/stupid to interpret this section of NIST's report as a claim that the observed acceleration was exactly gravitational acceleration.

NIST's Figure 12-77 states that R2=0.9906, which is consistent with the rounded value stated in the paragraph above.

The R2 statistic is a quantitative summary of the error (differences) between NIST's data and NIST's affine (straight-line) model. With femr2's data, the R2 statistic would be different.
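As an aside, the procedure NIST describes (central-difference velocity, straight-line fit by linear regression, R2 of the fit) is straightforward to sketch. The example below uses synthetic free-fall data, not NIST's or femr2's measurements, so every number in it is illustrative only:

```python
import numpy as np

g = 32.2                                   # gravitational acceleration, ft/s^2
t = np.arange(1.75, 4.0, 0.1)              # sample times, s (illustrative)
rng = np.random.default_rng(0)
pos = 0.5 * g * t**2 + rng.normal(0.0, 0.2, t.size)  # noisy displacement, ft

# Central difference approximation of velocity, as in NIST's description:
# v[i] = (x[i+1] - x[i-1]) / (2 * dt)
dt = t[1] - t[0]
vel = (pos[2:] - pos[:-2]) / (2.0 * dt)
t_mid = t[1:-1]

# Straight line fit by linear regression; the slope estimates a constant
# acceleration over the interval.
a, b = np.polyfit(t_mid, vel, 1)

# Coefficient of determination R^2 of the straight-line model against the data.
resid = vel - (a * t_mid + b)
r2 = 1.0 - np.sum(resid**2) / np.sum((vel - vel.mean())**2)

print(f"fitted acceleration: {a:.1f} ft/s^2, R^2 = {r2:.4f}")
```

With a competing position trace substituted for the synthetic pos array, the same few lines would yield the R2 statistics described above for model comparison.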

If competent persons were to claim that NIST's data and model were "wrong", they would support their claims by calculating and stating the R2 statistic for what they believe to be the better data and NIST's model, and would also state the R2 statistic for what they believe to be the better data and a better model.

Let me know when someone does that (or anything remotely equivalent).
 
it would be absurd/tendentious/stupid to interpret this section of NIST's report as a claim that the observed acceleration was exactly gravitational acceleration.
You are still making the false assumption that *I* interpret it that way. As I have repeatedly stated, the context is that others such as cmatrix do interpret the summary statements literally, and then state things like...
cmatrix said:
However, if their theory is to believed, the 2.25 seconds of free fall must have resulted from near-simultaneous buckling and breaking of the 58 perimeter columns and most of the 25 core columns over eight stories.

That you repeatedly imply that I am personally making or agreeing with such a misinterpretation is utterly false.

state the R2 statistic for what they believe to be the better data
Given that the two-step derivation from position/time to acceleration/time is not a straightforward process, and the tools I use don't spit out the R2 value, I can look into it for the curve fit. I don't think it's possible for data smoothed with Savitzky-Golay, but variance data can be determined without too much hassle.
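For concreteness, the Savitzky-Golay step under discussion can be sketched with SciPy's implementation. The frame rate, window length, and polynomial order below are assumed values chosen for illustration (and they are exactly the parameters that must be documented for anyone to replicate a real analysis); the data are synthetic free fall, not an actual video trace:

```python
import numpy as np
from scipy.signal import savgol_filter

g = 32.2                                   # ft/s^2
dt = 1.0 / 59.94                           # assumed NTSC field rate, s
t = np.arange(0.0, 4.0, dt)
rng = np.random.default_rng(1)
pos = 0.5 * g * t**2 + rng.normal(0.0, 0.05, t.size)  # noisy displacement, ft

window, order = 31, 3                      # illustrative; must be stated for replication
# With deriv=2, the filter returns the second derivative of each local
# polynomial fit, i.e. a smoothed acceleration estimate (delta sets the
# sample spacing so the result comes out in ft/s^2).
acc = savgol_filter(pos, window_length=window, polyorder=order,
                    deriv=2, delta=dt)

# Discard the edge regions, where the filter has less data to work with.
mean_acc = acc[window:-window].mean()
print(f"mean smoothed acceleration: {mean_acc:.1f} ft/s^2")
```

The point of the sketch is that the window length and polynomial order directly control how much the acceleration estimate is smoothed, which is why an acceleration graph is not reproducible without them.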
 
...I didn't want Myriad to think his joke had fallen flat. I actually laughed out loud, and I laughed again when I read ozeco41's heavy-handed response. Perhaps he understood the joke but didn't think it was funny;....
There were two parts to Myriad's post. The serious bit and the bit that looked funny. In the serious bit IMO he set the wrong target.

I commented on the serious bit because, again IMO, it was more important to progressing the real discussion which is continuing in this thread despite the noise.

The specific context of this discussion is rebuttal of cmatrix claims which rest on the NIST values describing a period of freefall. femr2 has established prima facie that the NIST values as used to underpin cmatrix claims are wrong. Note I said 'prima facie' or, in lay person terminology, that there is 'a case to be answered'. Responding to cmatrix in the way femr2 intended requires that the relative merit of femr2's data and NIST's be established one way or the other. The need for that determination is an objective requirement of the topic 'rebut cmatrix' and not, as has been falsely claimed, merely a femr2 personal preference.

I'm sorry that you choose to denigrate my response to the serious bit as 'heavy handed' and that you chose to ignore the point that I made.
 
The specific context of this discussion is rebuttal of cmatrix claims which rest on the NIST values describing a period of freefall.
I'm sorry, but cmatrix's claims would be ludicrous even if the acceleration had been exactly 1g for exactly 2.25s. That much has already been established in posts by dozens of different people.

If our goal here were to point out that cmatrix misinterprets the NIST report when he claims that the acceleration was exactly 1g, we would deal with that by quoting the NIST report accurately and by highlighting the explicitly approximate nature of its claims. As I wrote:
The highlighting is mine, but the words are NIST's. Considering all the qualifiers that NIST placed within the passage above, it would be absurd/tendentious/stupid to interpret this section of NIST's report as a claim that the observed acceleration was exactly gravitational acceleration.
That's how I would rebut that part of cmatrix's argument.

You, however, appear to be arguing that the only way we can rebut cmatrix's misinterpretation of NIST's report is to agree with cmatrix's misinterpretation of NIST's report. That's nuts.

femr2 has established prima facie that the NIST values as used to underpin cmatrix claims are wrong. Note I said 'prima facie' or, in lay person terminology, that there is 'a case to be answered'.
I have already stated, within this very thread, on several occasions dating back to 14 August 2010, that I believe femr2's position data to be more accurate than NIST's.

That does not mean NIST's values are "wrong". It means they are less accurate. As an engineer, you should know better than to parrot femr2's misleading choice of pejorative.

Responding to cmatrix in the way femr2 intended requires that the relative merit of femr2's data and NIST's be established one way or the other. The need for that determination is an objective requirement of the topic 'rebut cmatrix' and not, as has been falsely claimed, merely a femr2 personal preference.
Your two sentences contradict each other. Your first sentence acknowledges the relevance of femr2's intention, so the nature of his intended response is clearly subjective. Your second sentence then claims that an activity necessary for femr2's subjective choice of response is "an objective requirement of the topic 'rebut cmatrix'", as though femr2's preferred response is the only possible response that could rebut cmatrix.

If you'll take the trouble to read what I wrote above, you'll see that I have outlined an alternative response that does not require establishing the relative merit of femr2's data and NIST's. Hence the insistence on that activity is a personal preference of yours and femr2's, not an objective requirement.

Furthermore, I regard the superiority of femr2's data to NIST's as fairly obvious, so I do not understand why you think that is so controversial, and I especially do not understand why you pretend it is controversial when you are addressing me. My concerns do not involve femr2's data; they involve femr2's poor documentation, questionable analyses, and especially his misinterpretations and tendentious criticisms of NIST's NCSTAR 1-9 Volume 2 section 12.5.3.

I'm sorry that you choose to denigrate my response to the serious bit as 'heavy handed' and that you chose to ignore the point that I made.
Myriad's post was a joke. I thought it was a good example of nerd humor. You ignored his punch line, interpreted his setup as a serious argument, and responded to that setup with your usual claptrap, including a couple of fallacies I have identified in the above.

I'm sorry I characterized your response as heavy-handed. I should have said I laughed again when I read your response, and let it go at that.
 
