Richard Gage Blueprint for Truth Rebuttals on YouTube by Chris Mohr

Chris, it might help to drill down here.



First let me say that my training is inter alia in survey sampling; "margin of error" has (in principle) an exact meaning in that context. I am not aware of a comparably exact meaning in other contexts, but by analogy, the idea is that we can estimate bounds on the error(s) in our measurements. Initially, NIST and others measure displacement (vertical and/or horizontal) from some reference position. Each such measurement is made with error.

I'm about to walk you through something you may already understand -- perhaps more exactly than I am about to explain it -- so that I can refer to it when I react to your statement.

Imagine a graph with time on the X axis and displacement on the Y axis. Each point has a vertical error bar, implying that the true displacement at that time could be anywhere on that bar. (That's really inexact, but in this context I can't be more exact without inventing some assumptions.) Then we can imagine arbitrarily many possible "real" displacement curves that go through all those error bars. I desperately want to complicate this account, but I think that's complicated enough for the moment.

Now, imagine a velocity graph based on the displacement graph. Our best estimate of the average velocity between two consecutive measurements will depend on the change in displacement. Since the displacements are measured with error, the velocities are estimated with somewhat more error, so the error bars are longer, and the possible "real" velocity curves are more varied. (Crudely, if one displacement is 50 plus-or-minus 1, and the next is 60 plus-or-minus 1, then the change in displacement could be 8, 12, or anything in between: i.e., 10 plus-or-minus 2.)
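
If it helps to see it as code, here's a minimal Python sketch of that worst-case arithmetic (using the invented numbers from the parenthetical, not real measurements):

```python
# Worst-case interval arithmetic for a difference of two measurements.
# Values are the invented ones from the example above, not real data.

def diff_interval(a, b):
    """Worst-case interval for b - a, where each input is (value, +/- error)."""
    (va, ea), (vb, eb) = a, b
    return (vb - va, ea + eb)  # errors add when differencing

d1 = (50.0, 1.0)  # 50 plus-or-minus 1
d2 = (60.0, 1.0)  # 60 plus-or-minus 1
print(diff_interval(d1, d2))  # (10.0, 2.0): 10 plus-or-minus 2
```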

Then we can make an acceleration graph based on the velocity graph; same general idea, mutatis mutandis. So all the error bars get longer again.

Now, back to your words: "the data points created in the NIST Report were within the margin of error of their own measurements." It's not obvious what you mean by "data points." Most often it would refer to the "measurements" themselves -- but I don't think you intended to offer a tautology. Two paragraphs down I offer a possible interpretation of what you said.

At a high level, I think you may mean that given the possible measurement error in NIST's displacement estimates, their data are (maybe!!) consistent with a constant rate of acceleration. Thus, if NIST put error bars on their displacement estimates, and then derived a velocity graph (with bigger error bars), a straight line (denoting a constant change in velocity) would fit through that set of error bars. Equivalently, if NIST went on to derive an acceleration graph (with even bigger error bars), a horizontal line (denoting zero change in acceleration) would fit through those error bars.

Maybe that last is the picture in your mind: the acceleration estimates are all pretty close to g; every error bar overlaps g; equivalently (if all the error bars are the same size), each point (acceleration estimate) is "within an error bar" of g. I don't usually think of derived estimates as "data points," but it's not unreasonable.

(Or you may not have been referring to constant acceleration at all, but only to whether NIST's estimate of average acceleration is within measurement error of g. I'm not sure that question is inherently of much interest, so I won't walk through that scenario right now.)



Again, if this means that femr2's displacement measurements have smaller error bars (confidence intervals, perhaps), and therefore his acceleration estimates have smaller error bars, and some of those error bars don't overlap g but are always > g, that works.

What has been missing is a way to assign those "error bars," or more formally, to model the error.
You said it more precisely but yes, I was thinking that NIST's measurements fell within the "error bar" so that within that error bar a claim of 1g acceleration could be asserted within the limitations of those measurements (as NIST, Chandler, and chris7 claim). Or maybe it's still not exactly 1g because the actual rates of collapse within that error bar could allow for slightly <g and >g as well. No way to know for sure, which is what I said in my video 18. I was also thinking that femr's narrower error bar would show measurements outside that bar which reveal ACTUAL >g moments during parts of the collapse.
 
I know. Simply reminding him, using his own quote from NIST, of the important bolded qualifiers.

Sarns has a rather flexible approach to qualifiers, making them appear and disappear as he pleases, regardless of whether they are actually present in the text in question.
 
femr, I now "get" that NIST did not include a +/- error metric in their measurements, so my vague sense that NIST was accurate to +/- 0.1% may well have come from chris7's repeated claims and may have no other source.
 
You said it more precisely but yes, I was thinking that NIST's measurements fell within the "error bar" so that within that error bar a claim of 1g acceleration could be asserted within the limitations of those measurements (as NIST, Chandler, and chris7 claim). Or maybe it's still not exactly 1g because the actual rates of collapse within that error bar could allow for slightly <g and >g as well. No way to know for sure, which is what I said in my video 18. I was also thinking that femr's narrower error bar would show measurements outside that bar which reveal ACTUAL >g moments during parts of the collapse.

I think it's really important to be specific about what measurements you have in mind. Otherwise, it's very easy to fall into a comedy of errors (no pun intended).

At the end of the day, we seem to be discussing one NIST "measurement," namely, its estimate of average acceleration during "Stage 2." Of course that estimate is derived from other measurements. NIST made no apparent attempt to test whether the acceleration in fact was constant during that period. We can put an "error bar" (or confidence interval) on that average, but it wouldn't have any bearing on whether there were periods of >g acceleration. To consider that, we'd have to either look at a whole bunch of error bars, one per instantaneous velocity or acceleration estimate, as I described above; or we'd have to estimate acceleration over some subset of stage 2, for which the acceleration appears to be > g, and see whether the "error bar" on that estimate did or did not include g.

I don't like this wording: "Or maybe it's still not exactly 1g because the actual rates of collapse within that error bar could allow for slightly <g and >g as well." Even if NIST somehow could prove that the average acceleration was exactly g -- or, for that matter, was exactly 0.9 g -- that would not preclude periods of acceleration faster than g. (I'm not pounding the table for > g acceleration; I'm just trying to underscore the analytical distinction at stake.)

--Yes, I suspect that the thing about NIST being accurate within 0.1% refers to c7's comparison of NIST's average to g, and has nothing to do with the accuracy of NIST's velocity or acceleration estimates.
 
And meanwhile ...... C7's theory requires several charges (of whatever kind) to be attached to every single column, both internal and external. More than 2000 charges, set off simultaneously by his own admission. And requiring the most extreme and unpredictable circumstances to allow/justify their detonation.

While we debate the technicalities of error margins in the computer analysis of distantly-recorded video let's not forget the bottom line - there is zero evidence of CD at WTC7.
 
Hey Chris, would I be able to mirror your respectful rebuttals on youtube in an 'all-in-one' video?
 
If you can do it, and if you think anyone would sit there for 3 1/2 hours in a sitting and watch all this, go ahead! My playlists break it into two full-length movies, one for WTC I and II, the other for Building 7.
 
I can do it 'all-in-one', but have links in the description breaking it up section by section, so if someone wants to view a specific part, they can click the link in the description and it will take them straight to that spot.
 
You said it more precisely
No, that was just a lot of verbiage.

I was thinking that NIST's measurements fell within the "error bar" so that within that error bar a claim of 1g acceleration could be asserted within the limitations of those measurements (as NIST, Chandler, and chris7 claim).
Claim? You claim that NIST and Chandler don't know what they are talking about and the posters here know better. You claim that the programs they used, which were designed to measure velocity and acceleration, are wrong and the posters here know better. You believe anything the posters here say and deny the science of professionals using programs designed for the task at hand.

That is just denial using JREF as a crutch.

The data points are slightly inaccurate because they are taken from a video. The software draws a line through the average of the data points to get the actual acceleration. The people who wrote the software to determine velocity and acceleration know that gives an accurate acceleration, and so do the professionals who use the programs. NIST and Chandler are on opposing sides in the "collapse" debate but they agree on this point:

WTC 7 fell at FFA for ~100'.

Even FEMR's graph shows FFA [or slightly greater for a moment] for ~100', yet you ignore that in favor of a lot of verbiage that really said nothing.

Or maybe it's still not exactly 1g
Stop playing word games. It cannot be measured exactly but it was FFA. Chandler used the proper scientific terminology "Indistinguishable from free fall". Trying to twist that into "It was not FFA" is fraudulent.

because the actual rates of collapse within that error bar could allow for slightly <g and >g as well.
No Chris. You are hoping against hope and splitting hairs in an attempt to deny FFA.

Using supposition, semantics, and wishful thinking to supplant science in a rebuttal is unconscionable.
 
C7,

are there any tables of "raw" values by NIST or Chandler that you can refer to? I am talking about data pairs for drop distance vs. time. Thanks!
 
You claim that NIST and Chandler don't know what they are talking about and the posters here know better.
NIST and Chandler did a very basic displacement data extraction. I'm not saying they "don't know what they are talking about", I'm saying they didn't do a good job of it, and have provided fairly extensive evidence to show why that is the case.

You claim that the programs they used, which were designed to measure velocity and acceleration, are wrong and the posters here know better.
How does one say...ROFL...

NIST and Chandler knew what programs to use.
What programs did they use ?
What programs did they use to:

a) Process the raw video data ? (deinterlace)

b) Extract the motion data ?

c) Translate the pixel unit motion data into real world units (px->ft) ?

d) Derive velocity and acceleration ?
I don't know.
The programs NIST and Chandler used were designed to calculate acceleration.
Here we go again :rolleyes:

What programs did NIST and Chandler use ?
The program Chandler used is designed to measure velocity/acceleration. NIST used a similar program.
How did he perform the displacement data extraction ?

What program did he use ?

What mathematical method of derivation does it employ ?

How did NIST perform the displacement data extraction ?

What program did they use ?

What mathematical method of derivation did they employ ?


I think there will be a Chandler data critique falling in here soon, but in the meantime, here's the list of issues with the NIST trace.

A non-exhaustive list of issues with the NIST trace data, each of which reduces the quality, validity and relevance of the data in various measures.

You don't know what "programs" were used, because you simply don't understand. I've told you what I use:

a) SynthEyes - Professional feature tracking system, which I use to extract motion data from video with high precision. I've used many similar systems and SynthEyes has by far the most accurate tracking engine.

b) OriginPro - Professional data analysis and math environment, which I use to perform Savitzky-Golay smoothed derivation of raw data (see the sketch after this list).

c) Excel - Which I use to translate from pixel units to real-world units, and generate the many lovely graphs I post.
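
A minimal sketch of Savitzky-Golay smoothed derivation, using SciPy's implementation of the same filter in place of OriginPro; the frame rate, noise level, and window settings below are illustrative assumptions, not the actual workflow:

```python
# Hypothetical demonstration of Savitzky-Golay smoothed derivation.
# SciPy's savgol_filter stands in for OriginPro; all settings and
# data below are assumptions for illustration.
import numpy as np
from scipy.signal import savgol_filter

fps = 29.97                        # assumed NTSC frame rate
t = np.arange(0, 6, 1.0 / fps)     # sample times (s)
y = -0.5 * 32.17 * t**2            # idealised free-fall displacement (ft)
rng = np.random.default_rng(0)
y_noisy = y + rng.normal(0, 0.2, y.size)  # simulated measurement noise

# Smoothed first derivative (velocity) in one call:
v = savgol_filter(y_noisy, window_length=31, polyorder=3,
                  deriv=1, delta=1.0 / fps)
# Smoothed second derivative (acceleration):
a = savgol_filter(y_noisy, window_length=31, polyorder=3,
                  deriv=2, delta=1.0 / fps)
print(a[100])  # close to -32.17 ft/s^2, give or take the smoothed noise
```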


Now then...

NIST placed "dots" on video by hand. So did Chandler. Awful.
Chandler may have redone his trace with Tracker, but likely did not use its primitive tracking facilities.

NIST derived data using simple central difference approximation, probably with Excel or Matlab, though a piece of paper would do. Chandler's freebies "Physics Toolkit/Tracker" performed a similar symmetric difference derivation.
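
A minimal sketch of that central (symmetric) difference derivation, with hypothetical displacement values; this illustrates the technique, not NIST's actual code:

```python
# Central (symmetric) difference derivation on hypothetical
# displacement samples.
import numpy as np

def central_diff(y, dt):
    """First derivative by central difference: (y[i+1] - y[i-1]) / (2*dt)."""
    y = np.asarray(y, dtype=float)
    return (y[2:] - y[:-2]) / (2.0 * dt)

dt = 1.0 / 29.97                               # assumed frame interval (s)
disp = [0.0, -0.018, -0.072, -0.161, -0.286]   # invented displacements (ft)
vel = central_diff(disp, dt)                   # three velocity estimates
acc = central_diff(vel, dt)                    # second difference: acceleration
print(acc)  # roughly -32 ft/s^2: near free fall for these invented numbers
```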


Yet you STILL don't know. Yet you still keep saying the "programs" they used "were designed to measure velocity and acceleration". Bizarre.

You believe anything the posters here say and deny the science of professionals using programs designed for the task at hand.
l.o.l.

The data points are slightly inaccurate because they are taken from a video.
I've told you before. In part, yes, though not solely.

The software draws a line through the average of the data points to get the actual acceleration.
No !

Linear Regression results in an AVERAGE, of course, NOT actual.

Get that in your head.

The people who wrote the software to determine velocity and acceleration know that gives an accurate acceleration, and so do the professionals who use the programs.
R.O.F.L.

WTC 7 fell at FFA for ~100'.
Arghh. ~FFA. 100ft is a bit over an'all.

Even FEMR's graph shows FFA [or slightly greater for a moment] for ~100'
No, it does not.

~1.75s of ~FFA, of which ~0.5s is likely >g, during which time the NW corner descended ~83ft.

It cannot be measured exactly but it was FFA.
Chortle. ~FFA.
 
are there any tables of "raw" values by NIST or Chandler that you can refer to?
NIST - No, though I have digitised their data using a 9000px image of their PDF vector-based graph (so the positioning is very accurate)

Chandler - Not fully, though David did upload this during an old discussion...


Some position/time data is there.
 
femr2 beat me to it, but I'll agree with femr2 because he's right about this...

The data points are slightly inaccurate because they are taken from a video.
True.

The software draws a line through the average of the data points to get the actual acceleration. The people who wrote the software to determine velocity and acceleration know that gives an accurate acceleration, and so do the professionals who use the programs. NIST and Chandler are on opposing sides in the "collapse" debate but they agree on this point:
False.

Professionals know the following facts, which I wrote almost two years ago. (The "Well said" is a compliment to femr2.)

Well said.

Although the original post of this thread asked how Chandler could possibly derive instantaneous velocity from sampled position data for WTC1 (not WTC7), Chandler was actually drawing an incorrect conclusion about instantaneous acceleration from sampled position data. When sampled position data are used to estimate acceleration, the error is proportional to the error in the sampled position data and also proportional to the square of the sampling rate.

Suppose, for example, that ±e_s is the worst-case error in the sampled position data, ±e_v is the worst-case error in the velocities estimated by simple differencing, and ±e_a is the worst-case error in the accelerations estimated by second differencing. Let f be the sampling rate (in Hertz). Then

e_v = 2 f e_s
e_a = 2 f e_v = 4 f² e_s

where the factor of 2 comes from computing the difference of two values with ±e error: e - (-e) = 2e.
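
To see the scale of the problem, plug some illustrative numbers into those formulas (the position error and sampling rate below are assumptions, not measured values from any trace):

```python
# Illustrative numbers only: e_s and f below are assumptions.
f = 30.0    # assumed sampling rate (Hz)
e_s = 0.1   # assumed worst-case position error (ft)

e_v = 2 * f * e_s   # = 6 ft/s worst-case velocity error
e_a = 2 * f * e_v   # = 360 ft/s^2, i.e. over 10 g of acceleration error
print(e_v, e_a)
```

That kind of amplification is why raw second differences of position data are so noisy, and why smoothing (or differencing over longer intervals) is needed before acceleration estimates mean anything.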
 
I think it's really important to be specific about what measurements you have in mind. Otherwise, it's very easy to fall into a comedy of errors (no pun intended).

...

I don't like this wording: "Or maybe it's still not exactly 1g because the actual rates of collapse within that error bar could allow for slightly <g and >g as well." Even if NIST somehow could prove that the average acceleration was exactly g -- or, for that matter, was exactly 0.9 g -- that would not preclude periods of acceleration faster than g. (I'm not pounding the table for > g acceleration; I'm just trying to underscore the analytical distinction at stake.)

--Yes, I suspect that the thing about NIST being accurate within 0.1% refers to c7's comparison of NIST's average to g, and has nothing to do with the accuracy of NIST's velocity or acceleration estimates.

Exactly, wrong wording. Best to look at the NIST and Chandler data through the superior Femr2 data. No need to try to justify inferior when one has access to superior. Look at the inferior through the lens of the superior instead.

It is like using a magnifying glass to see something more clearly.

..............

On the other hand, I question whether that pun was just an accident.
 
OK I'm still wrapping my mind around my "margin of error" question. Sorry I am so slow here. It looked to me that NIST took their measurements (femr says they were entered by hand, but by whatever method they took raw measurements from a video) and entered them on a graph as a series of dots. In Phase Two, they then plotted a straight line representing freefall acceleration and showed that all of the dots they plotted were very close to that freefall line, some very slightly above and some very slightly below. The average of those dots comes to freefall. Am I right so far?

Now let's go to Mark Lindeman's post and my response at the top of this page, where he talks about an "error bar," which to my knowledge does not exist in any of NIST's, femr's or Chandler's graphs. If it did, however, in the NIST Report, I would think that error bar would be an equal area on both sides of that straight line and during this freefall period at least, the error bar would give a visual representation telling us whether the points fell within the "margin of error." The "error bar" would not have to follow the slope of lines connecting each point; it would just show that the plotted points all fall within the overall "error bar" and therefore it would be safe to say: using the NIST graph, the measurements show FFA within the margin of error. I'm guessing someone will find this understanding incorrect?

Any mistake I am making in understanding here pales in comparison to Chris7, whose repeated attacks against me include this false assertion: "You claim that NIST and Chandler don't know what they are talking about and the posters here know better. You claim that the programs they used, which were designed to measure velocity and acceleration, are wrong and the posters here know better." Please show me where I said NIST and Chandler do not know what they are talking about and I will apologize and retract that statement. If you cannot find a quote where I said this, please retract your assertion and apologize to me. Stop being such a jerk.
 
OK I'm still wrapping my mind around my "margin of error" question. Sorry I am so slow here. It looked to me that NIST took their measurements (femr says they were entered by hand, but by whatever method they took raw measurements from a video) and entered them on a graph as a series of dots. In Phase Two, they then plotted a straight line representing freefall acceleration and showed that all of the dots they plotted were very close to that freefall line, some very slightly above and some very slightly below. The average of those dots comes to freefall. Am I right so far?

Looks OK to me, though the detail is rather speculative.

ETA: What femr2 and MarkLindeman said; I missed that it was a fit rather than a comparison.

Now let's go to Mark Lindeman's post and my response at the top of this page, where he talks about an "error bar," which to my knowledge does not exist in any of NIST's, femr's or Chandler's graphs. If it did, however, in the NIST Report, I would think that error bar would be an equal area on both sides of that straight line and during this freefall period at least, the error bar would give a visual representation telling us whether the points fell within the "margin of error." The "error bar" would not have to follow the slope of lines connecting each point; it would just show that the plotted points all fall within the overall "error bar" and therefore it would be safe to say: using the NIST graph, the measurements show FFA within the margin of error. I'm guessing someone will find this understanding incorrect?

It's not exactly incorrect; it's just normally done the other way round. The error bar is applied to the data point, rather than to the line of best fit. Each data point has its own error bar, normally drawn as a vertical line with tick marks across each end, and this denotes the range of expected real values that would give this measured value. (For a really complete description of the errors one might introduce a horizontal error bar too, indicating the uncertainty in the time measurement as well as that in the position / velocity / acceleration measurement, but for simplicity that's rarely done.) The straight line representing FFA is really just a representation of a line with zero width, but those don't show up too well on a graph ;) so it's been drawn with a non-zero width. The advantage to doing it this way round is that if for any reason different points have different measurement errors, it's simple to include that on the graph.

If the line of FFA passes through all the error bars on all the measured points, then we can state that the acceleration is indistinguishable from FFA. If it misses any of the error bars, then we can say that the acceleration at that instant is significantly different to FFA. Without the error bars, we can't make either statement for certain, which is the origin of the claim that any experimental data that doesn't include error bars is meaningless. It's very dangerous to draw definite conclusions from data when you're not certain how much of it is in fact data and how much is simply random noise.
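
A minimal sketch of that overlap test, with invented numbers:

```python
# Hypothetical overlap test: does the FFA line pass through every
# error bar on the acceleration estimates? All numbers are invented.
g = 32.17  # ft/s^2

accel = [31.8, 32.5, 31.9, 32.4]   # invented acceleration estimates (ft/s^2)
errbar = [0.8, 0.8, 0.8, 0.8]      # invented +/- half-widths

consistent = all(abs(a - g) <= e for a, e in zip(accel, errbar))
print("indistinguishable from FFA" if consistent
      else "significantly different from FFA at some instant")
```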

So, to reiterate simply, the error bar is applied to the data points, not to the fit line; apart from that, your understanding is more or less correct.

To check your understanding, remember Miragememories' post asking how great a margin of error is required for the rate of acceleration to be greater than FFA. If you can see why that's wrong, and he should actually be asking how small a margin of error is required, then you've probably got a good handle on it.

Dave
 
It looked to me that NIST took their measurements (femr says they were entered by hand, but by whatever method they took raw measurements from a video)
The T0 pixel was determined programmatically, but the rest likely hand-eye, yes.

and entered them on a graph as a series of dots. In Phase Two, they then plotted a straight line representing freefall acceleration
Not quite. They performed a linear regression on the selected set of velocity data, resulting in a best-fit straight line through the velocity data.

The slope of that line provides an average acceleration over the entire period.

The slope was 32.196r (recurring), effectively 32.196 ft/s².

"g" in NY is 32.159 ft/s².
Therefore the NIST linear fit is slightly over g.

The R² value was 0.9906 (not that great).

It should be noted that omitting either the first or last datapoint would have changed the output average acceleration. I assume you understand why, and how this shows that discussing FFA without an explicit "roughly" qualifier is not supported by the data.
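
A minimal sketch of why, with invented velocity data: the slope of the least-squares fit is the average acceleration, and omitting one endpoint can shift it from just over g to just under g:

```python
# Invented velocity data, chosen to illustrate endpoint sensitivity;
# these are not NIST's numbers. Compare against g = 32.159 ft/s^2.
import numpy as np

t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])            # s
v = np.array([0.0, -6.6, -12.7, -19.5, -25.6, -32.3])   # ft/s

slope_all, _ = np.polyfit(t, v, 1)           # ~ -32.19 ft/s^2 (just over g)
slope_trim, _ = np.polyfit(t[1:], v[1:], 1)  # ~ -32.15 ft/s^2 (just under g)
print(slope_all, slope_trim)
```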

and showed that all of the dots they plotted were very close to that freefall line, some very slightly above and some very slightly below.
There's a few significantly out imo, but some slightly above and below the line, sure.

The average of those dots comes to freefall. Am I right so far?
Ish. The slope of the best fit came out slightly over freefall, so it could be said that the average over the period was ~FFA.

it would just show that the plotted points all fall within the overall "error bar" and therefore it would be safe to say: using the NIST graph, the measurements show FFA within the margin of error. I'm guessing someone will find this understanding incorrect?
~FFA ;)
 
OK I'm still wrapping my mind around my "margin of error" question. Sorry I am so slow here. It looked to me that NIST took their measurements (femr says they were entered by hand, but by whatever method they took raw measurements from a video) and entered them on a graph as a series of dots. In Phase Two, they then plotted a straight line representing freefall acceleration and showed that all of the dots they plotted were very close to that freefall line, some very slightly above and some very slightly below. The average of those dots comes to freefall. Am I right so far?

Not quite. NIST's straight line is a linear regression fit, so its slope is determined by the "best" fit to the points. (I put "best" in scare quotes because different algorithms would reckon different "best" fits -- although, in this case, probably not much different.) The slope of that line is close to -g, implying that the average acceleration is close to free fall. That's a result, not an input to the model; the straight line doesn't inherently "represent[] freefall acceleration."

Now let's go to Mark Lindeman's post and my response at the top of this page, where he talks about an "error bar," which to my knowledge does not exist in any of NIST's, femr's or Chandler's graphs. If it did, however, in the NIST Report, I would think that error bar would be an equal area on both sides of that straight line

PMJI.

When error bars are used, they are associated with individual data points; conceptually, they're just vertical lines. It's possible to calculate, say, a 95% confidence interval for a best-fit line, which would indeed be an area around the line. I think I'll post this and then look for a picture. At any rate, I don't think this is quite what you mean.

ETA: Here is a picture of some data with error bars, and some sort of curve fit. Notice that the first four points seem to be almost in a straight line, and that one could easily fit a straight line within those four error bars. If this were a velocity graph, we could say that the data are consistent with constant acceleration over that period. They're also consistent with the curve, which depicts varying acceleration.

Here is a picture of error bars for disparate point estimates (means for seven kinds of respondents, I think). Because the horizontal axis is categorical, it makes no substantive sense to try to draw lines through these points and bars, but for what it's worth, notice that in this case, it isn't possible to fit a straight line within the first four error bars.

I think I won't look for a drawing of a 95% confidence interval for a regression estimate: too much going on already.


What you seem to mean is that after we have the best-fit line, we can draw two more lines, parallel to the best-fit line. These lines would bound an area within which we expect all data points to appear if the best-fit line perfectly represents the actual collapse (Stage 2), taking measurement error into account. (ETA: At the risk of belaboring the obvious: if a straight line for velocity perfectly represents Stage 2, then acceleration is constant. Or we could tweak the example a bit by constraining the slope of the best-fit line to equal -g, so that it represents constant FFA.)
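
A minimal sketch of that tweaked version, assuming invented data: constrain the slope to -g, fit only the intercept, and check that every point sits within an assumed error band around the line:

```python
# Invented data: constrain the fit slope to -g, solve only for the
# intercept, then test whether all points fall inside an assumed band.
import numpy as np

g = 32.159                                       # ft/s^2 (local g)
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])          # s
v = np.array([-0.1, -6.5, -12.9, -19.2, -25.8])  # invented velocities (ft/s)
half_width = 0.7                                 # assumed velocity error (ft/s)

intercept = np.mean(v + g * t)  # least-squares intercept with slope fixed at -g
residuals = v - (-g * t + intercept)

print(np.all(np.abs(residuals) <= half_width))  # True: consistent with constant FFA
```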

Please let's not call that an "error bar." For one thing, it isn't a bar; it's an area. (Error bars are associated with points or, sometimes, with parameter estimates, often called "point estimates.")

Yes, in principle, we ought to be able to do something like that. My approach was more data-centric, because that's how I roll; I can work with your approach.

ETA: So, in the first picture I linked to above, you can imagine a straight line that "almost" goes through the first four points, and a shaded area centered on that line, whose height equals the height of the error bars.

and during this freefall period at least, the error bar would give a visual representation telling us whether the points fell within the "margin of error." The "error bar" would not have to follow the slope of lines connecting each point; it would just show that the plotted points all fall within the overall "error bar" and therefore it would be safe to say: using the NIST graph, the measurements show FFA within the margin of error. I'm guessing someone will find this understanding incorrect?

If all the points fell within the area consistent with constant free fall acceleration, then the data would be consistent with constant free fall acceleration. I wouldn't say that they "show FFA within the margin of error," because I don't know what that means.
 
Thanks guys I'm getting this. The most interesting new information for me is the fact that NIST didn't draw a line representing freefall, the line represented the average rate, which then turned out to be very close to FFA. I also get that if an error bar were created, it would follow the actual measurements and not the averaged-out straight line.

I'm still not clear why I'm misusing the term margin-of-error. Maybe femr can help. In my mind, let's just say femr puts out some measurements and says, these are accurate to within +/- 0.06%, meaning that in addition to calculating his data, he has also calculated how far off that data may be due to camera movement, heat waves in the air, and the limits of sub-pixel resolution. Is that correct? Why does the term margin of error NOT describe such a qualifier? Am I just using the wrong term or am I still missing a central concept here?

Of course the reason I am asking this is because of the question of possible >g collapse of part of the north perimeter wall. From the NIST data alone, because they never gave a "margin of error" or whatever (and because their techniques of measurement were less precise), it seems that my statement "maybe greater than freefall acceleration" is all I can say. femr's data, which is more accurate, can show that >g acceleration did happen, and based on that data I don't even need the qualifier "maybe." Is this correct?
 