Chris, it might help to drill down here.
First let me say that my training is, inter alia, in survey sampling; "margin of error" has (in principle) an exact meaning in that context. I am not aware of a comparably exact meaning in other contexts, but by analogy, the idea is that we can estimate bounds on the error(s) in our measurements. Initially, NIST and others measure displacement (vertical and/or horizontal) from some reference position. Each such measurement is made with error.
I'm about to walk you through something you may already understand -- perhaps more exactly than I am about to explain it -- so that I can refer to it when I react to your statement.
Imagine a graph with time on the X axis and displacement on the Y axis. Each point has a vertical error bar, implying that the true displacement at that time could be anywhere on that bar. (That's really inexact, but in this context I can't be more exact without inventing some assumptions.) Then we can imagine arbitrarily many possible "real" displacement curves that go through all those error bars. I desperately want to complicate this account, but I think that's complicated enough for the moment.
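If a concrete version helps, here's a toy sketch (Python, with numbers I invented purely for illustration) of that "many possible real curves" idea -- each candidate curve is just a set of values that stays inside all the bars:

    import numpy as np

    rng = np.random.default_rng(0)

    t = np.arange(6.0)                          # measurement times (invented)
    d = np.array([0., 2., 8., 18., 32., 50.])   # measured displacements (invented)
    half_bar = 1.0                              # half-length of each vertical error bar

    # 100 candidate "real" displacement curves, each staying inside every bar.
    # Drawing uniformly within each bar is the crudest possible choice, and it
    # ignores the smoothness a real fall would have -- exactly the kind of
    # assumption I said I'd have to invent.
    candidates = d + rng.uniform(-half_bar, half_bar, size=(100, t.size))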
Now, imagine a velocity graph based on the displacement graph. Our best estimate of the average velocity between two consecutive measurements is the change in displacement divided by the time between them. Since the displacements are measured with error, the velocities are estimated with somewhat more error, so the error bars are longer, and the possible "real" velocity curves are more varied. (Crudely, if one displacement is 50 plus-or-minus 1, and the next is 60 plus-or-minus 1, then the change in displacement could be anywhere from 8 to 12: i.e., 10 plus-or-minus 2.)
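To check that arithmetic with a worst-case interval calculation (toy numbers only):

    def diff_interval(a, ea, b, eb):
        """Worst-case bounds on (b - a) when a is known to +/- ea and b to +/- eb."""
        return (b - eb) - (a + ea), (b + eb) - (a - ea)

    print(diff_interval(50, 1, 60, 1))   # (8, 12), i.e. 10 plus-or-minus 2

Divide that change by the time between the two measurements and you have the average velocity, with the error bar scaled the same way.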
Then we can make an acceleration graph based on the velocity graph; same general idea, mutatis mutandis. So all the error bars get longer again.
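Same toy calculation one level up -- difference two velocity estimates and the worst-case half-width doubles again:

    # Two consecutive velocity estimates, each plus-or-minus 2 in toy units:
    v1, ev1 = 10.0, 2.0
    v2, ev2 = 14.0, 2.0
    lo = (v2 - ev2) - (v1 + ev1)   # 0
    hi = (v2 + ev2) - (v1 - ev1)   # 8
    print(lo, hi)                  # the change in velocity is 4 plus-or-minus 4

(Worst-case bounds like these are deliberately pessimistic; a statistical model would shrink them somewhat. More on that at the end.)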
Now, back to your words: "the data points created in the NIST Report were within the margin of error of their own measurements." It's not obvious what you mean by "data points." Most often it would refer to the "measurements" themselves -- but I don't think you intended to offer a tautology. Below I offer a possible interpretation of what you said.
At a high level, I think you may mean that given the possible measurement error in NIST's displacement estimates, their data are (maybe!!) consistent with a constant rate of acceleration. Thus, if NIST put error bars on their displacement estimates, and then derived a velocity graph (with bigger error bars), a straight line (denoting a constant change in velocity) would fit through that set of error bars. Equivalently, if NIST went on to derive an acceleration graph (with even bigger error bars), a horizontal line (denoting zero change in acceleration) would fit through those error bars.
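One crude way to make "a horizontal line fits through those error bars" operational, if we treat each bar as a hard bound: some constant passes through every bar exactly when the highest lower end doesn't exceed the lowest upper end. A sketch, again with invented numbers:

    import numpy as np

    def constant_fits(estimates, half_widths):
        """True if some horizontal line passes through every error bar."""
        lo = np.asarray(estimates) - np.asarray(half_widths)
        hi = np.asarray(estimates) + np.asarray(half_widths)
        return lo.max() <= hi.min()

    # Toy acceleration estimates (m/s^2) with generous bars:
    print(constant_fits([9.0, 10.3, 9.5, 10.1], [1.0, 1.0, 1.0, 1.0]))   # True

(Fitting a straight line through the velocity bars is the same idea one level down.)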
Maybe that last is the picture in your mind: the acceleration estimates are all pretty close to g; every error bar overlaps g; equivalently (if all the error bars are the same size), each point (acceleration estimate) is "within an error bar" of g. I don't usually think of derived estimates as "data points," but it's not unreasonable.
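For that specific picture, the check is even simpler (toy numbers again):

    g = 9.81                        # m/s^2
    est = [9.0, 10.3, 9.5, 10.1]    # invented acceleration estimates
    half = [1.0, 1.0, 1.0, 1.0]     # invented half-widths
    print(all(abs(a - g) <= e for a, e in zip(est, half)))   # True: every bar contains g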
(Or you may not have been referring to constant acceleration at all, but only to whether NIST's estimate of average acceleration is within measurement error of g. I'm not sure that question is inherently of much interest, so I won't walk through that scenario right now.)
Again, if this means that femr2's displacement measurements have smaller error bars (confidence intervals, perhaps), and therefore his acceleration estimates have smaller error bars, and some of those error bars don't overlap g at all but lie entirely above it, that works.
What has been missing is a way to assign those "error bars," or more formally, to model the error.
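For what it's worth, here is the standard first pass at such a model, under assumptions I want to flag loudly: each displacement measurement is truth plus independent noise with some standard deviation sigma, and acceleration is estimated by a central second difference. Independence is doing real work here -- video-tracking errors are often correlated frame to frame, which this ignores.

    import math

    def accel_sigma(sigma_d, dt):
        # Central second difference: a_i = (d[i+1] - 2*d[i] + d[i-1]) / dt**2
        # With independent displacement errors of std. dev. sigma_d,
        # Var(a_i) = (1 + 4 + 1) * sigma_d**2 / dt**4.
        return math.sqrt(6) * sigma_d / dt**2

    print(accel_sigma(sigma_d=0.05, dt=0.2))   # ~3.06 m/s^2 from 5 cm noise at 5 frames/sec

Numbers like that make the point: quite small displacement errors can blow up into large acceleration errors, which is why the error model matters so much.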