Anti-sophist · Graduate Poster · Joined Sep 15, 2006 · Messages: 1,542
Please correct me if I am wrong here, but there is a distinction between data being sampled and being recorded.
Essentially. Allow me to clarify the vocabulary for our dear friend, so he doesn't post another 10-page analysis of my use of terminology. In effect (note that I am using "in effect" to convey a sense of abstraction, not what is necessarily actually occurring), data is "sampled" twice. First, the DAU takes a sample from the aircraft data source, time-stamps it individually (if necessary), and stores it. This is the "measurement" sample. Later, the recorder/controller samples the DAU's buffer for the most recent value and stores it. This is the "recorded" sampling.
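Here's a minimal sketch of those two stages, purely for illustration; the class, method names, and timing values are mine, not anything out of the standard:

```python
import time

class DAU:
    """Toy model of the two-stage sampling described above."""
    def __init__(self):
        self.buffer = {}  # the "parameter pool": name -> (value, measured_time)

    def measure(self, name, value):
        # Stage 1: the "measurement" sample. The value is time-stamped
        # as it arrives from the aircraft source and parked in the buffer.
        self.buffer[name] = (value, time.monotonic())

    def read(self, name):
        # Stage 2: the "recorded" sampling. The recorder/controller polls
        # the buffer and takes whatever the most recent value is.
        return self.buffer[name]

dau = DAU()
dau.measure("airspeed", 412.0)            # measured now
time.sleep(0.3)                           # recorder polls on its own schedule
value, t_measured = dau.read("airspeed")
t_recorded = time.monotonic()
print(f"staleness: {t_recorded - t_measured:.3f}s")  # ~0.3s of lag
```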
You will notice that the graphic he posted shows this "parameter pool" in the DAU (what I call the digital buffer). The footnote also alludes to the fact that the "recorded" implied timestamp is not necessarily the measured timestamp. He has safely ignored this footnote because he doesn't understand that it submarines his entire argument.
Anyway, the implied timestamp (the frame's timestamp + the data's location within the frame) is the time of the "recording" sampling (that is, from the buffer to the recording). The use of the term "implied" is of consequence: it means we aren't actually storing any timing information. Given the bit position of the data and the distance from the most recent timestamp, we can calculate the implied timestamp of that recording.
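For concreteness, the arithmetic looks like this; the word numbering and rate are invented for the example, not taken from any actual frame layout:

```python
def implied_timestamp(frame_time, word_index, words_per_second):
    """Implied time of the *recording* sampling: the frame's timestamp plus
    the data's offset within the frame. Nothing is stored; this is pure
    arithmetic on word/bit position. All parameter names are illustrative."""
    return frame_time + word_index / words_per_second

# e.g. a 64-word-per-second frame, with the parameter sitting at word 17
t = implied_timestamp(frame_time=120.0, word_index=17, words_per_second=64)
print(t)  # 120.265625, the implied *recording* time, not the measured time
```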
On the other hand, the measured timestamp (from the aircraft to the buffer) is lost, unless it has been specifically recorded as additional data.
He understands, I think. What he mistakenly believes (because he's misreading the standard) is that the measured time and the recorded time must differ by no more than some relatively small delta-t. What he's actually looking at is the delta-t error between recorded samples. I believe this is what you have been saying, but I don't think UT understands the distinction, and thus he does not understand why wide variations of error could be introduced into the data.
As an example, if I'm sampling something at 1 Hz and I record the first sample at 0.51 s, I'd need to record the second sample at 1.51 s +/- delta-t. None of this has any bearing, at all, on when those samples were actually measured.
This particular requirement has to do with digital clock error and bit synchronization.
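A toy check makes the distinction visible; every number below, including the delta-t tolerance, is invented for illustration:

```python
SAMPLE_PERIOD = 1.0   # 1 Hz recording
DELTA_T = 0.02        # hypothetical allowed timing error, not the standard's value

recorded_times = [0.51, 1.51, 2.50, 3.52]   # when samples landed in the recording
measured_times = [0.13, 1.42, 1.95, 3.40]   # when the DAU actually sampled them

# The requirement constrains the spacing of *recorded* samples only...
for prev, curr in zip(recorded_times, recorded_times[1:]):
    err = abs((curr - prev) - SAMPLE_PERIOD)
    print(f"recorded spacing error {err:.2f}s:",
          "OK" if err <= DELTA_T else "VIOLATION")

# ...while the measured-to-recorded lag wanders from 0.09s to 0.55s here,
# completely unconstrained by the spacing check above.
for m, r in zip(measured_times, recorded_times):
    print(f"staleness: {r - m:.2f}s")
```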