AA77 FDR Data, Explained

Please correct me if I am wrong here, but there is a distinction between data being sampled and being recorded.

Essentially, yes. Allow me to clarify the vocabulary for our dear friend, so he doesn't post another ten-page analysis of my use of terminology. In effect (please note I am using "in effect" to convey a sense of abstractness, not what is necessarily actually occurring), data is "sampled" twice. First, the DAU takes a sample from the aircraft data source, time-stamps it individually (if necessary), and stores it. This is the "measurement" sample. Later, the recorder/controller samples the DAU's buffer for the most recent value and stores it. This is the "recorded" sample.
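To make that two-stage picture concrete, here is a minimal sketch in Python. It is an abstraction of the concept only; the class names, the parameter name, and the timings are invented for illustration and do not represent any actual DAU or recorder implementation.

```python
class DAUBuffer:
    """Abstract 'parameter pool': holds the most recent measurement per parameter."""
    def __init__(self):
        self.latest = {}

    def measure(self, name, value, t_measured):
        # "Measurement" sampling: a value arrives from the aircraft source
        # and is stored along with its (internal) measurement time.
        self.latest[name] = (value, t_measured)


class Recorder:
    """Abstract recorder/controller: periodically samples the buffer."""
    def __init__(self, pool):
        self.pool = pool
        self.frames = []

    def record(self, t_recorded):
        # "Recorded" sampling: whatever is newest in the pool gets written.
        # Only t_recorded is implied by frame position; t_measured is lost
        # unless it is explicitly recorded as an additional parameter.
        frame = {name: value for name, (value, _t_meas) in self.pool.latest.items()}
        self.frames.append((t_recorded, frame))


# Toy usage: a value measured at t = 0.93 s is not recorded until t = 1.00 s.
pool = DAUBuffer()
rec = Recorder(pool)
pool.measure("computed_airspeed", 310.5, t_measured=0.93)
rec.record(t_recorded=1.00)
print(rec.frames)  # [(1.0, {'computed_airspeed': 310.5})] -- the measurement time is gone
```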

You will notice that the graphic he posted shows this "parameter pool" in the DAU (what I call the digital buffer). The footnote also alludes to the fact that the "recorded" implied timestamp is not necessarily the measured time stamp. He has safely ignored this footnote because he doesn't understand that it submarines his entire argument.

Anyway, the implied timestamp (the frame's timestamp plus the data's location in the frame) is the time of the "recording" sampling (that is to say, from the buffer to the recording). The use of the term "implied" matters: it means we aren't actually storing any timestamp information. Given the bit position of the data and its distance from the most recent frame timestamp, we can calculate the implied timestamp of that recording.
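As a hedged illustration of "implied" (the 64-words-per-second subframe and the slot number below are assumptions chosen for the example, not values taken from the AA77 frame map):

```python
def implied_timestamp(frame_start_time, word_slot, words_per_second=64):
    """Recording time implied purely by position -- nothing extra is stored.
    Assumes a hypothetical 64-words-per-second subframe for illustration."""
    return frame_start_time + word_slot / words_per_second


# A word sitting in slot 12 of a subframe that starts at t = 37.0 s:
print(implied_timestamp(37.0, 12))  # 37.1875 -- the "recorded" time, not the measured time
```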

On the other hand, the measured timestamp (from the aircraft to the buffer) is lost, unless it has been specifically recorded as additional data.


I believe this is what you have been saying, but I don't think UT understands the distinction, and thus he does not understand why wide variations of error could be introduced into the data.
He understands, I think. What he mistakenly believes (because he's misreading the standard) is that the measured time and the recorded time need to differ by no more than some relatively small delta-t. What he's actually looking at is the delta-t error between recorded samples.

As an example, if I'm recording something at 1 Hz and I record the first sample at 0.51 s, I'd need to record the second sample at 1.51 s +/- delta-t. None of this has any bearing on when those samples were actually measured.
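A throwaway numeric sketch of that point; the delta-t tolerance here is an arbitrary placeholder, not a figure from any standard:

```python
# Recording instants for a 1 Hz parameter, first sample recorded at 0.51 s.
# The requirement constrains the spacing BETWEEN recorded samples...
delta_t = 0.01  # placeholder tolerance, purely illustrative
recorded = [0.51 + n * 1.0 for n in range(4)]  # 0.51, 1.51, 2.51, 3.51
spacing_ok = all(abs((b - a) - 1.0) <= delta_t for a, b in zip(recorded, recorded[1:]))
print(spacing_ok)  # True

# ...but it says nothing about when those values were actually measured:
measured = [0.02, 0.97, 1.63, 3.49]  # could fall anywhere before each recording instant
```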

This particular requirement has to do with digital clock error and bit-synching.
 
I corrected myself? Try again. Your graphics perfectly illustrate the concepts I talked about in my paper. I even thanked you for them. Even down to the components.

The only thing I didn't describe was the term LRU, which is the input. I used the term "computer" because LRU is an unnecessary acronym. I don't need to explain to you, however, what an LRU is, and how the ADC, for example, is an LRU. You already obviously know.

UT's only reason for posting here is to cut you down, as he stated in his first post; that is my opinion after seeing his intro post in the introductions area or somewhere.

I was looking for actual info on what happens in a big crash: what happens to the final data, and how much data is lost.

You have contributed good work in your post. UT is just trying to cut your stuff down, and I am not sure he has a clue, understands the level of abstraction you are working at, or why he thinks he can interpret the information he is finding without references showing that it, and his conclusions, are correct. He could just offer some info on the real topic and explain to everyone what is going on with the FDR, as you have done.

Some info on the FDR, and then an explanation of why it may be missing seconds, how many seconds, and other ideas.

UT has made no contribution in this area yet.

The NTSB is the expert; next stop is the NTSB, or some data on what they have found.

But I will tell you this: I have investigated Air Force accidents, and after you learn how the data is presented and what its limitations are, you use it as fact to draw your conclusions.

The FDR presented facts on what Flight 77 was doing during the flight. The data matched what happened on 9/11. The FDR proves the terrorist pilot was not good; everyone agrees he was a poor pilot, and his final turn is proof. His turn sucked: a nice turn if you are just cruising, but very poor if you were to grade it.

It fills in the speed, and you can then use the speed to confirm the aircraft's destruction from the kinetic energy. The kinetic energy of the collision was close to a ton of TNT, confirmed by the speed readout on the FDR. Why was he speeding? He pushed up the throttles on his final run at the Pentagon; the FDR confirms it.
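For what it's worth, a back-of-the-envelope sketch of that kinetic-energy claim; the mass and impact speed below are round-number assumptions for illustration, not figures taken from the FDR readout:

```python
# Rough kinetic-energy estimate: KE = 1/2 * m * v^2
mass_kg = 90_000           # assumed aircraft mass at impact (illustrative)
speed_ms = 250             # assumed impact speed in m/s, about 560 mph (illustrative)
TNT_JOULES_PER_TON = 4.184e9

ke_joules = 0.5 * mass_kg * speed_ms ** 2
print(ke_joules / TNT_JOULES_PER_TON)  # ~0.67 -- on the order of a ton of TNT
```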

Is Flight 77 still in the air 400 feet above the ground? No, it was brutally crashed into the Pentagon by cowards who defiled their own religion by the very act. How do we know Flight 77 hit the Pentagon? Darn, the FDR was found inside the Pentagon.

That pretty much rules out the missile that was never seen, the little plane that was never flown, and the other theories only poor researchers can come up with.

So unless UT comes up with some good ideas on the FDR, I have to assume the errors and general ideas you have worked on are good examples of possible errors, and that they explain how we could be missing an unknown amount of information never written to the non-volatile area of the FDR.
 
Your paper is full of assumptions; you don't reference a single standard for your error assumptions. You borrow arguments from somewhere else without reference. You create your own error values. You state that the CSV file is not FDR data, when in fact it is derived from it, and through some poor software engineering your +/-2 second and +/-1.5 second assumptions are absolutely ridiculous.

You cannot even cite one case worldwide to demonstrate your theory of error, despite having "gathered all publicly available information". Can you?

Yes, anything is possible, especially when you're attempting to create a case from scratch without reference to any real-world examples. Of course, if it were up to you, I'm sure you would work on removing all these errors you have created from scratch in the design of your flight recording system. After all, it would have to be certified by several agencies before even being installed in the airframe.

Such as this statement here:
AntiS said:
a digital reading in the CSV file, like Computed Airspeed, which comes from the Air Data Computer, has an enormous error range, in the vicinity of 2 seconds, although 1.5 seconds is probably a safe estimate (0.5s for the buffering latency, and 1s for the uncertainty of when the sample was actually recorded).

Unless you can cite a case or a reference for this, how is it valid?

Throughout your entire five-page paper, it amazes me that you think the entire world history of flight recorders and every crash investigation of the past 40 years hasn't already engineered solutions to these problems you attempt to create. And that somehow you have discovered errors which they overlooked in every paper, every report, and every scientific study done.

By the way, in 1971, NASA and Boeing research into supersonic transport and barometer/altimeter performance revealed that sea-level barometric lag (for existing subsonic transports) is on the order of 0.1-0.2 seconds, while at Mach 3+ and 77,800 ft it's between 5 and 10 seconds.
(Reference: NASA_CR1770_1971)
 
Your paper is full of assumptions; you don't reference a single standard for your error assumptions.

That's because there is no standard for how the CSV file should be laid out. Are you dense? Show me what standard the CSV file adheres to. It is a derived work from STANDARD FDR data, but the CSV file itself conforms to NO standard. It's easy for someone like me to figure out what it means, because I understand the standard of the data it was based upon. For you, however....

You state that the CSV file is not FDR data, when in fact it is derived from it
That is a blatant lie. That it is derived from the FDR data is EXACTLY what I say.

and through some poor software engineering your +/-2 second and +/-1.5 second assumptions are absolutely ridiculous.
Engineering you can find absolutely no mistake in, mostly because you can't understand it. And they aren't assumptions. I calculate them using very simple digital signal processing methods. Do you know what the Nyquist criterion is?

Of course, if it were up to you, I'm sure you would work on removing all these errors you have created from scratch in the design of your flight recording system. After all, it would have to be certified by several agencies before even being installed in the airframe.
Are you still repeating this strawman nonsense? I've stated, over and over and over, that the system on the aircraft doesn't contain these errors. THE CSV FILE CONTAINS THESE ERRORS, NOT THE FDR. THIS IS NOT A HARD CONCEPT TO COMPREHEND. YOU HAVE MISUNDERSTOOD IT CONTINUOUSLY.

Throughout your entire five-page paper, it amazes me that you think the entire world history of flight recorders and every crash investigation of the past 40 years hasn't already engineered solutions to these problems you attempt to create.
Read the statement above. These problems _ARE_ solved, but the answers are _NOT_ in the CSV file. Your amateur analysis is based on the CSV file, which conforms to NO standard. If you had based your analysis on the FDR data, it wouldn't have these errors, but you didn't. THE FLAW IN YOUR ANALYSIS IS THAT IT IS BASED ON THE CSV FILE, NOT THE RAW FDR DATA. Real engineers aren't basing their calculations on CSV files that were generated to plot data. They use the real data. You aren't.

And that somehow you have discovered errors which they overlooked in every paper, every report, and every scientific study done.
Find me one real-world example of a scientific study based on a CSV file that follows a standard which doesn't exist.

By the way, in 1971, NASA and Boeing research into supersonic transport and barometer/altimeter performance revealed that sea-level barometric lag (for existing subsonic transports) is on the order of 0.1-0.2 seconds, while at Mach 3+ and 77,800 ft it's between 5 and 10 seconds.
Strawman. Has nothing to do with FDR time-slip errors in the CSV file.

Next.
 
AntiS said:
Please keep in mind the CSV file is not raw FDR data, and it was not meant to be used forensically.
What is your basis for this fallacy?
Every case I have seen begins with the tabular readout of the raw data. The tabular readout is where the investigation begins. Or do you propose they convert from binary to engineering units every time they look at a different parameter? And are you proposing that every time they make a tabular readout, they won't know whether a roll actually occurred 3 seconds earlier than a pitch value on the same tabular row?

Your entire myopic argument is based on you being the sole creator of this wonderful paradigm of miraculously appearing offsets, errors, and assumptions.

Despite all the world's government agencies, regulatory bodies, and investigative resources somehow overlooking the fact that their plots and tabular readouts are actually completely worthless, based on your scientific analysis.

There is no standard for a CSV tabular plot because it is a direct text-based representation of the engineering computations and calculations done under the standards in place over the past 40 years. It is the result of the calculations, engineering, certification, and requirements that made the FDR data file in the first place. When you plug that raw file into a ground station and examine the parameter history in a table on screen, are these errors you have created there then? And when you print that table out on the printer, where do your errors come from?

Are you so pompous as to propose that the world's history of investigative reports is based on faulty use of tabular readouts? And that plotting their FDR data in a multi-line chart is in error because, when they read out that FDR data to a timeline plot, they are introducing all these errors you somehow managed to create out of thin air?
 
What is your basis for this fallacy?

Me said:
The flawed interpretation is quickly disposed of by realizing a few key pieces of evidence. (EDIT: Feel free to consult the CSV file for this example.) First of all, if you look at the longitudinal acceleration data above, you will see that it is sampled 4 times, and then the other 4 rows are blank. Without getting into the technical details, sampling at 0, 1/8, 2/8, and 3/8, and then not sampling again until 8/8, is absolutely silly. In digital signal processing, sampling out of phase like this would result in horrible aliasing effects and poorer reconstructed signal quality. It takes the same amount of effort and the same amount of bandwidth to sample at equally spaced intervals, and the data is far superior. There is absolutely no way that the data was sampled "out of phase" the way the incorrect interpretation would imply.

The second major clue is that our serial multiplexed signal is a constant bit-rate signal. This means that the same amount of data flows during the same period of time, at all times. All data points in this file are squished towards the top of the frame. This would mean much more data has to travel from 0 to 1/8 than from 6/8 to 7/8. This violates the principle of constant bit rate.
UnderTow said:
Despite all the world's government agencies, regulatory bodies, and investigative resources somehow overlooking the fact that their plots and tabular readouts are actually completely worthless, based on your scientific analysis.

Me said:
Are you still repeating this strawman nonsense? I've stated, over and over and over, that the system on the aircraft doesn't contain these errors. THE CSV FILE CONTAINS THESE ERRORS, NOT THE FDR. THIS IS NOT A HARD CONCEPT TO COMPREHEND. YOU HAVE MISUNDERSTOOD IT CONTINUOUSLY.

Read the statement above. These problems _ARE_ solved, but the answers are _NOT_ in the CSV file. Your amateur analysis is based on the CSV file, which conforms to NO standard. If you had based your analysis on the FDR data, it wouldn't have these errors, but you didn't. THE FLAW IN YOUR ANALYSIS IS THAT IT IS BASED ON THE CSV FILE, NOT THE RAW FDR DATA. Real engineers aren't basing their calculations on CSV files that were generated to plot data. They use the real data. You aren't.
I'm always up for repeating myself.


There is no standard for a CSV tabular plot because it is a direct text-based representation of the engineering computations and calculations done under the standards in place over the past 40 years.
False. Evidence for this atrocity to human reason, please. Engineers base their calculations on the FDR data, not CSV files generated to plot graphs.

Every case I have seen begins with the tabular readout of the raw data.
Oh, and what forensic analysis have you done?
 
That is not it. The data is in the frame in the raw FDR file. It has already been placed there.
You can't post an image of this table, so your reference is incomplete.
Which 4 rows are you talking about?
There is ZERO DSP occurring when you convert the FDR frames into a tabular plot. Where is your error coming from?

You have said yourself that the FDR data does NOT contain this error. So how in the world can you possibly insert this error into your tabular readout?

Let's theorize about what you are referring to in that quote block of yours.
If you have 2 columns in a table and 8 rows of data, and Column A has 8 values while Column B has 4, how many blank spaces do you get below Column B?
 
Let's play a new game, shall we, UnderTow.

Since I am a complete idiot and have no idea what I am talking about, please feel free to answer the following questions I have about this footnote in your graphic:

(original: http://www.aa77fdr.com/misc/Fig1_A717FrameFormat.jpg)
The age of the NZ sample depends on how old it was when it arrived in the pool and how long it sat in the pool before time T-1. The source latency and transmit delay determine the age on arrival. The update rate determines the time spent in the pool before being used.
It appears that the Nz is 2/64 second older than Radalt because of its implied timetag, but it could actually be much newer.
1) Explain to me the difference between source latency and transmit delay, and how they affect the data "age".
2) How does "update rate" affect "time before use", and what effect does this have on the data?
3) How are these "delays" corrected?
4) Explain to me how you calculate the "age of a sample".
5) Explain to me how Nz "appears" older than Radalt.
6) Explain what an "implied timetag" is.
7) Explain how the Nz "could actually be much newer" even though it "appears" older.

If you don't have time to make up answers to all of it, just stick to #7.
 
You can't post an image of this table, so your reference is incomplete.
Which 4 rows are you talking about?

Look in the CSV data for an example of a parameter (like longitudinal acceleration) that has 4 samples per frame. That's exactly what the table looks like in my document. Pick any frame.

You have said yourself that the FDR data does NOT contain this error. So how in the world can you possibly insert this error into your tabular readout?
Because this is not a "tabular readout" of the FDR data. It was generated from the FDR data to be plotted. It is _not_ identical to the FDR data. And before you ask how I know, realize that the answer is in the two pieces of evidence already explained. It _cannot_ be the raw FDR data for two simple reasons: (1) the samples are out of phase (see longitudinal acceleration, or any other 4 Hz sampled parameter in the CSV file); (2) the "bitrate" of the CSV file is non-constant (different amounts of data per row).

Raw FDR data would have a constant bitrate (i.e., equal amounts of data per row in a tabular printout), and all samples would be in phase (equally spaced out).
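For anyone who wants to check those two properties for themselves, here is a rough sketch; the file name, column name, and 8-rows-per-frame layout are placeholders, since the real CSV layout may differ:

```python
import csv

def check_tabular_properties(path, column="LONG ACCEL", rows_per_frame=8):
    """Rough check of the two properties raw tabular FDR data would have:
    equal amounts of data per row, and in-phase (evenly distributed) samples.
    The path, column name, and rows_per_frame are assumptions for illustration."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    # (1) constant "bitrate": every row should carry the same number of filled cells
    filled_per_row = [sum(1 for v in r.values() if v and v.strip()) for r in rows]
    constant_bitrate = len(set(filled_per_row)) == 1

    # (2) in-phase sampling: a 4 Hz parameter should not be clustered in the
    # first half of each 8-row frame
    positions = [i % rows_per_frame for i, r in enumerate(rows) if (r.get(column) or "").strip()]
    clustered = bool(positions) and all(p < rows_per_frame // 2 for p in positions)

    return constant_bitrate, not clustered

# Usage with a hypothetical file name:
# print(check_tabular_properties("aa77_fdr.csv"))
```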
 
Who will grade this test? Your past employer?
Would you like an example picture to represent the delays, and the source code for the 429 card that calculates the sample before assembling this 717 frame?
Should I get my ARINC references out since you have failed to account for them in your great work of smarts?

If this frame is already compiled, and the time parameter is synchronized in the FDAU prior to assembly, and the raw FDR data does NOT contain the errors you speak of in your report, then where do your errors come from?

Do you claim that every tabular readout and plot graphic is flawed and inaccurate?
If so, your claim has nothing to do with any conspiracy, and you should petition the FAA to rewrite their standards.
 
The transcribed data were processed by the National Transportation Safety Board's Recovery Analysis and Presentation Systems (RAPS), which converted the raw data to engineering units and presented it in tabular and graphic form.
 
There is no Phase or bitrate in a CSV file. It is TEXT in Table form.

The Plot is not made FROM the CSV. The RAPS system (and any FDR analysis software) uses the raw FDR file to make BOTH the Table and the Plot. They are BOTH based on direct calculation from the FDR data.
 
Who will grade this test?

I have explained to you how this system works, in reality. You have claimed I'm an idiot. Therefore, you obviously believe you know more than me. It is not a "test". I want to see how your interpretation of those comments differs from mine.

I can answer all 7 of those questions given my knowledge (that you claim is wrong)... let me see your explanation of the 7 questions raised by the footnote.

Once we've established your interpretation of these issues, we can move on to the more abstract issues you raise above.
 
There is no Phase or bitrate in a CSV file. It is TEXT in Table form.

That's what I've been saying! Your and JDX's dopey analysis is based on a misinterpretation of the CSV file... that dopey interpretation DOES assume bitrate and phase, inadvertently, as a consequence of the way you are reading the file. The _full_ FDR data does contain that information (as in, it can be extracted entirely from the raw FDR data, given the descriptor)... and yet it's missing from the CSV file... where did it go?

This is exactly my point. Treating the CSV file as true "tabular" FDR data is wrong. Tabular FDR data, like any serial framed data, is obviously organized in frames, with each row and column corresponding to a fixed amount of time. Every cell would define a unique period of time, which is when the sample was RECORDED, not measured. Each cell would represent the same fixed amount of time, and they would all be full. That is what _raw_ tabular FDR data looks like. That is _not_ what the CSV file looks like.
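A toy sketch of what that kind of layout implies; the 4 subframes per frame and 64 words per subframe below are stand-in numbers for illustration, not a claim about the AA77 frame map:

```python
# Each cell in a true tabular readout corresponds to one fixed word slot, so its
# recorded time follows directly from frame, row (subframe), and column (word).
SUBFRAMES_PER_FRAME = 4   # illustrative stand-in
WORDS_PER_SUBFRAME = 64   # illustrative stand-in; each subframe spans one second

def cell_recorded_time(frame_start, subframe, word):
    """Time slot covered by one cell: when the value was RECORDED, not measured."""
    return frame_start + subframe * 1.0 + word / WORDS_PER_SUBFRAME

# Every cell in the frame is a unique, fixed, fully occupied slot.
frame_start = 120.0
slots = [cell_recorded_time(frame_start, sf, w)
         for sf in range(SUBFRAMES_PER_FRAME)
         for w in range(WORDS_PER_SUBFRAME)]
print(len(slots), slots[0], slots[-1])  # 256 120.0 123.984375
```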

Probability of this post being understood is frighteningly low, I'm afraid.

(Still waiting on your interpretation of the 7 issues raised in regard to the footnote.)
 
I see that you've mentioned ARINC numerous times, UT, as if to say that everything A-S has written can be dismissed if only he knew what the ARINC standards were.

This tells me one of two things. Either you still haven't read A-S's original post or the follow-ons (I mean actually read them, not scanned them to see what you can cherry-pick), or you've no idea what ARINC is and how it pertains to the operation of the FDR/DFDAU.

ARINC is merely a transfer system, a means of communicating. All avionics on Boeings and Airbuses use a Digital Information Transfer System (DITS), namely ARINC 429/629 (and, in the case of the Loral Fairchild 2100, ARINC 717, 573, or 747), to communicate. The front end of ARINC-capable LRUs eventually strips away all of the ARINC formatting and converts the data (via signal conditioning) into something the on-board circuit cards can use. This can be analog error signals for the Flight Control Computers to send to the autopilot servos during an ILS approach, or it can be TTL for data storage and processing.
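For readers who have never seen one, a minimal sketch of stripping the framing from a single ARINC 429 word. The field layout below is the commonly published 429 word format, but treat the decoding as an approximation: real equipment also handles label bit-reversal, SSM conventions, and per-label data encodings (BNR/BCD, scaling, sign), none of which is attempted here.

```python
def decode_arinc429_word(word: int):
    """Split a 32-bit ARINC 429 word into its conventional fields.
    Layout: bits 1-8 label, 9-10 SDI, 11-29 data, 30-31 SSM, bit 32 odd parity.
    Interpretation of the data field is label-specific and not handled here."""
    return {
        "label":  word & 0xFF,             # bits 1-8
        "sdi":    (word >> 8) & 0x3,       # bits 9-10
        "data":   (word >> 10) & 0x7FFFF,  # bits 11-29
        "ssm":    (word >> 29) & 0x3,      # bits 30-31
        "parity": (word >> 31) & 0x1,      # bit 32
    }

# The front end of an ARINC-capable LRU (the DFDAU included) discards this framing
# and keeps only the value decoded from the data field.
print(decode_arinc429_word(0b1_01_0000000000000001010_01_01010101))
```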

You see, while the ARINC standards may indeed be very tight on accuracy and the syncing of data between LRUs, they don't really pertain to how data gets acquired, buffered, compressed, multiplexed, stored, demuxed, decompressed, written, or erased in the DFDR. All of that happens behind the signal conditioners. Not to mention the fact that the DFDAU has similar things happening inside it, and it's really the more pertinent LRU to A-S's essay on data acquisition.

I don't pretend to really grasp how these systems are designed or how they operate in detail; very few people do. But I can at least understand the point of A-S's original post. It seems you, UnderTow, still don't.
 
My explanation isn't complete. His notion that a complete explanation was my intention is clearly unfounded. My intention was to provide exactly enough background to understand their flawed assumptions. If I wanted to add unnecessary complication, it would be easy.

First we would need to start with the electromagnetics: Ohm's, Gauss's, and Faraday's laws, at a minimum.

I'd go into the interconnections: when and why you should use single-ended serial communication versus differential. We'd need to talk about coaxial, triaxial, and twisted-pair wiring, too.

Then we can talk about wire gauges (26, 22, 20...), shielding (especially for those twisted pairs!), and termination resistors (wouldn't want any reflections, now, would we).

Now we also need to throw in a section about bit syncing and the conditions under which separate clock signals are appropriate.

Next, I could recite insane amounts of signal processing theory, error correction theory, and information theory just to explain the technical challenges associated with transferring this information, and to provide sufficient motivation to develop a standard.

Once we understand the challenges facing such a system, I could get into the ARINC standards. I could compare and contrast them with MIL-STD-1553 or H009, both used on Boeing military aircraft, to help make the concepts clear. We'd show how each standard meets the technical challenges and solves the most difficult issues.

Once we've established what the standards require, and why they require the things they do, we can talk about the basic fundamentals of the system design and how these systems meet the standards. We can talk about the necessary support required on the ground in terms of software and hardware. We can talk about the necessary maintenance and testing procedures to ensure the standard is met.

And then, once all that is finished, you guys will have had to read 400 pages, and UnderTow will be complaining that I didn't include an IRIG Chapter 10 explanation.
 
My explanation isn't complete. His notion that a complete explanation was my intention is clearly unfounded. My intention was to provide exactly enough background to understand their flawed assumptions. If I wanted to add unnecessary complication, it would be easy.

First we would need to start with the electromagnetics: Ohm's, Gauss's, and Faraday's laws, at a minimum.

I'd go into the interconnections: when and why you should use single-ended serial communication versus differential. We'd need to talk about coaxial, triaxial, and twisted-pair wiring, too.

Then we can talk about wire gauges (26, 22, 20...), shielding (especially for those twisted pairs!), and termination resistors (wouldn't want any reflections, now, would we).

Now we also need to throw in a section about bit syncing and the conditions under which separate clock signals are appropriate.

Next, I could recite insane amounts of signal processing theory, error correction theory, and information theory just to explain the technical challenges associated with transferring this information, and to provide sufficient motivation to develop a standard.

Once we understand the challenges facing such a system, I could get into the ARINC standards. I could compare and contrast them with MIL-STD-1553 or H009, both used on Boeing military aircraft, to help make the concepts clear.

Once we've established what the standards require, and why they require the things they do, we can talk about the basic fundamentals of the system design and how these systems meet the standards. We can talk about the necessary support required on the ground in terms of software and hardware. We can talk about the necessary maintenance and testing procedures to ensure the standard is met.

And then, once all that is finished, UnderTow will be complaining that I didn't include an IRIG Chapter 10 explanation.

You forgot DC theory, solid-state and transistor theory, digital theory and numbering systems (binary, octal, hexadecimal), logic gates, etc. :D
 
You forgot DC theory, solid-state and transistor theory, digital theory and numbering systems (binary, octal, hexadecimal), logic gates, etc. :D

Digital encoding (floating point, fixed point, integer, big/little endian), too. It mixes in with the information theory, along with Shannon entropy. Those two topics would be big.

Oh, and I'd need a section for a quick refresher on calculus 1, 2, and 3 so we could prove the characteristics of all our wire types using Maxwell's equations. I'd need to include data sheets for all the wires and signal conditioners, so we could properly prove that what goes in one side comes out the other. This goes back to the signal processing knowledge, you know... the Nyquist criterion, Fourier transforms, frequency responses, and the like.
 
Digital encoding (floating point, fixed point, integer, big/little endian), too. It mixes in with the information theory, along with Shannon entropy. Those two topics would be big.

Oh, and I'd need a section for a quick refresher on calculus 1, 2, and 3 so we could prove the characteristics of all our wire types using Maxwell's equations. I'd need to include data sheets for all the wires and signal conditioners, so we could properly prove that what goes in one side comes out the other. This goes back to the signal processing knowledge, you know... the Nyquist criterion, Fourier transforms, frequency responses, and the like.

Don't forget VLSI concepts:
- Wiring and interconnect: Elmore delay, capacitance (fringing, interwire, crosstalk), low-k dielectrics, reduced swing, resistance, electromigration, inductance
- Gates: NMOS, PMOS, transient response, propagation delay, sizing, fan-in/fan-out, subthreshold leakage
- Combinational logic: logical effort, ratioed logic, pseudo-NMOS, DCVSL, pass transistors
- Dynamic logic: charge sharing, backgate coupling, domino logic, differential domino logic
- Sequential logic: latches, (positive/negative) edge-triggered registers, C2MOS, TSPC, Schmitt trigger, VCO

Etc, etc...just for starters ;)
 
