JayUtah
Unless someone besides Patrick asks for a detailed analysis of his most recent walls of text, I will not write a point-by-point answer. Patrick does not read them, does not respond to them, so I will not write them unless there is interest from people who do read and appreciate them.
For those engineering students who elect to study forensic engineering as a specialty, occurrences such as Three Mile Island, Apollo 13, and the Challenger accident come to dominate their study of human factors in operator response. That is primarily a psychology study. But these students eat, sleep, and breathe the transcripts of operator activity in connection with these occurrences.
The first psychological factor that applies to operator response to an incident is what we call de minimis thinking. Simply put, de minimis thinking is the urge to explain things by the least tragic or ominous causes. We have faith in our equipment, and we optimistically hope that when the red lights start flashing, it's nothing.
This is especially true in high-end engineering that relies heavily on engineered safety devices (ESDs) that automate much of the control and safety equipment. They are often built with hair triggers, and therefore sometimes go off when they shouldn't. Think of the smoke alarm going off in your house after a particularly hot shower, or simply because of smoke from cooking. You learn not to pay attention to it, and your first thought when the smoke alarm goes off becomes, "Okay, who's cooking bacon?" You don't believe your house is really on fire.
Mission Control doesn't get to see the spaceship. All they see are numbers. It's their job to put those numbers together to paint a picture of what's happening in the machine. This only happens after years of training and experience on that system. Laymen cannot do this for such a system. There's no light on the console that says, "Lightning struck your rocket," or "Your oxygen tank exploded." You have to figure that out by deduction.
The first numbers they see on Apollo 13 are indications of electrical failure. Thus the de minimus conclusion they draw is that some electrical problem has occurred. All the subsequent data they receive is interpreted through perceptual filters set up by that conclusion. They're trying to interpret everything as either a potential cause for, or a consequence of, an electrical failure.
In a large system, because of ESD reliability and because of the sheer number of sensors and failsafes it contains, there is an art to determining which sensor readings are trustworthy and which are not. Particularly insidious is the fact that in an undervolt condition, the sensors themselves -- deprived of electricity -- begin to give false readings and cannot be trusted.
This is what the EECOM and most of Mission Control initially believed. They drew the conclusion that some electrical fault had occurred (most probably a fuel cell "trip") and that the multiple failures and warnings they were receiving were the consequences of voltage-deprived sensors and were not indicating real conditions. When you are presented with conflicting input (e.g., tank pressure and quantity readings), and you know some of the input is unreliable, you create a mental model to process that input. And the model in this case is a de minimis routine trip of the fuel cells. You begin to accept unconsciously the sensor readings that confirm that interpretation, and you unconsciously discount or reject those that don't.
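If it helps to see that filtering mechanism in the abstract, here is a toy sketch in Python. The channel names and the voltage threshold are mine, invented purely for illustration; this is not any actual EECOM procedure, just the shape of the reasoning.

```python
# Toy sketch of model-driven screening (not any actual EECOM procedure).
# Channel names and the voltage threshold are invented for illustration.

UNDERVOLT_LIMIT = 26.0  # volts -- illustrative value only

def screen_telemetry(readings, bus_voltage):
    """Split readings into 'trusted' and 'suspect' under the de minimis model
    that a low bus voltage means the sensors, not the spacecraft, are lying."""
    trusted, suspect = {}, {}
    for channel, value in readings.items():
        if bus_voltage < UNDERVOLT_LIMIT:
            suspect[channel] = value   # model says: discount it
        else:
            trusted[channel] = value
    return trusted, suspect

readings = {"O2_TK2_PRESS": 0.0, "O2_TK2_QTY": 100.0, "FC3_CURRENT": 0.0}
trusted, suspect = screen_telemetry(readings, bus_voltage=25.5)
print(suspect)  # everything lands in 'suspect' -- including the real failures
```

The point of the toy is only that the screen is applied bus-wide: once the model marks the bus as undervolted, the readings that could falsify the model get set aside along with the genuinely bogus ones.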
Hence the crew's report of the "bang and shimmy" was heard, but then set aside. To the ground controllers, this was just one more report -- no more salient than any of the other real-or-bogus data points they were dealing with. To the crew, a physical response in their spacecraft is a highly salient event. Therefore the reports of the crew and the interpretations of the flight controllers proceed from different mental models. Mission controllers had become accustomed, due to prior mission incidents, to considering abrupt spacecraft movement (pyro detonations, impacts, etc.) as a precipitating event for sensor malfunctions and inadvertent valve movements. Hence they're not thinking, "What are these sensors indicating that would have caused a bang?" but rather, "What about a bang would have produced this pattern of sensor readings?"
Not only do you get a plethora of readings that aren't necessarily valid or relevant, you may also miss something that is. O2 tank 2 pressure spiked, then dropped to zero almost immediately. No one noticed; they were looking elsewhere during those two seconds. If that sensor reading had been seen earlier, it might have led to the conclusion that the tank had failed due to overpressure. Instead, all the EECOM team noticed was the subsequent static condition in which pressure and quantity readings disagreed -- a condition that might arise in a completely empty tank.
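For the flavor of it, here is a hypothetical sketch of the kind of scan that catches a transient after the fact, the way the strip-chart review later did, when real-time eyes miss it. The trace values and thresholds are invented for illustration; this is not Apollo telemetry.

```python
# Hypothetical after-the-fact scan for a spike-then-dropout signature.
# Trace values and thresholds are invented; this is not Apollo telemetry.

def find_spike_then_dropout(samples, spike=1000.0, floor=20.0, window=5):
    """Return sample indices where the value exceeds 'spike' and then falls
    below 'floor' within the next 'window' samples."""
    hits = []
    for i, value in enumerate(samples):
        if value > spike:
            tail = samples[i + 1:i + 1 + window]
            if any(v < floor for v in tail):
                hits.append(i)
    return hits

trace = [905, 906, 908, 1004, 19, 0, 0, 0]  # invented pressure trace
print(find_spike_then_dropout(trace))       # -> [3]
```

A two-second event is trivial to find when you can sweep the whole record at leisure; it is very easy to miss when you are watching one column of numbers among dozens in real time.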
After a few minutes of attempting to diagnose an electrical failure, Liebergot is stumped. What he's seeing is not a component failure, and because he missed a few key readings he doesn't have the proper information to attempt to diagnose a system failure. He's trying to determine which sensor warnings are real and which are likely bad readings from power-deprived sensors. Kranz asks his famous question: "What do we got on the spacecraft that's good?" Ed Harris delivers the line in the theatrical version with an overtone of disdain. But in reality, Kranz is telling Liebergot to reverse his thinking and to try to diagnose the problem by ruling out what cannot have gone wrong, given what he knows to be in working order.
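In the abstract, the inversion Kranz is asking for is diagnosis by elimination: start from what is verifiably working and strike out the hypotheses that are inconsistent with it, rather than chasing every alarm. A minimal sketch, with hypothesis and component names invented for illustration:

```python
# Sketch of diagnosis by elimination from known-good components.
# Hypothesis and component names are invented for illustration.

failure_hypotheses = {
    "instrumentation glitch":  {"bus B sensors"},
    "fuel cells disconnected": {"fuel cell 1", "fuel cell 3"},
    "cryo O2 supply lost":     {"O2 tank 1", "O2 tank 2"},
}

def eliminate(hypotheses, known_good):
    """Drop any hypothesis that requires a component verified to be working."""
    return {name: broken for name, broken in hypotheses.items()
            if not (broken & known_good)}

# Suppose a check confirms the sensors on the still-powered bus are healthy:
remaining = eliminate(failure_hypotheses, known_good={"bus B sensors"})
print(list(remaining))  # -> ['fuel cells disconnected', 'cryo O2 supply lost']
```

Working from what is good shrinks the problem; working from a flood of alarms, some of them false, only grows it.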
They also continued to believe that the lack of voltage on the buses was not because the fuel cells were starved, but because they had simply been disconnected. This is a much more common occurrence. They still all believed that the cryo tanks were full and available to supply reactants to the fuel cells. In complex systems with limited and indirect instrumentation, correct theories about cause and effect are extremely difficult to produce.
The de minimis thinking begins to collapse 17 minutes into the incident when the venting report is received. Only then are these men persuaded to begin re-evaluating all the previous data. The problem is that they don't remember all of it. Their mental map is composed only of the information they didn't previously reject. Even in the initial stages of diagnosing the venting, they're thinking of it as a consequence of the electrical failure, not a cause for it.
They still don't know that it's oxygen that they're venting. Kranz' description later on in Failure Is Not an Option is naturally a condensation of all that happened. To note that the venting changed their thinking is not the same as claiming they knew at that same instant it was the oxygen that was venting. Nevertheless, anything that can vent from the service module is something they'll need, so it doesn't need to be known as an oxygen vent in order to be recognized as a survival scenario. In short, you're going to put the LM lifeboat on the table before you know it's oxygen that's venting.
And this is not the only problem they're working. They're dealing with bringing the computer back online. They're dealing with propellant quantities in order to diagnose the RCS failure. They're dealing with communication configurations as the ship bucks about. Time pressure does not make for careful thinking. "I had heard about the fog of battle," writes Kranz, "but I had never experienced it until now. The early minutes were confusing: all reports and data were suspect." (Kranz, Failure Is Not an Option, p. 312)
Even at the 17-minute mark, they still hadn't figured out the causality. Here is what Kranz actually wrote: "A shock rippled through the room as we recognized that an explosion somewhere in the service module had taken out our cryogenics and fuel cells." (Ibid. p. 314) In the next paragraph he describes that "an oxygen tank had exploded," but he doesn't say that he was sure of that cause at the time. This is why it doesn't appear in the log book. He didn't write down "O2 tank 2 exploded" in the log because that conclusion wouldn't be reached until well into the next shift, as Liebergot's EECOM team left their consoles to make a more systematic analysis of the telemetry -- finally noting things such as the tank pressure spike on the strip charts.
This is the difference between primary and secondary sources. Secondary sources such as memoirs have the luxury of constructing a more omniscient narrative. They're full of turns of phrase such as, "little did we know at the time that..." which foreshadow elements to come. Patrick is trying to backfill Kranz' present knowledge of the story from beginning to end into what was unfolding at the time.
Yes, an hour into the crisis the EECOM teams both on and off duty are trying to see if they can restart the fuel cells. They know they're losing oxygen but they still don't know why. They know from the venting report that it's an actual loss, not a broken sensor. And because they don't know why they're venting, they still think they can solve the problem. De minimis thinking prevails at all levels here.
Patrick is trying to tell us that there's no ambiguity in Kranz' account, and that Kranz is saying in his book, without the possibility of contradiction, that he knew at the time it was an oxygen tank that had exploded. But no, that's not the only way the paragraph can be read, especially given the "somewhere in the service module" qualification prior to it, and the statement on the next page that Kranz wanted some more time to review all the data, for fear he'd missed something -- those parts Patrick leaves out.
Nor do the primary sources agree with Patrick's interpretation of Kranz' retrospective. Patrick is trying to parlay that disagreement into evidence that Kranz is a "perp." From one sentence shorn of its context, Patrick tries to claim Kranz "slipped up" and knew ahead of time that the oxygen tank had exploded, instead of taking the retrospective narrative for what it is.