Nick Terry
Illuminator
I've seen a few of the veteran posters on here refer to probability theory from time to time, as a means of illustrating how implausible conspiracy theories are. At the same time, I also recall seeing someone use probability theory to demonstrate the reliability of witnesses.
Apropos of the first, pomeroo posted the following a few months back:
Suppose that instead of thousands of people knowing the secret, there are only one hundred. Suppose further that these hundred people are extraordinarily good at keeping their mouths shut. Assign them an average probability of .9 (a typical human's might be .5) of never spilling the beans. This conspiracy--much, much smaller than yours--has a 99.97% probability of letting the cat out of the bag (use a calculator to raise .9 to the hundredth power).
The gigantic network of perps, accomplices, and coerced innocents posited by conspiracy liars would let the secret slip far more often, 99.99999...% of the time. The math is childishly easy. You can let everyone involved clam up with a probability of .99--they're all James Bonds and G. Gordon Liddys--and your conspiracy will still unravel with near-certainty.
The above example makes perfect sense even to a mathematical layman such as myself.
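pomeroo's arithmetic is easy to check. Here is a minimal sketch, assuming (as the quote does) that each conspirator independently keeps silent with the stated probability; the figures are purely illustrative:

```python
# Probability that a conspiracy of n people leaks at least once,
# assuming each member independently keeps silent with probability p_silent.
def leak_probability(n: int, p_silent: float) -> float:
    return 1 - p_silent ** n

# 100 unusually tight-lipped conspirators (p = 0.9):
print(leak_probability(100, 0.9))   # ~0.99997, i.e. a near-certain leak
# At p = 0.99 the leak chance is only ~63% -- high, but short of near-certainty.
print(leak_probability(100, 0.99))  # ~0.634
```

Note that raising the per-person reliability to 0.99 weakens the argument considerably, so the chosen figure matters; the quoted claim of near-certainty at 0.99 overstates the case.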
Regarding the second possible use of probability theory, it strikes me that this is a feature already built into Western legal theory and everyday thinking. It goes right back to the Judaic and Roman law principle that two witnesses were sufficient evidence to regard something as proven.
One can illustrate the two-witness principle fairly simply. If one assigns a 50/50 chance that any given witness is telling the truth, then when two independently say the same thing, the probability that at least one of them is telling the truth rises to 75%.
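The arithmetic generalises to any number of witnesses. A minimal sketch, assuming the witnesses are independent and that corroboration fails only if every one of them is lying (both strong assumptions):

```python
def at_least_one_truthful(n: int, p_truth: float) -> float:
    """Probability that at least one of n independent witnesses,
    each truthful with probability p_truth, is telling the truth."""
    return 1 - (1 - p_truth) ** n

print(at_least_one_truthful(2, 0.5))  # 0.75 -- the two-witness threshold
print(at_least_one_truthful(3, 0.5))  # 0.875 -- each extra witness adds less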
This is an important threshold, since common law and current US law have several standards of proof. The first is the balance of probabilities, i.e. 51%; the second is clear and convincing evidence, which one legal theorist has pegged at 75%; and the third is beyond reasonable doubt.
Lawyers are extremely reluctant to assign a statistical probability to the last standard, since to assign a figure of, say, 90% would imply that 10% of the time it would still be reasonable to doubt. Philosophers, however, argue that there can be no absolute certainty, so 'beyond reasonable doubt' must clearly mean something less than 100%. It is reversible error for a judge to tell a jury that 90% certainty amounts to 'beyond reasonable doubt'. In fact, the US court system is seriously divided over whether it is permissible to explain what 'beyond reasonable doubt' means at all, and different states have different precedents on the point. Until recently the phrase 'moral certainty' was often used, but that is now apparently considered too confusing for juries, so it cannot always be used.
But let's leave the philosophical and legal discussions to one side for the moment. I mention them because they tie in with 'everyday', common-sense usages, and because they are routinely abused by CTists.
So, on to the questions I'd like to throw out for discussion. The first is how to more precisely model the use of probability theory regarding witnesses. The second is how to guard against the abuse of probability theory by CTists, and to guard against potential counterattacks.
1. Regarding the first use, let me draw on an example from my field, history, and my debunking interest, combating Holocaust denial. Holocaust deniers are notoriously sceptical that any witness may be telling the truth. In fact they flatly state as a matter of course that they are all liars.
Therefore, let us entertain the possibility that this might be so, and assign a 90% chance (as in pomeroo's example above) that any one witness might be a liar.
Am I correct in deducing from pomeroo's demonstration that, if there were 100 witnesses each with a 90% chance of being a liar, there is in fact a 99.97% chance that at least one of them is telling the truth?
On the face of it this seems counterintuitive, but when it has been used on this forum in a similar fashion before, it made sense. The common-sense interpretation would surely be: there is a 90% chance that each of them is lying, therefore they are all lying.
However, it surely also follows that if 100 liars are lying, and we assign each of them a 90% chance of keeping silent about their conspiracy, as in pomeroo's example, then the conspiracy will almost certainly be let out of the bag.
Does this follow? It strikes me that if it does, then probability theory is a useful double-edged tool. It would surely demonstrate both that a large group of witnesses are telling the truth, and also demonstrate that for all of them to be lying is wholly improbable.
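To make the double edge concrete (a sketch that keeps the 90% figure and the independence assumption): the probability that all 100 witnesses are lying is the same quantity that dooms the conspiracy in pomeroo's example.

```python
p_liar = 0.9   # assumed chance that any one witness is lying
n = 100        # number of witnesses

p_all_lying = p_liar ** n
print(p_all_lying)      # ~2.66e-05: 'they are all lying' is wildly improbable
print(1 - p_all_lying)  # ~0.99997: at least one witness is telling the truth
```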
2. Regarding potential abuses by CTists, how would one combat them? One could take the example of cherry-picked quotes from eyewitnesses (actually, 'earwitnesses') that could be construed as evidence for controlled demolition on 9/11.
I may be worrying unnecessarily here, since it is surely unlikely that any CTist would concede that there is any chance he might be wrong, let alone assign a probability to his beliefs.
But let us assume there is a particularly dishonest and better-educated specimen of CTist who wants to abuse probability theory in order to make a quasi-rhetorical point.
What kinds of abuse could the CTist try on, and how would one counter them?
3. How can we use probability theory in relation to cherry-picking? If only a small number of witnesses claim a certain anomaly, then they surely form a tiny percentage of a much larger group of witnesses. For example, only a few first responders claim to have heard noises 'like an explosion'.
How would one model this aspect?
Again, there are some examples in my own debunking field. Some Holocaust deniers claim on the basis of a few allegations that all key witnesses were coerced into giving false testimony. They have to, because otherwise their beliefs are simply refuted.
Now, this claim is surely vulnerable to the standard probability-theory conspiracy-busting argument put forward by pomeroo. The chance that 100 torture victims all kept silent is close to nil.
However, the claim is surely spurious to begin with: if one had, say, six allegations out of 206 interrogated war criminals, then simple percentages show that only 2.9% of the interrogated war criminals even made allegations of coercion, let alone could be proven to have been coerced.
How would one then quantify the probability that all had been tortured? I ask because it would surely be quite a leap to go from six examples out of 206 (to keep with the figures; they roughly correspond to the number indicted at the Nuremberg and successor trials) to infer that all had been coerced.
What would the probability be? Surely extremely low to non-existent, even assuming a high level of credulity to this particular conspiracy theory.
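One crude way to quantify the leap is a binomial sketch. The per-witness complaint rates below are pure assumptions for illustration, not documented figures: suppose all 206 really had been coerced, and each coerced witness independently alleged coercion with some probability. The chance of then observing only six allegations can be read off the binomial distribution:

```python
from math import comb

def prob_at_most_k(n: int, k: int, p: float) -> float:
    """Binomial tail: probability of at most k successes in n
    independent trials, each succeeding with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# If all 206 witnesses were coerced and each alleged it with probability 0.5,
# the chance that six or fewer would have complained is astronomically small.
print(prob_at_most_k(206, 6, 0.5))
# Even at a very low complaint rate of 0.1, it is still well under 1%.
print(prob_at_most_k(206, 6, 0.1))
```

On this model, the more plausible explanation of six allegations out of 206 is that coercion was rare, not universal.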
Basically, help! I'm a historian, and fine with addition, subtraction, percentages and historical statistics in the general sense, but never did the full statistical-mathematical training.
I sense that there are some very useful points to be made with probability theory, but I want to know if there are any pitfalls. The thing I like about the examples that have been used on this forum is that one can assign different standards of proof, from the everyday to the ultra-sceptical, and in many cases demonstrate that what might seem 'obvious' to a CTist is in fact drastically improbable.