
Double Headed Coins and skepticism

Because it disconfirms, to an extremely high degree, the null hypothesis of random chance.

No it doesn't. Any particular sequence of numbers has the same odds.

Yes it does. Some particular sequences are much more probable given cheating than given random chance. For example, suppose you rolled a ten-sided die and got the following result:
31415926535897932384

Only an idiot would think it was a fair die/toss, correct? But that result is just as likely as any other. What makes it special is that it is one of a small set of results we expect if someone is cheating.

To put it another way, there are many, many combinations that are consistent with random chance.

To put it another way: how do you sort out the sequences of numbers that are consistent with random chance from those that are not?

Because if someone is going to go to the trouble of rigging a "random" event, there are only a few plausible ways they would go about doing it.

However, the set of combinations that is consistent with cheating is much smaller. So when we see a result that belongs to the set that is consistent with cheating, a bunch of red flags go off.

You are saying, for example, that there is only one way of getting HHHHH, and many ways of getting any other combination - so the odds against are pretty long (in this case you'd reject chance at 95%).

5% is often the threshold for statistical significance. In the case of flipping a coin heads 100 times in a row, the probability under chance is 1 in 2^100, roughly 8 × 10^-31. Very good odds for rejecting the null hypothesis of random chance.

However: there is only one way of getting HTHHHT too. Many ways of not getting it. Would you consider this result more or less likely to occur by chance alone?

That is a result we would expect given random chance. A fair coin tossed fairly 100 times will almost always give a roughly equal distribution of heads and tails: 55-45, 58-42, 48-52, etc. There will be some outliers, but even 65-35 is pushing it. 100 heads, though, is so far removed from what a fair coin should produce that we instantly assume foul "play".
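For a sense of scale, here's a quick sketch of the arithmetic in Python, assuming a perfectly fair coin and 100 fair tosses:

from math import comb

# Chance of at least 65 heads in 100 fair tosses
p_65_plus = sum(comb(100, k) for k in range(65, 101)) / 2 ** 100
print(p_65_plus)        # about 0.0018, i.e. roughly 1 in 570

# Chance of exactly 100 heads
print(1 / 2 ** 100)     # about 7.9e-31

So a 65-35 split already turns up in fewer than 1 in 500 runs of 100 tosses, and 100 straight heads is about 27 orders of magnitude rarer still.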

Have a look at: The Longest Run of Heads ... when people are asked to simulate a number of random tosses, they always produce one with too few long runs in it.

So?

If you are comparing the probabilities of getting 5 heads on 5 tosses as opposed to any other combination you are right. However, the way the example is set up is properly thought of as a sequence not a combination ... the math for any other sequence is identical. The same calculation will "prove" that no possible sequence can occur by chance ... you need to revise the calculation.

The calculation I have in mind is Pr(H|E) = Pr(E|H) × Pr(H) / Pr(E) (Bayes' theorem). If the (H)ypothesis is "the coin is fair" and the (E)vidence is "100 heads in a row", then Pr(H|E) will drop to nearly zero.
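A quick sketch with illustrative numbers (the 0.99 prior and the double-headed alternative are assumptions chosen for the example, not anything measured):

# Bayes' theorem: Pr(H|E) = Pr(E|H) * Pr(H) / Pr(E)
p_fair = 0.99                 # assumed prior that the coin is fair
p_cheat = 1 - p_fair          # prior that it is double-headed
p_e_given_fair = 0.5 ** 100   # Pr(100 heads | fair coin)
p_e_given_cheat = 1.0         # Pr(100 heads | double-headed coin)

p_e = p_e_given_fair * p_fair + p_e_given_cheat * p_cheat
posterior_fair = p_e_given_fair * p_fair / p_e
print(posterior_fair)         # about 7.8e-29 - "nearly zero"

Even starting from a 99% prior that the coin is fair, one hundred heads in a row leaves essentially nothing of that belief.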

The key word is "cheating". Some particular result is attributed a special value - perhaps you win $1 each time you roll H? In which case, there are many ways of losing and only a few of winning. However, on the strength of the argument posed in my article, I will maintain that you have rejected the hypothesis too soon.

I'd have to reread the article. I was bogged down in the statistical derail. However, if you don't understand how statistically significant results, like a run of 100 heads, affect hypotheses, the article won't do you much good.

If we are talking about 5 successfully called tosses in a row, then your analysis would be correct in that it is as unlikely as getting any particular sequence out of all possible sequences. Of course we note that rejecting chance as an explanation of events is not the same as accepting any other particular explanation - which is where other people's discussion of controls and so on comes in.

If it's not chance, then what else could it be?

It's quite difficult to talk clearly about probabilities. Part of the idea of discussing the article here is to discover where I am confusing people (and where I'm getting confused) and rewrite to compensate.

Sure
 
Malerin said:
5% is often the threshold for statistical significance. In the case of flipping a coin heads 100 times in a row, the probability under chance is 1 in 2^100, roughly 8 × 10^-31. Very good odds for rejecting the null hypothesis of random chance.
What are the odds of another 100-long string?
 
Yes it does. Some particular sequences are much more probable given cheating than given random chance. For example, suppose you rolled a ten-sided die and got the following result:
31415926535897932384

Only an idiot would think it was a fair die/toss, correct? But that result is just as likely as any other. What makes it special is that it is one of a small set of results we expect if someone is cheating.

There are two questions that need to be separated here.

Q1) Is a sequence of 100 heads on a fair coin possible?
A1) Yes. The claim that it isn't can be immediately disposed of by noting that said sequence is no more unlikely than any other, and some sequence must always result.

Q2) Is a sequence of 100 heads on a supposedly fair coin more likely to be the result of a truly random fair sequence or of cheating?
A2) Cheating, almost certainly.

As far as I know, no one other than Piggy disagrees with those.
 
There are two questions that need to be separated here.

Q1) Is a sequence of 100 heads on a fair coin possible?
A1) Yes. The claim that it isn't can be immediately disposed of by noting that said sequence is no more unlikely than any other, and some sequence must always result.

But that's not quite good enough, for the reasons I mentioned earlier.

A sequence of 100 heads is one of the configurations we can write out on paper as the results of a series of 100 flips.

But by the same token "The quick brown fox jumped over the lazy dog" is a series of keystrokes we can write out on paper as the result of a monkey banging on a keyboard.

However, experimentation shows that monkeys, in practice, don't produce the entire gamut of possible keystroke combinations, even though on paper this combination is no more unlikely than ttttltlltlllsssiiosssssssssssssssssssssssssssss

The question remains whether a series of 100 heads or 100 tails will actually occur in practice, or whether the rough edges of our bumpy world will ensure that the actual outcome has a smaller range of variation and volatility than that.
 
But that's not quite good enough, for the reasons I mentioned earlier.

A sequence of 100 heads is one of the configurations we can write out on paper as the results of a series of 100 flips.

But by the same token "The quick brown fox jumped over the lazy dog" is a series of keystrokes we can write out on paper as the result of a monkey banging on a keyboard.

However, experimentation shows that monkeys, in practice, don't produce the entire gamut of possible keystroke combinations, even though on paper this combination is no more unlikely than ttttltlltlllsssiiosssssssssssssssssssssssssssss

The question remains whether a series of 100 heads or 100 tails will actually occur in practice, or whether the rough edges of our bumpy world will ensure that the actual outcome has a smaller range of variation and volatility than that.

To be honest, I really have no idea what you're talking about. What kind of "rough edges" are going to magically intervene and change the results when you start getting close to 100 heads?
 
The typing monkey analogy would work if there was approximately a 50% chance of each keystroke being the right keystroke.
 
First of all, there is no actual 100-sided die in D&D, so I don't believe that story at all.

As to 6 billion people flipping coins: since 2^32 is only about 4.3 billion, on average you'd expect at least one of them to get 32 heads in a row on the first go round. Nowhere close to a hundred in a row, but still pretty impressive.

As to the OP, you need to have another probability in mind -- the probability that someone is cheating and getting away with it. I don't know how you might determine that.
 
To be honest, I really have no idea what you're talking about. What kind of "rough edges" are going to magically intervene and change the results when you start getting close to 100 heads?

None.

But the rough edges could make the system volatile enough that extended runs don't happen in practice.
 
The typing monkey analogy would work if there was approximately a 50% chance of each keystroke being the right keystroke.

That would be the wrong way to apply the analogy.

In fact, with the monkeys and keyboards, long streaks are highly likely.
 
I was making your monkey typing analogy fit the chances of getting a streak with a coin.
 
Piggy, if the rough edges of the real world are somehow intervening to stop the coin from turning up heads, perhaps you can answer the following question:

You accept that a string of two heads is possible. What number of consecutive heads is the maximum possible number?
After that number has been reached, what do you think stops another head from turning up?
 
Roboramma, I asked a question about what the approximate number would be (knowing that asking for a specific one would be too hard), but Piggy couldn't give one.
I'm not sure how one would determine that, and I doubt it's a bright line.

Like the Earth's atmosphere, there are some things we can say are clearly inside it, other things we can say are clearly outside it, but no dividing line where we can draw the boundary.

I was trying to figure out if the paper cited by Simon Bridge above provided any indication of a possible answer, but I'm way too rusty on my math (as in, decades rusty, and never did much of it in the first place, just what I had to).

If we've got some math that's grounded in real-world experimentation which indicates that runs of 100 are no big deal, given a sufficient number of permutations, then I'll change my mind, of course.
 
I'm just trying to understand what possible mechanism he thinks will cause a string of heads to stop. I understand why long strings don't happen with "small" numbers of iterations, but Piggy seems to think that the likelihood of getting another head is somehow determined by how many heads have already come up, otherwise I really don't understand what makes him think that the number of long strings in practice will be less than the number in theory.

Actually that gives me an idea to test his hypothesis, or at least a variation of it: look at how frequent strings of say 10 heads in a row are in practice. If it's the same as in theory then that's good evidence that the theory is correct. If it's less than in theory (and we can rule out effects like the unfairness of the coins which Sol referred to earlier) then Piggy's ideas can be given a little more credence.
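Something like this would do as a first pass. It's only a rough sketch: it simulates fair tosses and counts maximal runs of at least 10 heads, using the approximation that such a run begins at any given toss with probability about 1/2^11:

import random

def count_runs(n_tosses, run_len=10):
    """Count maximal runs of heads of length >= run_len in n_tosses fair flips."""
    runs = 0
    streak = 0
    for _ in range(n_tosses):
        if random.random() < 0.5:   # heads
            streak += 1
        else:
            if streak >= run_len:
                runs += 1
            streak = 0
    if streak >= run_len:
        runs += 1
    return runs

n = 10_000_000
print(count_runs(n), n / 2 ** 11)   # observed vs. approximate expectation (~4883)

If real coins tossed by real people give counts in line with the second number, the theory holds; a consistent shortfall (after ruling out biased coins) would be the kind of evidence Piggy needs.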
 
But the rough edges could make the system volatile enough that extended runs don't happen in practice.

You're still not making any sense. No mechanism is needed to ensure extended runs don't happen, they're simply rare enough that we don't expect to see them anyway. Reality predicts exactly the results we see without needing your bizarre contortions, so why do you insist on making them? Yes, flipping 100 heads is possible. No, you shouldn't hold your breath waiting for it to happen.
 
None.

But the rough edges could make the system volatile enough that extended runs don't happen in practice.

Why does this "volatility" only affect flip sequences that humans happen to consider special, like runs of 100 heads, and not other sequences? Magic?
 
None.

But the rough edges could make the system volatile enough that extended runs don't happen in practice.

If extended runs don't happen in practice, it's only because vast populations don't spend entire lifetimes flipping coins.

The mathematics has already been provided: 2^100 (about 1.2676506 × 10^30) sequences of 100 tosses would, on average, be expected to yield one which was either all heads or all tails. If you multiply that number of sequences by 100 (about 1.2676506 × 10^32), the result is virtually guaranteed.

If you have never seen the result, it's because no one has actually run the experiment.
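To put numbers on that, treating each block of 100 tosses as one independent trial and, for simplicity, counting only the all-heads case:

import math

p = 0.5 ** 100            # chance that one 100-toss sequence is all heads
n = 2 ** 100              # number of 100-toss sequences
print(n * p)              # expected all-heads sequences: 1.0

# With 100 * 2**100 sequences, the Poisson approximation gives the chance
# of seeing at least one:
print(1 - math.exp(-100 * n * p))   # effectively 1.0 - virtually guaranteed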
 
Ah, but that's an improper comparison.

The proper comparison would be TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT..... versus everything else, not versus a single alternative chosen at random.

Why? I think that's what we're disagreeing about. I don't understand why you think this.
 
Yes it does. Some particular sequences are much more probable given cheating than given random chance. For example, suppose you rolled a ten-sided die and got the following result:
31415926535897932384

Only an idiot would think it was a fair die/toss, correct? But that result is just as likely as any other. What makes it special is that it is one of a small set of results we expect if someone is cheating.

Gargh. This sounds wrong. I'd like to explain why I think this reasoning is faulty.

Basically, it's inferring cause from results, rather than testing a pre-formed cause hypothesis. Bem just got pwnd for this. (See: [Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi])

This is related to [pareidolia] in that we have a tendency to anthropomorphise 'recognizable' natural events backwards from the event, i.e. we see random numbers and look for "a pattern" - we're good at finding patterns with completely open criteria.

In the world of hypothesis testing, we need to stick to pre-designated binary criteria, or we've stopped doing science and started doing metaphysics.

When a lottery number sequence comes up a winner, to me it's random. To the person who chose it, it's probably a very special set of numbers. Birthdates or something.

31415926535897932384 is 'obviously cheating' to somebody who recognizes pi, but not to other people. I'm sure mathematicians have a personal inventory of irrational numbers they would recognize but the rest of us wouldn't.

I think that's what Piggy's trying to describe: perhaps a string of identicals is attracting his attention more than other strings because there aren't a lot of special combinations out there he would recognize easily, on account of being, well, normal, and not having a math degree or something.

And it's not even restricted to math: the problem generalizes to other disciplines like biology. Behe makes a good living reverse-engineering natural sequences in DNA or shapes of molecules for evidence of intelligent cause.
 
Can we please come back to the article? Thank you. It is not the intent of this thread to discuss what counts as "possible" or not.

I understand why you're frustrated, but I think the discussion is relevant to the content of your paper.

My interpretation of your paper is that it's a more rigorous version of "extraordinary claims require extraordinary evidence" - a claim with a high prior probability is easier to accept with a lower confidence interval in the test protocol.



Part of the article demonstrates a way of assessing how confident one can be of a claim given the evidence, as the evidence mounts.

For example: skeptics are often criticized for being closed-minded about paranormal claims ... but we can show that we do not need very many honestly determined disproofs for any particular type of claim to be very confident that no such claim can be true. When testing a particular claim, we are wise to be very generous in our prior assumption of "innocence" on the part of the claimant. However, we are aware of a history of such testing of thousands of claimants, none of whom have demonstrated anything paranormal. Thus, we would be very silly, on the basis of that result, to go about our everyday life as if paranormal abilities exist.
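As a sketch of how quickly that confidence accumulates, with made-up but generous numbers (assume a genuine ability would pass an honest test 90% of the time, while a non-ability slips through by luck 5% of the time):

p_real = 0.5            # deliberately generous prior that the claimed ability is real
p_fail_if_real = 0.10   # a genuine ability still fails an honest test occasionally
p_fail_if_fake = 0.95   # a non-ability fails an honest test almost every time

for test in range(1, 11):
    num = p_fail_if_real * p_real
    p_real = num / (num + p_fail_if_fake * (1 - p_real))
    print(test, p_real)
# After four failed tests the posterior is already about 1 in 8,000.

It really does not take many honestly determined disproofs before continuing to grant the claim any practical credence becomes unreasonable.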

I don't think anyone here will disagree about this - the difference here, I hoped, was that having a quantifiable example could be illustrative of many aspects of our interactions with believers. I had hoped that it would be that aspect which would interest members here.

I think this is part of the general discussion in other threads about prior probability and its relationship to what constitutes extraordinary claims requiring extraordinary evidence.

And I think you're correct that it's something skeptics should be aware of - the take-away from this is that paranormal advocates will use exactly the same formulas (some have) to explain why skeptics are wrong. In their opinion, we have misinterpreted the body of literature and are undervaluing prior probability. They accuse us of insisting on tests with unjustifiably small confidence intervals to 'pass' our challenges.
 
