No, it's based on the fact that you maxed out at 3, and that you only had 3 runs of 3 or more.
You're running these one at a time?
Try this. Go to the "advanced" page and select binary format. Since we want 55 flips per sample, and 55 factors into 11 × 5, select a range from 0 to 2047 so each number supplies 11 bits; the default 5 columns then gives you 55 bits, i.e. one sample per row. Generate the maximum of 10,000 numbers per request and you get 2,000 samples.
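If you want to script the unpacking, here's a rough sketch--I'm assuming you save the generator's output as numbers.txt, five space-separated values per row (the filename and layout are my assumptions, not anything the site dictates):
Code:
# turn each row of five 0..2047 numbers into one 55-bit sample per line
awk '{
  s = ""
  for (i = 1; i <= 5; i++) {
    n = $i; b = ""
    for (j = 0; j < 11; j++) { b = (n % 2) b; n = int(n / 2) }
    s = s b
  }
  print s
}' numbers.txt > flips.txt
Read each bit as one flip (1 = heads, say), 55 per line.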
Analyzing one such batch for run lengths (gvim/cygwin fun), I came up with the following:
Code:
[size=1]
1 *
2 ****
3 **
4 **
5 **********
6 **********
7 ***
8 ***
9 *
[/size]
Every "*" represents one sample of 55 flips, or 0.05% of the total (so every 2 is 0.1%). This graph shows all of the runs that max out at run lengths of 3. The numbers are the number of runs
of 3 in each sample. The theory behind gambler's fallacy patterns in random data is that people tend to avoid run lengths, so
fewer and shorter runs count (run lengths equaling 2 are obviously meaningless--to avoid a run of length 2 you have to alternate, which nobody's going to do when trying to look random).
Your sequence had a whopping 3 runs of length 3--there are seven sequences with 3 or fewer runs of length 3 topping out at 3 in this diagram, representing 0.35% (or 0.0035) of the 2,000 samples. There are 0.45% of the border case just after--where there are four or fewer runs of length 3, topping out at 3. Grand total, there are 36 runs that top out at 3, representing 1.8% of the total.
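For anyone who wants to reproduce the diagram, this is roughly the kind of pipeline that would do it--a sketch, not what I actually ran in gvim, and it assumes flips.txt holds one 55-character sample per line as above:
Code:
# for samples whose longest run is exactly 3,
# histogram the number of runs of length 3 in each
while read -r line; do
  printf '%s\n' "$line" | fold -w1 | uniq -c | awk '
    { if ($1 > max) max = $1; if ($1 == 3) n3++ }
    END { if (max == 3) print n3 }'
done < flips.txt | sort -n | uniq -c
Each output line is a count of samples followed by the number of runs of 3 they contain--the diagram above, turned sideways.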
No, I don't believe that. That's why I didn't say that, and in fact, said quite the opposite (that the latter was more special).
What I did say is that the former is a stronger implication of intent. A two-headed coin will give you the all-heads flips without even being sentient. But it's a lot to ask of any weighted coin, no matter how you weight it, to show a strong bias towards GF-looking patterns. Ordinary weighting doesn't produce those kinds of patterns.
Or 15 seconds, and a few minutes analyzing the results of the 2,000 samples. As I said previously, with samples merely going up to 55 flips, we're not dealing with insane probabilities here--we're talking on the order of single-digit percentages tending towards the decimal end.
Unless it's a two-headed coin, in which case it's certain you'll see it by the 55th flip. If it's a heavily weighted coin, you might see this too. But there's no ordinary weighted coin that will show you the GF pattern with anything like the likelihood that a two-headed coin will show you all heads. Neither ordinary weighted coins nor two-headed coins are very sentient devices, so you shouldn't describe their outcomes as being intended.
Now, humans can certainly come up with sequences containing all heads. But it's not insanely hard--it's not even that much harder--for them to come up with sequences showing heavy GF patterns. Humans can even, with enough practice, come up with reasonably good random number sequences.
But both humans and a lot of dumb objects can come up with the most special case--the all-heads case. And both humans and real coins can come up with pretty good random number sequences. But where you find more humans in the set, and fewer non-sentient alternative causes, is in the data that shows GF patterns.
So if you invert this, you see that it's not the case that the more special a thing is, the more likely it is that it was done with intent.
Here?
[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_268924a9f1df8ee9cb.jpg[/qimg]
Code:
[SIZE=1]2 1 2 2 1
1 2 2 1 1
1 1 2 2 2
2 1 1 2 1
2 2 1 2 1
1 2 1 1 2
2 1 2 2 1
1 2 1 2 1
1 2 1 1 2
1 2 1 2 1
2 1 2 2 2
(TS 2009-09-02 18:37:09 UTC)[/SIZE]
Row 2 ends with two 1's. Row 3 continues with two more 1's. That's a run of four 1's. Right after that, there are three 2's, and carrying into the next row, another 2. So that's two runs of length 4, back to back.
...that one has 5 runs of length 3. FYI, it'd be easier to analyze these in text form than as a JPG--the latter requires either manual counting or manual typing/verifying, both prone to error. With text, I can just split the numbers onto separate lines and run them through "uniq -c".
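E.g., paste the digits into seq.txt (timestamp line stripped--the filename is arbitrary) and run:
Code:
# join rows so runs can span row boundaries, one digit per line,
# count runs with uniq -c, keep runs of length 3 or more
tr -d ' \n' < seq.txt | fold -w1 | uniq -c | awk '$1 >= 3'
On the grid above, that prints the two back-to-back runs of 4 plus the run of three 2's at the end.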