Simon Bridge (Critical Thinker) | Joined: Dec 27, 2005 | Messages: 331
Yes - good - we agree on that - you did not put it that way to start with. Further: the calculation you showed me does not take this into account. That is what I am saying - you need to adjust the math to take account of the likelihood of cheating, and you didn't. Bayes' theorem does, as you point out below:

"Yes it does. Some particular sequences are much more probable given cheating than random chance."
Well - that's not the calculation you gave me before. Bayes' theorem is what I have used in the paper ... you read it, right? I used Bayes' theorem explicitly ... by name.

"The calculation I have in mind is Pr(H/E) = Pr(E/H) x Pr(H) / Pr(E) (Bayes' Theorem). If the (H)ypothesis is 'the coin is fair' and the (E)vidence is '100 heads in a row', then Pr(H/E) will drop to nearly zero."
You've handled it differently from me, since I had to quantify my terms while you gave the general idea qualitatively. Crunch the numbers and let me know if you get anything different from what I got.
The term 1/Pr(E) (your notation) is determined by normalization, for example, and has quite a big impact on the calculation. Pr(H) is the prior probability ... in the article I referred to it as the a priori suspicion of honesty ... I used the same one. This actually has a strong effect on the outcome. Pr(E|H) (hint: use the "pipe" character - shift+backslash - for the conditional separator) is the forward probability, or likelihood function, which is the same for any specific sequence of heads and tails. C'mon, you know this! This is the term that is (0.5)^n for n tosses. That is the figure you kept quoting to me ... as it turns out, out of context. If you use this figure by itself for your hypothesis testing, you will reject the hypothesis too soon for a given prior.
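For concreteness, here is a quick numerical sketch of that two-hypothesis calculation (the always-heads cheat model and the numbers are just illustrative choices for this post, not the ones from the article):

```python
# Two-hypothesis Bayes update: H = "the coin is fair" vs C = "cheating",
# where the cheat is modelled (illustratively) as an always-heads coin.
def posterior_fair(n_heads, prior_fair):
    """Pr(H | n heads in a row) from Bayes' theorem."""
    like_fair = 0.5 ** n_heads    # Pr(E|H): the (0.5)^n term
    like_cheat = 1.0              # Pr(E|C) for an always-heads cheat
    evidence = like_fair * prior_fair + like_cheat * (1 - prior_fair)  # Pr(E)
    return like_fair * prior_fair / evidence  # 1/Pr(E) does the normalization

for n in (1, 5, 10, 20, 100):
    print(n, posterior_fair(n, prior_fair=0.9999))
```

With a 0.9999 prior, this toy model keeps Pr(H|E) above even odds through 13 heads in a row and only drops below at the 14th; by 100 heads it is effectively zero, which is the behaviour described in the quote above.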
Now - you don't have to do it this way. Instead you can just rig the number of trials in a test to be so large it makes no difference, since Pr(H|E) → Pr(E|H) as n → ∞ ... as you point out.
The trouble is (read the first post) that I don't want very large numbers of trials. I want to avoid the argument that I need a much larger number of trials than I have performed in order to be certain. To do this I need to model what happens for small numbers of trials ... in particular, where is the threshold? How do I adjust the likelihood while evidence is still rolling (or tossing) in? Bayesian stats is great at this.
One of the principal results in the article is that the rejection thresholds change in the manner of a learning curve (go look at the graph) ... for high priors one may be quite justified in maintaining a strong belief for a counter-intuitively long time ... but the prior has to be pretty strong (0.9999 was the strongest considered).
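To show the shape of that effect, here is a toss-by-toss version of the update (the same illustrative always-heads cheat model as above, with a 0.5 rejection threshold I picked for the sketch - the article's actual thresholds come from the full model):

```python
# Sequential Bayes update on a run of heads: after each toss, check when the
# posterior for "fair" first drops below a rejection threshold.
def tosses_to_reject(prior_fair, threshold=0.5, max_n=200):
    post = prior_fair
    for n in range(1, max_n + 1):
        num = 0.5 * post              # fair-coin contribution to Pr(E)
        den = num + 1.0 * (1 - post)  # plus always-heads cheat contribution
        post = num / den              # updated Pr(fair | heads so far)
        if post < threshold:
            return n
    return None  # belief survives the whole run

for prior in (0.5, 0.9, 0.99, 0.9999):
    print(prior, tosses_to_reject(prior))
```

In this toy version the rejection point moves out from 1 toss at a 50/50 prior to 4, 7, and 14 tosses as the prior climbs to 0.9999 - the same learning-curve flavour as the graph, though the numbers there depend on the full model.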
Would it be more helpful if I posted the stats in this thread? I didn't want to post the whole thing since it is quite long, not final, and involves a lot of math which looks ugly here even in TeX.

"I'd have to reread the article."
You don't really mean that all education is useless? The usefulness of the article depends on how it is written and how it is expected to be used. Improving on it is why I asked for comment, remember?

"I was bogged down in the statistical derail. However, if you [the reader?] don't understand how statistically significant results, like a run of 100 heads, affect hypotheses, the article won't do you much good."