Lance Manion
Well, it's either heads or tails, so 50-50, obviously!
Oh yeah? I meant the bag contains 2,049 pennies, smart guy.
Well, it's either heads or tails, so 50-50, obviously!
Sorry - I don't understand your example. What's the null hypothesis here?
In my example, I've already done that, and any statistician always would. If I've noticed that the bulbs seem to fail in clusters, I'll use a test based on a measure of clustering, and reject the null hypothesis if there is more clustering than could plausibly be seen under the Poisson distribution.
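To make the kind of test described here concrete, below is a minimal sketch in Python. The failure times, the number of bins, and the choice of clustering statistic (the variance-to-mean ratio of counts in equal time bins) are all invented for illustration; the null distribution of the statistic is obtained by simulating under the Poisson model.

```python
import numpy as np

rng = np.random.default_rng(0)

def dispersion_index(times, n_bins, t_max):
    """Variance-to-mean ratio of failure counts per time bin.
    Roughly 1 for a homogeneous Poisson process; >1 suggests clustering."""
    counts, _ = np.histogram(times, bins=n_bins, range=(0.0, t_max))
    return counts.var() / counts.mean()

# Invented failure times (hours) over a 1000-hour window -- note the clusters.
observed = np.array([12.0, 15.5, 16.2, 340.0, 342.1, 345.9, 351.0, 710.3, 712.8])
t_max, n_bins = 1000.0, 20
stat_obs = dispersion_index(observed, n_bins, t_max)

# Null distribution: conditional on the total number of failures, a homogeneous
# Poisson process places them uniformly at random over the window.
stat_null = np.array([
    dispersion_index(rng.uniform(0.0, t_max, size=len(observed)), n_bins, t_max)
    for _ in range(10_000)
])
p_value = (stat_null >= stat_obs).mean()
print(f"dispersion index = {stat_obs:.2f}, one-sided p = {p_value:.4f}")
```

A small p-value here means the observed clustering is larger than the Poisson model plausibly produces, which is exactly the rejection rule described above.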
I don't really understand this: I'm taking the Bayesian point of view to mean choosing a prior somehow and applying Bayes Theorem. I'm not interested in what probabilities mean. To apply Bayes Theorem, you need a distribution on the degree of faultiness of the power supply. I agree it's reasonable to guess a probability that the supply is faulty, but to apply Bayes Theorem, your prior needs to be specific enough to calculate the probability of seeing a certain degree of clustering in the failures given that there is a fault in the supply. I can't imagine how you would come up with such a prior. The whole point of hypothesis testing is that you don't need to - it's good to have an idea how likely a fault is, because this affects what significance level you should choose, but there's no need to think about modelling the degree of faultiness until you've established that you have any evidence at all that anything is wrong.
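To illustrate what such a prior would have to supply, here is a minimal sketch in Python with invented bin counts and arbitrary modelling choices: "faulty" is taken to mean the counts are overdispersed relative to Poisson (negative binomial with unknown dispersion k), so P(data | fault) only exists once a prior on k is chosen, and a posterior additionally needs prior odds on fault versus no fault.

```python
import numpy as np
from scipy import stats

# Invented failure counts in 20 equal time bins.
counts = np.array([3, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0])
mu = counts.mean()

# "No fault": counts are Poisson with rate mu.
log_lik_ok = stats.poisson.logpmf(counts, mu).sum()

# "Fault": counts are overdispersed (negative binomial with the same mean mu),
# with the dispersion parameter k unknown.  P(data | fault) only exists once
# a prior on k is chosen -- here, purely for illustration, k ~ Exponential(1),
# integrated numerically on a grid.
k_grid = np.linspace(0.05, 10.0, 400)
dk = k_grid[1] - k_grid[0]
prior_k = stats.expon.pdf(k_grid)                       # assumed prior density on k
lik_fault_given_k = np.array([
    np.exp(stats.nbinom.logpmf(counts, k, k / (k + mu)).sum()) for k in k_grid
])
marg_lik_fault = np.sum(lik_fault_given_k * prior_k) * dk   # P(data | fault)

# A posterior also needs prior odds on fault vs no fault -- yet another input.
prior_odds = 1.0                                        # assumed 50:50, for illustration
bayes_factor = marg_lik_fault / np.exp(log_lik_ok)
print(f"Bayes factor (fault : no fault) = {bayes_factor:.2f}")
print(f"posterior odds with 50:50 prior = {prior_odds * bayes_factor:.2f}")
```

Every labelled assumption above (the overdispersion model, the Exponential prior on k, the 50:50 prior odds) is an input that the significance test in the previous post gets by without.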
Hypothesis testing lets you pretend that you don't need to. But implicit in any particular test is such a prior. If, shown that prior explicitly, you'd agree that it more or less reflects what you know about the power supply, then by all means accept the result of the significance test. If not, you shouldn't consider the test to have established anything, because it was based on faulty assumptions.
Hmm. Well, rotating doesn't change anything for B - it's simply off in total either way. But I agree that (if rotating is reasonable, which I don't think it always will be, but never mind) you could view A as simply predicting 4 parameters and saying nothing about the last.
Secondly, I think with your setup, Bayes gives the answer you don't want: viewing A as a single theory that only predicts 4 parameters, what prior probabilities are you assigning to A and B? If they are roughly equal, Bayes will tell you that given the observations, A is 99.9999% certain.
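For concreteness, here is a sketch of that calculation in Python with made-up numbers: A is treated as pinning down the first four measured parameters and being uniform over an allowed range for the fifth, B pins down all five but its fifth prediction is six standard deviations off, and the two theories get equal prior probability. All specific values (measurement errors, the size of B's miss, the width of the allowed range) are invented for illustration.

```python
import numpy as np
from scipy import stats

# Five measured parameters with Gaussian measurement errors (values invented).
measured = np.array([1.20, 0.85, 3.10, 0.42, 2.00])
sigma    = np.array([0.01, 0.01, 0.02, 0.01, 0.05])

# Theory A pins down the first four and says nothing about the fifth,
# modelled here as uniform over an allowed range of width W.
pred_A = np.array([1.20, 0.85, 3.10, 0.42])
W = 10.0

# Theory B pins down all five, but its fifth prediction is 6 sigma off.
pred_B = np.array([1.20, 0.85, 3.10, 0.42, 2.30])

lik_A = stats.norm.pdf(measured[:4], pred_A, sigma[:4]).prod() * (1.0 / W)
lik_B = stats.norm.pdf(measured, pred_B, sigma).prod()

# Equal prior probabilities on A and B, as in the post.
post_A = lik_A / (lik_A + lik_B)
print(f"P(A | data) = {post_A:.7f}")   # comes out around 0.999999
```

The 1/W factor is the price A pays for saying nothing about the fifth parameter, but with equal priors it is easily outweighed once B's prediction is several standard deviations out, which is how a figure like 99.9999% can arise.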
That would certainly affect the significance level that I'd use in the test. But I don't at all see how to use it to construct a prior distribution on the degree of clustering of failures.
No - in fact I'd say your example supports it! If you allow that C might be correct, you presumably also allow variations of C in which the fractions of A and B vary. So this argument says that any prior distribution is ok. To obtain a conclusion by Bayesian reasoning you need to decide on one, and it still seems to me totally clear that there is no reasonable way of doing this for two Theories of Everything.