Statistical significance

Please provide a reference to anything Radin has published in the Psychological Bulletin.
Not Radin himself, but in Entangled Minds Radin states that a 1999 ganzfeld article in the Psychological Bulletin by Julie Milton and Richard Wiseman was flawed, and that Lance Storm and Suitbert Ertel had replied to it in a 2001 article in the Psychological Bulletin. According to Radin (at p. 118): "They found that Milton and Wiseman had overlooked a number of earlier ganzfeld studies, and they argued that the best way to judge if the ganzfeld method really was successful was to combine all known studies. They found 79 studies that Bem and Honorton hadn't considered. This new batch of studies was associated with overall odds against chance of 131 million to 1."
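For readers wondering how 79 studies turn into "overall odds against chance of 131 million to 1": figures like this are typically produced by pooling the studies' z-scores and converting the pooled p-value into odds. Below is a minimal sketch of that arithmetic using Stouffer's method, with made-up per-study numbers; Storm and Ertel's actual procedure may well have differed.

[code]
# Hedged sketch: how meta-analytic "odds against chance" arise.
# Stouffer's method pools per-study z-scores; the per-study values
# here are invented for illustration, not taken from any paper.
from math import sqrt
from statistics import NormalDist

def combined_odds_against_chance(z_scores):
    """Pool z-scores with Stouffer's method; return the pooled z
    and the one-tailed p-value expressed as odds against chance."""
    z = sum(z_scores) / sqrt(len(z_scores))  # Stouffer's combined z
    p = 1.0 - NormalDist().cdf(z)            # one-tailed p-value
    return z, 1.0 / p

# 79 studies, each with a modest z of 0.64 (about a 30% hit rate
# where 25% is expected, in a hypothetical 30-trial study).
z, odds = combined_odds_against_chance([0.64] * 79)
print(f"combined z = {z:.2f}, odds against chance ~ {odds:,.0f} to 1")
[/code]

With those invented inputs the pooled odds land in the same hundred-million-to-one territory as the quoted figure: modest per-study effects, pooled over enough studies, produce astronomical-looking odds.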
 
I dunno Jeff, I thought what Ben said had some sense to it. For example, name me the five most important theories in psychology. OK, now name me the five most important principles of psychology.

OK, now: is either of those lists likely to be accepted by a majority of other psychologists, allowing for a different order but substantially the same items?
 
Not Radin himself, but in Entangled Minds Radin states that a 1999 ganzfeld article in the Psychological Bulletin by Julie Milton and Richard Wiseman was flawed, and that Lance Storm and Suitbert Ertel had replied to it in a 2001 article in the Psychological Bulletin. According to Radin (at p. 118): "They found that Milton and Wiseman had overlooked a number of earlier ganzfeld studies, and they argued that the best way to judge if the ganzfeld method really was successful was to combine all known studies. They found 79 studies that Bem and Honorton hadn't considered. This new batch of studies was associated with overall odds against chance of 131 million to 1."

See http://www.internationalskeptics.com/forums/showthread.php?postid=2250888#post2250888 for a detailed discussion of how reporting bias can throw a calculation like this way off. "Combining all known studies" in practice means combining only the studies that someone got around to publishing. Even a slight bias against publishing negative studies will produce a dataset that yields apparently overwhelming odds against chance.
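To make the mechanism concrete, here is a toy simulation (my own illustration, not taken from the linked post) in which every study is pure chance, yet a file-drawer effect on negative results produces a "significant" combined result:

[code]
# Hedged sketch: publication bias inflating pooled significance.
# All studies below are null (25% chance hit rate); only the
# publication rule distinguishes positive from negative results.
import random
from math import sqrt
from statistics import NormalDist

random.seed(1)

def null_study_z(trials=30, p_hit=0.25):
    """One no-effect study: count chance hits, return its z-score."""
    hits = sum(random.random() < p_hit for _ in range(trials))
    return (hits - trials * p_hit) / sqrt(trials * p_hit * (1 - p_hit))

# 200 null studies; positive results are always written up, but each
# negative result has a 50% chance of staying in the file drawer.
published = [z for z in (null_study_z() for _ in range(200))
             if z > 0 or random.random() < 0.5]

combined = sum(published) / sqrt(len(published))  # Stouffer's method
p = 1.0 - NormalDist().cdf(combined)
print(f"published {len(published)} of 200 null studies: "
      f"combined z = {combined:.2f}, one-tailed p = {p:.3g}")
[/code]

The exact numbers depend on the random seed, but the published subset of pure-noise studies pools to a nominally significant result, and rerunning with more studies or a stronger file-drawer effect drives the spurious odds arbitrarily high.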

From the sound of your summary, it seems that this basic point was ignored in the follow-up article. I'd expect any competent statistician to be aware of it (variations on it come up in many contexts), especially since the point has long been made about ganzfeld experiments specifically. So if it truly was ignored in the follow-up article, I'd take that as evidence that peer review did not do a very good job in this case.

(Which would confirm my existing opinion...)

Cheers,
Ben
 
Ben Tilly, after all your fallacies, weaseling, obfuscation, denials, strawmen, equivocations, etc., unraveling all the threads that have been created in this discussion would be a rather large task. But I'd like to address one in particular. You claimed that something was "standard statistical methods", then made an appeal to authority by claiming that unnamed statisticians agreed with you. When I expressed doubt, you became offended, insulted me, and then implied that Snell was one of them. Well, I emailed him, and guess what? Not only does he disagree with you, he denies being a statistician. So if you're trying to prove that you aren't a liar, you're doing a rather poor job of it.
 
You are claiming that hypothesis testing is unreasonable because it's not allowed by Bayes' Theorem, and you say that it's not allowed by Bayes' Theorem because it's unreasonable.
I have no clue what he intended, but isn't the Bayesian argument against hypothesis testing one of coherence?
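For what it's worth, the coherence argument is usually put in terms of Bayes' theorem in odds form (my gloss, not necessarily what either poster had in mind):

\[
\underbrace{\frac{P(H_1 \mid D)}{P(H_0 \mid D)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(D \mid H_1)}{P(D \mid H_0)}}_{\text{Bayes factor}}
\]

A p-value, the probability of data at least this extreme given $H_0$, is not a term in this equation, and the coherence complaint is that accept/reject rules built on it cannot in general be reproduced by any single consistent assignment of probabilities.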
 
