Okay, I'll make you happy by retracting my comment about a "large effect" as technically inaccurate, but why the focus on the strength of the effect if the likelihood of the results occurring by chance is only .48%? (I understand that you disagree with that percentage, but p=.0048 was the figure used by the authors, following the same methodology used by Milton and Wiseman in their previous article.)
I don't disagree with that figure. What I pointed out is that the figure depends on including a single study whose results are so different from the rest that it is difficult to reasonably assume it is measuring the same thing as the other studies. If that study is excluded, the p-value for the meta-analysis is no longer statistically significant. The 'success' of the ganzfeld as a measure of psi essentially rests on a single, highly questionable, unpublished study.
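To make the sensitivity point concrete, here is a rough sketch of the kind of leave-one-out check I mean. The per-study z-scores are hypothetical placeholders, not the actual ganzfeld studies, and Stouffer's method is only one common way of combining results; the point is just to show how a single extreme study can carry an otherwise unimpressive combined p-value.

```python
# Leave-one-out sensitivity check for a Stouffer-style combined p-value.
# The z-scores below are hypothetical placeholders, NOT the actual ganzfeld
# database; the last entry stands in for one outlying study.
import numpy as np
from scipy.stats import norm

z_scores = np.array([0.2, -0.4, 0.5, 0.1, 0.3, 3.8])

def stouffer_p(z):
    """One-tailed p-value for the unweighted Stouffer combination of z-scores."""
    combined_z = z.sum() / np.sqrt(len(z))
    return norm.sf(combined_z)

print(f"all studies       : p = {stouffer_p(z_scores):.4f}")
for i in range(len(z_scores)):
    loo = np.delete(z_scores, i)
    print(f"dropping study {i} : p = {stouffer_p(loo):.4f}")
```

With these made-up numbers the combined result is significant only while the outlier is included; dropping it pushes the p-value well above .05, which is exactly the kind of fragility I'm describing.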
Again, because I'm interested in the likelihood of the results occurring by chance. You seem to think that 440 hits in 1533 trials, when the chance probability of a hit on each trial is 25%, is within the range of what would be expected by chance. I don't think that's true, but I'd like to hear Beth's viewpoint.
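For what it's worth, the chance question itself is easy to check directly. Here is a minimal sketch using the 1533-trial, 440-hit totals quoted above and a one-tailed exact binomial test; this is not necessarily the meta-analytic method the authors used, just the straightforward "how likely is this many hits by chance" calculation.

```python
# Exact one-tailed binomial probability of getting at least 440 hits in
# 1533 trials when each trial has a 25% chance of a hit.
from scipy.stats import binomtest

result = binomtest(k=440, n=1533, p=0.25, alternative="greater")
print(f"hit rate = {440 / 1533:.3f}")       # observed rate vs. the 25% chance expectation
print(f"p-value  = {result.pvalue:.5f}")    # probability of doing at least this well by chance
```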
That didn't answer my question.
And that evidence is . . .
Quit being coy.
I would simply note that some prior ganzfeld experiments have also shown high hit rates. (I know -- you think they weren't sufficiently tightly controlled, but it's unclear to me whether that made a difference.)
Right. You don't know whether it made a difference. And since the 'proof' of psi is really about eliminating those things that may have made a difference, you don't know whether that burden has been met. Which is the whole point of the criticism leveled against claims that psi is proven.
The authors were responding to Milton and Wiseman's prior article, which was not exactly a study in humility, either.
That's the problem with starting down this path. The ganzfeld data is too heterogeneous to combine. The guidelines for meta-analysis would say ditch the idea of combining the data ("an unwise meta-analysis can lead to highly misleading conclusions"). However, once the Battle of the Meta-analyses began, no one seemed able to stop, and so on and on we go. You can't blame Milton and Wiseman for the authors' studied indifference to just how sensitive their conclusions were to their assumptions, though.
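Since the heterogeneity point keeps coming up, here is the sort of check the meta-analysis guidelines have in mind: Cochran's Q applied to per-study hit rates. The counts below are hypothetical, not the real ganzfeld database; a Q that is large relative to a chi-square distribution with k-1 degrees of freedom is the usual warning that the studies are too heterogeneous to pool naively.

```python
# Cochran's Q test for heterogeneity across studies, using hypothetical
# per-study hit counts (NOT the real ganzfeld data).
import numpy as np
from scipy.stats import chi2

hits   = np.array([30, 25, 28, 70, 22])    # hypothetical hits per study
trials = np.array([100, 90, 110, 120, 80]) # hypothetical trials per study

p_hat = hits / trials                      # per-study hit rates
var   = p_hat * (1 - p_hat) / trials       # approximate variance of each rate
w     = 1 / var                            # inverse-variance weights
p_bar = np.sum(w * p_hat) / np.sum(w)      # pooled (weighted) hit rate

Q  = np.sum(w * (p_hat - p_bar) ** 2)      # Cochran's Q statistic
df = len(hits) - 1
print(f"Q = {Q:.2f}, df = {df}, p = {chi2.sf(Q, df):.4f}")
```

A tiny p-value here doesn't tell you whether psi exists; it tells you the studies disagree with each other badly enough that a single pooled estimate is misleading, which is the guideline's point about unwise meta-analyses.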
Speak for yourself. (With the exception of my "large effect" comment, which was the first (minor) error I've ever made in my whole entire life.)
I was.
Linda