Ersby said:
We're going round in circles. First, get down from your high horse: I'm not saying Radin is lying, I'm saying he's mistaken. Since this is an oft-repeated mistake, I hardly think it counts as a black mark against his character! So I have no evidence that he's lying, because I don't think he's lying. I think he IS mistaken, and I have evidence that he's mistaken.
You're getting mixed up. There are two things you claim Radin is wrong about. The first is that his analysis wasn't all-inclusive up to early '97. That's a matter I will deal with below. The second thing you claim Radin is wrong about, and the one I was obviously referring to when I told you that you should retract your claim, is his statement that all of the experiments he listed in his meta-analysis were taken from published material.
You don't really think that it's an "oft-repeated" mistake for researchers to unknowingly include studies that haven't yet been published in meta-analyses of published material, do you? Radin explicitly states that he got all of his data from published sources. How could he be mistaken about a published study versus an unpublished one? It is clear that if you are right about this, Radin is not mistaken but lying.
Even in his own words, he includes in his analysis the first meta-analysis (by itself containing only 28 of the first 40 ganzfeld experiments), the PRL experiments, and then post-PRL experiments until '97. He himself doesn't mention the work from '85-'91. Why do you assume that he included them?
Since he says the overall hit rate of his meta-analysis "...is the combined estimate based on all available ganzfeld sessions, consisting of a total of 2,549 sessions.", and since you, a layman, know about the studies conducted from '85-'91, certainly Radin, a professional with access to all the studies, knew about them too and included them.
I should note here that Radin writes: "Figure 5.4 includes all studies where the chance hit rate was 25 percent."
Quite apart from the fact that his figures are too low to encompass the entire ganzfeld database.
He reports a total of 2,549 sessions as of early 1997. How many do you think there had been up to that time? Unless you can come up with a different number, backed by studies, you can't say that his figures are "too low".
And I still don't see how he could have those kinds of figures (over 280 for Edinburgh, 500 plus for Durham) unless he had the results from those large scale experiments published in 1997. If you could answer that, that'd be nice.
You seem to think it would be impossible to conduct 289 sessions in three years. Why? (A rough back-of-the-envelope check is sketched below.) And I already gave you a detailed response as to how, since we don't know the exact date the 1997 Durham work was published, everything fits if it was published in early 1997 and included in Radin's analysis. And if it wasn't published before Radin's analysis and therefore wasn't included, I still don't see how everything doesn't fit.
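For what it's worth, the arithmetic isn't demanding. Here's a back-of-the-envelope sketch in Python: the 289 sessions and three years are the figures under discussion; the 48 working weeks per year is my own assumption, purely for illustration.

code:
--------------------------------------------------------------------------------
# Rough feasibility check: ~289 ganzfeld sessions over ~3 years.
# The session count and timespan are the figures under discussion;
# the 48 working weeks per year is an assumption for illustration only.

sessions = 289
years = 3
working_weeks_per_year = 48  # assumed; allows for holidays and downtime

sessions_per_year = sessions / years
sessions_per_week = sessions / (years * working_weeks_per_year)

print(f"{sessions_per_year:.0f} sessions per year")  # roughly 96
print(f"{sessions_per_week:.1f} sessions per week")  # roughly 2
--------------------------------------------------------------------------------

On that arithmetic, the workload comes out to roughly two sessions a week.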
As for the Bierman paper I previously said:
quote:
--------------------------------------------------------------------------------
Well, in 1993 Bierman, in his paper “Anomalous information access in the Ganzfeld: Utrecht - Novice series I and II”, gave a brief overview of these and found that the effect size post-1985 had dropped considerably. He writes:
“However, the point remains that the 17 Ganzfeld experiments reported since the first metaanalysis in 1985 and for which we could infer the effect size that we were able to locate, do conflict with the outcomes reported in that 1985-analysis which incorporated 28 studies. In fact the effect-sizes do regress to chance expectation as can be seen from the linear regression analysis.”
--------------------------------------------------------------------------------
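To make concrete what Bierman means by effect sizes that "regress to chance expectation" under "the linear regression analysis", here is a minimal sketch of that kind of check: fit a straight line to (year, effect size) pairs and look at the slope. The data points below are placeholders I made up for illustration only; they are not Bierman's figures.

code:
--------------------------------------------------------------------------------
# Sketch of the trend check Bierman describes: regress reported effect size
# on year and see whether the fitted line heads toward zero (chance).
# The numbers below are PLACEHOLDERS for illustration, not Bierman's data.

years = [1985, 1986, 1987, 1988, 1989, 1990, 1991]
effect_sizes = [0.28, 0.25, 0.18, 0.15, 0.10, 0.05, 0.02]  # placeholders

n = len(years)
mean_x = sum(years) / n
mean_y = sum(effect_sizes) / n

# Ordinary least-squares slope and intercept
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, effect_sizes))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

print(f"slope = {slope:+.3f} per year")
# A clearly negative slope, with the fitted line approaching zero, is what
# "regressing to chance expectation" looks like in this kind of analysis.
--------------------------------------------------------------------------------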
In that same paper he also writes:
"If we take a global look at the present series there is no clear sign of any paranormal effect in the data. As argued in the results-section the direct scoring rates however do not invalidate previous meta-analysis. Actually, if we compare the present results with novice series from other laboratories the global chance results are to be expected. So it seems too early to draw negative conclusions from this chance result."
It is no longer too early to draw conclusions. We now know, based on the recent meta-analyses, that "...replications yield significant effect sizes comparable with those obtained in the past."
http://comp9.psych.cornell.edu/dbem/updating_the_ganzfeld_data.htm
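For readers keeping track of the statistics being thrown around: in the standard ganzfeld design the chance hit rate is 25 percent (one target among four choices), and the effect sizes under dispute come down to how far an observed hit rate sits above that. Here is a minimal sketch using one common convention (z divided by the square root of the number of sessions); the session and hit counts are placeholders, not figures from Radin's or Bierman's analyses.

code:
--------------------------------------------------------------------------------
import math

# Sketch: comparing an observed ganzfeld hit rate with the 25% chance rate
# of the standard four-choice design. The counts are PLACEHOLDERS for
# illustration, not figures from any analysis cited in this thread.

sessions = 1000   # placeholder
hits = 320        # placeholder
p_chance = 0.25   # one correct target among four candidates

hit_rate = hits / sessions

# Normal-approximation z score for the observed proportion vs. chance
z = (hit_rate - p_chance) / math.sqrt(p_chance * (1 - p_chance) / sessions)

# One common effect-size convention for a single proportion vs. chance
effect_size = z / math.sqrt(sessions)

print(f"hit rate    = {hit_rate:.3f}")
print(f"z score     = {z:.2f}")
print(f"effect size = {effect_size:.3f}")
--------------------------------------------------------------------------------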
amherst