Psi in the Ganzfeld

Ersby

Split from here:

http://www.internationalskeptics.com/forums/showthread.php?t=70319&page=6

Brief recap: Rodney is quoting from Dean Radin's book Entangled Minds, specifically the sections regarding the ganzfeld experiments, and how Radin's meta-analysis demonstrates an effect far in excess of what you'd expect by chance.

Radin, Entangled Minds:
"From 1974 through 2004 a total of 88 ganzfeld experiments reporting 1,008 hits in 3,145 trials were conducted. The combined hit rate was 32% as compared to the chance-expected 25% (Figure 6-6). This 7% above-chance effect is associated with odds against chance of 29,000,000,000,000,000,000 (or 29 quintillion) to 1."


I maintain that Radin's meta-analysis is incomplete, and has no inclusion criteria, so it can't be taken seriously as an exhaustive and even-handed survey of the field. Effectively, Radin has taken the most famous results (which are the most favourable) and put them together. There are in fact twice as many ganzfeld trials as he reports on (a little under 7,000). Rodney also added the quote from Radin:

"This excludes a few of the earliest ganzfeld studies that couldn't be evaluated with a hit vs. miss type of analysis."

Which more or less brings us up to the last post by Rodney in which he said:

So, Radin is clearly stating that he has not excluded any but the earliest studies, nor has he excluded any studies at all on the basis of low hit rates. Rather, he is stating that his exclusions were based only upon the earliest studies' protocols, which, he claims, were not amenable to a hit vs. miss type of analysis. If that's wrong, you need to provide specifics on the studies that have been excluded by Radin.

The missing experiments from Honorton's 1985 meta-analysis, and therefore from Radin's, are: Parker, 1975; Stanford, Neylon, 1975; Smith, Tremmel, Honorton, 1976; Terry, 1976; Terry, Tremmel, Kelly, Harper, Barker, 1976; Habel, 1976; Rogo, 1977; Dunne, Warnock, Bisaha, 1977; Parker, Miller, Beloff, 1977; Braud, Wood, 1977; Palmer, 1979; Keane, Wells, 1979; Stanford, 1979; Palmer, Whitson, Bogart, 1980; Roney-Dougal, 1981.

Details about these experiments can be found at http://www.skepticreport.com/psychicpowers/ganzfeld.htm (download Part 1a at the bottom of the page). In the meantime, the scoring methods used were:

Parker, 1975, hit/miss, 50% MCE
Stanford, Neylon, 1975, ratings converted into z-scores *
Smith, Tremmel, Honorton, 1976, ten binary guesses per trial, MCE 50%
Terry, 1976, ten binary guesses per trial, MCE 50%
Terry, Tremmel, Kelly, Harper, Barker, 1976, ten binary guesses per trial, MCE 50%
Habel, 1976, hit/miss, 50% MCE
Rogo, 1977, ten binary guesses per trial, MCE 50%
Dunne, Warnock, Bisaha, 1977, ranking converted into z-scores
Parker, Miller, Beloff, 1977, ranking *
Braud, Wood, 1977, ten binary guesses per trial, MCE 50% **
Palmer, 1979, ratings converted into z-scores
Keane, Wells, 1979, ratings
Stanford, 1979, (don't know) *
Palmer, Whitson, Bogart, 1980, ratings converted into z-scores
Roney-Dougal, 1981, ranking

* those which did not report results
** had three different scoring systems

So I was wrong when I said that most of the experiments could be assessed in a hit/miss format. But if you know the results and the number of trials, it's not difficult at all to come up with an "equivalent" hit rate and work from that. Radin demonstrates enough statistical know-how in his books to make me think it wouldn't be beyond him to reintroduce these experiments.

And as for those which did not report results numerically, they were described as being near chance or, in the case of Stanford, 1975, below chance but not significantly so. From that a value can be estimated, avoiding the bias that would come from dropping failed experiments just because they give only limited detail about their results.
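To illustrate the kind of conversion I mean, here's a minimal sketch in plain Python. The z-score, trial count and MCE are made-up illustration values, not figures from any of the studies listed above:

[code]
# Sketch: turn a reported z-score back into an "equivalent" number of
# direct hits under a binomial model. Illustration values only.
import math

def equivalent_hits(z, n_trials, mce):
    """Hits that would produce roughly this z-score at the given MCE."""
    mean = n_trials * mce
    sd = math.sqrt(n_trials * mce * (1 - mce))
    return mean + z * sd

# e.g. a hypothetical 30-trial study at 25% MCE reporting z = -0.5
# ("below chance but not significantly"):
print(round(equivalent_hits(-0.5, 30, 0.25), 1))  # ~6.3 hits vs. 7.5 by chance
[/code]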

Radin is also missing post-1985 data, but it is harder to establish from the book which experiments those are.
 
[Homer Simpson]You can use facts to prove anything that's even remotely true.[/Homer Simpson]

Good post. :)
 
Oh boy, the ganzfeld experiments are so lacking in any controls in the selection of targets and target words, and then they use all sorts of silliness to make it look 'significant'. Apparently they don't understand statistics or protocol.
 
What does MCE in the OP mean? I tried googling MCE and statistics but didn't get an answer ... Thanks.
 
What does MCE in the OP mean? I tried googling MCE and statistics but didn't get an answer ... Thanks.

Sorry. After reading so much of this stuff I find myself using terms, forgetting that no one else may understand them.

MCE is Mean Chance Expectation. For example, the MCE of correctly guessing a target out of four choices is 25%.
 
I maintain that Radin's meta-analysis is incomplete, and has no inclusion criteria, so it can't be taken seriously as an exhaustive and even-handed survey of the field. Effectively, Radin has taken the most famous results (which are the most favourable) and put them together. There are in fact twice as many ganzfeld trials as he reports on (a little under 7,000).
Thanks for all of the information. It will take me a while to read and digest. Preliminarily, however, if Radin has selectively cited 3145 favourable ganzfeld trials out of a little under 7,000, that still would mean that the overall hit rate would be highly statistically significant, if Radin is correct that 1008 of the 3145 trials that he cites were hits.

For example, assume that there actually are an additional 4000 trials that overall produced chance results of 25% hits, or 1000 hits in total. That would bring the total number of ganzfeld trials to well over 7000 (7145). The total number of hits would be 2008 (the 1008 cited by Radin plus the assumed additional 1000). That would produce an overall hit rate of 2008 out of 7145, or 28.1%. While, to the layman, that might seem only narrowly above the chance rate of 25%, with that many trials the true odds against chance would actually be 986 million to 1. In fact, using an on-line binomial calculator, if you put in 7145 for the value of "n", 2008 for the value of "k", and 0.25 for the value of "q" and click on calculate, you will obtain a "P" value for "2008 or more out of 7145" so small that it is not even calculated exactly, but simply shows as "<0.000001." See http://faculty.vassar.edu/lowry/binomialX.html
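As a cross-check, the same tail probability can be computed directly rather than through the web form. A minimal sketch in Python, assuming scipy is available (n, k and q are simply the figures from the paragraph above):

[code]
# Exact binomial tail for "2008 or more hits out of 7145 trials at 25%".
from scipy.stats import binom

n, k, q = 7145, 2008, 0.25         # trials, hits, chance hit rate
p_value = binom.sf(k - 1, n, q)    # P(X >= k), i.e. "k or more out of n"
print(p_value)                     # on the order of 1e-9, i.e. odds
                                   # against of roughly a billion to one
[/code]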

So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.
 
Hi Ersby. Well done for having the patience to research this for so many years. One day, I'll take the time to read through all parts of your article!

I maintain that Radin's meta-analysis is incomplete, and has no inclusion criteria

I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

The missing experiments from Honorton's 1985 meta-analysis...

Did Honorton say why he left out those experiments?

As an aside, I was reading through your introduction section. I was wondering about the "funnel" graph you plotted. The graph you've taken from the Richard Palmer study looks like it's on a log scale on the x-axis. Could you confirm that? Your ganzfeld graph appears linear on the x-axis.
 
So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.

Let's go one step at a time. First I want to invalidate Radin's meta-analysis. That his work is selective is, I think, established and it cannot be taken seriously. Do you accept that now? Once we've agreed on that, then I'll move on to the database as a whole.
 
I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

Mostly, yes. I wouldn't expect him to define his inclusion criteria in a popular science book, but I would expect him to be quite happy to explain it in an email or on his blog.

Did Honorton say why he left out those experiments?

It was because Hyman had an issue with a few of the early experiments using more than one scoring method, and then reporting the most successful one. Honorton agreed that this was a problem. His solution was to limit his analysis to those experiments that used direct hit scoring of 25%, 20% and 16.5% MCE. Why he didn't include 50% MCE as well is a puzzle to me.

As an aside, I was reading through your introduction section. I was wondering about the "funnel" graph you plotted. The graph you've taken from the Richard Palmer study looks like it's on a log scale on the x-axis. Could you confirm that? Your ganzfeld graph appears linear on the x-axis.
I don't know the details on Palmer's graph. :o I just used it as a nice illustration of the shape you'd expect. The x-axis on my graph is linear, yes.
 
I don't know the details on Palmer's graph. :o I just used it as a nice illustration of the shape you'd expect. The x-axis on my graph is linear, yes.

The Palmer graph x-axis certainly looks like a log scale from the spacing of intervals. But having just done a search on funnel plots in meta-analysis, it appears that both linear and log scales are used for both axes depending on the variable of interest.

This looks like a reputable summary. But it's way above my head!

http://biostatistics.oxfordjournals.org/cgi/reprint/1/3/247.pdf

I would be interested to see what kind of shape the ganzfeld plot turns out to be if you use a log scale on the x-axis though. If you still have the excel data, would you be able to do that relatively easily and let me know the result? Would be appreciated.
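For concreteness, the comparison I'm imagining is a one-line scale change in most plotting tools. A minimal sketch in Python/matplotlib, assuming the funnel plot puts trials per study on the x-axis and hit rate on the y-axis; the points are random placeholders, not the real ganzfeld data:

[code]
# Side-by-side funnel plot with linear vs. log x-axis (placeholder data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_trials = rng.integers(10, 400, size=140)          # one point per study
hit_rate = rng.binomial(n_trials, 0.25) / n_trials  # simulated chance results

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4), sharey=True)
for ax, scale in ((ax1, "linear"), (ax2, "log")):
    ax.scatter(n_trials, hit_rate, s=12)
    ax.axhline(0.25, linestyle="--")                # chance line (MCE)
    ax.set_xscale(scale)
    ax.set_xlabel(f"trials per study ({scale} x-axis)")
ax1.set_ylabel("hit rate")
plt.tight_layout()
plt.show()
[/code]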
 
Let's go one step at a time. First I want to invalidate Radin's meta-analysis. That his work is selective is, I think, established and it cannot be taken seriously. Do you accept that now? Once we've agreed on that, then I'll move on to the database as a whole.
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher. What sense does it make for him to quote odds against of 29 quintillion to 1 when, even if you include every possible study, the odds against are still a billion or so to 1? Would anyone fail to be impressed by odds of a billion to 1 against? So, I have to believe that Radin sincerely believes that a number of studies should be excluded because they did not meet the proper protocol. Can you cite a specific case where there were two studies done with the same protocols and he included one in his meta-analysis, but excluded the other?
 
Sorry. After reading so much of this stuff I find myself using terms, forgetting that no one else may understand them.

MCE is Mean Chance Expectation. For example, the MCE of correctly guessing a target out of four choices is 25%.

Thanks Ersby.

I've read your introductions to each section of your article, and plan on going through your actual article next.

As is probably no surprise, my lack of statistical knowledge is getting in the way of understanding all of your points, but I think I still get the main gist.

davidsmith73 said:
I take it Radin's meta-analysis paper doesn't include a section about inclusion criteria? If it doesn't, that's a poor show. Are you then basing your above statement on the fact that Radin didn't reply to your email request about this subject?

Ersby said:
Mostly, yes. I wouldn't expect him to define his inclusion criteria in a popular science book, but I would expect him to be quite happy to explain it in an email or on his blog.

I agree, but I would also expect him to provide a footnote to the inclusion criteria in his popular science books so that anyone who wanted to refer to it could.

IMHO, Radin's refusal to answer this question strongly weakens his position. I suppose it's possible that he has answered this question privately to a "peer-reviewed" scientist, but I think this type of information should be public. Is it fair to say that in other fields, like the "hard sciences" or in research psychology, it is?
 
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher. What sense does it make for him to quote odds against of 29 quintillion to 1 when, even if you include every possible study, the odds against are still a billion or so to 1?

Where do you get the billion to 1 odds from? It just seems to be an assumption.

However, I think the crux of the matter is that if you average a number of good studies with a number of crocked up studies, you will get an overall result above chance expectation.

Again - this kind of meta-analysis rests on the assumption that all studies are good. Which we know they are not. If Radin wants to make a case, he'd have to explain how to make a study that will consistently give significant results - not just when some researchers do it, but always.
 
Where do you get the billion to 1 odds from? It just seems to be an assumption.
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.

However, I think the crux of the matter is that if you average a number of good studies with a number of crocked up studies, you will get an overall result above chance expectation.
If there were a sufficient number of crocked up studies, yes. But it remains to be demonstrated how many of those studies there were.

Again - this kind of meta-analysis rests on the assumption that all studies are good. Which we know they are not. If Radin wants to make a case, he'd have to explain how to make a study that will consistently give significant results - not just when some researchers do it, but always.
No. When human beings are involved, results will vary. During Michael Jordan's basketball career, he established way beyond chance that he was one of the all-time greats. But there were many games when he was, at best, average.
 
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.

Where do you get the 4000 number from?

Additionally, you assume that the extra results would be neutral. If we ascribe these missing results to the 'file drawer effect', which I think is the most reasonable thing to do, then we would expect them to score below chance expectation on average.

No. When human beings are involved, results will vary. During Michael Jordan's basketball career, he established way beyond chance that he was one of the all-time greats. But there were many games when he was, at best, average.

That's why every study involves multiple trials, to give the subjects the benefit of the doubt that their PSI abilities may only be measurable on good days. But you seem to be suggesting that the researcher can do a "good" job, and end up with a positive study, or a "bad" job, and end up with chance or below chance expectations. Doesn't this seem a bit.. suspect?
 
Where do you get the 4000 number from?
In the Opening Post, Ersby stated: "There are in fact twice as many ganzfeld trials as he [Radin] reports on (a little under 7,000)." So, since Radin reports on 3145 trials, Ersby is suggesting that there are about 3855 (7000-3145) trials that Radin has not reported on. To be generous, I rounded the 3855 up to 4000, which would imply that there have actually been 7145 total ganzfeld trials.

Additionally, you assume that the extra results would be neutral. If we ascribe these missing results to the 'file drawer effect', which I think is the most reasonable thing to do, then we would expect them to score below chance expectation on average.
Why? A significantly below-chance result would actually be consistent with some psi believers' theory that certain researchers negatively impact the number of hits. So, I would think those experiments would get reported. On the other hand, to the extent that experiments are slightly positive or slightly negative, but not statistically significant, they could be subject to the 'file drawer effect'. In the absence of any evidence to the contrary, I think assuming exact chance results for unreported experiments is the fairest way to proceed.

That's why every study involves multiple trials, to give the subjects the benefit of the doubt that their PSI abilities may only be measurable on good days. But you seem to be suggesting that the researcher can do a "good" job, and end up with a positive study, or a "bad" job, and end up with chance or below chance expectations. Doesn't this seem a bit.. suspect?
I'm not assuming that, although again, some psi believers seem to believe that researchers who have negative attitudes toward psi can adversely affect results. But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral experiments to refute Radin's general thesis.
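To put a rough number on "far more than 4000": a minimal sketch in Python, again assuming scipy, that keeps diluting Radin's 1008 hits in 3145 trials with hypothetical exactly-at-chance trials until the one-sided p-value climbs above 0.05:

[code]
# Sensitivity sketch: dilute Radin's totals (1008 hits / 3145 trials)
# with hypothetical trials at exactly 25% hits until the one-sided
# binomial p-value rises above 0.05.
from scipy.stats import binom

hits, trials, extra = 1008, 3145, 0
while binom.sf(hits - 1, trials, 0.25) < 0.05:   # P(X >= hits)
    trials += 40                                 # add 40 chance trials...
    hits += 10                                   # ...10 of them hits (25%)
    extra += 40
print(extra)   # extra trials needed before significance disappears
[/code]

On these assumptions it takes on the order of ninety thousand additional chance-level trials before the combined result stops being significant.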
 
No, I don't accept your belief that Radin has deliberately excluded unfavourable ganzfeld studies to make the odds higher.
Well, think about those pre-1985 experiments again. In The Conscious Universe, Radin said they were excluded because they did not report results. Now, in Entangled Minds, he says it is because they do not use hit/miss scoring systems.

It appears to me that, having discovered that his initial reason for excluding these experiments was wrong, he does not reintroduce them, but rather comes up with a new reason to keep them excluded.

Can you cite a specific case where there were two studies done with the same protocols and he included one in his meta-analysis, but excluded the other?

Okay, right at the very beginning there's one of these dichotomies. Honorton and Parker both carried out the very first ganzfeld experiments separately (though Honorton was first to be published). Both are fairly typical ganzfeld set-ups, but Honorton's experiment is included, while Parker's isn't. In fact, Parker's is more typical, since he used white noise as the audio stimulus, while Honorton used sounds of the sea. Parker's used 50% MCE and it is excluded on those grounds.

Another example, which is perhaps more telling, was what happened to the Cornell experiment. This replication of the PRL trials explored the difference between meditators and non-meditators, and it ran for 50 trials, scoring a 24% hit rate. Radin split the results according to meditators (36%) and non-meditators (12%) and then he simply excluded the non-meditators. He said he couldn't include data from subjects who were expected to do badly.

Well, quite apart from the fact that non-meditators aren't expected to do badly, he should've really applied that thinking to the whole database. Of course, this would leave him with a very small number of experiments. So he just took the non-meditators out of this one experiment.

Other examples (and it's sometimes hard to "reverse engineer" which experiments he's left out by looking at the data in his books) include Williams, Roe, Upchurch, Lawrence, 1994, which tested the sender/non-sender protocols and also looked at geomagnetic activity. Both of these are pretty standard in ganzfeld, so I can't see on what grounds this experiment should be missing.

Lastly, back in 1978, Schmitt and Stanford did an experiment to see whether the menstrual cycle would affect success in ganzfeld ESP tests. This is in Radin's meta-analysis (as part of Honorton's data) but the replication of this experiment (which got worse results) by Keane & Wells, 1979, is not included.

Whether or not this is all deliberate, I don't know. I suspect that, having found the results he wants by adding together some very high-profile results, Radin doesn't really want to explore the other experiments too closely.
 
Where do you get the 4000 number from?

Yes, he got it from me. My database of ganzfeld experiments is between 6,700 and 7,000 trials (from around 140 experiments), depending on which way you count the number of trials in certain experiments.
 
But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral experiments to refute Radin's general thesis.
Just to make things clear - I'm saying that Radin's meta-analysis is wrong and can't be considered evidence of anything. I'm not trying to refute Radin's general thesis (that psi exists).

Like I said, one thing at a time.
 
