
Psi in the Ganzfeld

Thanks for all of the information. It will take me a while to read and digest. Preliminarily, however, even if Radin has selectively cited 3145 favourable ganzfeld experiments out of a little under 7,000, the overall hit rate would still be highly statistically significant, provided Radin is correct that the 3145 experiments he cites produced 1008 hits. For example, assume that there are an additional 4000 experiments that overall produced chance results of 25% hits, or 1000 hits total. That would bring the total number of ganzfeld experiments to just over 7,000 (7145). The total number of hits would be 2008 (the 1008 cited by Radin plus the assumed additional 1000). That produces an overall hit rate of 2008 out of 7145, or 28.1%. While, to the layman, that might seem only narrowly above the chance rate of 25%, with that many experiments the true odds against chance would actually be 986 million to 1. In fact, using an online binomial calculator, if you enter 7145 for the value of "n", 2008 for the value of "k", and 0.25 for the value of "q" and click on Calculate, you will obtain a "P" value for "2008 or more out of 7145" so small that it is not even calculated exactly, but simply shows as "<0.000001." See http://faculty.vassar.edu/lowry/binomialX.html
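The calculation can be checked without the online calculator. Below is a quick Python sketch (standard library only, not from the thread) that computes the exact binomial tail probability P(X >= 2008) for n = 7145 and p = 0.25; the summation is done in log space because the individual terms are far too small for ordinary floating point:

```python
from math import exp, lgamma, log

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p), summed in log space."""
    log_p, log_q = log(p), log(1.0 - p)

    def log_pmf(i):
        # log C(n, i) + i*log(p) + (n - i)*log(1 - p), via log-gamma
        return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
                + i * log_p + (n - i) * log_q)

    logs = [log_pmf(i) for i in range(k, n + 1)]
    m = max(logs)  # log-sum-exp trick for numerical stability
    return exp(m) * sum(exp(v - m) for v in logs)

p_value = binom_sf(2008, 7145, 0.25)
print(p_value)  # on the order of 1e-9
```

The result is on the order of 10^-9, consistent with the calculator's "<0.000001" display and with odds against chance in the hundreds of millions to one.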

So, to invalidate the ganzfeld experiments, it appears that something more than selective inclusion must be found.

The statistics would be nicer if there were better controls in place.

1. Each picture needs to have matching words assigned to it. This is a crucial control; otherwise the probability of a match is not twenty-five percent, it is unknown.
2. The word distribution associated with each picture then needs to be examined: at what rate do certain pictures match certain words?
3. Each set of pictures can then be controlled for a very important variable, which I call 'the random match rate'. Each set should contain pictures that match different words, so no two pictures should be judged as having the same matching words. This is necessary for the probability of a word match to be twenty-five percent.
4. The overall distribution of words given by the receiver must then be examined, and the target pictures have to be matched against this random, or pseudo-random, match rate of random words stated by the receiver. This would eliminate the possibility of a picture having a fifty percent match to any random word selection.

These are the controls that need to be in place. Otherwise the 'target' picture might have a 50% rate of random match to any given receiver string, which would raise the probability above 25%, especially if all the pictures in a set have high match rates to random receiver strings.

For the ganzfeld to be significant, you have to prove that the given match rate is 25%; you can't assume that the match rate to a randomly chosen receiver string is 25%. Certain pictures could have a much higher or lower match rate to any given set of receiver word strings.

That is why the ganzfeld data is not valid at this point.
 
No, in post #6 here, I demonstrated that, even if 4000 additional ganzfeld trials were added with chance results to Radin's numbers to give a total of 7145 trials, the odds against the number of hits would be 986 million to 1. However, Ersby says that the total number of trials is actually less than 7000, which would push the odds against to well over a billion to 1.


That is only true because you assume, without proving it, that a given picture has only a twenty-five percent chance of matching a random string of receiver words. First, no picture in a set can have match words in common with any other picture in the set. Then you have to prove that the match rate for any given picture is only twenty-five percent against any given random receiver string of words.

If a picture has a high probability of matching any random receiver string, then the odds are not twenty-five percent.
 
That is only true because you assume, without proving it, that a given picture has only a twenty-five percent chance of matching a random string of receiver words. First, no picture in a set can have match words in common with any other picture in the set. Then you have to prove that the match rate for any given picture is only twenty-five percent against any given random receiver string of words.

If a picture has a high probability of matching any random receiver string, then the odds are not twenty-five percent.
I don't follow your logic. The way a classic ganzfeld experiment works is that a recipient, without knowing which of four pictures is the target picture focused on by a sender, selects one of the four. If the recipient chooses the picture focused on by the sender, that's a hit -- otherwise, it's a miss. The sender is not allowed to choose the picture focused on, but rather is required to focus on a picture selected at random. There are some variations on this procedure, such as using a panel of judges to determine, based on the recipient's comments, which of the four pictures is the best match, but again, the panel does not know which is the target picture.

Now, in some cases, the picture focused on may be more likely to be chosen by the recipient or the panel because, for example, it may have more vivid imagery than the other three pictures. However, the reverse is also true: In some cases, the picture focused on may be less likely to be chosen by the recipient or the panel because it may have less vivid imagery than the other three pictures. As long as selection of the picture is random and the recipient or panel does not know which picture is the target, the expected hit rate should be 25%.
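That last point is easy to demonstrate with a short Monte Carlo sketch (the appeal weights below are hypothetical, purely for illustration): even if the judging is heavily biased toward certain pictures, a uniformly random target keeps the long-run hit rate at one in four:

```python
import random

random.seed(42)

# Hypothetical "appeal" weights: picture 0 is far more likely to be picked
# by the recipient/judge, regardless of which picture is the target.
appeal = [0.50, 0.25, 0.15, 0.10]

trials = 100_000
hits = 0
for _ in range(trials):
    target = random.randrange(4)                          # random target
    choice = random.choices(range(4), weights=appeal)[0]  # biased judging
    hits += (choice == target)

print(hits / trials)  # close to 0.25 despite the biased judging
```

The algebra behind it: P(hit) = sum over i of P(target = i) * P(choice = i) = (1/4) * sum of the weights = 1/4, whatever the weights are.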
 
Why? A significantly below-chance result would actually be consistent with some psi believers' theory that certain researchers negatively impact the number of hits. So, I would think those experiments would get reported.

If only some psi believers believe this (which appears to be correct), then we should assume that only some studies with below-chance expectations would be published. Which, again, suggests a below-chance average for unpublished results.

Additionally, for Radin's selection (thanks for explaining the numbers again, I misunderstood), the choice is made by Radin, who apparently does not consider below-chance expectations to be significant.

I'm not assuming that, although again, some psi believers seem to believe that researchers who have negative attitudes toward psi can adversely affect results.

Your analogy with the basketball player suggests that you do assume this. Otherwise, why would researchers need several chances to get a positive result from an overall study?

I'm sure that researchers who have negative or even neutral attitudes towards psi can adversely affect results. First, they tend to impose stricter controls and be more careful with their statistics. Second, we can be quite sure that they will not fabricate results that give positive results (they may of course fabricate a non-significant result).

But my point is that, since the results reported by Radin are so overwhelmingly statistically significant, it would take far more than 4000 unreported neutral experiments to refute Radin's general thesis.

Which again assumes that all the positive experiments are good. Which is clearly not the case. This is why, again, we need Radin and friends to point out a procedure for making the experiment that consistently gives significantly positive results.
 
I would be interested to see what kind of shape the ganzfeld plot turns out to be if you use a log scale on the x-axis though. If you still have the excel data, would you be able to do that relatively easily and let me know the result? Would be appreciated.

I'm not sure what that would achieve. But as it turns out, I brought the wrong cd-rom into work this morning, so I can't do anything 'til after Xmas.
 
Another example, which is perhaps more telling, was what happened with the Cornell experiment. This replication of the PRL trials explored the difference between meditators and non-meditators, and it ran for 50 trials, scoring a 24% hit rate. Radin split the results according to meditators (36%) and non-meditators (12%) and then simply excluded the non-meditators. He said he couldn't include data from subjects who were expected to do badly.

Well, quite apart from the fact that non-meditators aren't expected to do badly, he should've really applied that thinking to the whole database. Of course, this would leave him with a very small number of experiments. So he just took the non-meditators out of this one experiment.

Oh, what a mess. It's things like this that shake my confidence in the ganzfeld meta-analysis. I had no idea that the Cornell experiments were split into two hit rates just from reading The Conscious Universe. Perhaps Radin had a good reason to do this, but from what you've said, Ersby, I can't see how. Yes, non-meditators should also be getting above-chance results, since much of the entire database is non-meditating trials.

However, I still believe that ganzfeld research provides promising results. The Cornell experiments in themselves suggest a difference between altered states. Perhaps meta-analyses should concentrate on finding differences between these kinds of states of consciousness instead of trying to lump different kinds of experiments together.

Do you know of such a meta-analysis?
 
Honorton investigated the difference in his work at PRL, which found that meditators scored higher than non-meditators. Milton & Wiseman, in their m-a, found no such effect in the data they examined. Broughton also looked at the difference, but I can't remember what he found.

[shameless plug]If you search my articles for "meditator", that should point you to the data you want.[/shameless plug]

Bierman and Wezelman also examined the difference in scoring for people on drugs, but found no effect.
 
However, I still believe that ganzfeld research provides promising results.

"No matter what happens, even if the research I have so far put so much trust in turns out to be bogus...I will still be a believer, because there is always some new crap I can hitch on to."
 
I don't follow your logic. The way a classic ganzfeld experiment works is that a recipient, without knowing which of four pictures is the target picture focused on by a sender, selects one of the four. If the recipient chooses the picture focused on by the sender, that's a hit -- otherwise, it's a miss. The sender is not allowed to choose the picture focused on, but rather is required to focus on a picture selected at random. There are some variations on this procedure, such as using a panel of judges to determine, based on the recipient's comments, which of the four pictures is the best match, but again, the panel does not know which is the target picture.

Now, in some cases, the picture focused on may be more likely to be chosen by the recipient or the panel because, for example, it may have more vivid imagery than the other three pictures. However, the reverse is also true: In some cases, the picture focused on may be less likely to be chosen by the recipient or the panel because it may have less vivid imagery than the other three pictures. As long as selection of the picture is random and the recipient or panel does not know which picture is the target, the expected hit rate should be 25%.


Procedure varies, but in many experiments there is a word choice made by the receiver; they say some words and then decide which picture they think it matches.

In the protocol you are describing, a similar effect would still apply. Certain pictures are more likely to be chosen, or not chosen, at random without any ganzfeld experiment. To control for random picture selection you would have to match the pictures on a different scale.

Show a random set of four pictures to each receiver, tell them that someone else is trying to 'send' them an image, and ask them to choose the target. Do the same with senders: have them choose a picture to 'send' without a receiver. Certain pictures will have a higher chance of being chosen over time, and certain pictures a lower probability, due simply to the appeal of the subject matter.

You can then match sets to ensure that there is an equal probability of either a sender match or a receiver match.

Then you could say that there was a base probability of 25%, but it is very possible that certain pictures are more likely to be chosen by receivers over others, and if those are the 'targets' then that is going to be an effect other than psi.
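The calibration idea above can be sketched in code. Assuming per-picture base selection rates have already been measured in no-sender runs (the scores below are invented for illustration), one simple way to equalise within-set probabilities is to sort by measured appeal and put pictures of similar appeal into the same set:

```python
# Hypothetical base rates from calibration runs: the fraction of times each
# picture was chosen when there was no sender at all.
appeal = {
    "p01": 0.41, "p02": 0.38, "p03": 0.35, "p04": 0.33,
    "p05": 0.24, "p06": 0.22, "p07": 0.21, "p08": 0.19,
    "p09": 0.14, "p10": 0.13, "p11": 0.12, "p12": 0.10,
}

# Sort by measured appeal and chunk consecutive pictures into sets of four,
# so every set contains pictures of roughly equal appeal.
ranked = sorted(appeal, key=appeal.get, reverse=True)
sets = [ranked[i:i + 4] for i in range(0, len(ranked), 4)]

for s in sets:
    total = sum(appeal[p] for p in s)
    shares = [round(appeal[p] / total, 2) for p in s]
    print(s, shares)  # each within-set share ends up near 0.25
```

This is only a sketch; a real design would also need to re-measure the sets after grouping, since appeal is relative to the other pictures shown alongside.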
 
Oh, what a mess. It's things like this that shake my confidence in the ganzfeld meta-analysis. I had no idea that the Cornell experiments were split into two hit rates just from reading The Conscious Universe. Perhaps Radin had a good reason to do this, but from what you've said, Ersby, I can't see how. Yes, non-meditators should also be getting above-chance results, since much of the entire database is non-meditating trials.
I don't see the evidence that Radin has excluded non-meditators from the overall figures he cites in Entangled Minds; i.e., 3145 ganzfeld experiments with 1008 hits. If he did exclude them, that would be wrong, but I still come back to how far above chance 1008 hits in 3145 experiments is. Is there any meta-analysis that Ersby or anyone else has done of all experiments that shows the overall hit rate was not significantly above chance?
 
Rodney: Well, if he DID exclude the non-meditators, then this would be exactly the kind of selection I was talking about. Exclusion of below chance outcomes. So at any rate, I think your theory that any added data would only be at chance expectations seems very weak indeed.

In fact, there seem to be three possible explanations here:
a) Data with below-chance expectations was excluded. Either by Radin himself, or due to a 'file drawer' effect.
b) Some above-chance studies are bad, raising the average. Either because of outright fraud, or due to flawed methodology.
c) The PSI effect exists.

So in the end, I repeat my mantra: Radin should stop focusing so much on averages, which will never prove anything, and instead concentrate on finding a test procedure that works reliably.
 
I don't see the evidence that Radin has excluded non-meditators from the overall figures he cites in Entangled Minds;

Why do you say that you cannot see it? Maybe I have not been clear enough. Let me try once more.

Bear in mind that the Entangled Minds meta-analysis is just an updated version of the m-a in The Conscious Universe. Okay?

So, go back to your copy of The Conscious Universe.

Look at figure 5.4 regarding ganzfeld experiments. See that Cornell is listed as having a hit rate of 36%. The only way Radin could come to that conclusion is if he deliberately excluded non-meditators from his meta-analysis for that experiment.

Okay?

Now, just to seal the argument, let's hear from Radin himself:

Radin, "Should Ganzfeld Research Continue to Be Crucial in the Search for a Replicable Psi Effect? Part II. Edited Ganzfeld Debate", JoP, vol 6, 1999
"Bem's experiment was a differential ganzfeld study involving meditators (25 sessions) and nonmeditators (25 sessions). I did not include the nonmeditator data in my analysis because that group was predicted to not perform as well as the meditators (which is what happened), and I couldn't justify including in a proof-oriented meta-analysis a subset which was predicted to "not" perform."


I repeat: non-meditators are not predicted to do badly in ESP ganzfeld experiments.

Now, Rodney, whether you see it or not is no longer an issue for the ganzfeld experiments, more an issue for your ability to process data which does not conform to your world view.

Either way, please furnish us with data supporting your view, or admit that Radin's meta-analysis of ganzfeld is not evidence for ESP.

Is there any meta-analysis that Ersby or anyone else has done of all experiments that shows the overall hit rate was not significantly above chance?

Once again, I must insist that we deal with one topic at a time.

When you admit that Radin's meta-analyses is incomplete, we can move on to the database as a whole.

And if you do not admit that Radin's meta-analysis is incomplete, please give your evidence for doing so. I'm afraid that statements like "I have to believe that..." are not good enough.

I have given you plenty of data demonstrating beyond doubt that Radin's meta-analysis is fatally flawed.

If you think otherwise, please explain why. With data.

Then we may continue.
 
Psi in the Ganzfeld, shoo, psi, shoo!
Psi in the Ganzfeld, shoo, psi, shoo,
Psi in the Ganzfeld, shoo, psi, shoo!
Skip to m'lou, my darling.
 
Why do you say that you cannot see it? Maybe I have not been clear enough. Let me try once more.

Bear in mind that the Entangled Minds meta-analysis is just an updated version of the m-a in The Conscious Universe. Okay?
Are you sure that Radin has not made any adjustments from the Conscious Universe?

So, go back to your copy of The Conscious Universe.

Look at figure 5.4 regarding ganzfeld experiments. See that Cornell is listed as having a hit rate of 36%. The only way Radin could come to that conclusion is if he deliberately excluded non-meditators from his meta-analysis for that experiment.

Okay?

Now, just to seal the argument, let's hear from Radin himself:

Radin, "Should Ganzfeld Research Continue to Be Crucial in the Search for a Replicable Psi Effect? Part II. Edited Ganzfeld Debate", JoP, vol 6, 1999
"Bem's experiment was a differential ganzfeld study involving meditators (25 sessions) and nonmeditators (25 sessions). I did not include the nonmeditator data in my analysis because that group was predicted to not perform as well as the meditators (which is what happened), and I couldn't justify including in a proof-oriented meta-analysis a subset which was predicted to "not" perform."
I agree that doesn't seem justified, but I can't see why Radin would be so concerned with a paltry 25 trials, which has no significant bearing on his analysis.
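Indeed, a quick binomial check (done here in Python; the trial counts are from Ersby's description above) shows that 9 hits in 25 trials is not even significant on its own, and that the Cornell study as a whole sits at chance:

```python
from math import comb

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Meditator subgroup: 36% of 25 trials = 9 hits
p_meditators = binom_sf(9, 25, 0.25)
# Whole experiment: 24% of 50 trials = 12 hits
p_overall = binom_sf(12, 50, 0.25)

print(round(p_meditators, 3))  # roughly 0.15: not significant on its own
print(round(p_overall, 3))     # far above 0.05: the study is at chance
```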

I repeat: non-meditators are not predicted to do badly in ESP ganzfeld experiments.
Are you sure that Daryl Bem did not predict that? One of the commenters stated: "Daryl Bem's study is unpublished, so I cannot check whether he predicted nonmeditators not to perform at all or to perform less well than meditators--two entirely different things."

Now, Rodney, whether you see it or not is no longer an issue for the ganzfeld experiments, more an issue for your ability to process data which does not conform to your world view.
Ah yes, the objective skeptic versus the biased believer. :)

Either way, please furnish us with data supporting your view, or admit that Radin's meta-analysis of ganzfeld is not evidence for ESP.
1008 hits out of 3145 experiments with an expected hit rate of 25% is not evidence for ESP? And I've already demonstrated that adding in 4000 more experiments is highly unlikely to reduce Radin's findings to insignificance.

When you admit that Radin's meta-analyses is incomplete, we can move on to the database as a whole.
Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate. But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.
 
Are you sure that Radin has not made any adjustments from the Conscious Universe?

How could Radin "adjust" the Cornell experiments?

I agree that doesn't seem justified, but I can't see why Radin would be so concerned with a paltry 25 trials, which has no significant bearing on his analysis.

Then which 25 trials are significant? If we know that he selected some data erroneously, then why should we expect him not to have done the same in many other cases?

Are you sure that Daryl Bem did not predict that? One of the commenters stated: "Daryl Bem's study is unpublished, so I cannot check whether he predicted nonmeditators not to perform at all or to perform less well than meditators--two entirely different things."

But we know that Radin did not predict that. If he did, he should exclude all the other trials with non-meditators, too. Presumably that would be most of the data in the m-a.


1008 hits out of 3145 experiments with an expected hit rate of 25% is not evidence for ESP?

Not if the selection of experiments has been shown to be flawed.

And I've already demonstrated that adding in 4000 more experiments is highly unlikely to reduce Radin's findings to insignificance.

Again, only if you assume that they will be neutral. Based on the 25 experiments that we know he excluded, we know that this is not true.

Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate.

It's not necessary that he did it deliberately, only that he did it.

But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.

Which one of them? I would like to know which test setup you think will produce significant results. If you could tell us, we could move on to compare different studies using the same kind of setup, and from there it should be possible to draw such conclusions.
 
Ah yes, the objective skeptic versus the biased believer. :)
Well, look at it from my point of view - I don't want to be the one that brings all the data to the party. If you hold an opinion on something, the least you can do is explain why.

1008 hits out of 3145 experiments with an expected hit rate of 25% is not evidence for ESP?
As Merko said, with Radin's selection process (or, rather, lack of it), it is not evidence for ESP. If I wanted I could put together a meta-analysis as big as Radin's with a hit rate at chance. Would you consider that to be evidence against psi?

Okay, it's incomplete, but I'm still not persuaded that Radin has deliberately excluded experiments that produced insignificant or negative results solely to hype the hit rate.
Deliberate or not, I don't know. But that's what he did.

But let's move on because I'm interested in hearing what you think was flawed about the many successful ganzfeld experiments.
Not all of them were flawed, of course. There's about a dozen which have statistically significant scores and no obvious flaws. Do you want me to pick an experiment and we can discuss it?
 
As Merko said, with Radin's selection process (or, rather, lack of it), it is not evidence for ESP. If I wanted I could put together a meta-analysis as big as Radin's with a hit rate at chance. Would you consider that to be evidence against psi?
If it is logical. Feel free to do so.

Not all of them were flawed, of course. There's about a dozen which have statistically significant scores and no obvious flaws. Do you want me to pick an experiment and we can discuss it?
Yes.
 
My m-a, should I do it, will be every bit as logical as Radin's.

But it'll be after Christmas. See you in a few days.
 
