The Ganzfeld Experiments

JMA said:
Hi,

Do you know in which publication I can find that? I'd like to read it, especially if the results are statistically significant...

Thanks,

Sure.

"[...]When 10 new studies published after the Milton Wiseman cut off date are added to their database, the overall ganzfeld effect again becomes significant, but the mean effect size is still smaller than those from the original studies. Ratings of all 40 studies by 3 independent raters reveal that the effect size achieved by a replication is significantly correlated with the degree to which it adhered to the standard ganzfeld protocol. Standard replications yield significant effect sizes comparable with those obtained in the past. " Bem, D.J, Palmer, J. and Broughton, R.S. (2001). Updating the Ganzfeld database: a victim of its own success? Journal of Parapsychology, 65, 207-218.

It was also covered in Science News:

"Since the metanalysis was completed, nine more ganzfeld studies have been published. Milton acknowledges that the psi effect would be statistically significant if the analysis were updated to include these studies."

http://www.sciencenews.org/pages/sn_arc99/7_31_99/fob4.htm

BTW, you may also not be aware of Bem's response to Hyman.

Here it is:

http://comp9.psych.cornell.edu/dbem/response_to_hyman.html
 
Ian said:
Yes, it's exactly the same thing. Of course, if we take bias into account, then over a particular experiment the average will likely be either greater or less than 25%. But that all averages out over a sufficiently large number of experiments, so that the average is exactly 25%.
What are you averaging over multiple experiments? Target bias makes sense within one experiment. You can't average it over multiple experiments, because the targets aren't the same.

You are right that target bias could just as well skew results negatively, but this is a tricky business. Bias might tend to work in favor of the results, because one or two subjects tease out the positive bias while the rest of the subjects aren't really affected by any negative bias.

As I've said before, the statistical model is naive. It's a complicated problem.

~~ Paul
 
Luci said:
Hi, welcome. I think you may be referring to the Wiseman/Milton meta-analysis. In fact, it was not complete. When Julie Milton completed it, it was found to be statistically significant.
...
"[...]When 10 new studies published after the Milton Wiseman cut off date are added to their database, the overall ganzfeld effect again becomes significant, but the mean effect size is still smaller than those from the original studies. Ratings of all 40 studies by 3 independent raters reveal that the effect size achieved by a replication is significantly correlated with the degree to which it adhered to the standard ganzfeld protocol. Standard replications yield significant effect sizes comparable with those obtained in the past. " Bem, D.J, Palmer, J. and Broughton, R.S. (2001). Updating the Ganzfeld database: a victim of its own success? Journal of Parapsychology, 65, 207-218.
Are you referring to the same thing here?

~~ Paul
 
Ersby said:
Originally posted by Interesting Ian
But are people not able to understand that this does not alter the chance of getting the right target?? Do people really fail to understand that if psi does not exist the average cannot possibly be greater than 25%???

Response bias, for the last time, is an entirely post hoc measure.

You are claiming that the hit rate will on average be greater than 25%. Your suggestion is preposterous. The fact that parapsychologists are making the same mistake doesn't impress me either.

If, after the run of an experiment, it transpires that targets were chosen at random that JUST SO HAPPENED to coincide with the ALREADY KNOWN AND UNDERSTOOD propensity for people to talk about certain things in the ganzfeld state, then there are grounds for suggesting that response bias will inflate the hit rate expected by chance.

Indeed, and if targets were chosen at random that JUST SO HAPPENED to differ from the ALREADY KNOWN AND UNDERSTOOD propensity for people to talk about certain things in the ganzfeld state, then there are grounds for suggesting that response bias will reduce the hit rate.

And on average such biases will cancel each other out resulting in exactly 25% expected hit rate if there is no psi.

I already gave a hypothetical example of how it could work. If someone knows that 50% of a target pool has water, and he talks about water in each session, and (by chance) water pictures are chosen as a target 60% of the time, not 50%, then the hit rate expected by chance is about 29%.
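
To see where a figure like 29% can come from, here is a quick Monte Carlo sketch of that hypothetical. The 60% water rate, the four-clip judging set, the decoy pool and the always-pick-water rule are all assumptions of the toy example, not a description of any real study:

```python
# Toy simulation of the hypothetical above: 50% of the pool shows water, the
# receiver always talks about water, and by chance water happens to be the
# target in 60% of sessions. All numbers are illustrative assumptions.
import random

def simulate(sessions=200_000, water_target_rate=0.60, pool_water_rate=0.50):
    hits = 0
    for _ in range(sessions):
        # One target plus three decoys drawn from the nominal 50%-water pool.
        target_is_water = random.random() < water_target_rate
        decoys = [random.random() < pool_water_rate for _ in range(3)]
        clips = [("target", target_is_water)] + [("decoy", w) for w in decoys]
        # The judge favours water imagery: pick a water clip if any is present,
        # otherwise guess at random among the four.
        water_clips = [c for c in clips if c[1]]
        choice = random.choice(water_clips) if water_clips else random.choice(clips)
        hits += (choice[0] == "target")
    return hits / sessions

print(simulate())  # roughly 0.29 rather than the nominal 0.25
```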

If water pictures predominate and people have a psychological propensity to talk about or pick water pictures, then clearly it will be greater than 25%. Obviously everything has to be random, as I have made clear. It was my understanding that they were.
 
Ersby said:
Lucian is mistaken. Wiseman and Milton worked on the paper together. When it was completed it showed no effect.

http://www.csicop.org/si/9911/lilienfeld.html

The reason the Milton/Wiseman meta-analysis showed no effect was that some of the studies in their database were very non-standard. To prove this, Bem had three of his advanced graduate students at Cornell blindly rate each study according to the degree to which it complied with the standards set out in the original Psychological Bulletin article:

"The method sections for the 40 studies to be rated were first edited to eliminate all article titles, authors, hypotheses, references to results of other experiments in the sample, and descriptions of psychological tests (except those given during the ganzfeld or used for subject selection). The edited method sections were then photocopied and assembled into judging packets."

The end result was that studies which adhered to the standard procedure retained a hit rate consistent with the original meta-analysis. Studies which were rated non-standard, such as one where a piece of music was the target, were not consistent and showed no effect.
http://comp9.psych.cornell.edu/dbem/updating_the_ganzfeld_data.htm

amherst
 
amherst said:

The reason the Milton/Wiseman meta-analysis showed no effect was that some of the studies in their database were very non-standard. To prove this, Bem had three of his advanced graduate students at Cornell blindly rate each study according to the degree to which it complied with the standards set out in the original Psychological Bulletin article:

"The method sections for the 40 studies to be rated were first edited to eliminate all article titles, authors, hypotheses, references to results of other experiments in the sample, and descriptions of psychological tests (except those given during the ganzfeld or used for subject selection). The edited method sections were then photocopied and assembled into judging packets."

The end result was that studies which adhered to the standard procedure retained a hit rate consistent with the original meta-analysis. Studies which were rated non-standard, such as one where a piece of music was the target, were not consistent and showed no effect.
http://comp9.psych.cornell.edu/dbem/updating_the_ganzfeld_data.htm

amherst

""Milton acknowledges that the psi effect would be statistically significant if the analysis were updated to include these studies."
 
Ersby,

Let's say I go and play on a roulette wheel. I either bet on red or black for 100 goes. Now let's initially forget how many I get right and just examine the response bias. At the end, on examining the number of hits, it is found that 67% of them are black. This reveals I most probably have a "response bias" preferring black to red, on average twice as much. Now how would this affect my overall hit rate? You would say that if the ball has the propensity to land in black pockets the average hit rate will be inflated, meaning that I will get more than 50%. But if the wheel is a fair one then in the next trial of 100 goes the red pockets might well be favoured. This will then result in my score being reduced by, on average, the same as it was increased on the previous trial. But on average I will be right only 50% of the time. OK?? But note again the number of hits which are black will still be 67%.

But it should be clear here that the response bias cannot magically alter the percentage I get right!

Understood? ;)
 
Interesting Ian said:
Ersby,

Let's say I go and play on a roulette wheel. I either bet on red or black for 100 goes. Now let's initially forget how many I get right and just examine the response bias. At the end, on examining the number of hits, it is found that 67% of them are black. This reveals I most probably have a "response bias" preferring black to red, on average twice as much. Now how would this affect my overall hit rate? You would say that if the ball has the propensity to land in black pockets the average hit rate will be inflated, meaning that I will get more than 50%. But if the wheel is a fair one then in the next trial of 100 goes the red pockets might well be favoured. This will then result in my score being reduced by, on average, the same as it was increased on the previous trial. But on average I will be right only 50% of the time. OK?? But note again the number of hits which are black will still be 67%.

But it should be clear here that the response bias cannot magically alter the percentage I get right!

Understood? ;)

No, I've got this wrong. Say I have a psychological propensity to choose black balls, and in fact I'm twice as likely to choose black balls as red balls. This would mean that if an equal number of red balls and black balls resulted from 100 spins of the wheel, 67% of my correct calls would be black. But this percentage will be higher if during a 100-spin trial more black balls result than red balls, and lower than 67% if fewer black balls result than red balls. But it still cancels out, leaving 50% on average that you will get right.
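
Put as arithmetic, and assuming a fair wheel with the zero ignored and a 2:1 preference for black taken at face value, a quick check bears this out:

```python
# Quick check of the corrected roulette reasoning: a 2:1 preference for black
# on a fair wheel (zero ignored) changes which calls are hits, not how many.
p_call_black, p_call_red = 2/3, 1/3
p_black = p_red = 1/2

hit_rate = p_call_black * p_black + p_call_red * p_red
black_share_of_hits = (p_call_black * p_black) / hit_rate

print(hit_rate)             # 0.5    -> the bias leaves the hit rate at 50%
print(black_share_of_hits)  # ~0.667 -> but about 67% of the hits are on black
```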
 
Interesting Ian said:

Yes, it's exactly the same thing. Of course, if we take bias into account, then over a particular experiment the average will likely be either greater or less than 25%. But that all averages out over a sufficiently large number of experiments, so that the average is exactly 25%.


Not quite. As a simple example : if you flip ten (fair) coins, you expect to get five heads and five tails, yes? Well, I'm prepared to make a little wager : five quid says that you don't. I'll go one better and offer you odds of 2:1.

And you're a fool if you take it, since the actual odds of getting exactly five heads in ten flips are about 3:1 against. You usually get results "close to" a 50/50 split, but not exact. The difference between the actual results and the predicted results is random.

Aha, you say, but on average the errors will balance out! No, they don't. If you think about it, it's random whether you will get "too many" heads or "too few," so we could model that as another coin flip, and you see that the chances of getting exactly equivalent numbers of too high and too low are yet another coin flip. In fact, I'm prepared to make another wager with you. The same freakish talking money says that if you flip ten coins ten times, the average number of heads per flip will not be 5.0. Heck, I'll give you 5:1 odds on that....

And, again, it's a fool's wager. The chance of getting an average of exactly 5.0 heads/flip is exactly the chance of flipping exactly 50 heads in 100 tosses, which works out to about 12:1 against.
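
Both sets of odds are easy to check against the binomial distribution; a minimal sketch, assuming fair coins (the "about 12:1" is closer to 11.6:1):

```python
# Checking the two wagers against the binomial distribution (fair coins assumed).
from math import comb

def p_exact(n, k):
    """Probability of exactly k heads in n fair flips."""
    return comb(n, k) / 2**n

p10 = p_exact(10, 5)     # ~0.246 -> odds against ~3.1 : 1
p100 = p_exact(100, 50)  # ~0.080 -> odds against ~11.6 : 1, i.e. about 12:1

for p in (p10, p100):
    print(p, (1 - p) / p)
```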

So you have to distinguish between the results "expected" of a random process and the actual results: they are almost never the same, though usually close. Not identical, but close.

The Bierman paper showed that, in the experiment under discussion, if you use the actual results of the random decisions, instead of the imaginary expected results, the findings become "non-significant." In other words, there's no evidence for a real effect.
 
Interesting Ian said:


No, I've got this wrong. Say I have a psychological propensity to choose black balls, and in fact I'm twice as likely to choose black balls as red balls. This would mean that if an equal number of red balls and black balls resulted from 100 spins of the wheel, 67% of my correct calls would be black. But this percentage will be higher if during a 100-spin trial more black balls result than red balls, and lower than 67% if fewer black balls result than red balls. But it still cancels out, leaving 50% on average that you will get right.

Only if the amount by which more black balls came out in the first trial is exactly the same as the amount by which more red balls came out in the second.

Which is highly unlikely.

Test it yourself. Flip fifty coins, and count how many heads came up. Then flip another fifty and count how many tails came up (in the second batch). The odds that those two numbers are equal are about 12:1 against.
 
Luci said:
I think you may be referring to the Wiseman/Milton meta-analysis. In fact, it was not complete. When Julie Milton completed it, it was found to be statistically significant.
Why did you say this when it had nothing to do with Milton "completing" the analysis?

Ian said:
Indeed, and if targets were chosen at random that JUST SO HAPPENED to differ from the ALREADY KNOWN AND UNDERSTOOD propensity for people to talk about certain things in the ganzfeld state, then there are grounds for suggesting that response bias will reduce the hit rate.
Correct.

And on average such biases will cancel each other out resulting in exactly 25% expected hit rate if there is no psi.
Only if your "grounds for suggestion" happen to be correct. Maybe that's not how response bias works in this case. [Edited to add: and see Dr. Kitten's posts above.]

Why does everyone just guess at how things might work?!?

But if the wheel is a fair ...

But it should be clear here that the response bias cannot magically alter the percentage I get right!
You don't seem to remember what assumptions you make from one paragraph to the next. Obviously if the targets have absolutely no bias in them, then response bias is irrelevant. Do you think that the targets had no bias? For example, was there a representative sample of exotic insects from Borneo?

~~ Paul
 
drkitten said:


Not quite. As a simple example : if you flip ten (fair) coins, you expect to get five heads and five tails, yes?



No, the odds would be against that.

Well, I'm prepared to make a little wager : five quid says that you don't. I'll go one better and offer you odds of 2:1.

And you're a fool if you take it, since the actual odds of getting exactly five heads in ten flips are about 3:1 against. You usually get results "close to" a 50/50 split, but not exact. The difference between the actual results and the predicted results is random.

Aha, you say, but on average the errors will balance out!

The more coins you toss, the closer, percentage-wise, you will on average get to equal numbers of heads and tails. You won't get heads coming out 55% of the time for a sufficiently large number of tosses. That is the pertinent point. And even for a small number of tosses, it is just as likely you will get 55% tails and 45% heads as 55% heads and 45% tails. But on average it will be 50%. Same for the ganzfeld: on average it will be 25%, not 27%.
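
The behaviour of the percentage versus the raw count is easy to see with a simulated fair coin; a minimal sketch:

```python
# The percentage of heads drifts toward 50% as the number of tosses grows,
# even though the absolute gap from an exact 50/50 split typically grows
# (roughly like the square root of the number of tosses).
import random

for n in (10, 100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, round(heads / n, 4), abs(heads - n // 2))
```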


No, they don't. If you think about it, it's random whether you will get "too many" heads or "too few," so we could model that as another coin flip, and you see that the chances of getting exactly equivalent numbers of too high and too low are yet another coin flip. In fact, I'm prepared to make another wager with you. The same freakish talking money says that if you flip ten coins ten times, the average number of heads per flip will not be 5.0. Heck, I'll give you 5:1 odds on that....

And, again, it's a fool's wager. The chance of getting an average of exactly 5.0 heads/flip is exactly the chance of flipping exactly 50 heads in 100 tosses, which works out to about 12:1 against.

So you have to distinguish between the results "expected" of a random process and the actual results: they are almost never the same, though usually close. Not identical, but close.

None of this explains why you would expect to get more heads than tails. Why not more tails than heads?? Back to the ganzfeld. There are equal chances of the actual result being above or below 25%. Therefore you cannot say you would expect 27% on average.

The Bierman paper showed that, in the experiment under discussion, if you use the actual results of the random decisions, instead of the imaginary expected results, the findings become "non-significant." In other words, there's no evidence for a real effect.

How can 33% for a sufficiently large number of trials be non-significant when the average hit rate is only 25%??
 
drkitten said:
No, I've got this wrong. Say I have a psychological propensity to choose black balls, and in fact I'm twice as likely to choose black balls as red balls. This would mean that if an equal number of red balls and black balls resulted from 100 spins of the wheel, 67% of my correct calls would be black. But this percentage will be higher if during a 100-spin trial more black balls result than red balls, and lower than 67% if fewer black balls result than red balls. But it still cancels out, leaving 50% on average that you will get right.

Only if the amount by which more black balls came out in the first trial is exactly the same as the amount by which more red balls came out in the second.

Which is highly unlikely.

Obviously. It could well be the case that more red balls come out in the second trial than black balls came out in the first trial. Given that I prefer black balls, this means that my hit rate will be less than 50%. Yes, less than 50%!

Same goes for the ganzfeld. My preferences will sometimes make the hit rate greater than 25%, sometimes lower than 25%. They won't exactly cancel each other out. But the bias is just as likely to skew below 25% as above it. We can reduce this problem by increasing the number of trials/experiments.
 
Interesting Ian said:


None of this explains why you would expect to get more heads than tails. Why not more tails than heads?? Back to the ganzfeld. There are equal chances of the actual result being above or below 25%. Therefore you cannot say you would expect 27% on average.

In the study you quoted, the scientists "expected" that the random choice of stimuli would give them a 25% baseline. They got a random set of stimuli that gave them a 27% baseline, which is close but not exact. But when you use the 27% baseline instead of the 25% in calculating the statistics, the results are no longer significant, i.e. there is no evidence of psi.
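
As an illustration of how the choice of baseline changes the verdict, here is a minimal sketch of a one-sided binomial test; the session and hit counts are made up for the example and are not the actual data from the Bierman paper:

```python
# How the assumed chance baseline changes the verdict: the same (made-up)
# result tested against a 25% and a 27% baseline. Illustrative numbers only.
from math import comb

def binom_tail(n, k, p):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, hits = 400, 120  # hypothetical: 30% observed hit rate
print(binom_tail(n, hits, 0.25))  # ~0.01 -> significant against a 25% baseline
print(binom_tail(n, hits, 0.27))  # ~0.10 -> not significant against 27%
```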
 
Lucianarchy said:


Are you sure about this??

There may be psi, but I have yet to see any real demonstration of psi. So much can be accounted for by other means. Now, if you want to relabel other behaviors as psi, then I guess there might be evidence of psi.

I have seen plenty of weird stuff but very little that demonstrates psi, the way you can demonstrate the repelling property of magnets.

As sure as I am of anything. None of it's true at any rate; being a nihilist, I had to throw that in.
 
Originally posted by Interesting Ian
It's wholly irrelevant how many experiments are run. The average must still be 25%. You clearly fail to understand what the word average means. Of course the more experiments we run, the more and more confident we can be that there is an anomalous effect. Running just a few trials is worthless. Nevertheless my point remains: the average cannot conceivably be greater than 25%.

The thread has discussed many potential sources of error in the twenty-five percent match rate. On the surface there is going to be an overall 25% match rate, but given the behavior of random and near-random systems, you would want thousands and thousands of runs before you expect there to be a 25% match on a random basis. That is because of the actual non-random distribution of numbers in a 'random series': what you or I would call a 'random' distribution is frequently going to contain all sorts of biased sequences. So you do need runs in the thousands to get a truly good random distribution. For a run of numbers in the hundreds, you might find that your random number generator has produced an abnormally high number of the numeral 5.
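
The short-run point is easy to demonstrate with any random digit generator; a quick sketch of digit frequencies in samples of different sizes:

```python
# In a run of a few hundred draws, a fair random-digit generator routinely
# over-produces some digit; over tens of thousands of draws the shares settle
# near the expected 10% each.
import random
from collections import Counter

for n in (100, 1000, 100_000):
    counts = Counter(random.randrange(10) for _ in range(n))
    digit, count = counts.most_common(1)[0]
    print(n, digit, round(count / n, 3))  # most over-represented digit and its share
```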

The random match factor and the response bias are important, and could easily be controlled for, because in a trial of fewer than a thousand runs you could have targets that are getting a high response bias and a high random match bias. In fact, if you chose photos that have low scores in both and still got a positive result, then you could be very confident of that result.
 
With all of the sensory leakage and the proven poor controls that have been exposed, the results from chance should be much higher! Including the flaws in these trials, out of 4 targets by chance it should be close to 87%! They haven't even gotten near this since these fake scientists can't even properly cheat! This proves that ESP doesn't exist because with 4 targets they should get 87% from chance! You can't convince me that I'm wrong since I'm a skeptic!

This kind of test may be convincing to retards who know nothing but it is clearly BS when analyzed by true critical material minds!

It's also a fact that the results decrease when better but still flawed controls are put in place! I know this because of science!
 
drkitten said:


In the study you quoted, the scientists "expected" that the random choice of stimuli would give them a 25% baseline. They got a random set of stimuli that gave them a 27% baseline, which is close but not exact. But when you use the 27% baseline instead of the 25% in calculating the statistics, the results are no longer significant, i.e. there is no evidence of psi.

You can play this game if you like. But the baseline percentage will be just as likely to be below 25% as above it. So how does this justify Ersby and others saying that for ganzfeld experiments one can expect a 27% hit rate even without psi??

And given that the baseline will just as likely be below 25%, this means that conceivably for some experiment a 25% hit rate will be suggestive of psi. Try telling Skeptics that and watch them foam at the mouth in rage and indignation.

Why do we never hear Skeptics complain that the average hit rate expected by chance should be below 25%?? Well we know the reason for that; because they are utterly biased and irrational when it comes to the subject of the paranormal, that's why.
 
Interesting Ian said:


You can play this game if you like. But the baseline percentage will be just as likely to be below 25% as above it. So how does this justify Ersby and others saying that for ganzfeld experiments one can expect a 27% hit rate even without psi??


What game? I quoted from the Bierman et al. paper you cited above: "A conservative correction [...] reduces the 10% differential effect to a non-significant 6.8%" ("Non-significant," in this context, means that there is not enough evidence to believe the effect could not have come from chance.) This has nothing to do with Ersby, as I haven't read the Ersby paper, and don't intend to, as I have other demands on my time.

This is part of the scientific process, performing a close examination of the claims and the evidence that supports them. Have you read the Ersby paper? What justification does he give for his counterintuitive statement? Is he referring to a specific experiment, to a group of experiments, or to the experimental setup as a whole? What is wrong with his justification?

But statistics is a fairly exact science, and statisticians are usually very good about being explicit about their assumptions and reasoning. If you can't point to anything wrong beyond a dislike of Ersby's conclusion, that's not very credible. Especially since you've admitted upthread that your mathematical understanding is weak. Why should I trust your opinion when it conflicts with the experts'?
 
