Ganzfeld million dollar challenge?

I'd like a little more detail about what you consider "realistic". The five lines you quoted are the closest to "realistic" I could come up with. Could you write a proposal with your own numbers? You see, it doesn't need to be more than five lines.
http://www.dailygrail.com/features/the-myth-of-james-randis-million-dollar-challenge
The only proposal for a Ganzfeld test I could find there is the one about recruiting 989x2 people over more than four years, and it's clearly an example of a completely unrealistic Ganzfeld test, i.e. just the opposite of what you're supposedly answering.

At this point, I can only repeat my question: what is a realistic Ganzfeld test like?
 
I would say that you have twenty people do ten trials each. Then you have two hundred trials.
This could be replicated 10 times (at different sites) for 2,000 trials in total.

The questions remain: can people do the Ganzfeld at the same time, or do the runs have to be separate? When they ran the ganzfeld in the past, did they have ten people sending and ten people receiving at the same time? That way you could run the twenty trials in four hours; with separate runs you would have something like 40 hours.
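As a rough sanity check on those timings (assuming something like two hours per ganzfeld session, which is my assumption, not a figure from the studies):

```python
# Rough timing check for the 20-trial design above, assuming roughly
# 2 hours per ganzfeld session (an assumed figure, chosen only to
# illustrate the parallel-vs-serial difference).
hours_per_session = 2
trials = 20
parallel_pairs = 10  # ten senders and ten receivers running at once

parallel_hours = (trials / parallel_pairs) * hours_per_session  # 2 rounds -> 4 hours
serial_hours = trials * hours_per_session                       # 20 sessions -> 40 hours
print(parallel_hours, serial_hours)  # 4.0 40
```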
 
http://www.dailygrail.com/features/the-myth-of-james-randis-million-dollar-challenge

Note: I had to make the last two posts to be allowed to post the URL.

This is the problem. For some reason, people keep referring to effects that are not visible to the naked eye. And yet, the source of paranormal claims is casual and informal observation - effects that are so large that they are obvious and amazing to everyone who is privileged to see them, not effects that are small enough to be shrugged off as due to chance. If the ganzfeld trials really did show a paranormal result, you'd only need 37 trials to have a good chance of demonstrating this effect to a standard of less than 5 in 1000 due to chance.

Linda
 
...
If the ganzfeld trials really did show a paranormal result, you'd only need 37 trials to have a good chance of demonstrating this effect to a standard of less than 5 in 1000 due to chance.

Linda

I do not understand this part. Could you clarify, please?
 
If the ganzfeld trials really did show a paranormal result, you'd only need 37 trials to have a good chance of demonstrating this effect to a standard of less than 5 in 1000 due to chance.

Linda

If I do a sample size calculation with a large (i.e. visible to the naked eye) effect size (h = 0.80) and a standard of being due to chance less than 5 in 1000 times (a one-tailed alpha of 0.005), then 37 trials will show a difference that exceeds that standard 80% of the time (80% being a typical power level).
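A rough way to reproduce a figure in that neighbourhood (a sketch only: it assumes a two-group design, a ganzfeld arm against an equal-sized control arm, and uses the normal/arcsine approximation rather than Cohen's printed tables, so the exact number may differ slightly):

```python
# Sketch: one way to arrive at roughly 37 trials for a "large" effect
# (Cohen's h = 0.80), one-tailed alpha = 0.005 and power = 0.80.
# Assumption: a two-group comparison (ganzfeld arm vs control arm of
# equal size) using the normal/arcsine approximation.
import math
from scipy.stats import norm

h = 0.80
alpha, power = 0.005, 0.80
z_alpha = norm.ppf(1 - alpha)   # ~2.576, one-tailed
z_power = norm.ppf(power)       # ~0.842

n_per_group = ((z_alpha + z_power) / h) ** 2   # ~18.2
total_trials = math.ceil(2 * n_per_group)      # ~37
print(n_per_group, total_trials)
```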

(Source: Statistical Power Analysis for the Behavioral Sciences, Cohen)

Linda
 
If I do a sample size calculation with a large (i.e. visible to the naked eye) effect size (h = 0.80) and a standard of being due to chance less than 5 in 1000 times (a one-tailed alpha of 0.005), then 37 trials will show a difference that exceeds that standard 80% of the time (80% being a typical power level).

(Source: Statistical Power Analysis for the Behavioral Sciences, Cohen)

Linda
Fine, but in a controlled test, psi may be weak because there is no strong emotional component. Guessing which of four pictures the sender was trying to transmit is not the most exciting way for the average person to spend a day. Nonetheless, if the results are in the 30-35% range over thousands of trials in a tightly-controlled experiment, what is the alternative to psi?
 
If I do a sample size calculation with a large (i.e. visible to the naked eye) effect size (h = 0.80) and a standard of being due to chance less than 5 in 1000 times (a one-tailed alpha of 0.005), then 37 trials will show a difference that exceeds that standard 80% of the time (80% being a typical power level).

(Source: Statistical Power Analysis for the Behavioral Sciences, Cohen)

Linda

This is a ridiculous argument. Nobody is claiming an 80% hit rate for Ganzfeld, and anyone who understands statistics knows that a hit rate only a little bit above chance would be very unlikely to have happened by chance if the sample size is sufficient to give a small enough p value.
 
Fine, but in a controlled test, psi may be weak because there is no strong emotional component. Guessing which of four pictures the sender was trying to transmit is not the most exciting way for the average person to spend a day. Nonetheless, if the results are in the 30-35% range over thousands of trials in a tightly-controlled experiment, what is the alternative to psi?

Why ask for an explanation for something that hasn't happened?

Linda
 
Why ask for an explanation for something that hasn't happened?

Linda
That depends on whom you believe. Again, at p. 120 of his 2006 book, Entangled Minds, Dean Radin reports that out of 3145 ganzfeld trials in 88 different experiments conducted during 1974-2004 that used standard hit or miss protocols, 1008 produced hits (32% hit rate). Ersby contends that Radin omitted a number of ganzfeld experiments that would dramatically reduce that hit rate, but Radin argues that those experiments did not use standard protocols.
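Taking Radin's numbers at face value, the chance-only probability of 1008 or more hits in 3145 trials at a 25% base rate can be checked directly with a binomial test (a sketch; this says nothing about bias or study selection, only about chance):

```python
# Sketch: if the true hit rate were the 25% expected by chance, how
# likely is a result as extreme as 1008 hits in 3145 trials (32%)?
# This only addresses chance as an explanation, not bias or selection.
from scipy.stats import binomtest

result = binomtest(k=1008, n=3145, p=0.25, alternative='greater')
print(result.pvalue)  # vanishingly small (roughly a 9-sigma excess)
```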
 
This is a ridiculous argument. Nobody is claiming an 80% hit rate for Ganzfeld, and anyone who understands statistics knows that a hit rate only a little bit above chance would be very unlikely to have happened by chance if the sample size is sufficient to give a small enough p value.

You didn't understand what I was saying.

First off, I didn't say a hit rate of 80%. I said an effect size of 0.8. This corresponds to a hit rate of 64% for the ganzfeld (a quick check of that correspondence is sketched at the end of this post).

Second, the point of the MDC, and also of parapsychology research, is that people claim to do and see amazing things. People are unable to see effects that are only a little bit above chance (that's the point of dividing effects into 'small', 'medium' and 'large' for the purposes of statistical analysis). Things have to (appear to) be way above chance before they become obvious enough for word to get around. So parapsychology research and the MDC should be looking for things that are obvious. If they are finding small to tiny effects, that suggests that they are missing whatever it is that they are supposed to be looking for.

Third, nobody denies that these things are unlikely, due to chance. However, bias also contributes to differences, and the lack of control groups means that the effect of bias cannot be eliminated. A "small enough p value" may simply be a measure of the presence of bias.
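As a quick check of the 64% figure mentioned above, using the standard definition of Cohen's h for proportions (a sketch, not a quote from Cohen's tables):

```python
# Sketch: Cohen's h for two proportions is 2*arcsin(sqrt(p1)) - 2*arcsin(sqrt(p2)).
# A 64% ganzfeld hit rate against the 25% chance rate gives h close to 0.8.
import math

def cohens_h(p1, p2):
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

print(cohens_h(0.64, 0.25))  # ~0.81
```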

Linda
 
That depends on whom you believe. Again, at p. 120 of his 2006 book, Entangled Minds, Dean Radin reports that out of 3145 ganzfeld trials in 88 different experiments conducted during 1974-2004 that used standard hit or miss protocols, 1008 produced hits (32% hit rate). Ersby contends that Radin omitted a number of ganzfeld experiments that would dramatically reduce that hit rate, but Radin argues that those experiments did not use standard protocols.

So what? A bunch of small trials is not the same as one large trial. In medicine, when a meta-analysis suggests that an effect is present, it's considered reliable enough to make it worthwhile to do a large trial, but it's not considered reliable on its own merits. Large randomized controlled trials often overturn the results of meta-analysis, and it is the large trial that is given more weight.

Linda
 
You didn't understand what I was saying.

First off, I didn't say a hit rate of 80%. I said an effect size of 0.8. This corresponds to a hit rate of 64% for the ganzfeld.


Well, they are also not claiming a 64% hit rate.



As for the rest of what you say, I will have to conclude that if this is also the view of Randi, then the million dollar prize cannot reasonably be considered anything more than a con. Statistics are used in many fields, such as medical treatments, to come to firm conclusions, often on effect sizes much smaller than that found in the Ganzfeld.
 
So what? A bunch of small trials is not the same as one large trial. In medicine, when a meta-analysis suggests that an effect is present, it's considered reliable enough to make it worthwhile to do a large trial, but it's not considered reliable on its own merits. Large randomized controlled trials often overturn the results of meta-analysis, and it is the large trial that is given more weight.

Linda

But even if anyone were to apply to Randi with an experiment set up with 0.8 power, for whatever p value he was looking for, based on the results of the studies mentioned, you appear to think this would be a waste of time because the effect size is too small.
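For a sense of scale, here is a sketch of what such an application might require, assuming a single ganzfeld group tested against the 25% chance rate and the roughly 32% hit rate from the studies mentioned (the one-sample arcsine approximation and the single-group design are my assumptions, not Randi's protocol):

```python
# Sketch: trials needed for 0.80 power to detect a 32% hit rate against
# the 25% chance rate at a one-tailed alpha of 0.005, using the
# one-sample arcsine approximation. The 32% figure is the meta-analytic
# hit rate discussed above; the single-group design is an assumption.
import math
from scipy.stats import norm

def cohens_h(p1, p2):
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

h = cohens_h(0.32, 0.25)                              # ~0.16, a small effect
z_alpha = norm.ppf(1 - 0.005)
z_power = norm.ppf(0.80)
n_trials = math.ceil(((z_alpha + z_power) / h) ** 2)  # roughly 480-490 trials
print(h, n_trials)
```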
 
So what? A bunch of small trials is not the same as one large trial.
I'm still trying to figure out why that is the case if the protocol is uniform in each small trial. Can you explain?

In medicine, when a meta-analysis suggests that an effect is present, it's considered reliable enough to make it worthwhile to do a large trial, but it's not considered reliable on its own merits. Large randomized controlled trials often overturn the results of meta-analysis, and it is the large trial that is given more weight.
I'm guessing that the protocol was not uniform in each of the small trials. Can you give an example where a large trial overturned the results of a meta-analysis of a number of small trials where the protocol in each small trial was uniform?
 
I'm still trying to figure out why that is the case if the protocol is uniform in each small trial. Can you explain?


I'm guessing that the protocol was not uniform in each of the small trials. Can you give an example where a large trial overturned the results of a meta-analysis of a number of small trials where the protocol in each small trial was uniform?



You forgot to ask her to do this for a meta-analysis which had odds against chance of a few billion to one.
 
Well, they are also not claiming a 64% hit rate.

I realize that.

As for the rest of what you say, I will have to conclude that if this is also the view of Randi, then the million dollar prize cannot reasonably be considered anything more than a con.

Why? I'm simply trying to convey what it is that we usually consider paranormal - information that seems amazingly accurate, objects that move in front of our eyes, sensing who is calling us on the telephone more often than you'd expect from guesses. We don't consider an occasional lucky guess paranormal, nor an object that we don't see move but that someone tells us moved when subjected to incredibly meticulous measurements.

Statistics are used in many fields, such as medical treatments, to come to firm conclusions, often on effect sizes much smaller than that found in the Ganzfeld.

But they have the benefit of control groups, which eliminates bias as an alternative explanation. (ETA: that should be bias with respect to internal validity)

Linda
 