Ganzfeld million dollar challenge?

Status
Not open for further replies.
But Radin stated earlier in his talk that he can identify a group of people that average a 65% hit rate. Why wouldn't he use them?
I have now reviewed the tape and Radin's claim is that "special populations" have hit rates "more like 65%." I just e-mailed him to ask whether these populations have been re-tested and, if so, whether the hit rates remained the same.
 
Of all the possible explanations given so far in this thread, my favorite goes along the lines of "failed tests get less publicity, so the test results available for use in a meta-analysis are likely to be the most successful ones".
Have you watched the video that andy2001 posted? (http://www.youtube.com/watch?v=qw_O9Qiwqew). On it, Radin argues that there is no significant Ganzfeld file drawer problem.

Now, I hope you'll agree that, until the results of the meta-analysis are replicated in an independent test, they don't carry much weight. So Ganzfeld proponents should be working right now on new tests to replicate the results. As I see it, the point of this whole thread is to ask whether those new tests could be valid for the MDC, which of course can't be answered until there's some description of those tests, which in turn is why I keep asking what a Ganzfeld test is like.

The idea is:
  • If someone's willing to replicate the results, I'd like to know how the new tests will be conducted, so I can form an idea of how MDC-like they are.
  • If nobody's willing to replicate the results, we can assume that Ganzfeld will never be observed again, so it surely won't win the MDC.
On the video, Radin notes that Ganzfeld experiments have been replicated. However, with respect to the MDC, it's very unlikely that a Ganzfeld proponent will apply unless the JREF clarifies that a Ganzfeld experiment is eligible.
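For what it's worth, the file-drawer mechanism at issue here (null studies going unpublished, which inflates the pooled hit rate in a meta-analysis) is easy to sketch in a quick simulation. Every number below is invented for illustration; the only real figure is the 25% chance baseline of a four-choice Ganzfeld design:

```python
import random

random.seed(42)

# Hypothetical numbers for illustration: 1000 ganzfeld-style studies,
# 40 sessions each, true hit rate exactly at chance (1 in 4).
N_STUDIES, SESSIONS, CHANCE = 1000, 40, 0.25

def run_study():
    """Count hits in one study where every guess is pure chance."""
    return sum(random.random() < CHANCE for _ in range(SESSIONS))

studies = [run_study() for _ in range(N_STUDIES)]

# Selective publication: suppose only studies with a hit rate
# above 30% make it into the literature (assumed threshold).
published = [h for h in studies if h / SESSIONS > 0.30]

all_rate = sum(studies) / (N_STUDIES * SESSIONS)
pub_rate = sum(published) / (len(published) * SESSIONS)

print(f"true pooled hit rate:    {all_rate:.1%}")  # close to 25%
print(f"published-only hit rate: {pub_rate:.1%}")  # well above 25%
```

Whether the real Ganzfeld literature suffers from this is exactly what Radin disputes; the sketch only shows that selective reporting alone can manufacture an above-chance pooled result from pure noise.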
 
On the video, Radin notes that Ganzfeld experiments have been replicated. However, with respect to the MDC, it's very unlikely that a Ganzfeld proponent will apply unless the JREF clarifies that a Ganzfeld experiment is eligible.
LOL! Did you see the clip of the MDC claimant who thought she had powers to make a person have to urinate? I heard that for the longest time she wouldn't apply because the rules didn't come right out and say she was eligible.
 
LOL! Did you see the clip of the MDC claimant who thought she had powers to make a person have to urinate? I heard that for the longest time she wouldn't apply because the rules didn't come right out and say she was eligible.


Right, that makes a lot of sense, because parapsychologists have been conducting urinieren (German for "to urinate") trials for a long time.
 
I have now reviewed the tape and Radin's claim is that "special populations" have hit rates "more like 65%." I just e-mailed him to ask whether these populations have been re-tested and, if so, whether the hit rates remained the same.

You don't expect an answer, do you?
 
From the February 29, 2008 SWIFT:

The famous “Ganzfeld” tests are cited by Auerbach as examples of phenomena that could not be tested because of the amount of time and data required. Since no one has ever submitted this claim to us for testing, we’ve never had to handle it. Had that occurred, we would have negotiated reasonable terms, of course.
 
Ganzfeld has been subjected to intense peer review to look for flaws. As long as the targets are randomly selected and randomly presented, and there is no sensory leakage, there is no other reasonable explanation but psi.

And as for the question about what you call the control group, my answer is that I would expect it to be biased toward whatever was listed as the target, even if there is no sender. Telepathy is not the only type of psi. If there were no correct choice, I would expect a bias towards the first option shown, but when this is done in a real test it only gets 25% correct.


How can it not be bogus when it's based on the assumption that there is actually any ability to "send" and "receive"? The very idea is totally nuts, IMO.


M.
 
I do. Please keep us posted, Rodney.
Once again, you have proven correct. (It's almost as if you have ESP, or something ;)). Radin's response is: "Individual performance test-retest reliability has not been established. It's a good question for someone to look into."
 
On the video, Radin notes that Ganzfeld experiments have been replicated. However, with respect to the MDC, it's very unlikely that a Ganzfeld proponent will apply unless the JREF clarifies that a Ganzfeld experiment is eligible.

If its intent is to show psi exists, then it is a paranormal claim and therefore eligible.

So it seems clear -- these Ganzfeld proponents should apply. Everything else is simply a matter of discussing acceptable protocol, and since each protocol is hand-tailored to each claim, no such discussion can take place before a claim is submitted.

Glad we got all that sorted out, then.
 
Once again, you have proven correct. (It's almost as if you have ESP, or something ;)). Radin's response is: "Individual performance test-retest reliability has not been established. It's a good question for someone to look into."

Of course Radin has not looked into it further. He knows full well that selecting populations with higher performances is nothing more than post hoc cherry picking of the data. He also knows full well that if these populations were retested, they would perform no better than the larger group. He also knows full well why results were higher than 25% and knows that proper controls on the experiment will make those results evaporate.
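The retest point here is just regression to the mean, and a toy simulation makes it concrete. All numbers below are hypothetical: subjects guess at pure chance, we select the top scorers post hoc as a "special population", and on retest the same people fall right back to 25%:

```python
import random

random.seed(1)

# Hypothetical setup: 500 subjects, 20 ganzfeld sessions each,
# everyone guessing at pure chance (1 in 4).
SUBJECTS, SESSIONS, CHANCE = 500, 20, 0.25

def score(n):
    """Hits out of n sessions for a pure-chance guesser."""
    return sum(random.random() < CHANCE for _ in range(n))

first_run = [score(SESSIONS) for _ in range(SUBJECTS)]

# Post hoc: keep the "special population" that hit 40%+ first time.
special = [i for i, h in enumerate(first_run) if h / SESSIONS >= 0.40]
special_rate_1 = sum(first_run[i] for i in special) / (len(special) * SESSIONS)

# Retest the same people: their performance regresses to chance.
second_run = {i: score(SESSIONS) for i in special}
special_rate_2 = sum(second_run.values()) / (len(special) * SESSIONS)

print(f"special group, first run: {special_rate_1:.1%}")  # >= 40% by construction
print(f"special group, retest:    {special_rate_2:.1%}")  # back near 25%
```

The selection threshold and sample sizes are made up; the point is only that a subgroup chosen for high scores will look impressive once and unremarkable on retest, with no psi required.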
 
Of course Radin has not looked into it further. He knows full well that selecting populations with higher performances is nothing more than post hoc cherry picking of the data. He also knows full well that if these populations were retested, they would perform no better than the larger group. He also knows full well why results were higher than 25% and knows that proper controls on the experiment will make those results evaporate.


And of course you know you're right. :rolleyes:
 
Once again, you have proven correct. (It's almost as if you have ESP, or something ;)). Radin's response is: "Individual performance test-retest reliability has not been established. It's a good question for someone to look into."

That's very disappointing. He really, really, really should know what it means when an association found on post-hoc analysis doesn't hold up on retesting. And yet, did anyone get any sense whatsoever from his presentation about creative, etc. individuals, that he was talking about a bogus association?

Linda
 
What is that assumption, and how significant do you think the Ganzfeld file drawer problem is?

That's very disappointing, Rodney. I really want to believe that you are sincere and not simply ********ting. Yet this pretense that you haven't already been given this information, from me, several times already, makes it very difficult.

Linda
 
And of course you know you're right. :rolleyes:

Well, the problem is that Radin really should know better. And so it presents a contradiction that is difficult to resolve. Either he lacks the minimum understanding a researcher needs before designing and carrying out experiments, or he allows his biases to overwhelm his ability to form reliable conclusions. Either way, it makes his assurances that the experiments were essentially flawless, that all these proposed problems were resolved, that the findings are robust, etc., highly questionable, even if we didn't already know that those assurances were false.

Linda
 
They have computers randomly choose the target. Take 4 photos and let the computer randomly pick one as the target. Let the computer randomly pick the order they will be shown. Do all this and it's random. It's not a huge problem with Ganzfeld in general.

Actually it is. First off, you will throw out all the studies that did not use a random number generator to pick the picture, which is a big chunk.
Then you throw out the studies that did not make sure there were multiple copies of the same target in each set (and I haven't seen a protocol that even includes this criterion yet). So show me where this effect is controlled for; it wasn't in any of the protocols that I could find.
 
In the Ganzfeld studies it is my understanding that the subjects picked the second, third and fourth target presented much more frequently than the first. Was randomization done in the presentation of targets?

It's the other way round - subjects are more likely to pick the first target they're shown. Then, I think, the fourth target. I'd have to check. Not that there's a great deal of data on this from ganzfeld experiments, but that's my understanding.
 
There's no need to make this assumption. If you set up an experiment with odds of 25% (you have 4 photos, randomly select one as the target, and randomly select the order they will be shown), then the expected results for choosing the first target, liking red targets, or choosing randomly are still 25%.

There you go again: show me the protocols covering random selection of targets, elimination of multiple target matches in a set, and where a selection preference was ruled out.

That is what protocols are for: to eliminate confounding variables. You can't just wave the issue away with the word 'random'.
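For reference, the quoted claim about response biases is correct as stated: when the target really is chosen uniformly at random, any fixed guessing strategy still hits 25%. The dispute above is about whether the protocols actually achieved that randomness. A minimal sketch, with trial counts and strategies invented for illustration:

```python
import random

random.seed(0)

TRIALS = 100_000

def trial(strategy):
    """One session: 4 photos, one chosen uniformly at random as target."""
    target = random.randrange(4)        # computer picks the target
    order = random.sample(range(4), 4)  # random presentation order
    return order[strategy(order)] == target

# Two biased strategies: always pick whatever is shown first,
# or always prefer photo #2 whenever it's offered.
always_first = lambda order: 0
prefer_two   = lambda order: order.index(2)

rates = {}
for name, strat in [("always first", always_first), ("prefer #2", prefer_two)]:
    rates[name] = sum(trial(strat) for _ in range(TRIALS)) / TRIALS
    print(f"{name}: {rates[name]:.1%}")  # both come out near 25%
```

Of course, this only holds if the randomization is sound; non-random target selection or reused, handled copies of the target are exactly the confounds the reply is asking about.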
 
