Thanks for the update, Ersby - it will be interesting to see if you get a response regarding the question about whether he considers his work incomplete or not.
If those 14 experiments are added to the ones Radin included in his meta-analysis, is the overall result statistically significant or insignificant?

Another update. It's been confirmed that Radin doesn't consider his ganzfeld meta-analyses to be full and formal, and furthermore I had a brief email exchange with him about the ganzfeld database.
I'm not at liberty to give any details regarding the figures, but suffice it to say he still believes the ganzfeld database demonstrates a large enough effect to be considered proof of psi.

What is interesting is that in our exchange he offered a third reason why those 14 early experiments were not included in his meta-analysis: namely, that they didn't supply one data point per trial. Quite apart from being untrue, this says something about his attitude towards those 14 large and unsuccessful experiments from 1974-1983.
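As an aside on the pooling question above: the arithmetic itself is trivial once hit and trial counts are on the table. A minimal sketch in Python, with entirely made-up numbers, since the actual figures aren't public:

Code:
from scipy.stats import binomtest

# Hypothetical counts only -- the real figures are not public.
meta_hits, meta_trials = 900, 3000     # studies Radin included (assumed)
extra_hits, extra_trials = 360, 1450   # the 14 early experiments (assumed)

# Pool the raw trials and test against the 25% chance baseline.
result = binomtest(meta_hits + extra_hits,
                   meta_trials + extra_trials,
                   p=0.25, alternative='greater')
print(f"pooled hit rate = {(meta_hits + extra_hits) / (meta_trials + extra_trials):.3f}")
print(f"one-tailed p = {result.pvalue:.3g}")

(Pooling raw trials like this ignores between-study heterogeneity; a Stouffer or random-effects combination would be more defensible. The point is only that the question is answerable the moment the counts are disclosed.)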
There are some possible methodological problems with Terry's experiments, but they don't appear to undermine his basic findings. For example, Parker and Wiklund state in "The ganzfeld experiments: towards an assessment", Journal of Parapsychology 54, 1987:

"More seriously, I would question the value of the meta-analysis approach as a basis for psi skepticism. It is unrealistic to hope to find all the flaws in a large corpus of work by studying the published reports. The strategy would make sense only if one assumed that the reports were accurate. In my view, reporting deficiencies are easier to accept than psi. Consider two categories of error source.

Fraud. Given the existing motivation structure of parapsychology as a profession, it is reasonable to expect some fraudulent experiments. (Practising parapsychologists such as J. B. Rhine and Carl Sargent have publicly stated their belief that fraudulent experiments are not unusual in parapsychology, and of course there have been several celebrated exposures.) A fraudulent experiment will naturally be supported by a dishonest report, and Hyman's approach, being entirely based on the report, will find nothing wrong.

Self-deception. Some (perhaps many) experimenters are slipshod in their laboratory work. Some of them will tidy up the mess in writing the report. (A well-known example is provided by the Brugmans experiment; see my paper in Research in Parapsychology, 1982.) Again, Hyman will find nothing wrong."

Hmm, adding those 14 experiments to Radin's meta-analysis still does not make it complete, nor does it suddenly become a properly conducted and thorough review of the database, as I thought I'd already demonstrated. There are none so blind as those that will not see, I guess.

By the way, any thoughts on Terry's experiment as described on page 2? You did seem keen on discussing experiments at one time.
It doesn't appear to me that Terry's experiments were flawed to the extent that they affected the results significantly.

Okay, just as a recap, the problems with Terry's work are that the randomisation process was less than optimal, the reporting of the data was incomplete, and targets were not repeated for the same subject. Three issues. I cannot categorically say that these were responsible for the high hit rate, but when you consider that ESP can mean Error Some Place just as much as Extra Sensory Perception, and then you see an experiment as flawed as this, it raises questions.
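To make the third issue concrete, here is a rough Monte Carlo sketch of why never repeating targets for the same subject can matter. The deck size and session count are invented, not Terry's actual parameters; the point is only the mechanism: a subject (or judge) who remembers earlier targets can strike them off the judging set, so pure guessing climbs above the nominal 25%.

Code:
import random

def informed_hit_rate(deck_size=40, sessions=20, runs=20_000):
    """Hit rate for a guessing subject who remembers past targets,
    when the experimenter never reuses a target for that subject."""
    hits = total = 0
    for _ in range(runs):
        used = set()
        for _ in range(sessions):
            judging_set = random.sample(range(deck_size), 4)
            # Experimenter avoids repeats, so the target must be unused.
            candidates = [p for p in judging_set if p not in used]
            if not candidates:
                continue  # all four already used; experimenter would redraw
            target = random.choice(candidates)
            guess = random.choice(candidates)  # subject also rules out used pictures
            hits += (guess == target)
            total += 1
            used.add(target)
    return hits / total

print(informed_hit_rate())  # comes out well above 0.25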
The problem with Scott's analysis is that fraud and self-deception can affect any scientific experiment. Why does he assume that there is more fraud and self-deception in ganzfeld experiments than in other experiments?

Remember the quote from Scott on page 2:
Quote:
"More seriously, I would question the value of the meta-analysis approach as a basis for psi skepticism. It is unrealistic to hope to find all the flaws in a large corpus of work by studying the published reports. The strategy would make sense only if one assumed that the reports were accurate. In my view, reporting deficiencies are easier to accept than psi. Consider two categories of error source.
Fraud. Given the existing motivation structure of parapsychology as a profession, it is reasonable to expect some fraudulent experiments. (Practising parapsychologists such as J. B. Rhine and Carl Sargent have publicly stated their belief that fraudulent experiments are not unusual in parapsychology, and of course there have been several celebrated exposures.) A fraudulent experiment will naturally be supported by a dishonest report, and Hyman's approach, being entirely based on the report, will find nothing wrong.
Self-deception. Some (perhaps many) experimenters are slipshod in their laboratory work. Some of them will tidy up the mess in writing the report. (A well-known example is provided by the Brugmans experiment; see my paper in Research in Parapsychology, 1982.) Again, Hyman will find nothing wrong."
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.

As for Radin's file drawer figures, they're based on his flawed understanding of the database.
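For readers unfamiliar with the term: "file drawer figures" estimate how many unpublished null studies would be needed to wipe out a meta-analytic result. Whatever Radin's exact method (not given here), the number usually quoted comes from Rosenthal's fail-safe N, which is a one-liner. A sketch with purely illustrative z-scores:

Code:
def failsafe_n(z_scores, z_crit=1.645):
    """Rosenthal's fail-safe N: number of unpublished null studies
    (mean z = 0) needed to pull the Stouffer combined z below z_crit."""
    k = len(z_scores)
    s = sum(z_scores)
    # Solve s / sqrt(k + x) = z_crit for x.
    return max(0, int((s / z_crit) ** 2 - k))

print(failsafe_n([2.1, 1.3, 0.4, 2.8, 1.9]))  # illustrative z-scores only

The criticism in the thread is about the inputs, of course: if the database itself is mischaracterised, the fail-safe figure inherits the error.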
Rodney said:
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.

All righty then, let's accept it. So now it's time to develop a theory of the ganzfeld and start performing experiments whose hypotheses are derived from the theory, rather than experiments whose hypotheses are about the protocol and statistical analysis. It's time to test a theory, rather than collect data.
Quote:
It doesn't appear to me that Terry's experiments were flawed to the extent that they affected the results significantly.
The problem with Scott's analysis is that fraud and self-deception can affect any scientific experiment. Why does he assume that there is more fraud and self-deception in ganzfeld experiments than in other experiments?
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.
One of the most revealing properties of psi research is that meta-analyses consistently find that experimental results do not become more reliably significant with larger sample sizes as assumed by statistical theory (Kennedy, 2003b; 2004). This means that the methods of statistical power analysis for experimental design do not apply, which implies a fundamental lack of replicability.
This property also manifests as a negative correlation between sample size and effect size. Meta-analysis assumes that effect size is independent of sample size. In medical research, a negative correlation between effect size and sample size is interpreted as evidence for methodological bias (Egger, Smith, Schneider, & Minder, 1997).
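For what it's worth, the Egger-style check Kennedy alludes to is easy to run on any study table. A minimal sketch with invented (hits, trials) pairs, deliberately constructed so smaller studies show bigger effects: regress the standardized effect on precision, and an intercept clearly away from zero signals the funnel-plot asymmetry / small-study effect he describes.

Code:
import numpy as np
from scipy.stats import linregress

# Invented per-study ganzfeld counts: (hits, trials); chance is 25%.
studies = [(13, 30), (10, 28), (31, 100), (29, 110), (56, 220), (63, 250)]

effect, se = [], []
for hits, n in studies:
    p = hits / n
    effect.append(p - 0.25)                 # hit rate above chance
    se.append(np.sqrt(p * (1 - p) / n))     # its standard error

effect, se = np.array(effect), np.array(se)

# Egger's test: standardized effect vs. precision.
fit = linregress(1 / se, effect / se)
print(f"Egger intercept = {fit.intercept:.2f} (near 0 = symmetric funnel)")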
Too bad we cannot verify any of this.
Not exactly, but he did say: "In my view, reporting deficiencies are easier to accept than psi."

He doesn't say this.
Not necessarily so.
Did you read Ersby's posts in the thread itself? He did good work on this kind of thing.
Ok...so just to set the record straight. Skeptics are claiming that the ganzfeld data does not attest to any phenomenon, paranormal or otherwise.
If that's the case, then the data collected in the ganzfeld experiments is meaningless, right? If so, then a post hoc analysis of the data should reveal nothing, right?
If so, then there should be no relationships between subsets...no coherence...no patterns...no consistency across experiments linking psi performance to psychological variables, right?
I think what's going on is that skeptics expect to find patterns even in the absence of a specific effect, especially under the sorts of circumstances present in parapsychological research (small studies, small effects, relatively large numbers of tested relationships, flexibility in design/definitions/outcomes/analytical modes, testing by several independent teams, bias), whereas parapsychologists seem to treat any deviation from a theoretical probability distribution, no matter how tiny, as unexpected and therefore indicative of a specific effect. A quick simulation below illustrates the first half of that.
Linda
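Linda's point is easy to demonstrate. A minimal sketch, assuming nothing about the real studies: generate pure-noise "study outcomes" and a batch of unrelated "psychological variables", then test every pairing. On average, roughly one in twenty pairings comes out "significant" at p < .05 even though no effect exists anywhere.

Code:
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_studies, n_vars = 30, 20

outcomes = rng.normal(size=n_studies)              # pure-chance effect sizes
variables = rng.normal(size=(n_studies, n_vars))   # unrelated covariates

false_hits = 0
for j in range(n_vars):
    r, p = pearsonr(variables[:, j], outcomes)
    false_hits += (p < 0.05)

print(f"{false_hits} of {n_vars} pure-noise 'relationships' reached p < .05")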
Most of the 150 or so papers investigate some kind of secondary analysis along with the primary question of testing for psi. It's only reasonable that some of them are going to report similar results on certain aspects. But there are other experiments out there with data in the opposite direction to the ones you're posting, which you'd also expect.