• Quick note: the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

Psi in the Ganzfeld

Thanks for the update, Ersby - it will be interesting to see if you get a response regarding whether he considers his work incomplete or not.
 
Another update. It's been confirmed that Radin doesn't consider his ganzfeld meta-analyses full and formal, and furthermore I had a brief email exchange with him about the ganzfeld database.

I'm not at liberty to give any details regarding figures, but suffice it to say he still believes the ganzfeld database demonstrates a large enough effect to be considered proof of psi.

What is interesting is that in our exchange he offered a third reason as to why those 14 early experiments were not included in his meta-analysis: namely, that they didn't supply one data point per trial. Quite apart from that claim being untrue, it is interesting to see his attitude towards those 14 large and unsuccessful experiments from 1974-1983.
 
It can't be verified, that's true. But I've found Ersby to be honest and accurate in his posts. I believe him.
 
If those 14 experiments are added to the ones Radin included in his meta-analysis, is the overall result statistically significant or insignificant?
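For what it's worth, the arithmetic behind that question is straightforward once you have the counts. A minimal sketch of the pooled test, using placeholder figures rather than the actual database totals (which aren't given in this thread):

```python
# Pooled significance check for ganzfeld hit counts against 25% chance.
# The study counts below are PLACEHOLDERS, not the real database figures.
import math

def pooled_z(hits: int, trials: int, p0: float = 0.25) -> float:
    """Normal-approximation z score for a pooled hit count vs chance rate p0."""
    mean = trials * p0
    sd = math.sqrt(trials * p0 * (1 - p0))
    return (hits - mean) / sd

included_hits, included_trials = 900, 3000   # hypothetical meta-analysis totals
early_hits, early_trials = 350, 1500         # hypothetical totals for the 14 early studies

z = pooled_z(included_hits + early_hits, included_trials + early_trials)
print(f"pooled z = {z:.2f}")  # one-tailed p < .05 needs z > 1.645
```

Whether the combined result stays significant depends entirely on how large and how null those 14 experiments actually were.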
 
Hmm, adding those 14 experiments to Radin's meta-analysis still does not make it complete, nor does it suddenly become a properly conducted and thorough review of the database, as I thought I'd already demonstrated. There are none so blind as those who will not see, I guess.

By the way, any thoughts on Terry's experiment as described on page 2? You did seem keen on discussing experiments at one time.
 
There are some possible methodological problems with Terry's experiments, but they don't appear to undermine his basic findings. For example, Parker and Wiklund state in "The ganzfeld experiments: towards an assessment", Journal of Parapsychology 54, 1987:

"Two further Maimonides studies by Terry and Honorton in 1976 were judged to be flawed because of their procedure of eliminating each used target in the series from future selection. This could increase the chance expectation from 1/4 to 1/3 (or higher) if subjects gained knowledge of the target pool. Parker, however, calculated in a worse case analysis of the number of subjects relative to the number of target packs re-appearing, that such an effect would be negligible. Wiklund regarded it, nevertheless, as a serious methodological flaw. In addition to this, the series had other flaws concerning the randomisation of the target series and its reconstruction after viewing."

I don't think you can throw out Terry's studies on the grounds that methodological problems invalidate them, unless you can show that those problems were serious enough to affect the results significantly. If you cannot do that with Terry's or other studies that produced positive results, you have to deal with what Radin noted at p. 121 of "Entangled Minds": "If we insisted that there had to be a selective reporting problem, even though there's no evidence of one, then a conservative estimate of the number of studies needed to nullify the observed results is 2,002. That's a ratio of 23 file drawer studies to each known study, which means that each of the 30 known investigators would have had to conduct but not report 67 additional studies."
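Radin's 2,002 figure is presumably a file-drawer calculation along the lines of Rosenthal's fail-safe N, though I can't confirm his exact method from the quote. The standard formula, sketched with invented inputs:

```python
# Rosenthal's fail-safe N: the number of unreported null studies (mean z = 0)
# needed to drag a Stouffer combined z down to bare significance (1.645).
# The inputs here are invented, not Radin's actual study z scores.

def fail_safe_n(z_scores, z_crit=1.645):
    s = sum(z_scores)
    return max(0.0, (s / z_crit) ** 2 - len(z_scores))

zs = [0.8] * 88                     # e.g. 88 studies averaging z = 0.8
print(f"fail-safe N ~ {fail_safe_n(zs):.0f} unreported null studies")
```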
 
Okay, just as a recap: the problems with Terry's work are that the randomisation process was less than optimal, the reporting of the data was incomplete, and then there's the issue of not repeating targets for the same subject. Three issues. I cannot categorically say that these were responsible for the high hit rate, but when you consider that ESP can mean Error Some Place just as much as Extra Sensory Perception, and then you see an experiment as flawed as this, questions are raised.

Remember the quote from Scott on page 2:

"More seriously, I would question the value of the meta-analysis approach as a basis for psi skepticism. It is unrealistic to hope to find all the flaws in a large corpus of work by studying the published reports. The strategy would make sense only if one assumed that the reports were accurate. In my view, reporting deficiencies are easier to accept than psi. Consider two categories of error source.

Fraud. Given the existing motivation structure of parapsychology as a profession, it is reasonable to expect some fraudulent experiments. (Practising parapsychologists such as J. B. Rhine and Carl Sargent have publicly stated their belief that fraudulent experiments are not unusual in parapsychology, and of course there have been several celebrated exposures.) A fraudulent experiment will naturally be supported by a dishonest report, and Hyman's approach, being entirely based on the report, will find nothing wrong.

Self-deception. Some (perhaps many) experimenters are slipshod in their laboratory work. Some of them will tidy up the mess in writing the report. (A well-known example is provided by the Brugmans experiment; see my paper in Research in Parapsychology, 1982.) Again, Hyman will find nothing wrong."

As for Radin's file drawer figures, they're based on his flawed understanding of the database.
 
Quote:
Okay, just as a recap: the problems with Terry's work are that the randomisation process was less than optimal, the reporting of the data was incomplete, and then there's the issue of not repeating targets for the same subject. [...]
It doesn't appear to me that Terry's experiments were flawed to the extent that they affected the results significantly.

Remember the quote from Scott on page 2:
Quote:
More seriously, I would question the value of the meta-analysis approach as a basis for psi skepticism. [...]
The problem with Scott's analysis is that fraud and self-deception can affect any scientific experiment. Why does he assume that there is more fraud and self-deception in ganzfeld experiments than in other experiments?

Quote:
As for Radin's file drawer figures, they're based on his flawed understanding of the database.
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.
 
Rodney said:
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.
All righty then, let's accept it. So now it's time to develop a theory of the ganzfeld and start performing experiments whose hypotheses are derived from the theory, rather than experiments whose hypotheses are about the protocol and statistical analysis. It's time to test a theory, rather than collect data.

Who's game? Who's going to do this? Surely all the parapsychologists who accept the evidence, and know how replicable these experiments are, would be willing to go out on a limb with a theory.

~~ Paul
 
Quote:
It doesn't appear to me that Terry's experiments were flawed to the extent that they affected the results significantly.

I disagree. The three issues, especially the non-reporting of data, give me enough cause for concern that this experiment shouldn't be included in a meta-analysis. Garbage in, garbage out.

Quote:
The problem with Scott's analysis is that fraud and self-deception can affect any scientific experiment. Why does he assume that there is more fraud and self-deception in ganzfeld experiments than in other experiments?

He doesn't say this.

Quote:
No, he is simply pointing out how overwhelming the ganzfeld evidence is. In other scientific fields, the evidence would be accepted, not nitpicked.

It's not nitpicking to say something is wrong.

Would it be accepted in other fields of science? Here's what J. E. Kennedy, writing in 2005, says:

One of the most revealing properties of psi research is that meta-analyses consistently find that experimental results do not become more reliably significant with larger sample sizes as assumed by statistical theory (Kennedy, 2003b; 2004). This means that the methods of statistical power analysis for experimental design do not apply, which implies a fundamental lack of replicability.

This property also manifests as a negative correlation between sample size and effect size. Meta-analysis assumes that effect size is independent of sample size. In medical research, a negative correlation between effect size and sample size is interpreted as evidence for methodological bias (Egger, Smith, Schneider, & Minder, 1997).
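Kennedy's diagnostic is easy to compute if you have the per-study numbers. A minimal sketch with made-up study data; Egger et al.'s regression test is the formal version of the same idea:

```python
# Correlate per-study effect size with sample size. Near zero is what a
# homogeneous real effect predicts; clearly negative is the small-study
# bias signature. The six "studies" below are invented for illustration.
import math

def pearson_r(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sample_sizes = [10, 20, 30, 50, 80, 120]
effect_sizes = [0.60, 0.45, 0.30, 0.20, 0.10, 0.05]
print(f"r(n, effect size) = {pearson_r(sample_sizes, effect_sizes):.2f}")
```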
 
Forgive me for resurrecting this thread, but I have a question about the ganzfeld.

From An Introduction to Parapsychology (2nd edition), p. 180

[...]

"Other skeptical theories of psi data have not been developed in detail. Spencer-Brown (1953), Brifgeman, (1956) and Gilmore (1989) have argued that classical scientific understanding of chance processes may be flawed or simplistic and that the data thought to attest to psi are therefore nothing else than a consequence of inappropriate notions about chance expectation. This argument has yet to be shown to accommodate the observed pattern of correlates of psi performance. Because classical probability theory has been widely applied in scientific fields other than parapsychology, even skeptics are loath to pursue the argument.

Contemporary skeptics generally do not believe the psi data attest to any phenomenon, paranormal or otherwise. Thus the anomalous data are dismissed as unworthy of consideration because of the perceived possibility of fraud (e.g., Hansel, 1966, 1980) or because of claimed procedural errors (e.g., Diaconis, 1978; Hyman, 1985). In that these views deny the existence of any nonartifactual effect they are strictly speaking not theories of psi.

Further, they are not open to falsification. In the unlikely event that there was a psi experiment for which the skeptic was at a loss to identify openings for fraud or procedural shortcomings it would not prove that an effect could occur in the absence of these artifacts: the latter might exist without being immediately obvious.

There remains a demand for other skeptical theories that address the psi data as indicative of a valid but non-parapsychological effect. The debate over the authenticity of psi is best conducted not by skeptics seeking to "explain away" the data but by the construction of skeptical theories that generate predictions capable of being pitted against those of the parapsychological models. Such theoretical development would have the fruitful effects of sharpening the assumptions underlying the authenticity issue and encouraging critical empirical investigation of them."


Ok...so just to set the record straight. Skeptics are claiming that the ganzfeld data does not attest to any phenomenon, paranormal or otherwise.

If that's the case, then the data collected in the ganzfeld experiments is meaningless, right? If so, then a post hoc analysis of the data should reveal nothing, right?

If so, then there should be no relationships between subsets...no coherence...no patterns...no consistency across experiments linking psi performance to psychological variables, right?
 
Not necessarily so.

Did you read Ersby's posts in the thread itself? He did good work on this kind of thing.
 
Yeah. Ersby was mostly talking about the bottom line of the meta-analyses. I'm talking about things like this:

A psychological analysis of ganzfeld protocols

[...]

"It was found that, in general, participants tended to hit when their scores suggested a very positive adjustment and when imagery was allowed to develop in a free and personally involving way. Participants who missed showed more signs of anxiety and obsessive attempts to control the experience."

[...]
 
Quote:
Ok...so just to set the record straight. Skeptics are claiming that the ganzfeld data does not attest to any phenomenon, paranormal or otherwise. [...]

I think the issue is that skeptics expect to find patterns even in the absence of a specific effect, especially under the sorts of circumstances present in parapsychological research (small studies, small effects, relatively large numbers of tested relationships, flexibility in design/definitions/outcomes/analytical modes, testing by several independent teams, bias). Parapsychologists, by contrast, seem to treat any deviation from a theoretical probability distribution, no matter how tiny, as unexpected and therefore indicative of a specific effect. (A toy simulation below illustrates the first half of this.)

Linda
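
A toy demonstration of that first point - entirely my own construction, not real ganzfeld data. Generate pure-chance hits, then run a batch of post hoc subgroup tests and watch a few of them reach nominal significance anyway:

```python
# Pure-chance "ganzfeld" data plus 40 unrelated subject variables:
# at alpha = .05, roughly two subgroup splits should look significant.
import math, random

random.seed(1)
n = 100
hits = [random.random() < 0.25 for _ in range(n)]    # chance-level hit/miss data

def two_prop_z(a_hits, a_n, b_hits, b_n):
    """Normal-approximation z for a difference between two hit rates."""
    p = (a_hits + b_hits) / (a_n + b_n)
    se = math.sqrt(p * (1 - p) * (1 / a_n + 1 / b_n))
    return (a_hits / a_n - b_hits / b_n) / se

significant = 0
for _ in range(40):                                   # 40 meaningless "correlates"
    group = [random.random() < 0.5 for _ in range(n)]
    a = [h for h, g in zip(hits, group) if g]
    b = [h for h, g in zip(hits, group) if not g]
    if min(len(a), len(b)) > 5:
        significant += abs(two_prop_z(sum(a), len(a), sum(b), len(b))) > 1.96
print(f"{significant} of 40 chance 'correlates' reached nominal p < .05")
```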
 
What I'm trying to say is this. If the data collected in the ganzfeld experiments is meaningless, then how is it that we find correlates in the data?

I mean, there should be no correlation whatsoever between comfort and success...anxiety and failure...if the data is essentially meaningless. Right? We shouldn't be able to predict success and failure to any degree...right?

"To guard against overanalysis, the author defined a composite cluster a priori made up of several scales that were most strongly expected to predict scoring, and this was tested against rank scores in a relatively large pilot subset. This cluster did predict performance significantly."

I mean, if the data is meaningless then how is it that performance can be predicted to any degree whatsoever?

A REVIEW OF THE GANZFELD WORK AT GOTHENBURG UNIVERSITY BY ADRIAN PARKER

ABSTRACT

"The results of five standard ganzfeld studies and one multiple target ganzfeld (the serial ganzfeld) study are reported. The standard ganzfeld studies form a highly significant and consistent data base with an overall hit-rate of 36% (40% in the case of auditory monitored studies) and a mean effect size of .25 (.34 in the case of the monitored studies). This database has been used to study psychological correlates of psi in terms of psychometric tests. The most successful of these tests are the Australian Sheep Goat Scale, the Magical Ideation Scale, and ”Feeling” scores on the Myers-Briggs Inventory. Other scales that were used as predictors of psi-scores with varying degrees of success included the Transliminality Scale, the Defence Mechanism Test, and the Tellegen Absorption Scale. A further investigation suggests on the basis of confidence ratings made before and after ganzfeld relaxation, that there may be some awareness of the psi-content of the imagery generated during the ganzfeld state. The report includes a review of current work in developing the ganzfeld into a portable digital technique for process-orientated research."
 
Last edited:
Most of the 150 or so papers investigate some kind of secondary analysis alongside the primary question of testing for psi. It's only reasonable that some of them will show similar results on certain measures. But there are other experiments out there with data in the opposite direction to the ones you're posting, which you'd also expect.
 
So, you're saying the bottom line is meaningless coincidence. I say there seem to be undeniable patterns in the data. Patterns that shouldn't be there if the data is meaningless.

Anomalous information access in the Ganzfeld: Utrecht - Novice series I and II
by Dick J. Bierman, Douwe J. Bosga, Hans Gerding & Rens Wezelman

Abstract

"The results of the first 2 novice series are reported which precede a planned research programme of 4 series which is expected to stretch over a period of 2 years. In each of the two series 50 volunteers participated in a single standard Ganzfeld session with static targets. The over-all direct hit scoring rate was exactly at chance: 25%. Two factors related to the subjects that have been established as successful predictors in previous ganzfeld research were analyzed.

Over 50% of the subjects were or had been practitioners of a mental discipline, like meditation. Those subjects scored above chance consistently in both series (32.1% over both series, chi2 = 2.5, p = 0.11). Subjects who reported previous paranormal experiences scored non-significantly better than subjects who did not report these experiences (27.3% vs 0% in series I and 27.5% vs 20% in series II). Subjects who reported PK events did perform significantly better than other subjects, with a scoring rate of 52.8% (chi2 = 10.8, p = 0.02).

Psi-performance correlated negatively with geomagnetic activity in the first series (r=-0.28; p< 0.05) but not significantly so in the second series (r=-0.01, n.s.). The results, which seem to fit an over-all decline in effect size in the reported ganzfeld research with static targets (regression coefficient = -0.023, p=0.02) are discussed in the context of previous meta-analytic results. It is argued that decline effects constitute patterns in the elusiveness of psi."
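For anyone wanting to check figures like the abstract's "chi2 = 2.5", here is the shape of the calculation. The abstract gives only percentages, so the trial count below is a guess chosen to be consistent with the reported numbers:

```python
# One-sample chi-square for a subgroup hit count against the 25% chance
# rate. n = 90 with 29 hits (~32.2%) is a HYPOTHETICAL combination that
# happens to reproduce chi2 ~ 2.5; the abstract does not state the n.

def chi2_vs_chance(hits: int, trials: int, p0: float = 0.25) -> float:
    exp_hits, exp_misses = trials * p0, trials * (1 - p0)
    misses = trials - hits
    return ((hits - exp_hits) ** 2 / exp_hits
            + (misses - exp_misses) ** 2 / exp_misses)

print(f"chi2 = {chi2_vs_chance(29, 90):.2f}")   # ~2.50
```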
 