
Ganzfeld Experiments

Radin says there are 88 experiments in that timeframe, but that doesn't alter the fact that there are about 142. The "hit/miss" reason for leaving experiments out does not make any sense if you read the original papers.
What do those other 54 or so studies show, and what protocols were used?

I used the Stouffer Z because that was the measure used most often by parapsychologists. Of course, if the statistician decides a different measure is more apt, I'd bow to their knowledge.
I believe the Stouffer Z has been misused by parapsychologists who are trying to discredit the Ganzfeld results. For example, Richard Wiseman and Julie Milton state in "Does Psi Exist? Lack of Replication of an Anomalous Process of Information Transfer" (available at http://74.125.47.132/search?q=cache...a.pdf+Does+Psi+Exist&hl=en&ct=clnk&cd=5&gl=us): "In this article, the authors present a meta-analysis of 30 ganzfeld ESP studies from 7 independent laboratories adhering to the same stringent methodological guidelines that C. Honorton followed. The studies failed to confirm his main effect of participants scoring above chance on the ESP task, Stouffer z = 0.70, p = .24, one-tailed."

However, Wiseman's and Milton's analysis is flawed. First, all of their negative Z values appear erroneous. For example, the "Kanthamani & Broughton 1994 Series B" Z score is given as -2.06 for only 10 trials. That is simply not possible with an expected 25% hit rate. The probability of zero hits in 10 trials with a 25% hit rate is 5.63%; a Z score of -2.06, however, corresponds to a probability of only about 2%.

Second, 26 of the 30 studies that Wiseman and Milton analyze appear to use a hit or miss protocol. In those 26 studies, there were 295 hits in 1,050 trials, which is a hit rate of 28.1% and, using the binomial distribution, is statistically significant at the 1.2% level.

Further, I believe that, even if the other four non-standard studies are included, but weighted properly, there would be an effective 35 hits in 148 trials (under the usual outcome measure of comparing the number of hits obtained to the number expected by chance with an expected 25% hit rate). When those effective 35 hits in 148 trials are added to the 295 hits in 1,050 trials in the other 26 studies, there would effectively be 330 hits in 1,198 trials, which is a hit rate of 27.5% and, using the binomial distribution, that result would be statistically significant at the 2.4% level.
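
A minimal sketch (Python with scipy, purely to make the arithmetic above checkable) of those calculations:

    from scipy.stats import binom, norm

    # Worst possible outcome in 10 trials at a 25% hit rate: zero hits.
    print(binom.pmf(0, 10, 0.25))                 # ~0.0563, the 5.63% quoted above
    print((0 - 2.5) / (10 * 0.25 * 0.75) ** 0.5)  # most negative Z possible: ~-1.83
    print(norm.cdf(-2.06))                        # ~0.0197, the ~2% a Z of -2.06 implies

    # One-tailed binomial significance of the pooled hit counts
    # (compare the 1.2% and 2.4% figures quoted above).
    print(binom.sf(294, 1050, 0.25))   # P(X >= 295 | n=1050, p=0.25)
    print(binom.sf(329, 1198, 0.25))   # P(X >= 330 | n=1198, p=0.25)

Since the most negative Z reachable with 10 trials at a 25% hit rate is about -1.83, a reported Z of -2.06 cannot be right.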

So, I see no justification for the conclusions of the Milton/Wiseman article that "the ganzfeld technique does not at present offer a replicable method for producing ESP in the laboratory."
 
No, he used a fail-safe N, which assumes no bias in the selection of which papers go unpublished. I hope you can see that this assumption is unwarranted. Dropping it reduces the number of studies that would need to be sitting in file drawers by several orders of magnitude.

http://www.csicop.org/sb/2002-12/reality-check.html
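
To make that assumption explicit, here is a minimal sketch of the usual fail-safe N calculation (Rosenthal's formulation, as I understand it; z_alpha = 1.645 is the one-tailed 5% threshold):

    def fail_safe_n(z_scores, z_alpha=1.645):
        # Number of unpublished studies averaging z = 0 needed to drag
        # the combined Stouffer z down to the threshold z_alpha.
        k = len(z_scores)
        return (sum(z_scores) / z_alpha) ** 2 - k

The derivation sets the combined z, sum(z_i)/sqrt(k + N), equal to z_alpha and solves for N. The "missing" studies are assumed to average z = 0, which is precisely the no-selection-bias assumption being questioned here: if file-drawer studies are selectively negative rather than null, far fewer are needed to nullify the result.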

Linda
According to Dean Radin, at page 121 of Entangled Minds: "If we insisted that there had to be a selective reporting problem, even though there's no evidence of one, then a conservative estimate of the number of studies needed to nullify the observed results is 2002. That's a ratio of 23 file drawer studies to each known study, which means that each of the 30 known investigators would have had to conduct but not report 67 additional studies. Because the average ganzfeld study has 36 trials, these 2002 "missing" studies would have required 72,072 additional sessions (36 x 2002). To generate that many sessions would mean continually running ganzfeld sessions for 24 hours a day, 7 days a week, for 36 years, and for not a single one of those sessions to see the light of day. That's not plausible." (footnote omitted)
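
For what it's worth, the arithmetic inside that quote is internally consistent with the counts mentioned earlier in the thread; a quick check, using only figures from the quote itself:

    # 2002 file-drawer studies, 88 known studies, 30 investigators,
    # 36 trials per study (all from Radin's passage).
    print(2002 / 88)   # ~22.8 -> his "ratio of 23 file drawer studies to each known study"
    print(2002 / 30)   # ~66.7 -> his "67 additional studies" per investigator
    print(2002 * 36)   # 72072 -> his "72,072 additional sessions"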
 
According to Dean Radin, at page 121 of Entangled Minds: "If we insisted that there had to be a selective reporting problem, even though there's no evidence of one, then a conservative estimate of the number of studies needed to nullify the observed results is 2002. [...] That's not plausible." (footnote omitted)

And thus we are once again left wondering whether every single one of those studies had a set number of trials specified in advance. How consistent were the various protocols?
 
According to Dean Radin, at page 121 of Entangled Minds: "If we insisted that there had to be a selective reporting problem, even though there's no evidence of one, then a conservative estimate of the number of studies needed to nullify the observed results is 2002. [...] That's not plausible." (footnote omitted)

Yes, that was the analysis I was referring to.

Linda
 
What do those other 54 or so studies show, and what protocols were used?

That’s far too complicated to answer here. Suffice it to say there’s no difference between the protocols used in the “missing” experiments and those in Radin’s meta-analysis.

I believe the Stouffer Z has been misused by parapsychologists who are trying to discredit the Ganzfeld results. For example, Richard Wiseman and Julie Milton state in "Does Psi Exist? Lack of Replication of an Anomalous Process of Information Transfer"

M&W’s paper uses a Stouffer z in a curious way, I grant you. Normally, a binomial distribution z-score is calculated for each individual experiment and then a Stouffer z is calculated for the database as a whole. Honorton used that method in his ganzfeld meta-analysis of 1985, Radin often uses it, and it’s frequently found in parapsychological papers from other authors. I’d say it’s the most popular statistical measure used in parapsychology.
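
In code, that "normal" method looks something like this (a sketch only; the (hits, trials) pairs are made up for illustration, and a 25% direct-hit chance rate is assumed):

    import math

    def binomial_z(hits, trials, p=0.25):
        # Normal-approximation z for one study against the chance rate.
        return (hits - trials * p) / math.sqrt(trials * p * (1 - p))

    studies = [(12, 36), (9, 28), (15, 40)]      # hypothetical (hits, trials)
    zs = [binomial_z(h, n) for h, n in studies]
    stouffer_z = sum(zs) / math.sqrt(len(zs))    # combined z for the database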

M&W, meanwhile, calculated a Stouffer z for each experiment (labelled “effect size” in the paper) and then another Stouffer z for the database as a whole. This is an overly harsh statistical method, and Radin has already pointed out that, using the more common method, exactly the same database gives a statistically significant score. So while there’s an argument that M&W misused the Stouffer z, that doesn’t mean the paper can be dismissed.

Of course, there may be a more appropriate measure that I don’t know about, which is why I’m sending it out to people.

By the way, which four experiments don’t use the direct hit method of scoring?
 
M&W’s paper uses a Stouffer z in a curious way, I grant you. Normally, a binomial distribution z-score is calculated for each individual experiment and then a Stouffer z is calculated for the database as a whole. [...]

M&W, meanwhile, calculated a Stouffer z for each experiment and then another Stouffer z for the database as a whole. [...]

I'm confused. I thought that the Stouffer z for the database as a whole is the one reported in the M&W paper as non-significant (z=0.70, p=0.24 one-tailed). Is that not the way you said it's supposed to be used?

The main problem I have with the M&W and the subsequent Bem paper is that when I try to replicate the calculations, I get somewhat different results and I haven't yet figured out why. The point Rodney brought up earlier would be one example.

Of course, there may be a more appropriate measure that I don’t know about, which is why I’m sending it out to people.

It's not a measure used in meta-analysis of medical studies (which is what I'm used to). However, it seems to be similar to the "inverse-variance method" of combining study results.
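
For anyone unfamiliar with it, inverse-variance pooling weights each study's effect estimate by the reciprocal of its variance; a minimal sketch (illustrative only):

    def inverse_variance_pool(effects, variances):
        # Fixed-effect pooling: weight each study's estimate by 1/variance.
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        return pooled, 1.0 / sum(weights)   # pooled estimate and its variance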

Linda
 
By the way, which four experiments don’t use the direct hit method of scoring?
Kanthamani & Broughton (1994) Series 5b
Kanthamani & Broughton (1994) Series 6a
Kanthamani & Broughton (1994) Series 6b
Stanford & Frank (1991)
 
What shall we make of this?

The Effects of THC and Psilocybin on Paranormal Phenomena

Dick J. Bierman

Abstract

Two experiments are reported dealing with the effect of psychoactive drugs on paranormal phenomena. In the first experiment 40 subjects did two Ganzfeld ESP sessions in which they tried to get impressions of a remote target. One session while being, and one session while not-being intoxicated by Marijuana intake. When asked to select the actual target from 4 possible targets, the scoring rates were 30% (THC) and 15% (control), suggesting that there is an effect of THC intake on the performance in a standardized ESP task.

In the second experiment 20 subjects did two Ganzfeld sessions. As in the THC experiment, a within subject design was used in order to evaluate the effect of Psilocybin intake. The scoring rates in the two conditions did NOT differ and only when breaking down the result for negative and positive targets a clear picture arises. There is a positive effect of Psilocybin intake on psi performance when the material used is positive (scoring rate is 45%) and a negative effect when the material is negative (scoring rate is 8%). For the control conditions the opposite is true.


[...]
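
As a rough check of how much weight those scoring rates can carry on their own (a sketch, assuming 40 independent trials per condition at a 25% chance rate; as noted below, the actual design was 20 trials judged twice, so even this is generous):

    from scipy.stats import binom

    # 30% of 40 trials = 12 hits; 15% of 40 trials = 6 hits.
    print(binom.sf(11, 40, 0.25))   # P(X >= 12) ~ 0.29: not significant on its own
    print(binom.cdf(6, 40, 0.25))   # P(X <= 6)  ~ 0.10: nor is the low control rate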

Gather a bunch of creative, extroverted, friendly people who claim previous psi experiences ("sheep") and get them stoned or tripping on psilocybin, make sure they are comfortable in every way, and I predict we would see some ganzfeld numbers that are out of this world. Make sure some of them are meditators and that might be a boost as well. Observe local sidereal time and that might help as well...must make sure the trials are timed accordingly. Use some sober "goats" as a control group.
 

Post-hoc data-dredging for fun and profit.

Gather a bunch of creative, extroverted, friendly people who claim previous psi experiences ("sheep") and get them stoned or tripping on psilocybin, make sure they are comfortable in every way, and I predict we would see some ganzfeld numbers that are out of this world. [...]

Makes you wonder why we haven't already seen something like that published, doesn't it?

Linda
 
Post-hoc data-dredging for fun and profit.


Are you accusing Dr Bierman of something?

Makes you wonder why we haven't already seen something like that published, doesn't it?

Linda


You think it's a good idea? Suppose 25% is the MCE and the stoned group of sheep scored an average of 50%, and the control group of sober goats scored around 15%. Would that convince you that psi is real? Or would you just accuse them all of something and wash your hands of it?

No I don't "wonder why we haven't already seen something like that published"...you seem to suspect something. Out with it.
 
The cannabis results in that paper were from a previous paper - Process Oriented Ganzfeld Research In Amsterdam, series 5 and 6, and it wasn’t forty trials, but 20 trials judged twice. From the “discussion” section of the paper:

The series V and VI do confront us with some puzzling results. The formal part of these series concerned the effect of Cannabis and Judging procedure. We expected to find an increase in scoring rate in the Cannabis condition to a value clearly above the true effect size found in the normal population of around 33%. Instead, we found a quite similar effect size. To our surprise however the effect size for the untreated condition which should be 33% dropped to a near significant negative score of 15%. One might explain this by postulating that some subjects may have a preferred condition and use their ESP in the non preferred condition to exhibit psi-missing. This however, seems to us straight nonsense.

eta: I like Bierman. His stuff's always worth reading.
 
Are you accusing Dr Bierman of something?

Yes. Post-hoc data-dredging. It's not like he hid what he was doing or anything. "Let's see how many different ways we can divide people into groups until we find a difference."

You think it's a good idea?

I have stated numerous times that I think it would be a good idea if parapsychology research were undertaken in a rigorous and sound manner. One example would be designing an experiment which a priori included all those factors that supposedly enhance the effect, instead of only ever discovering what they are post hoc.

Suppose the stoned group of sheep scored an average of 50%, and the control group of sober goats scored around 15%. Would that convince you that psi is real?

If this was a reproducible and robust result, it would convince me that hypotheses could be formed and tested with respect to psi.

No I don't "wonder why we haven't already seen something like that published"

Really? It has occurred to you, a relative lay-person, that this experiment would be useful. You don't really expect that this hasn't occurred to those who are heavily invested in psi research, do you? If you are a parapsychologist, truly struggling with the desire to be recognized as a leader in a ground-breaking field, and the lack of recognition comes about because the effect has been elusive, wouldn't a powerful, robust demonstration go a long way towards bringing you that recognition from other scientists?

...you seem to suspect something. Out with it.

I suspect the idea has already been explored and has been disappointing.

Linda
 
I cannot believe that people are paid to do this. The assumptions that are made are simply mind boggling. I liken these experiments and the delusions which drive them to the IDers and their "science." Pure, unadulterated hogwash.


M.
 
eta: I like Bierman. His stuff's always worth reading.


Evidently, Linda doesn't feel the same way. She seems to think he is out for "fun and profit". I imagine it would be hard to like or respect a scientist if one thinks they are after money and a good time. She probably pictures him smoking pot and laughing all the way to the bank with loose party-girls on his arm.

But that pretty much sums up how many skeptics feel about parapsychologists and/or scientists who dabble in it, doesn't it? That they are out for a quick buck by preying on the gullible. Get a nice book deal or something and go to a few parties. Whee! Greed! But it amounts to an accusation of fraud, doesn't it? That's serious business.

Do you agree with skeptics who feel that way, like Linda for instance? If not (I suspect not), have you ever tried to correct such an impression in a fellow skeptic? Or do you let it slide?

You say you find his stuff worth reading. I think it's safe to say you wouldn't feel that way if you felt his stuff wasn't done "in a rigorous and sound manner". Therefore, you must feel that at least some psychical research is quality work, right? So what would you say to a skeptic who says (directly or indirectly) that none of it is "rigorous and sound"?
 
I know one parapsychologist who fits the description of the cheerful, heavy-pot-smoking, commercially minded happy type. Nice person, and she has never been afraid to discuss her pro-cannabis position publicly. :) She became a sceptic, turned her back on parapsychology, and wrote a book about it. The lady in question, now more involved in writing on consciousness as I recall, is one of only two cannabis users I can remember from my days at the SPR; the other is a neuroscientist, and I think she is more interested in other substances, actually.

cj x
 
I cannot believe that people are paid to do this. The assumptions that are made are simply mind boggling. I liken these experiments and the delusions which drive them to the IDers and their "science." Pure, unadulterated hogwash.

M.


Have you ever read any of the research? :) Nothing like an informed opinion and an open mind?

j x
 
I know one parapsychologist who fits the description of the cheerful, heavy-pot-smoking, commercially minded happy type. [...]

cj x


I think I'll pull a Linda and accuse Susan of being out for fun and profit. Wow! That was so easy! ;)
 
Evidently, Linda doesn't feel the same way. She seems to think he is out for "fun and profit". I imagine it would be hard to like or respect a scientist if one thinks they are after money and a good time. She probably pictures him smoking pot and laughing all the way to the bank with loose party-girls on his arm.

But that pretty much sums up how many skeptics feel about parapsychologists and/or scientists who dabble in it, doesn't it? That they are out for a quick buck by preying on the gullible. Get a nice book deal or something and go to a few parties. Whee! Greed! But it amounts to an accusation of fraud, doesn't it? That's serious business.

Dude, data-dredging for fun and profit is ubiquitous in medical research as well. It's not preying on the gullible; it's an earnest and sincere attempt to confirm what you know in your heart. The difference is that the doctors reading and using your work recognize that it leads to unreliable conclusions and don't let you get away with it, while I have yet to see a parapsychologist criticize the practice. But I'd be happy to be proven wrong. Show me some papers/articles where parapsychologists are skeptical of the conclusions drawn by other parapsychologists instead of only those where they're credulous.

Linda
 
But I'd be happy to be proven wrong. Show me some papers/articles where parapsychologists are skeptical of the conclusions drawn by other parapsychologists instead of only those where they're credulous.

Linda


Are they ever anything other than sceptical of each other's results? Parapsychologists spend all their time critiquing each other's work; it's the major driving force of the discipline from what I can see. The latest issue of the EJP is a case in point - http://ejp.org.uk/index.php?page=Current%20Issue


cj x
 
Are they ever anything other than sceptical of each other's results? Parapsychologists spend all their time critiquing each other's work; it's the major driving force of the discipline from what I can see. The latest issue of the EJP is a case in point - http://ejp.org.uk/index.php?page=Current%20Issue


cj x

I presume the Wackermann article is critical, then?

I have enjoyed the articles I've read by Wackermann. I don't know if he's a believer, but he would be an example of someone taking an approach that is beneficial to the field.

(For the record, I usually find Bierman useful/interesting, as well. ;))

Linda
 
