The Ganzfeld Experiments

Interesting Ian said:
Quite possible it ain't never going to happen. I think we should be wary, albeit not totally dismissive, of those who claim they can produce marked "in yer face" anomalous cognition/perturbation on demand. Such "in yer face" stuff tends to be inherently unpredictable so far as I am able to understand.

Precisely, probably 5 times out of a hundred.

But what about not concentrating on the authenticity question in a direct way, but attempting to discern certain characteristic patterns in these meta-analyses, and then seeing if they can be reproduced? If they can, that would be interesting, would it not?

Because first you need an effect. Reversing the process is fishing.

Then perhaps we could devise some hypothesis about how these effects operate. If further experimentation were then consistent with the hypothesis in question, this would not only settle the authenticity question but would also furnish us with a theory about how psi operates.

What effect?

Also this method would be less susceptible to accusations of cheating and of artifacts skewing the results.

Ian, don't you find it a tad suspicious that virtually all of this research really sucks? I mean really. Look at the poster "experiments": the groundbreaking Targ work on AIDS patients, Schwartz, Scole... this crap. Seriously, all personal animosity aside, don't you find it odd?
 
T'ai Chi said:


Eliminating all possible sources of error isn't even possible in physics, dude. :) Think "measurement error" for one. We can only eliminate all that we can.

T'ai, you know what I meant, and you can pretend all you want. The ganzfeld data has so many possible sources of error it is incredible.

So what source of error is there in measuring the repulsive force of an electron, as I believe was done by Millikan? You said they can't eliminate the error, so please explain to me what you are talking about.

Or when they do a study on the effectiveness of a particular treatment for depression, what error do you think cannot be eliminated?

Your hyperbole is showing.

You said it dude :).
 
T'ai Chi said:


Seriously, just speculation, unless there is good evidence for that. Evidence that Paul has not presented a good case for, IMO.

Excuse me, but you obviously haven't been involved in research. If a fellow says that they didn't randomise the target cards and then gets kicked out, that is a very serious charge.

In science the burden to prove that you aren't committing fraud is on the one doing the research, always. Which is why the data should be clearly generated and should clearly follow protocols, with total transparency.

This is also called experimenter error, and it is not cool. It has to be addressed. There is no burden on the person making the accusation; the burden is always on the researcher.

Which is why, since Schliemann, good archaeologists draw pictures and take photos of everything.
 
Ian said:
But what about not concentrating on the authenticity question in a direct way, but attempting to discern certain characteristic patterns in these meta-analyses, and then seeing if they can be reproduced? If they can, that would be interesting, would it not?

Then perhaps we could devise some hypothesis about how these effects operate. If further experimentation were then consistent with the hypothesis in question, this would not only settle the authenticity question but would also furnish us with a theory about how psi operates.
Ed et al. are correct when they say that a reproducible effect should come first. But I'm willing to put that aside, since parapsychologists are going to continue working anyway. So, then, I agree with you.

Here's where we could start: Take the ganzfeld standardness meta-analysis, make a list of the top 10 deviations from standard protocol, and start reproducing the experiments, including one of those deviations in each reproduction. This should go a long way to determining which variations are okay and which are killers, giving us insight into what's really going on.

If, in the process, we find that poor reliability is thwarting the process, well then, Ed told us so.

~~ Paul
 
Ed, is it really necessary to place your responses in quotes?

dharlow
Forced-choice experiments (which often used Zener cards in the Rhine era) relied on statistics, and there is not much of a difference (aside from some statistical assumptions that need to be changed) between those experiments and the Ganzfeld. The main difference is in the target material.
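dharlow's point, that only the chance baseline really changes between the two paradigms, is easy to make concrete. A minimal sketch in Python, with invented counts; the same exact binomial test serves both:

```python
# Same test for both paradigms; only the chance baseline differs.
# All counts here are invented for illustration.
from scipy.stats import binom

def one_sided_p(hits, trials, chance):
    """P(X >= hits) under the null that each trial hits with prob `chance`."""
    return binom.sf(hits - 1, trials, chance)

print(one_sided_p(60, 250, 1 / 5))  # Zener-style forced choice: 5 symbols
print(one_sided_p(70, 250, 1 / 4))  # ganzfeld: pick from 4 candidate targets
```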
Ed
No. The interposition of raters is a serious flaw in control, particularly when more than one is used. Why even go down that road? A pool of x stimuli, one transmitted, pick from the pool. The notion that a rater can pick more accurately what a receiver "sees" than the receiver is ludicrous. Simple, clear, unambiguous.

Didn't I discuss this earlier in the thread? If the receiver selects what she thinks is the actual target, she may mistake a psychological gravitation towards a certain target for a parapsychological one, so to speak. But if we have judges, hopefully they will make their choice based entirely on the transcript.

I somehow find it laughable that one could describe a complex scene more easily than a simple figure.

I have the feeling that psi might more easily be elicited with pictures which engage the emotions more. These will typically be complex scenes.

But you're saying if psi exists it should be elicited more easily with simple pictures?

dharlow
Where things seem more complicated (perhaps 'obfuscated', as you said) is in the use of meta-analysis to pool the results of these experiments. Meta-analysis should not be used to prove or disprove an effect (as has been done, respectively, by Radin and by Milton/Wiseman). Meta-analysis should only be used as an identifier of tentative correlations in the data which can then be retested, and it is unfortunately being misused by a few proponents and skeptics in the parapsychology debate.
ed
I suspect that T'ai will take issue with you, he being a statistician and all. But I agree. The effects seem to be an artifact of the stats.

I'm not sure dharlow is saying that. In examining the meta-analyses to address the authenticity issue, there is perhaps more scope for artifacts skewing the results.
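What dharlow is prescribing, meta-analysis as a generator of hypotheses to retest rather than a verdict, might look something like this. A toy sketch; the study counts are invented and the moderator (sample size) is an arbitrary example:

```python
# Use a meta-analytic pass only to flag a tentative moderator, then
# retest it in fresh experiments. All numbers are invented.
from scipy.stats import spearmanr

studies = [(20, 6), (40, 9), (50, 14), (30, 10), (60, 13), (25, 8)]  # (trials, hits)
sizes = [n for n, _ in studies]
rates = [h / n for n, h in studies]

rho, p = spearmanr(sizes, rates)
print(f"sample size vs hit rate: rho={rho:.2f}, p={p:.2f}")
# A nonzero rho here is a hypothesis to retest, not proof of anything.
```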
 
Dancing David said:

Excuse me, but you obviously haven't been involved in research. If a fellow says that they didn't randomise the target cards and then gets kicked out, that is a very serious charge.


As I have said, his charge might be serious, but his evidence isn't.


In science the burden to prove that you aren't committing fraud is on the one doing the research, always.


So you're asking them to prove a negative? To prove they didn't do something? To prove fraud doesn't exist?


There is no burden on the person making the accusation; the burden is always on the researcher.


You can repeat that mantra all you want, but if you are making the claim of fraud, you have the burden.
 
Dancing David said:

T'ai, you know what I meant, and you can pretend all you want. The ganzfeld data has so many possible sources of error it is incredible.


Such as?


So what source of error is there in measuring the repulsive force of an electron, as I believe was done by Millikan? You said they can't eliminate the error, so please explain to me what you are talking about.


First, do you deny there is error?
 
Paul C. Anagnostopoulos said:

Yes, but, you see, physics doesn't rely entirely on protocol and statistics to show an effect, whereas psi does. That means psi experiments have to be perfect, which is impossible.

That's why we keep asking for some psychic to just do it.

~~ Paul

I don't understand why you believe they have to be perfect. That is absurd, IMO.
 
Ersby said:

But may I ask you a question, T'ai Chi? You know more about statistics than me, so maybe you can help. One of the effect sizes quoted in the m-a seems peculiar to me. How do you calculate the effect size? In particular, take an experiment that lasted 10 sessions, with a 25% hit rate expected by chance, yet got only one hit (10%). What would the effect size of that experiment be?

An effect size is just a standardized difference in means.
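For proportion data like ganzfeld hit rates the standardization can be done several ways, and which convention a given meta-analysis used would have to be checked against the paper itself. A sketch of two common choices, applied to Ersby's example (10 trials, 1 hit, 25% chance):

```python
import math

n, hits, chance = 10, 1, 0.25
p_obs = hits / n  # 0.10

# Convention 1: Cohen's h, a difference of arcsine-transformed proportions.
h = 2 * math.asin(math.sqrt(p_obs)) - 2 * math.asin(math.sqrt(chance))

# Convention 2: z / sqrt(n), via the normal approximation (shaky at n = 10).
z = (p_obs - chance) / math.sqrt(chance * (1 - chance) / n)
es = z / math.sqrt(n)

print(f"Cohen's h = {h:.2f}, z/sqrt(n) = {es:.2f}")  # both negative: below chance
```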
 
Okay, so a brief recap of how far we’ve come in the debate.

One, it is naïve to believe that the hit rate by chance for individual experiments will be 25% (see the sketch after this recap).

Two, Radin’s meta-analysis was merely a cobbling together of the two most prominent collections of ganzfeld work (Honorton’s m-a and the PRL work) and some of the most recent experiments. It was not complete.

Three, the effect seen in Honorton’s meta-analysis is nullified by the results of ganzfeld experiments from the following years.

Four, the most recent (post-'91ish) ganzfeld experiments have a 30% hit rate. By re-sorting according to “standardness” and removing eleven experiments from the database you can bump it up to 32%. There’s only a spurious and arbitrary link between standardness and success rate.
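On point One, one way to read it is that small studies cannot be expected to land on 25% even when chance is the only thing operating. A quick simulation under that assumption:

```python
# Simulate many 10-trial studies where chance (25%) is the only effect.
import random

random.seed(0)
rates = []
for _ in range(10_000):
    hits = sum(random.random() < 0.25 for _ in range(10))
    rates.append(hits / 10)

print(min(rates), max(rates))  # individual studies scatter far from 0.25
# Note: with 10 trials a 25% hit rate is not even attainable (2.5 hits).
```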

To this I’d like to add the most recent experiments that I can find results for, covering the years since the last meta-analysis. In the face of missing data I’ve taken what I consider to be the lowest possible estimates (i.e., 20 is, I feel, the lowest number of trials for any self-respecting ganzfeld experiment these days) and worked from there. Since those with missing data of this nature tend to be experiments with chance or below-chance results, this tends to act in a pro-psi direction, since I’m lessening the impact of these experiments on the overall result.

Smith, Fox, Williams “Developing a digital autoganzfeld testing system”, 55 trials, 13 hits, 23.63%
Simmonds, Roe “Personality correlates of subjective anomalous experiences and psi performance in the ganzfeld” 52 trials, 16 hits, 30.7% hit rate
Roe, Flint, “Remote viewing pilot study”, 14 trials, order ranking with 12 trials scoring above the midpoint. If we assume that the hits were equally distributed, then that gives 6 hits. 42.8%
Roe, et al. “Sender and receiver creativity scores as predictors of performance at a ganzfeld ESP task” 24 trials, 5 hits, 20.83%
Parker “A Review of the ganzfeld work at Gothenburg University” 30 trials, 12 hits, 40%
Stevens “Testing a model for dyadic ESP in the Ganzfeld” ? trials, hit rate 24% (with no data re. number of trials, let’s choose 20 as the lowest possible, giving 4 hits)
Simmonds “Sender personality and Psi performance in the ganzfeld and a waking ESP control”, 52 trials, ? hit rate, “there was no psi demonstrated” (some measures of psi found artefacts). If we use “no psi demonstrated” as being a 25% hit rate, this gives 13 hits.
Simmonds, Fox, Holt “Schizotypy, Creativity and Psi Performance in a Visual Noise Paradigm” 20 trials, multiple judging protocols: “there was no psi hitting effect”. The hit rate for “similarity”, which seems closest to direct hitting, was 10%, which gives 2 hits
Fox “The Role of Introspection in the Study of ESP” 12 trials, 5 hits, 41.6%
Roe, Holt, Simmonds “Considering the sender as a pk agent in ganzfeld studies” 40 trials, 14 hits, 35%

Total sessions: 319
Hits: 90
Hit rate: 28%
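The tally is easy to check mechanically, and one can also ask how far 90 hits in 319 trials sits from the 25% chance line. A minimal sketch, with counts copied from the list above (the caveats about estimated figures apply):

```python
# Ersby's tally, plus a one-sided exact binomial test against 25% chance.
from scipy.stats import binom

studies = {
    "Smith/Fox/Williams": (55, 13),
    "Simmonds/Roe":       (52, 16),
    "Roe/Flint":          (14, 6),
    "Roe et al.":         (24, 5),
    "Parker":             (30, 12),
    "Stevens":            (20, 4),   # estimated, see text
    "Simmonds":           (52, 13),  # estimated, see text
    "Simmonds/Fox/Holt":  (20, 2),
    "Fox":                (12, 5),
    "Roe/Holt/Simmonds":  (40, 14),
}
trials = sum(n for n, _ in studies.values())
hits = sum(h for _, h in studies.values())
print(trials, hits, f"{hits / trials:.1%}")  # 319, 90, 28.2%
print(binom.sf(hits - 1, trials, 0.25))     # P(X >= 90 | chance = 0.25)
```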

Of course, this is by no means authoritative, but it seems, after 13 pages, that the claim made near the start of the thread (by Interesting Ian, and oft repeated by others) that ganzfeld experiments get results of 32-35% is simply not true. If there is an effect, it is minuscule and by no means replicable.

So when people, even at this late stage of the debate, talk about the psi effect, could they point to the evidence for this effect? Because from where I'm standing, most of it has vanished or been greatly diminished.
 
Ian said:
Didn't I discuss this earlier in the thread? If the receiver selects what she thinks is the actual target, she may mistake a psychological gravitation towards a certain target for a parapsychological one, so to speak. But if we have judges, hopefully they will make their choice based entirely on the transcript.
This is an interesting question, but the only way it can be judged is if the receiver speaks his mentation during the ganzfeld period, and the judges use the transcript to rate the targets. However, if psychological gravitation is a possibility when the receiver self-judges, a different flavor of it is also a possibility during the mentation. Perhaps what the receiver speaks is not really what he's "getting." Heck, perhaps the receiver is being influenced by some other form of psi.

This is just another couple of worms in the huge, squirming can of worms that is psi.

~~ Paul
 
dharlow said:

Derren Brown performs 'psychic feats' on an almost daily basis and fairly reliably.

So does he do it in a scientific setting in a lab?
 
T'ai said:
I don't understand why you believe they have to be perfect. That is absurd, IMO.
Here is the hypothesis for a psi experiment: when we do the experiment, the outcome will deviate from chance results by a significant amount (the null hypothesis being that it will not). In order to know that a significant deviation is due to some sort of nonmundane information transfer, you have to protect against all mundane methods of information transfer, you have to be perfect in your data collection, and you have to perform appropriate statistical analysis that correctly models the process. Otherwise your significant results might be due to those things rather than nonmundane information transfer.

Now, as your results become more obvious, more significant, more replicable, you can relax these requirements a bit. Why? Because people can see that there is actually something going on. This is why experiments measuring the acceleration due to gravity, for example, can be a bit sloppy: You can reproduce the effect at will. The experiments are about the theory of gravity, not about data collection and analysis, so you can vary the protocol to see what happens (e.g., drop the feather in air vs. in a vacuum). If you think someone else's experiment had a flaw, you can devise a similar one minus the flaw, and still expect the feather to fall down.
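The first point, that a significant deviation can come from a mundane leak just as easily as from psi, can be simulated directly. A sketch, with the leak size invented purely for illustration:

```python
# If a mundane leak nudges the per-trial hit probability from 25% to 30%,
# a test against the pure-chance null flags many runs as "significant"
# with zero psi anywhere in the model. The 5-point leak is invented.
import random
from scipy.stats import binom

random.seed(1)
n, chance, leak, runs = 319, 0.25, 0.05, 2000
significant = 0
for _ in range(runs):
    hits = sum(random.random() < chance + leak for _ in range(n))
    if binom.sf(hits - 1, n, chance) < 0.05:
        significant += 1
print(significant / runs)  # fraction of leak-only runs that look like psi
```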

The problem with psi is a combination of irreproducible results and no theory on which to base the experiments. Together, those two are deadly.

~~ Paul
 
Paul C. Anagnostopoulos said:

Here is the hypothesis for a psi experiment: when we do the experiment, the outcome will deviate from chance results by a significant amount (the null hypothesis being that it will not). In order to know that a significant deviation is due to some sort of nonmundane information transfer, you have to protect against all mundane methods of information transfer, you have to be perfect in your data collection, and you have to perform appropriate statistical analysis that correctly models the process. Otherwise your significant results might be due to those things rather than nonmundane information transfer.

Now, as your results become more obvious, more significant, more replicable, you can relax these requirements a bit. Why? Because people can see that there is actually something going on. This is why experiments measuring the acceleration due to gravity, for example, can be a bit sloppy: You can reproduce the effect at will. The experiments are about the theory of gravity, not about data collection and analysis, so you can vary the protocol to see what happens (e.g., drop the feather in air vs. in a vacuum). If you think someone else's experiment had a flaw, you can devise a similar one minus the flaw, and still expect the feather to fall down.

The problem with psi is a combination of irreproducible results and no theory on which to base the experiments. Together, those two are deadly.

~~ Paul

For one, even physicists can only hope to control for all known sources. Being perfect, as you stated is required, is absurd.
 
T'ai Chi said:


For one, even physicists can only hope to control for all known sources. Being perfect, as you stated is required, is absurd.

How is your apologia for quackery related in any fashion to PCA's comments?
 
T'ai Chi said:
For one, even physicists can only hope to control for all known sources. Being perfect, as you stated is required, is absurd.

That's not at all what Paul said. It would be nice if you would address people's well thought-out posts with a bit more than a snappy (fawlty) one-liner. Why don't you actually try to argue your case, instead of sniping at others?
 
T'ai said:
For one, even physicists can only hope to control for all known sources. Being perfect, as you stated is required, is absurd.
That is true, and I would not arrest a parapsychologist simply because he can't control for all possible mundane sources of information transfer. However, that does not make the problem go away. After the seemingly perfect psi experiment, I can still say: But hey, you might have missed a subtle mode of information transfer. And I would have a valid point.

One possibility, for example: Yes, we used two copies of the stimulus, one to show the sender and one to show the receiver during judging. Excellent! Now, did you remember not to store them next to each other?

The trick is to attack the phenomenon from many angles, running experiments with variant protocols. That's what the ganzfeld folks should do.

~~ Paul
 
T'ai Chi said:


As I have said, his charge might be serious, but his evidence isn't.

The evidence is the verbal statement that a researcher committed fraud, and that is enough to put the burden on the researchers to show that they didn't commit fraud. A list of the targets chosen and their distribution, as well as a record of the protocol followed to generate the random assignment of targets, would be sufficient. That would prove that the randomization protocol had been followed.

This is an issue in all areas of research, so why do you get your knickers in a bunch? This is a discussion of science, and that is why protocols and records exist.


So you're asking them to prove a negative? To prove they didn't do something? To prove fraud doesn't exist?


They can prove whether they followed their protocol; this is a standard thing in the social sciences, just like inter-rater reliability.

You can feign ignorance as long as you want; the burden is on the researcher to demonstrate they didn't commit fraud or just choose the runs that they liked.

There could and should be a record of the chosen targets, the protocol, and how the random distribution was allotted.

This is a thing in all science. You sure seem to be upset; this is called 'standard procedure', especially in the social sciences.
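The kind of audit trail being described could be as simple as a committed seed plus a published log of selections; a hypothetical sketch (the filenames and commitment scheme are illustrative, not any lab's actual procedure):

```python
# Hypothetical audit trail: commit to a seed before the series (e.g., by
# publishing its hash), log every target selection, and reveal the seed
# afterwards so a third party can re-derive and verify each assignment.
import csv
import random

SEED = 20240101  # committed in advance, revealed after the series
rng = random.Random(SEED)

with open("target_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["session", "target_index"])
    for session in range(1, 41):  # e.g., a 40-session series
        writer.writerow([session, rng.randrange(4)])  # 1 of 4 candidates
```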


You can repeat that mantra all you want, but if you are making the claim of fraud, you have the burden.

T'ai, that makes you appear really foolish; the burden is always on the researcher to enable replication and prove that they didn't fake the data. You publish the records and the protocol; what is the big deal? There should be a record of how the targets were randomly chosen, and if someone who was in your lab says that you aren't randomising the targets, then the burden is on the researcher to show that they did.

You obviously know close to zero about social sciences and research.

People fake data all the time to get their stuff published. The charge of fraud is serious and cannot just be mantra'd away by the sacred Order of the Ostrich.
 
