

Rodney - if you use Amazon.com "search inside" for SAIC, it's "Page 54" and "Page 59". He continues on with his critique in the next chapter, so you should try to check the book out from your library if you're looking for a skeptic's perspective.
... remainder snipped ...
Rodney is one of our resident skeptics. On a board dedicated to skepticism, that means he represents the non-skeptical side. Over the years he has shown himself to be remarkably resistant to looking for the skeptic's perspective. One need look no further than his heroic efforts in the Non-Homeopathic Belladonna thread to be convinced of that.
 

It doesn't much matter - the chapter from Mark's book actually just summarizes and extends the arguments made in the Wiseman-Milton paper, and I KNOW Rodney's capable of selectively reading articles, even if they don't agree with his conclusions.
 
Furthermore, in 1999, a meta-analysis of recent autoganzfeld experiments was performed, showing no evidence of psi. The experiments were reportedly conducted using the procedures outlined jointly by Hyman and Honorton. In other words, "tighter" ganzfeld experiments show no evidence of psychic abilities.
Okay, I've read Milton's and Wiseman's article, and I've also read Chris Carter's critique of it in his new book Parapsychology and the Skeptics. According to Carter, at pages 63-64:

"The combined hit rate for these thirty studies [analyzed by Milton and Wiseman] is 27.5%, just below the 95 percent confident intervals of the first two major studies . . . About one year later, Milton and Wiseman's meta-analysis was independently repeated, but with the addition of 10 new ganzfeld studies that had since been performed. These ten new studies yielded an overall hit rate of 36.7%; when added to the thirty ganzfeld experiments analyzed by Milton's and Wiseman, the combined hit rate for all forty studies was 30.1%. The results were again statistically significant, as Milton conceded, but somewhat below the 35 percent result in the original meta-analysis and the 34 percent result from the replications reported in 1995.

"There is a simple explanation of the apparent disparity. In their joint communiqué, Hyman and Honorton asked future ganzfeld investigators, as part of their "more stringent standards," to clearly document the status of the experiment: that is, whether it was meant to merely confirm previous findings or to explore novel conditions. The problem with the Milton and Wiseman study was that it simply lumped all studies together, regardless of whether the status of each study was confirmatory or exploratory. In other words, Milton and Wiseman made no attempt to determine the degree to which the individual studies complied with the standard ganzfeld protocol as spelled out in the joint communiqué." [footnote omitted]
 
Ah, BUT,

You have indirectly expressed the opinion that ganzfeld studies are similar enough to the SAIC studies to be further proof that the SAIC studies are valid.

The SAIC studies included BOTH confirmatory and exploratory experiments (in fact, they were mostly exploratory experiments).

Therefore, if we are trying to use the successes of autoganzfeld experiments to strengthen the remote viewing experiments (which, IMO, is not an appropriate comparison), both the exploratory and confirmatory results should be used.

The more research I do into these psi experiments, the more interesting everything becomes. Wiseman and Schlitz have tentatively shown that experimenter bias has an effect on the results of some psi experiments (for example, the "stare" experiments performed by SAIC and others). Schlitz and Wiseman used the same lab space, procedure, and pool of volunteers to replicate the stare experiments over the same period of time. Schlitz's results were encouraging, while Wiseman's were no better than chance. Furthermore, the volunteer receivers who worked with Schlitz reported that they believed in psi more than Wiseman's volunteers did, even though there was no apparent bias in choosing which experimenter tested which volunteers. This confirmed Wiseman's earlier findings (whenever he replicated a SAIC or ganzfeld experiment, he found no significant effects greater than chance).
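
For concreteness, here is a minimal sketch of how one might test whether two experimenters' hit rates differ (the kind of "experimenter effect" described above). The counts below are made up for illustration and are not data from the actual Wiseman/Schlitz studies:

```python
import math

# Hypothetical counts for illustration only; NOT the real Wiseman/Schlitz data.
schlitz_hits, schlitz_trials = 33, 100
wiseman_hits, wiseman_trials = 26, 100

# Two-proportion z-test: is the difference in hit rates bigger than chance variation?
p1 = schlitz_hits / schlitz_trials
p2 = wiseman_hits / wiseman_trials
pooled = (schlitz_hits + wiseman_hits) / (schlitz_trials + wiseman_trials)
se = math.sqrt(pooled * (1 - pooled) * (1 / schlitz_trials + 1 / wiseman_trials))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, normal approximation
print(f"Schlitz {p1:.0%} vs. Wiseman {p2:.0%}: z = {z:.2f}, p = {p_value:.3f}")
```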
 
There are still some puzzling aspects of psi research, but how do you explain the ganzfeld findings? If Carter is correct, even if only the 40 more recent ganzfeld studies are considered to be the true database, and the older studies where the hit rates were higher are simply ignored, the results are still statistically significant.
 
So why would these results fluctuate so wildly unless it has something to do with Hyman's objections, which you ignored in the Hyman link? Then you ignored me when I pointed it out in post #79. Wasn't this exactly what you kept asking us for?
 
There are still some puzzling aspects of psi research, but how do you explain the ganzfeld findings? If Carter is correct, even if only the 40 more recent ganzfeld studies are considered to be the true database, and the older studies where the hit rates were higher are simply ignored, the results are still statistically significant.

The older studies were ignored in the meta-analysis because both the experimenters and the critics agreed that the procedure was not "tight" enough to rule out information leakage. Do you have a reference for the competing, successful meta-analysis (it must be referenced in your book)? I remember reading the summary yesterday, but I can't find it today.

my_wan, Rodney's moved on from the SAIC experiments to the ganzfeld experiments. Save your arguments for when the Great Rodney Wheel slowly grinds back to the SAIC experiments.
 
So why would these results fluctuate so wildly unless it has something to do with Hyman's objections, which you ignored in the Hyman link? Then you ignored me when I pointed it out in post #79. Wasn't this exactly what you kept asking us for?
No, I was asking whether flaws in the SAIC experiments had been identified since Hyman's concession that he couldn't find any obvious ones. His analysis essentially consists of speculating that there may have been biases in those experiments -- caused, for example, by May being the sole evaluator of when a hit occurred. However, there is a big difference between identifying potential problems and proving that experiments were biased. Now, if the SAIC experiments were the only ones purporting to show that PSI exists, I would agree that they would need to be replicated. But that's not the case, and I don't think you can get around the positive results of the ganzfeld experiments by citing perceived inconsistencies in those results and the results of the SAIC experiments.
 
Do you have a reference for the competing, successful meta-analysis (it must be referenced in your book)? I remember reading the summary yesterday, but I can't find it today.
Yes. It's Updating the Ganzfeld Database: A Victim of Its Own Success?, Bem, Palmer, Broughton, Journal of Parapsychology, 65, 2001.
 
But that's not the case, and I don't think you can get around the positive results of the ganzfeld experiments by citing perceived inconsistencies in those results and the results of the SAIC experiments.

One problem I'm having is that the procedure of the ganzfeld experiments and the procedure of the SAIC experiments are so completely exclusive of each other that, if BOTH sets of experiments are considered successful (which is disputed), then they are really demonstrating two different things.

1) Ganzfeld experiments used dynamic targets, while SAIC found only chance results with dynamic targets.
2) Ganzfeld experiments used senders, while SAIC found worse results with senders.
3) Ganzfeld experiments relied on an "altered state", while as far as I know the SAIC experiments utilized psychics in their "normal state".

If you're willing, we can ignore the SAIC experiments and focus on the ganzfeld experiments, but I don't want this coming back to bite me later.
 
One problem I'm having is that the procedure of the ganzfeld experiments and the procedure of the SAIC experiments are so completely exclusive of each other that, if BOTH sets of experiments are considered successful (which is disputed), then they are really demonstrating two different things.

1) Ganzfeld experiments used dynamic targets, while SAIC found only chance results with dynamic targets.
2) Ganzfeld experiments used senders, while SAIC found worse results with senders.
3) Ganzfeld experiments relied on an "altered state", while as far as I know the SAIC experiments utilized psychics in their "normal state".
One possible explanation is that I believe the SAIC experiments involved selecting individuals who purportedly had some past success in remote viewing, while I believe the ganzfeld experiments have simply used volunteers. So, there may be a difference between the two participant populations that could affect the results.

If you're willing, we can ignore the SAIC experiments and focus on the ganzfeld experiments, but I don't want this coming back to bite me later.
Fine by me.
 
No, I was asking whether flaws in the SAIC experiments had been identified since Hyman's concession that he couldn't find any obvious ones. His analysis essentially consists of speculating that there may have been biases in those experiments -- caused, for example, by May being the sole evaluator of when a hit occurred. However, there is a big difference between identifying potential problems and proving that experiments were biased. Now, if the SAIC experiments were the only ones purporting to show that PSI exists, I would agree that they would need to be replicated. But that's not the case, and I don't think you can get around the positive results of the ganzfeld experiments by citing perceived inconsistencies in those results and the results of the SAIC experiments.

Both experimental setups require after-the-fact judging. How is the bias speculation when May, for all intents and purposes, flat out said a bridge could mean a duck? "No obvious flaws in the protocol" means just that, but it has no bearing on how the protocol was executed by the likes of May.

Even the ganzfeld experiments required after-the-fact judging by the receiver, who was sensory-deprived no less. How many people were looking over their shoulder? I can get people to pick the card I want them to nearly every time with nothing more than body language. We as people communicate far more information to each other this way than we do with words, without even being aware of it. It's kind of funny to convince someone they are psychic and then tell them you are going to take your psychic power back from them. They often refuse to believe it when you tell them it was just a trick.

Any test that requires after-the-fact judging is worthless as evidence of non-null results, not because of protocol problems per se but for human-execution reasons. May, after all, didn't believe for a moment that the results were skewed by those choices, and that was an extreme case. We pass more information than that through far subtler cues every time we are within sight of another human.

If we consider this one test bogus, then how can you talk about consistent results across all the experiments, when that one should have shot the results way ahead of the other (supposedly) better-controlled ones?
 
I can get people to pick the card I want them to nearly every time with nothing more than body language.
I think there may be a million dollars in this for you if you can actually do that. ;)

Any test that requires after-the-fact judging is worthless as evidence of non-null results.
Skeptical parapsychologists such as Milton and Wiseman seem to disagree with you, but what kind of experimental evidence would convince you that psi is real?
 
If psi were real, it would be incredibly easy to prove.

These experiments would not have such ambiguous results, and would not require any form of judging if the person tested actually had the ability.
None of this controversy would exist if psi were a real phenomenon.

It would be a simple case of using clearly distinct images:

Remote viewer - I saw the square.
Experimenter - Correct! It was the square.

Remote viewer - I saw the star.
Experimenter - Correct! It was the star.

Over and over.
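
Here is a minimal sketch of the kind of unambiguous forced-choice test being described; the symbols, the trial count, and the viewer_guess() stub are placeholders. If the ability were real, the viewer would simply score at or near 100%, with no judging required:

```python
import random

# Placeholder forced-choice test: a fixed set of clearly distinct targets and
# a simple right-or-wrong tally, so no after-the-fact judging is needed.
SYMBOLS = ["square", "star", "circle", "cross", "waves"]
TRIALS = 100

def viewer_guess(symbols):
    """Stand-in for the remote viewer's call; this one just performs at chance."""
    return random.choice(symbols)

hits = 0
for _ in range(TRIALS):
    target = random.choice(SYMBOLS)   # experimenter's randomly selected target
    if viewer_guess(SYMBOLS) == target:
        hits += 1

print(f"{hits}/{TRIALS} hits (chance expectation: {TRIALS // len(SYMBOLS)})")
```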


Similarly, if someone really were psychic and could talk to the dead, their readings would not be identical to cold-reading techniques. Unfortunately, however, they always are.
The evidence would be clear, unambiguous, and there would be no controversy if people could really talk with the dead.

The only reason this debate rages on, is because many people really want psychic phenomena to be true. So they simply decide it is, and abandon all rational thought on the matter.

Unfortunately “You can’t rationally argue out what wasn’t rationally argued in”.
 
I think there may be a million dollars in this for you if you can actually do that. ;)

It's close to the only trick I know, except well known deck stacking and stuff, but it can be used in so many ways.

Skeptical parapsychologists such as Milton and Wiseman seem to disagree with you, but what kind of experimental evidence would convince you that psi is real?

I started considering a protocol a few days ago using the concept of triple blinding. To omit the judging, it would have to forgo the SAIC and ganzfeld designs. Actually, similar protocols could be used for them, but each run would take weeks, and avoiding the God of the Gaps would take years.

To automate the whole thing I would use random number generators similar to what the Princeton PEAR group used. Each run would be divided into predefined sets large enough to theoretically measure a non-null effect. A predefined group of these sets would then represent a success or failure. Nobody, not the researcher, the subject, or any observer, would have any idea whether any given set or subset is a success or failure until the whole experiment passed or failed. The computer would track and do all judging. By grouping sets this way, each given run would essentially be equivalent to a meta-analysis of multiple previous experiments. In this way there is no statistical analysis required to understand the results. The protocol would be similar to the way you would set the sensitivity of a psychically controlled mouse, if it actually worked. It's either all or nothing, regardless of how small the effect actually is. If you want to do your own statistical analysis after the fact, you have a database on the computer you can sort any way you want.
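
As a rough illustration, here is a minimal sketch of the automated, computer-judged design described above. The set sizes, thresholds, and the subject_response() stub are placeholder assumptions, not a worked-out protocol; the point is only that the machine records everything and that pass/fail is revealed only when the whole run is finished:

```python
import random
import secrets

# Placeholder parameters; all would be fixed and published before the run starts.
TRIALS_PER_SET = 500          # trials in each predefined set
SETS_PER_EXPERIMENT = 20      # predefined sets making up the whole experiment
SET_PASS_THRESHOLD = 270      # hits needed for a single set to count as a success
SETS_NEEDED_TO_PASS = 14      # successful sets needed for the experiment to pass

def subject_response():
    """Stand-in for the subject's binary choice; wire this to real input."""
    return random.randint(0, 1)

trial_log = []       # complete trial-level record, kept for any later analysis
set_results = []     # computed as the run proceeds, but revealed to no one until the end

for set_index in range(SETS_PER_EXPERIMENT):
    hits = 0
    for trial in range(TRIALS_PER_SET):
        target = secrets.randbelow(2)     # stand-in for a hardware RNG target
        guess = subject_response()
        hit = int(guess == target)
        hits += hit
        trial_log.append((set_index, trial, target, guess, hit))
    set_results.append(hits >= SET_PASS_THRESHOLD)

# The computer does all the judging, and only the final verdict is announced.
print("Experiment", "PASSED" if sum(set_results) >= SETS_NEEDED_TO_PASS else "FAILED")
```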

I'll try to write a more complete protocol and post a thread about it for suggestions.
 
Skeptical parapsychologists such as Milton and Wiseman seem to disagree with you, but what kind of experimental evidence would convince you that psi is real?

Experimental evidence that can be replicated by skeptics.

Linda
 
It's close to the only trick I know, except well known deck stacking and stuff, but it can be used in so many ways.
I'll need a demonstration.

I started considering a protocol a few days ago using the concept of triple blinding. To omit the judging, it would have to forgo the SAIC and ganzfeld designs.
I suppose that's a convenient way to avoid confronting positive results. ;)

Actually, similar protocols could be used for them, but each run would take weeks, and avoiding the God of the Gaps would take years.
Which highlights the major shortcoming of Randi's Million Dollar Challenge: Any protocol that involves even weeks of testing is burdensome. To my knowledge, all of the protocols accepted by Randi have involved brief tests.

To automate the whole thing I would use random number generators similar to what the Princeton PEAR group used. Each run would be divided into predefined sets large enough to theoretically measure a non-null effect. A predefined group of these sets would then represent a success or failure. Nobody, not the researcher, the subject, or any observer, would have any idea whether any given set or subset is a success or failure until the whole experiment passed or failed. The computer would track and do all judging. By grouping sets this way, each given run would essentially be equivalent to a meta-analysis of multiple previous experiments. In this way there is no statistical analysis required to understand the results. The protocol would be similar to the way you would set the sensitivity of a psychically controlled mouse, if it actually worked. It's either all or nothing, regardless of how small the effect actually is. If you want to do your own statistical analysis after the fact, you have a database on the computer you can sort any way you want.

I'll try to write a more complete protocol and post a thread about it for suggestions.
A new protocol is always welcome.
 
Experimental evidence that can be replicated by skeptics.

Linda
According to the September 2001 article I referenced in post #89, Julie Milton "noted that when replications published after the Milton-Wiseman cutoff date are added to the database, the accumulated studies do, in fact, achieve statistical significance." So, by the methodology Milton and Wiseman used in 1999 to show that more tightly-controlled ganzfeld experiments do not achieve statistical significance, now they do, even disregarding the fact that they considered ganzfeld experiments that did not conform to the standard ganzfeld protocol.
 

The point Linda was making is that when skeptics like Wiseman attempt to replicate remote viewing or ganzfeld experiments, they obtain no significant results. When organizations or experimenters who are NOT skeptical of the phenomenon replicate the same experiments using (presumably) the same methodology, they sometimes obtain chance results and sometimes obtain statistically significant results. In other words, the beliefs of the experimenter have been tentatively shown to affect the outcome of the experiments - this is a HUGE problem in developing a theory of psi. If the experiments can only be successfully replicated by believers, then how do you establish any sort of scientific commonality about the phenomenon?
 
I'll need a demonstration.
Fair enough.. LOL

I suppose that's a convenient way to avoid confronting positive results. ;)
Positive beliefs do not equal positive results.

Which highlights the major shortcoming of Randi's Million Dollar Challenge: Any protocol that involves even weeks of testing is burdensome. To my knowledge, all of the protocols accepted by Randi have involved brief tests.
Actually, it's not a problem. Randi only tests for hit rates that the applicant claims they can accomplish. He is not testing for effects of arbitrary size. It is the applicant's responsibility to ensure that the protocol will effectively demonstrate their individual talent rather than an arbitrarily sized non-null result. It is a challenge, not a test for undefined effects.
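
To illustrate why a brief test can still be decisive when the applicant claims a specific hit rate, here is a rough sketch of the trial-count arithmetic; the rates and the pass criterion are hypothetical, not Randi's actual terms:

```python
from math import comb

# Hypothetical numbers: a four-choice task (25% by chance) and an applicant
# who claims a 60% hit rate. The pass line is set between the two.
CHANCE_RATE = 0.25
CLAIMED_RATE = 0.60
TRIALS = 50
PASS_HITS = 25

def tail_prob(n, k, p):
    """P(X >= k) for a Binomial(n, p) variable."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(f"Probability of passing by luck:          {tail_prob(TRIALS, PASS_HITS, CHANCE_RATE):.1e}")
print(f"Probability of passing if claim is true: {tail_prob(TRIALS, PASS_HITS, CLAIMED_RATE):.2f}")
```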

A new protocol is always welcome.
Perhaps, but what would be your attitude if such an approach failed to detect anything?

Ideally the protocol should be robust enough, and locked behind the blinding procedure well enough, that all sorts of ideas can be tested without compromising results. This would include adjusting sample set sizes (within reason, and only predefined before the test), sensory deprivation, using only bleevers and/or exceptional testees, playing supposedly mind-altering tones, etc. Since I'm trying to allow it to be adapted to the greatest possible range of weird ideas, I'm bogged down a little with correlations vs. deviations, as with the Global Consciousness Project.
 
