PEAR remote viewing - what the data actually shows

davidsmith73

While I realise there are some important methodological issues with the PEAR work on RV, I wanted to present what I think the PEAR data actually shows, based on this paper:

http://www.princeton.edu/~pear/pdfs/jse_papers/IU.pdf

From here on I'll assume that people have read the paper and are familiar with PEAR's methods.

There have been claims put forward, by some people here and by the Skeptic Report, that this paper demonstrates that the PEAR RV positive results were entirely due to subjective biases in the analytical judging procedure. I will argue against this claim.

The first chunk of data that does not support the subjective-bias claim is Table 4. The data were collected using the FIDO method and then run through an array of different scoring methods. Each method produces approximately the same p-value. Notice that binary scoring (which produced highly significant results when the data were collected using the binary method) produced nothing here. If binary scoring were inflating the results, it should do so here too. But it doesn't; it produces worse results than the other methods.

Table 5 shows the same thing. The data were collected using a descriptor array with 10 possible answers to each question (the distributive method) and then analysed using a variety of methods. Again, binary scoring produced the same results as distributive scoring. If the positive results were due to the analysis method being more "subjective", then a genuinely chance-level result, obtained by analysing the data with the distributive method, should have been inflated when the binary method was used. This did not happen.
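To make concrete the kind of check Tables 4 and 5 are reporting, here is a toy sketch of re-scoring one fixed dataset with two different recipes and comparing the resulting empirical p-values. The descriptor data, the weights, and the permutation test are all my own invented illustrations, not PEAR's actual recipes or numbers; the point is only that the same data go in and only the scoring rule changes.

[code]
# Toy sketch only: invented data and recipes, not PEAR's actual algorithms.
import random

def binary_score(response, target):
    # Recipe 1: count of descriptor agreements (response/target are 0/1 lists).
    return sum(r == t for r, t in zip(response, target))

def weighted_score(response, target, weights):
    # Recipe 2: the same agreements, but weighted (weights are hypothetical).
    return sum(w for r, t, w in zip(response, target, weights) if r == t)

def empirical_p(responses, targets, score):
    # Permutation test: how often does pairing responses with shuffled
    # targets score at least as well as the actual pairing?
    observed = sum(score(r, t) for r, t in zip(responses, targets))
    n_perms, exceed = 10_000, 0
    for _ in range(n_perms):
        shuffled = random.sample(targets, len(targets))
        if sum(score(r, t) for r, t in zip(responses, shuffled)) >= observed:
            exceed += 1
    return exceed / n_perms

random.seed(0)
# 20 hypothetical trials, 30 binary descriptors each.
targets   = [[random.randint(0, 1) for _ in range(30)] for _ in range(20)]
responses = [[random.randint(0, 1) for _ in range(30)] for _ in range(20)]
weights   = [1.0 + 0.1 * i for i in range(30)]

print("recipe 1 (binary)   p =", empirical_p(responses, targets, binary_score))
print("recipe 2 (weighted) p =", empirical_p(responses, targets,
      lambda r, t: weighted_score(r, t, weights)))
[/code]

If one recipe were systematically inflating significance, it would show up as a consistently smaller p-value on the same input; the tables show the recipes agreeing instead.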

Conclusion:

The drop in results is most likely due to the method of data collection and not the data analysis.
 
http://www.skepticreport.com/psychics/shapesintheclouds.htm
And deep within that PEAR paper, right after the final and most searching analysis results were posted, is the following admission that the results simply did not appear:

"Once again, there was reasonably good agreement among the six scoring recipes, but the overall results were now completely indistinguishable from chance."
(PEAR, p227, Distributive Scoring)
 
The binary scoring method was based on 30 yes/no descriptors for the possible target. An example would be "Is the target indoors?". I've never actually seen the list of descriptors they used, however.
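Since the actual list isn't reproduced in the paper, here's roughly what the format amounts to, with made-up example questions:

[code]
# Illustration only: these descriptor questions are invented, not PEAR's list.
DESCRIPTORS = [
    "Is the scene indoors?",
    "Is water a significant feature?",
    "Are people present?",
    # ...27 more yes/no questions in the real 30-item format
]

def match_count(percipient_answers, target_answers):
    # Number of descriptors on which the percipient's yes/no answers agree
    # with the target's encoding (both are lists of booleans).
    return sum(p == t for p, t in zip(percipient_answers, target_answers))

percipient = [True, False, True]
target     = [True, True,  True]
print(match_count(percipient, target))  # -> 2 of the 3 shown descriptors agree
[/code]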

The Hansen, Utts, Markwick paper I linked to earlier suggested that a "best guessing" technique was available, meaning that the odds against chance quoted in the PEAR paper could be much too high.
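To illustrate the worry (with my own toy numbers, not anything taken from Hansen, Utts and Markwick): if the descriptor answers are not 50/50 across the target pool, someone who simply guesses the majority answer every time scores above the chance level a naive analysis assumes.

[code]
# Toy numbers only: shows why skewed descriptor base rates let a "best
# guessing" strategy beat the naive 50% chance expectation.
import random

random.seed(1)
N_DESCRIPTORS = 30
# Hypothetical base rates, e.g. most targets outdoors, most without water...
base_rates = [random.uniform(0.2, 0.8) for _ in range(N_DESCRIPTORS)]

def random_target():
    return [random.random() < p for p in base_rates]

# Strategy: always give the majority answer for each descriptor.
best_guess = [p >= 0.5 for p in base_rates]

targets = [random_target() for _ in range(1000)]
hits = [sum(g == t for g, t in zip(best_guess, tgt)) for tgt in targets]

print("mean hits with best guessing:", sum(hits) / len(hits))  # well above 15
print("naive chance expectation:   ", N_DESCRIPTORS / 2)       # 15.0
[/code]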

The drop in results is most likely due to the method of data collection and not the data analysis.

Remember that the earliest set of results, the Chicago data, scored the highest, but the data was not collected with the binary method. They took a previously successful (but flawed*) RV experiment and reinterpreted the data to fit their 30 descriptors. From the paper:

As a first test of this approach [ie, the binary scoring method], one series of eight trials from the earlier Chicago database was encoded ex post facto into the binary format by five independent encoders.

[...]

With growing confidence in the viability of this analytical methodology, an additional 51 prior trials from Chicago and PEAR were then transcribed into the new descriptor format, increasing the total number of ex post facto–encoded trials to 59, comprising all the original human-judged trials that met formal protocol criteria and had adequate target documentation to permit such retrospective encoding.

* According to Stokes, the person judging the session was given a photo of the target along with some decoys. The photo of the actual target was taken on the day of the trial. Therefore temporal cues concerning season and weather would come into play.
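The paper doesn't say how the five independent encodings of each transcript were reconciled, so purely as an illustration of what could be done with them, here's a majority vote plus a crude agreement measure (both my own assumptions, not PEAR's procedure):

[code]
# Assumption-laden sketch: majority vote and unanimity rate are my own
# illustrations; the paper does not describe PEAR's reconciliation step.

def majority_vote(encodings):
    # encodings: one 0/1 list per encoder, all the same length.
    # Returns a consensus yes/no per descriptor.
    n = len(encodings)
    return [sum(col) > n / 2 for col in zip(*encodings)]

def unanimity_rate(encodings):
    # Fraction of descriptors on which every encoder gave the same answer.
    cols = list(zip(*encodings))
    return sum(len(set(col)) == 1 for col in cols) / len(cols)

# Five hypothetical encoders, three descriptors shown.
encodings = [
    [1, 0, 1],
    [1, 0, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
]
print(majority_vote(encodings))   # [True, False, True]
print(unanimity_rate(encodings))  # 0.0 - no descriptor is unanimous here
[/code]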
 
"Once again, there was reasonably good agreement among the six scoring recipes, but the overall results were now completely indistinguishable from chance."
(PEAR, p227, Distributive Scoring)

The results were at chance level when they collected their data using the distributive descriptor method (10 options for each question).

The interesting thing is that they then treated this data set as if a binary, tertiary, FIDO, etc., method had been employed to collect the data. As PEAR point out, none of these methods changed the result.
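Mechanically, "treating the distributive data as if it had been collected with the binary method" just means collapsing each 0-9 answer down to yes/no (or to three levels, and so on) before scoring. The cut points below are my own guesses at how such a collapse might look, not the paper's exact mapping:

[code]
# Hypothetical collapse rules; the paper's exact mapping may differ.

def to_binary(level):          # level: 0-9 confidence that a descriptor applies
    return level >= 5

def to_ternary(level):
    if level <= 3:
        return "no"
    if level >= 6:
        return "yes"
    return "unsure"

def rescore_binary(levels, target_bits):
    # Collapse each distributive answer to yes/no, then count agreements
    # with the binary target encoding.
    return sum(to_binary(l) == t for l, t in zip(levels, target_bits))

# One example trial: five descriptors answered on the 0-9 scale.
levels      = [9, 1, 6, 2, 7]
target_bits = [True, False, True, True, True]
print(rescore_binary(levels, target_bits))  # -> 4 agreements
[/code]

If collapsing to the coarser format systematically inflated scores, Table 5 would show the binary recipe outperforming the distributive one on the same data; it doesn't.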

If what the Skeptic Report says were true, we would expect this distributive dataset to produce artifactual positive results, with the most significant coming from the binary method.

If you read the paper carefully, PEAR carried out a set of experiments that test the idea that their results were due to some artifact of the analysis. What they found was that the analysis does not affect the results. Rather, it is something about the way in which the participants are forced to transfer their conscious impressions into data.
 
David,

What part of this:

"Once again, there was reasonably good agreement among the six scoring recipes, but the overall results were now completely indistinguishable from chance."
(PEAR, p227, Distributive Scoring)

do you have a problem with?
 
David,

What part of this:

"Once again, there was reasonably good agreement among the six scoring recipes, but the overall results were now completely indistinguishable from chance."

do you have a problem with?


None of it. Why would I have a problem with it?
 
Pseudoskeptics are against meta-analysis when it shows significance, but for it when it doesn't, apparently.
 
None of it. Why would I have a problem with it?

Then, why are you going on about this?

They admit they have no evidence of a paranormal phenomenon. What's to talk about?

Pseudoskeptics are against meta-analysis when it shows significance, but for it when it doesn't, apparently.

What are you talking about? Please provide evidence of your claim.

Provide something. Contribute something.


Why is it "interesting"?

Provide something. Contribute something.
 
Better take down skeptickreport.com then; there's nothing to talk about after all. Better disband JREF, CSICOP, CFI, etc. There's nothing to talk about after all.

Does your logic make any sense?
 
Then, why are you going on about this?

They admit they have no evidence of a paranormal phenomenon. What's to talk about?

They do not admit they have no evidence of RV. They clearly state that they think they do have evidence. I'm going on about this because the article on PEAR RV in the Skeptic Report is heavily flawed. I've just explained above why it is flawed. I've just explained what I think the PEAR data actually shows. It doesn't show that the effects disappear when more objective analyses are performed. It shows that the method of analysis does not affect the results.

Do you agree with my points?
 
They do not admit they have no evidence of RV. They clearly state that they think they do have evidence. I'm going on about this because the article on PEAR RV in the Skeptic Report is heavily flawed. I've just explained above why it is flawed. I've just explained what I think the PEAR data actually shows. It doesn't show that the effects disappear when more objective analyses are performed. It shows that the method of analysis does not affect the results.

Do you agree with my points?
No, because (to quote the original critics of the data):
Quoting the Hansen, Utts, Markwick conclusions in their entirety best summarizes the statistical verdict of this particular critique.

"The PEAR remote-viewing experiments depart from commonly accepted criteria for formal research in science. In fact, they are undoubtedly some of the poorest quality ESP experiments published in many years. The defects provide plausible alternative explanations. There do not appear to be any methods available for proper statistical evaluation of these experiments because of the way in which they were conducted."
(Hansen Utts Markwick, p107, Conclusions)
As evidenced by their own admissions:
For example, on page 237 of the paper, the following startling admission is made, seemingly inadvertently:

"Recall, for example, that the early exploratory trials, where percipients did not know the identity of the agent or the time of target visitation, produced completely null results…"
(PEAR, p237, Section X – From Analysis to Analogy)
In short, the only "evidence" that PEAR actually has is the product of poor data gathering initially plus poor meta-analysis at this stage. By their own admission in this very paper, more refined analysis in stages eventually revealed NO evidence of RV at all.

But the sad part was that this outcome did not stop them claiming there actually WAS evidence - they concluded DESPITE these results. Do you not agree this is somewhat strange?
 
No, because (to quote the original critics of the data):

Quoting the Hansen, Utts, Markwick conclusions in their entirety best summarizes the statistical verdict of this particular critique.

"The PEAR remote-viewing experiments depart from commonly accepted criteria for formal research in science. In fact, they are undoubtedly some of the poorest quality ESP experiments published in many years. The defects provide plausible alternative explanations. There do not appear to be any methods available for proper statistical evaluation of these experiments because of the way in which they were conducted."
(Hansen Utts Markwick, p107, Conclusions)


That paper is a good critique of the PEAR methods. Why do you think it demonstrates that their drop in RV results was due to more objective analysis?

By their own admission in this very paper, more refined analysis in stages eventually revealed NO evidence of RV at all.

That's incorrect. I've already explained this. The method of analysis did not change the results at all. Look at Tables 4 and 5. Do you understand my points?
 
Better take down skeptickreport.com then; there's nothing to talk about after all. Better disband JREF, CSICOP, CFI, etc. There's nothing to talk about after all.

Does your logic make any sense?

It's "SkepticReport.com".

What are you talking about? I am not suggesting that we don't talk about paranormal phenomena. I am talking about the futility of talking about whether the PEAR people are right or not. They are clearly not.

Why is the video you linked to interesting? What points does it make that you find interesting? Why do you find the points interesting?

Contribute something. Don't be such an empty shell.
 
That paper is a good critique of the PEAR methods. Why do you think it demonstrates that their drop in RV results was due to more objective analysis?



That's incorrect. I've already explained this. The method of analysis did not change the results at all. Look at Tables 4 and 5. Do you understand my points?
This from the abstract of the PEAR paper itself:
However, over the course of the program there has been a striking diminution of the anomalous yield that appears to be associated with the participants’ growing attention to, and dependence upon, the progressively more detailed descriptor formats and with the corresponding reduction in the content of the accompanying free-response transcripts.
Translation into English: The more rigorous and detailed the analysis, the weaker the RV results.

Followed immediately by this statement:
The possibility that increased emphasis on objective quantification of the phenomenon somehow may have inhibited its inherently subjective expression is explored in several contexts, ranging from contemporary signal processing technologies to ancient divination traditions.
Translation into English: Never mind the actual results, we saw what we wanted to see, so there.

Is this not clear yet? The basis of the commentary in the Skeptic Report is that PEAR use such obscurantist language to hide the obvious results. And they do so not to hide from scientists, but to maintain tenure and support from their private sponsors. Jobs and money...jobs and money...
 
Conclusion:

The drop in results is most likely due to the method of data collection and not the data analysis.

I hate to be a bore, but I already addressed this in the third post of this thread.

Let me be more specific.

The PEAR data can be split into five:

Chicago
Initial (binary)
Later (binary)
FIDO
Distributive

Of these, the first three produced significant results and the last two didn't. From what I understood of your opening post, the difference in data collection caused the drop in results. But the Chicago trials didn't use the binary method of collection either, and they contain the best results.

So, according to what the data actually say, the drop in results has nothing to do with the method of data collection.
 
