
A question for Zep - PEAR

davidsmith73 (Graduate Poster · Joined Jul 25, 2001 · 1,697 messages)
Zep,

I read your Sceptic Report article on the PEAR remote perception experiments. Although the PEAR paper that is the focus of the article clearly shows that their different methods of data collection result in a decline in significance, nowhere in your article do I see an explanation of why this happens with reference to what they actually did. I was wondering if you could clear it up here. What was it specifically about their methods that led to a decline in results?

Cheers
 
davidsmith73 said:
What was it specifically about their methods that led to a decline in results?
Well, a number of far more erudite and capable people than I have already commented extensively on PEAR's data gathering, scoring and analysis in this regard, so if you are after that sort of detail you should look for it elsewhere. I didn't go down that route because it has already been done by others in spades. But I'll try to give a brief rundown as I saw it:

The focus of my commentary on the analysis results was not the methods of data collection per se but the scoring methodology used to analyse the results. This seems to have varied greatly from one study to another (they were done in many places by many organisations over 25 years), but PEAR tried to shoehorn them all together into a single overall analysis to look for support for "remote viewing". I didn't try to take that meta-analysis apart; I was prepared to accept it at face value for the purposes of getting to the heart of my own arguments.

So at the start of the analysis phase, using the initial data as provided from the disparate studies, PEAR believed they had something going on - there was something slightly anomalous in the results that looked like RV at work. However, this meta-analysis was shown to suffer from multiple scoring and statistical problems, any one of which could easily account for the apparent anomalies.
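To make the scoring problem concrete, here is a rough simulation offered purely as an illustration - it is not PEAR's data or their scoring code, and every number in it is invented. It generates chance-level trials, "scores" each simulated study several different ways, keeps whichever scoring rule looks best, and pools the results; the apparent anomaly that pops out is entirely an artifact of the flexible scoring.

```python
# Purely illustrative sketch, NOT PEAR's data or scoring code: it simulates
# chance-level trials and shows how re-scoring each study several ways and
# keeping the most favourable result can manufacture an apparent anomaly.
# As a crude stand-in, each alternative scoring rule is modelled as an
# independent look at fresh chance data; all numbers here are invented.
import math
import random

random.seed(1)

def z_score(hits, n, p=0.5):
    """Normal-approximation z for `hits` successes out of `n` chance-p trials."""
    return (hits - n * p) / math.sqrt(n * p * (1 - p))

N_TRIALS = 200        # trials per hypothetical study
N_STUDIES = 30        # hypothetical studies pooled together
N_RULES = 10          # different ways of scoring the same raw transcripts

honest, flexible = [], []
for _ in range(N_STUDIES):
    # One scoring rule fixed in advance: chance data stays at chance.
    hits = sum(random.random() < 0.5 for _ in range(N_TRIALS))
    honest.append(z_score(hits, N_TRIALS))

    # "Re-score" the study several ways and keep whichever z looks best.
    candidates = [z_score(sum(random.random() < 0.5 for _ in range(N_TRIALS)), N_TRIALS)
                  for _ in range(N_RULES)]
    flexible.append(max(candidates))

def pooled_z(zs):
    """Stouffer-style combination of per-study z scores."""
    return sum(zs) / math.sqrt(len(zs))

print(f"pooled z, one fixed scoring rule:        {pooled_z(honest):+.2f}")
print(f"pooled z, best of {N_RULES} rules per study: {pooled_z(flexible):+.2f}")
```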

For example, the data from "Subject 10" was one such anomaly - his results completely skewed everything, such that if his data was removed you got nothing but chance results. This immediately raised eyebrows with the reviewers, because apparently the base data also showed that Subject 10 provided a large proportion of the total data, that he was used very frequently as both receiver AND transmitter, and that he knew his opposite number in each (supposedly independent) test.
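Again, just as an invented illustration (these are not PEAR's actual figures): pool ten hypothetical subjects who sit exactly at chance with one hypothetical "Subject 10" who contributes more trials than the rest combined and scores above chance. The pooled result looks impressive with him in and collapses to chance with him out, which is exactly why his dominance of the database raised eyebrows.

```python
# Toy example with invented numbers, NOT PEAR's actual figures: it shows how
# one prolific participant can single-handedly drive a pooled statistic.
import math

def z_score(hits, n, p=0.5):
    """Normal-approximation z for `hits` successes out of `n` chance-p trials."""
    return (hits - n * p) / math.sqrt(n * p * (1 - p))

# (hits, trials) per hypothetical subject: ten subjects at chance overall...
ordinary_subjects = [(51, 100), (48, 100), (50, 100), (53, 100), (47, 100),
                     (49, 100), (52, 100), (50, 100), (46, 100), (54, 100)]
# ...plus one "Subject 10" with more trials than the rest combined, above chance.
subject_10 = (640, 1100)

def pooled_z(subjects):
    hits = sum(h for h, _ in subjects)
    trials = sum(n for _, n in subjects)
    return z_score(hits, trials)

print(f"pooled z with Subject 10:    {pooled_z(ordinary_subjects + [subject_10]):+.2f}")
print(f"pooled z without Subject 10: {pooled_z(ordinary_subjects):+.2f}")
```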

Anyway, PEAR did the right thing and went back and (a) rescored* all the base data to make it more consistent between studies, and (b) did better analysis to reduce the possibility of statistical artifacts (the FIDO program). This was repeated four times, and each time, as the analysis got more refined, the effects got smaller, until they became indistinguishable from chance. And that is what they reported, although obviously not with much relish.

Does this help?


*"Rescoring" does not mean obtaining the data again. It means the translation of raw output into consistently manageable data, sort of like marking exam papers.
 
