
PEAR remote viewing - what the data actually shows

This from the abstract of the PEAR paper itself:
"
However, over the course of the program there has been a striking diminution of the anomalous yield that appears to be associated with the participants’ growing attention to, and dependence upon, the progressively more detailed descriptor formats and with the corresponding reduction in the content of the accompanying free-response transcripts."

Translation into English: The more rigorous and detailed the analysis, the fewer the RV results.

The descriptor formats and the correlated drop in results refer to how they collect their data, not the analysis. When they perform their analysis using binary, ternary, distributive, etc. methods, they find that the method of analysis does not affect the results. Read the paper carefully and I think you'll eventually see what I mean. It's a tough-going paper, I know.

For example, in Table 5 the remote viewers record their impressions using the distributive method (10 options per question). PEAR then treat this data as if it had been collected using binary, ternary, or various distributive methods. From the table, we can see that data collected and analysed using the distributive method produced no significant results. However, if the significant results of the previous binary experiments (Tables 1-3) were due to the analysis method, then we would expect to see the same artifactual result when the distributive data is treated in the same way. But this does not happen. How do you account for this?
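
To make that re-encoding exercise concrete, here's a minimal sketch in Python (the thresholds, scores, and function names are my own illustrative assumptions, not PEAR's actual scoring procedure) of how a response recorded on a 10-option distributive scale can be collapsed to binary or ternary before scoring:

def to_binary(score):
    # Collapse a 0-9 distributive response to yes/no (1/0).
    # The cut-off of 5 is an assumption for illustration only.
    return 1 if score >= 5 else 0

def to_ternary(score):
    # Collapse a 0-9 distributive response to no/maybe/yes (0/1/2).
    # The cut points are again assumptions for illustration only.
    if score <= 2:
        return 0
    if score <= 6:
        return 1
    return 2

distributive_responses = [7, 1, 9, 0, 4, 8, 2, 6]  # made-up session scores

print([to_binary(s) for s in distributive_responses])   # [1, 0, 1, 0, 0, 1, 0, 1]
print([to_ternary(s) for s in distributive_responses])  # [2, 0, 2, 0, 1, 2, 0, 1]

If the earlier binary-format significance were purely an artifact of the scoring format, re-scoring the distributive data in this way should reproduce it; as Table 5 shows, it does not.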

Followed immediately by this statement:

"The possibility that increased emphasis on objective quantification of the phenomenon somehow may have inhibited its inherently subjective expression is explored in several contexts, ranging from contemporary signal processing technologies to ancient divination traditions."

Translation into English: Never mind the actual results, we saw what we wanted to see, so there.

I think their use of the phrase "increased emphasis on objective quantification" is not meant to imply that their data collection methods gradually reduced artifactual subjective bias. They were just using more options on their descriptor sheets.

The basis of the commentary in Skepticreport is that PEAR use such obscurant language to hide the obvious results. And they do so not to hide from scientists, but to maintain tenure and support from their private sponsors. Jobs and money...jobs and money...

I thought one of the main points of the article was that the drop in results was due to progressive removal of subjective bias from their analysis. I do not think the results show that at all. That's not to say the PEAR RV experiments contain no methodological flaws, because they clearly do, such as inadequate randomisation of targets. I just think it does the sceptical position no good to introduce errors like that in the article, because it obscures the real issues. And yes, the language of the PEAR people is, ahem, individual, but I don't see that as a real problem.
 
I hate to be a bore, but I already addressed this in the third post in this thread.

Let me be more specific.

The PEAR data can be split into five:

Chicago
Initial (binary)
Later (binary)
FIDO
Distributive

Of which the first three produced significant results, and the last two didn't. From what I understood in your opening post, the difference in data collection caused the drop in results. But the Chicago trials didn't use the binary method of collection either, and they contain the best results.

Page 212 describes how 51 of the Chicago and previous PEAR trials were encoded into the binary format (I think these were called the ex post facto group). These experiments did indeed get the best results when encoded into the binary format, which could be due to a number of reasons.

So, according to what the data actually say, the drop in results has nothing to do with the method of data collection.

How so? There seems to be a good relationship in Fig. 4. If we ignore the ex post facto experiments for now (because the free-response data collection is hard to interpret as either closer to binary or distributive), then we can see that the binary method of collection gets the best result, FIDO next, then distributive. Granted, there could be some other reason for this drop, but it's very hard to justify that it's due to the method of data analysis.
 
The descriptor formats and the correlated drop in results refer to how they collect their data, not the analysis. When they perform their analysis using binary, ternary, distributive, etc. methods, they find that the method of analysis does not affect the results. Read the paper carefully and I think you'll eventually see what I mean. It's a tough-going paper, I know.
And I have read it many times over, and am all too familiar with it, thank you.

For example, in Table 5 the remote viewers record their impressions using the distributive method (10 options per question). PEAR then treat this data as if it had been collected using binary, ternary, or various distributive methods. From the table, we can see that data collected and analysed using the distributive method produced no significant results. However, if the significant results of the previous binary experiments (Tables 1-3) were due to the analysis method, then we would expect to see the same artifactual result when the distributive data is treated in the same way. But this does not happen. How do you account for this?
In simple terms, the reduction of distributive data to binary data causes "clumping" of results - rounding errors if you like, which leads to artifacts. Binary data, no matter how it is obtained, cannot then be redistributed to obtain finer granularity for analysis.

For example, you cannot reasonably extract a point value in the range 1 to 10 out of Yes/No base data. Such information can only be obtained at the source, which PEAR did not do in some of their studies. However, to aggregate their data they needed to massage it into some common "format", and the lower granularity was it.
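
As a toy illustration of that aggregation problem (made-up numbers, nothing to do with PEAR's actual data): pooling forces everything down to the coarsest common format, and the mapping is many-to-one, so the finer-grained information cannot be recovered afterwards.

binary_era = [1, 0, 1, 1]        # yes/no only - no magnitude was ever recorded
distributive_era = [7, 0, 2, 9]  # 0-9 scale - magnitude was recorded

# Pooling forces the richer data down to yes/no (non-zero -> 1):
pooled = binary_era + [1 if s > 0 else 0 for s in distributive_era]
print(pooled)  # [1, 0, 1, 1, 1, 0, 1, 1]

# Going the other way is impossible: a 1 in binary_era could have been
# anything from 1 to 9 on the 0-9 scale, so there is no inverse mapping.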



I think their use of the phrase "increased emphasis on objective quantification" is not meant to imply that their data collection methods gradually reduced artifactual subjective bias. They were just using more options on their descriptor sheets.
I don't believe it refers to data gathering at all.

If you read the paper, it refers specifically to the (rather pathetic) apologetics used by PEAR to justify claiming there ARE results even when their own data and analysis show otherwise. They are saying that the results they "wanted" were only present when the data was viewed subjectively. That is, someone simply holds the baseless opinion that the effect is there. Which is the direct antithesis of the objective scientific method, don't you agree?



I thought one of the main points of the article was that the drop in results was due to progressive removal of subjective bias from their analysis. I do not think the results show that at all. That's not to say the PEAR RV experiments contain no methodological flaws, because they clearly do, such as inadequate randomisation of targets. I just think it does the sceptical position no good to introduce errors like that in the article, because it obscures the real issues. And yes, the language of the PEAR people is, ahem, individual, but I don't see that as a real problem.
There were a number of issues facing PEAR when they went to analyse their data:

First, the issue of poor data. As mentioned above and also commented on extensively elsewhere, the quality of the base data being used ranged from poor to mediocre, with glaring irregularities included. The data-gathering methods varied considerably over time (the various studies ranged over 25 years!) and produced disparate data sets. The study methodologies were not particularly similar either, nor were they particularly well conducted. It seemed that ANY study that had "remote viewing" as its premise was included (see other comments above).

Second, the issue of poor arithmetic - a mechanical issue, basically. It stemmed almost entirely from trying to lump the sets of disparate data together and draw viable conclusions from them. As mentioned above, to do this they had to discard some "information" in the process (reducing variant data to binary, etc.). This was also the subject of extensive criticism elsewhere, and the main reason why they went to the lengths of progressively reviewing their analytical techniques (i.e. "improved recipes", as they called it).

However credit where credit is due - the PEAR analysts have not fudged the analysis results, as far as I can determine anyway, and were honest in their reporting of same. That is, they were objective in as far as this went.

But it is the swing away from being objective to being subjective that is the major flaw.
 
Zep may have read it many times and become "familiar" with it.

Understanding it, that is another question.
 
In simple terms, the reduction of distributive data to binary data causes "clumping" of results - rounding errors if you like, which leads to artifacts.

Wouldn't a specific prediction from this hypothesis be that the data gathered using the distributive method and then analysed using the binary method would lead to this artifact? If not, why not?

Binary data, no matter how it is obtained, cannot then be redistributed to obtain finer granularity for analysis.

True. But according to your hypothesis, wouldn't data collected using the distributive method and then treated as if it were collected using the binary method (for example by treating 0-4 as a "yes" and 5-9 as a "no") be the same as data collected using the binary method?

However, to aggregate their data they needed to massage it into some common "format", and the lower granularity was it.

Your hypothesis could be supported if the Chicago free response data, or a new set of experiments using free response rather than descriptor formats, were encoded into a distributive analysis and got insignificant results.
 
davidsmith73:

Putting aside for a moment all the talk about statistics and method ...
Do you think that RV is a real thing?
Do you think a simple test for it can be designed?
And if not, why not?
 
Zep may have read it many times and become "familiar" with it.

Understanding it, that is another question.
I wonder if you have read it at all, let alone understood any of it or the implications it contains.

Then again, no I don't.
 
Wouldn't a specific prediction from this hypothesis be that the data gathered using the distributive method and then analysed using the binary method would lead to this artifact? If not, why not?



True. But according to your hypothesis, wouldn't data collected using the distributive method and then treated as if it were collected using the binary method (for example by treating 0-4 as a "yes" and 5-9 as a "no") be the same as data collected using the binary method?
No, or more accurately, "it depends". Here's a simple example of how it might pan out:

In a binary data point, the result might be 1 if ANY observation of a phenomenon is made, 0 otherwise. In a variant data point, it may be a grade of how "big" that phenomenon might be, from 0 (not seen) to 9 (huge). So ANY value over zero could be taken as a positive result in a binary measuring system. However, the totality of the results in the distributive scoring method might be that they were usually zero, sometimes 1 and occasionally 2. That is, the scores are a really small fraction of the potential values. However, translating to binary (zero/non-zero) will artificially inflate the results.

Let's try this with 10 made-up scores, from a range of 0-9 each.

1 0 1 2 1 0 0 1 2 1; Average = 0.09, stddev = 0.0737864787

Let's convert these to zero/non-zero binary, range 0-1.

1 0 1 1 1 0 0 1 1 1; Average = 0.35, stddev = 0.241522946

Nearly four times larger "positive" result = artificial inflation. (Thanks, Excel!)



Your hypothesis could be supported if the Chicago free response data, or a new set of experiments using free response rather than descriptor formats, were encoded into a distributive analysis and got insignificant results.
Better yet, that the testing was redone completely, taking all the proper precautions that were already specified before they started, rather than trying to massage the data after a poor data-gathering exercise.

However this isn't MY hypothesis at all, it's that of the previous critics of the analysis. I would refer you to them if you have further issues with the statistics - I'm no statistician by a long chalk and never claimed to be. My commentary at Skepticreport was on the mind-set and apologetics that were embodied in the PEAR paper and its subsequent criticism.
 
Let's try this with 10 made-up scores, from a range of 0-9 each.

1 0 1 2 1 0 0 1 2 1; Average = 0.09, stddev = 0.0737864787

Let's convert these to zero/non-zero binary, range 0-1.

1 0 1 1 1 0 0 1 1 1; Average = 0.35, stddev = 0.241522946

Nearly four times larger "positive" result = artificial inflation. (Thanks, Excel!)

Um, I get xbar=.9 and sigma=.74 for the 1st group and xbar=.7 and sigma=.48 for the 2nd...
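
For what it's worth, a quick check in Python (using the sample standard deviation, i.e. an n-1 denominator) reproduces those figures:

from statistics import mean, stdev

distributive = [1, 0, 1, 2, 1, 0, 0, 1, 2, 1]
binary = [1 if x > 0 else 0 for x in distributive]  # zero/non-zero conversion as above

print(round(mean(distributive), 2), round(stdev(distributive), 2))  # 0.9 0.74
print(round(mean(binary), 2), round(stdev(binary), 2))              # 0.7 0.48

My guess is that the values in the quoted post were divided by the number of options on each scale (10 and 2 respectively), which would account for the 0.09/0.074 and 0.35/0.24 figures, but that's only a guess.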
 
If we ignore the ex post facto experiments for now

Why would you do that?

Perhaps I misunderstood your opening post, but you seemed to be saying that the Binary sessions had the greatest hit rate due to the fact that the "receiver" encoded their session directly into the 30 binary descriptors, and that when the method of data collection changed from binary ("yes/no") to other ostensibly more sensitive measures ("yes/maybe/no" etc.) the effect vanished, even when their notes were scored as binary. In my mind the most sensitive method of data collection of all is free response, yet these sessions scored the highest when encoded into the binary responses.

From this I deduce that the method of data collection has no bearing on the score.

EDIT:

As an aside, in 1983 Bierman, Berendsen, Koenen, Kuipers, Louman, and Maissan completed a ganzfeld experiment (i.e. free response) which was also scored according to the 30 descriptors used in the PEAR trials. I'll ignore the trials they did with music as the audio input and focus on the ganzfeld.

In the free response trials, the ganzfeld work scored a 34% hit rate (25% MCE) over 32 trials. When encoded, the same trials scored an equivalent of a 27% hit rate.

This is not the only experiment in which different methods of processing the "information" in a psi experiment throws up wildly different results.
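
As a rough way of gauging that 34% figure against the 25% chance expectation, here's a simple one-tailed binomial calculation in Python. It assumes the 34% corresponds to about 11 hits out of 32, which isn't stated above, so treat it purely as an illustration:

from math import comb

def binomial_tail(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p): chance of k or more hits in n trials.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binomial_tail(11, 32, 0.25))  # roughly 0.15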
 
Why would you do that?

Perhaps I misunderstood your opening post, but you seemed to be saying that the Binary sessions had the greatest hit rate due to the fact that the "receiver" encoded their session directly into the 30 binary descriptors, and that when the method of data collection changed from binary ("yes/no") to other ostensibly more sensitive measures ("yes/maybe/no" etc.) the effect vanished, even when their notes were scored as binary. In my mind the most sensitive method of data collection of all is free response, yet these sessions scored the highest when encoded into the binary responses.

From this I deduce that the method of data collection has no bearing on the score.
Not quite, I suspect. The method of data scoring was (eventually) the same in both cases, i.e. binary, although the actual data point collected was not a binary value.

Note also that these purely mechanical issues were not the only ones to be contended with. The actual methodology employed in a number of the studies left large gaps for doubt that the data was valid at all, even if fairly obtained. For example, not properly ruling out explicit or implicit collusion, using the same people as both senders and percipients, and not examining outlier data but allowing them to influence the overall results beyond their actual worth.

Incidentally, the issue of outlier data could easily have been a starting point to lead to a positive proof of RV for a particular person. However PEAR did not seem to follow through on that, that I can find anyway. No, it was NOT Uri Geller! ;)
 
davidsmith73:

Putting aside for a moment all the talk about statistics and method ...
Do you think that RV is a real thing?
I think it's likely that it's real. What each of us means by "remote viewing" is a different story! For example, if real, I think it's possible that remote viewers are not in any sense "going to" the location they view. It could be that they are getting their information via ESP. In other words, there just needs to be a novel mechanism whereby information about the location is acquired by the remote viewer's brain. Then it's a case of perceiving that information via brain activity. The remote viewers could then merely think their mind, consciousness, spirit, whatever, is in some sense at the location, when in fact their brains are just giving this illusion. There are anecdotal reports of people having OBEs and perceiving things that they claim they could not have known. There's doubt about these stories of course, but if true there could be a similar scenario whereby the brain produces this strange alteration of body image and location of the illusory self (the OBE) and also allows psi information to be acquired, thus "perceiving things they could not have known".

Do you think a simple test for it can be designed?

I think tests for it can be designed. The main flaw the PEAR group made was to perform no randomisation of targets in half their experiments! It is also not clear how they performed the randomisation for the other experiments.
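
For what it's worth, proper target randomisation isn't hard to do. A minimal sketch (the target pool here is entirely hypothetical and this is not PEAR's protocol) might look like this:

import secrets

# Hypothetical target pool - not PEAR's actual target set.
target_pool = ["fountain", "railway bridge", "church interior", "car park"]

def pick_target(pool):
    # Draw one target uniformly at random with a cryptographically strong
    # generator, so the choice cannot be predicted or influenced.
    return pool[secrets.randbelow(len(pool))]

print(pick_target(target_pool))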
 
