PEAR remote viewing - what the data actually shows

Because that is a mistaken assumption you have just made. You are assuming that the TYPE of data (binary/distributed) makes the difference, regardless of the validity of the data. If the data is simply not valid, it doesn't matter if it's binary, ternary, distributed, one derived from the other, two-tailed normal distribution, or inverted Polynesian blindfolded darts scores. It's still crap data - garbage in, garbage out. The situation is that PEAR's base data was often derived from protocols that clearly allowed flawed or biased results in the measurements.


What you are saying here is that the positive data obtained by PEAR was due to methodological issues not relating to their analysis methods. This is exactly what I've been suggesting is possible!

Zep, I'm arguing that the drop in results reported by PEAR had nothing to do with their analysis method. The article in SkepticReport states that as the analysis method became progressively more "objective", the results disappeared. This is not true (see Tables 4 and 5).

Further, since each set of results was obtained by a different set of flawed test protocols using different gauges, there is no reason to assume that "adjusting" one flawed set to make it "compatible" with another flawed set is going to eliminate any artifacts in either set, nor in the resulting combined set. Thinking logically, it is more likely to introduce artifacts (i.e. errors) than reduce existing ones. Don't you think?

Yes! This is what I've been suggesting from post 1! Now you're in agreement with my point that treating the FIDO and distributive data as binary will introduce an artifact if your hypothesis is true!

The data shows this does not happen.

So, do you agree that the SkepticReport is in error over this?
 
That's OK. What do you think of the papers?

I think Spottiswoode is searching very hard for correlations between things that one would not expect to be correlated.
One might just as well look for correlation between 'claims of positive results found in psi research papers' and the phases of the moon.


I don't understand your point about the MRI scan. Can you explain what you mean?

Are you sure you don't understand? Perhaps you are pulling my leg!
Spottiswoode explores correlation between psi and the intensity of the Global Magnetic Field.
But he does not explore correlations between psi and much larger magnetic fields (e.g., as generated by MRI scanners), when such correlations may be expected to be orders of magnitude larger, and trivially easy to test for.

As they say in the classics ... go figure!
 
Spottiswoode explores correlation between psi and the intensity of the Global Magnetic Field.
But he does not explore correlations between psi and much larger magnetic fields (e.g., as generated by MRI scanners), when such correlations may be expected to be orders of magnitude larger, and trivially easy to test for.

As they say in the classics ... go figure!

I see what you mean. I think the idea was to go back through the AC trials they had already done and look for correlations with the GM field. This way, they do not have to perform any new AC trials, which would cost money and more time. I agree that applying a controlled local field to their remote viewers and testing correlations directly would be the sensible way to go, but they may be just trying to establish whether such research would be worth it by looking at the database they already have.
 
I see what you mean. I think the idea was to go back through the AC trials they had already done and look for correlations with the GM field. This way, they do not have to perform any new AC trials, which would cost money and more time. I agree that applying a controlled local field to their remote viewers and testing correlations directly would be the sensible way to go, but they may be just trying to establish whether such research would be worth it by looking at the database they already have.

So, PEAR has obtained information by remote viewing in at least some cases? Yes or no.
 
What you are saying here is that the positive data obtained by PEAR was due to methodological issues not relating to their analysis methods. This is exactly what I've been suggesting is possible!

Zep, I'm arguing that the drop in results reported by PEAR had nothing to do with their analysis method. The article in SkepticReport states that as the analysis method became progressively more "objective", the results disappeared. This is not true (see Tables 4 and 5).
No. Because you have once again confused a number of concepts. Plus you are drawing a conclusion from two tables of data that is, frankly, stretching a long bow. Let's do this again, slowly:

1) The PEAR methodologies used to derive the base data had flaws in some cases that allowed the possibility of biased results. This was noted in some detail by the critics of the original PEAR paper (Utts et al.) - you have the reference to that already.

2) EVEN SO... PEAR went ahead and tried to lump all these studies together to see if the "paranormal" results became more marked. The PEAR methodology for doing this meta-analysis of these multiple disparate studies was also subject to criticism from the same source due to the possibilities of introducing errors. However, to PEAR's credit, they attempted to conduct this meta-analysis fairly and openly, taking into account subsequent criticisms by adjusting their analysis methods accordingly.

3) AS A RESULT... By the time the data in Tables 4 & 5 were derived, it was clear to all and sundry that there were no "paranormal results" any more. The differences between Tables 4 and 5, regardless of the analysis methods used to derive them, were so minor as to be inconsequential to the overall outcome, mathematically speaking.

4) OVERALL NULL RESULT... It was clear that the problems with the analysis lay back at the beginning of the process - poor data gathering. However, it was when the meta-analysis showed definitively that this was the case that PEAR changed tack.

5) DIVERSION... Given the nature of the beast that PEAR is, other factors came into play in presenting this result. This was the subject of the SkepticReport commentary. It did not go into any of the analysis in huge detail. It was a commentary on WHY the results were presented as they were, and posited WHO they were aimed at.



Yes! This is what I've been suggesting from post 1! Now you're in agreement with my point that treating the FIDO and distributive data as binary will introduce an artifact if your hypothesis is true!

The data shows this does not happen.

So, do you agree that the SkepticReport is in error over this?
You asked how a binary reduction COULD introduce an artifact in the data - I showed you how: quite simply. And given the low resolution of the "effect" being sought, any reduction in data definition (such as binary reduction would do) is not a good thing, I would imagine.
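To put a number on one way that can happen - and this is just a sketch, assuming a particular mechanism (yes/no descriptor matching scored against a nominal 50% chance level) rather than claiming it is the one at work in PEAR's data - a shared skew in how often percipients answer "yes" and how often the descriptor is true in the target pool inflates the binary match rate even when no information is passing at all:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Illustrative numbers only: a percipient who answers "yes" to a descriptor 70% of
# the time, and a target pool in which that descriptor is true 70% of the time.
# The two are generated independently, so no information is being transferred.
percipient_says_yes = rng.random(n_trials) < 0.7
target_is_yes = rng.random(n_trials) < 0.7

matches = percipient_says_yes == target_is_yes

# Expected match rate = 0.7*0.7 + 0.3*0.3 = 0.58, yet it gets scored against 50%.
z = (matches.sum() - 0.5 * n_trials) / np.sqrt(n_trials * 0.25)
print(f"match rate = {matches.mean():.3f}, z against a 50% chance level = {z:.1f}")
```

Whether the skews ran that way in the actual descriptor sets is not something this toy establishes; it only shows that a binarised score tested against an assumed 50% baseline can come out spuriously positive (or negative, if the skews run in opposite directions).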

But did PEAR actually introduce such an artifact in their data as a result? I don't know - a definite maybe on that one. In their paper they used at least two different methods to do this, with subsequent analysis of each. I suspect that any effect was lost in bigger waves of mush as a result.

The SkepticReport commentary refers to both PEAR and Utts et al. in regard to the analysis steps. The position that subsequent analyses revealed less information is one that PEAR themselves concluded in the original paper.

Other than the binary-reduction version, which produced nearly as many extra-chance ‘‘misses’’ as ‘‘hits,’’ the results from the other five methods all displayed relatively close concurrence, marginally significant composite z-scores, and effect sizes only about half that of the ab initio trials and only about a fifth as large as that of the ex post facto subset. Although the proportions of trials with positive scores were above 50% in all the calculations, neither these nor the numbers of significant trials exceeded chance expectation. Clearly, FIDO had not achieved its goal of enhancing the PRP yield, despite its potential sensitivity to subtle or ambiguous informational nuances in the data.
http://www.princeton.edu/~pear/pdfs/jse_papers/IU.pdf, p225
 
No. Because you have once again confused a number of concepts. Plus you are drawing a conclusion from two tables of data that is, frankly, stretching a long bow. Let's do this again, slowly:

1) The PEAR methodologies used to derive the base data had flaws in some cases that allowed the possibility of biased results. This was noted in some detail by the critics of the original PEAR paper (Utts et al.) - you have the reference to that already.

True.

2) EVEN SO... PEAR went ahead and tried to lump all these studies together to see if the "paranormal" results became more marked. The PEAR methodology for doing this meta-analysis of these multiple disparate studies was also subject to criticism from the same source due to the possibilities of introducing errors. However, to PEAR's credit, they attempted to conduct this meta-analysis fairly and openly, taking into account subsequent criticisms by adjusting their analysis methods accordingly.

Still with you.

3) AS A RESULT... By the time the data in Tables 4 & 5 were derived, it was clear to all and sundry that there were no "paranormal results" any more. The differences between Tables 4 and 5, regardless of the analysis methods used to derive them, were so minor as to be inconsequential to the overall outcome, mathematically speaking.

Agreed. And the reason there were no results anymore was not due to the analysis method because the FIDO and distributive data gave no artifactual result when treated as binary or ternary.

4) OVERALL NULL RESULT...

The overall result they got was a p-value of 0.00000003, all studies combined.
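Just for scale - and assuming a one-tailed normal test, which may not be exactly how they combined the studies - that p-value corresponds to a composite z of roughly 5.4:

```python
from scipy.stats import norm

p = 0.00000003             # the overall p-value quoted above
z = norm.isf(p)            # one-tailed z giving this p under a normal approximation
print(f"composite z = {z:.2f}")        # about 5.4
print(f"check: p = {norm.sf(z):.1e}")  # back to 3.0e-08
```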

It was clear that the problems with the analysis lay back at the beginning of the process - poor data gathering.

I don't think so. The potential subjective bias caused by non-random selection of targets was present throughout all trials in all experiments that used volitional strategies. This means that the data gathered using the distributive method would still contain the subjective bias when converted to binary scoring. You don't lose the information this bias has passed on to the data when distributive scores get converted to binary. This converted binary data would be expected to show an artifactual result in line with the original trials that gathered the data using the binary method. But it does not. What you are saying simply doesn't hold up when you look at the data.
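Here is a minimal sketch of that carry-over, assuming a simple additive response bias on a 0-9 distributive scale and a midpoint threshold for the binary conversion (the actual PEAR scoring and FIDO weighting are more elaborate than this):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 10_000

# Unbiased distributive responses: uniform confidence scores 0..9 (pure guessing).
unbiased = rng.integers(0, 10, n_trials)

# The same respondent with a bias towards (say) water descriptors, modelled here
# as a flat +2 shift capped at 9.  Illustrative numbers only.
biased = np.clip(unbiased + 2, 0, 9)

def to_binary(scores, threshold=5):
    """Convert distributive 0-9 scores to yes/no at the midpoint."""
    return scores >= threshold

print("yes-rate, unbiased distributive data:", to_binary(unbiased).mean())  # ~0.50
print("yes-rate, biased distributive data:  ", to_binary(biased).mean())    # ~0.70
```

The sketch only shows that a response-level bias survives the conversion to binary; it does not by itself say how large an artifactual hit rate that bias would then produce.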

You asked how a binary reduction COULD introduce an artifact in the data - I showed you how: quite simply. And given the low resolution of the "effect" being sought, any reduction in data definition (such as binary reduction would do) is not a good thing, I would imagine.

I agree that a binary reduction of the kind you describe could introduce an artifact.

But did PEAR actually introduce such an artifact in their data as a result? I don't know - a definite maybe on that one.

The answer is no because of the results of Tables 4 and 5. The distributive and FIDO data do not show a significant result when treated as ternary and binary. Clearly, the experimental test of your hypothesis failed.
 
David,

So, PEAR has obtained information by remote viewing in at least some cases? Yes or no.

Why do you keep avoiding this perfectly simple question?
 
bumpity bump - or are we all done with this thread now?

I'll take this opportunity to thank all posters for their valuable contributions.
For me it has been very educational.

Is PEAR still being funded?
What is Princeton's latest spin on this department and their workings?
If I poke this stick into the hornets' nest, will they all come out buzzing?
 
I don't think so. The potential subjective bias caused by non-random selection of targets was present throughout all trials in all experiments that used volitional strategies. This means that the data gathered using the distributive method would still contain the subjective bias when converted to binary scoring. You don't lose the information this bias has passed on to the data when distributive scores get converted to binary. This converted binary data would be expected to show an artifactual result in line with the original trials that gathered the data using the binary method. But it does not. What you are saying simply doesn't hold up when you look at the data.
I would disagree with you on this. Let me try to show you the issue by way of analogy.

Imagine you are listening to a radio that is tuned to a specific frequency, and you hear mostly random faint noise (static). And you believe that, among that noise, there MAY be something significant - a voice speaking, perhaps. This is the equivalent of the original phenomenon being measured by PEAR - very faint, almost beyond real measure.

You then turn the radio volume up to full (it is now LOUD static), and apply a filter at the 50% level - any noise level above is set to 100%, any below is set to 0%. You would expect to get a loud machine-gun rattle from the speaker as a result. This is the equivalent of binarisation of the PEAR base data.

Now, do you seriously think there is still the possibility of extracting sufficient data from this resulting machine-gun noise to isolate and measure a possible faint distorted speaking voice? Do you think there is any meaningful purpose in analysis of this noise at all? Beyond showing that it is indeed random binary data? Plain logic says not - any "information" is long swamped out - the resolution of the data is way too low to be useful.

And this is exactly what happened with PEAR's analysis - they got nothing out of their reduced data sets.

And this is also why critics pointed them right back to the source data - in our analogy, the faint static hiss with the possible voice in it. What was REALLY needed was much more stringent controls on the gathering of data - much HIGHER fidelity recording with minimal interference. In PEAR's case, they needed to ensure that their protocols were a lot more stringent about gathering valid data rather than artifacts and distortions in the base data (such as collusion and bias, as mentioned). That's where the real science problems lie, not paralysis by analysis, or hand-waving mumbo-jumbo excuses.



The answer is no because of the results of Tables 4 and 5. The distributive and FIDO data do not show a significant result when treated as ternary and binary. Clearly, the experimental test of your hypothesis failed.
Alternatively, your expectations of what was actually expected from the data and analysis could be at fault. See above.
 
bumpity bump - or are we all done with this thread now?

I'll take this opportunity to thank all posters for their valuable contributions.
For me it has been very educational.

Is PEAR still being funded?
What is Princeton's latest spin on this department and their workings?
If I poke this stick into the hornets' nest, will they all come out buzzing?
Seems not! But I find I'm still going round in circles...
 
I would disagree with you on this. Let me try to show you the issue by way of analogy.

Imagine you are listening to a radio that is tuned to a specific frequency, and you hear mostly random faint noise (static). And you believe that, among that noise, there MAY be something significant - a voice speaking, perhaps. This is the equivalent of the original phenomenon being measured by PEAR - very faint, almost beyond real measure.

You then turn the radio volume up to full (it is now LOUD static), and apply a filter at the 50% level - any noise level above is set to 100%, any below is set to 0%. You would expect to get a loud machine-gun rattle from the speaker as a result. This is the equivalent of binarisation of the PEAR base data.

Now, do you seriously think there is still the possibility of extracting sufficient data from this resulting machine-gun noise to isolate and measure a possible faint distorted speaking voice?


This analogy is deficient in many ways. If anything, what you have is the equivalent of one of the binary questions from the list of 30. Also, you have not included an analogous target. All you have here is the answer to one descriptor question. So, left with just this, of course there is no way to extract any information!

Let me extend the analogy. If you had your initial faint noise and then applied 30 different filters to it and then compared the result of each filter to a control noise source that did not contain the voice pattern, then we might have an analogy that gets near to what is going on in the PEAR experiments.
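Roughly like this, perhaps - a toy sketch with made-up numbers (the real protocols, descriptor lists and scoring methods are more involved): score the same response against the actual target and against a pool of control scenes, and use the control pool as the empirical chance level.

```python
import numpy as np

rng = np.random.default_rng(3)
n_descriptors = 30     # the 30-item binary descriptor list
n_decoys = 500         # control pool of non-target scenes

# A target scene described by 30 yes/no descriptors.
target = rng.random(n_descriptors) < 0.5

# A response that agrees with the target slightly more often than chance:
# each answer is copied from the target with probability 0.2, otherwise random,
# giving an expected agreement of about 60%.
copied = rng.random(n_descriptors) < 0.2
response = np.where(copied, target, rng.random(n_descriptors) < 0.5)

def match_score(resp, scene):
    """Number of descriptors on which the response agrees with a scene."""
    return int(np.sum(resp == scene))

# Empirical chance level: score the same response against decoy scenes.
decoys = rng.random((n_decoys, n_descriptors)) < 0.5
decoy_scores = np.array([match_score(response, d) for d in decoys])

print("score vs. actual target:", match_score(response, target))
print("score vs. decoys: %.1f +/- %.1f" % (decoy_scores.mean(), decoy_scores.std()))
```

The point of the comparison is that chance is estimated from the decoys rather than assumed to be exactly 50% per descriptor.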

But wait, I don't understand why you have gone to such lengths to invent an inadequate analogy that just confuses the issue and also does not address my question. Why can't you explain an answer to my question by referring to the actual processes that went on? Much simpler.

And this is exactly what happened with PEAR's analysis - they got nothing out of their reduced data sets.


They got positive results from their binary descriptor based data sets and overall positive results!

And this is also why critics pointed them right back to the source data - in our analogy, the faint static hiss with the possible voice in it. What was REALLY needed was much more stringent controls on the gathering of data - much HIGHER fidelity recording with minimal interference. In PEAR's case, they needed to ensure that their protocols were a lot more stringent about gathering valid data rather than artifacts and distortions in the base data (such as collusion and bias, as mentioned). That's where the real science problems lie, not paralysis by analysis, or hand-waving mumbo-jumbo excuses.


I agree that the possibility of collusion and bias brings serious doubt upon the data. However, the data from the PEAR paper does not support the hypothesis that the positive results from the binary data collection experiments had anything to do with the method of analysis. Any bias present when participants answered the distributive descriptor questions would be passed on to a binary treatment of the data. For example,

Let's say I get asked: Is there water in the scene?

I am faced with 10 possible answers so I give this question a 7, equivalent to "quite sure". I give this answer because I have a psychological bias towards water.

When converted to binary, this 7 will become a "yes". Clearly, the bias is still responsible for the "yes" answer.

And, as you pointed out before, the conversion of the distributive scores into binary should, according to this kind of hypothesis, increase the incidence of hits.

But the data shows that the distributive data does not produce an artifactual result when treated as binary.

Can't you see that your hypothesis does not stand up to the data?
 
I'll take that as a "Yes", then.

You do believe in paranormal phenomena, then.

I've already told you I don't use the word paranormal. It's meaningless. Don't put words in my mouth. You were wrong on the first bit too. I guess you haven't understood what I've been saying in this thread.
 
I've already told you I don't use the word paranormal. It's meaningless. Don't put words in my mouth. You were wrong on the first bit too. I guess you haven't understood what I've been saying in this thread.
The term 'paranormal' is hardly meaningless in this context. It is a fact that the laws of physics will have to be seriously adjusted if any of this were true. Even the theory of evolution would have to be revised to explain why no living creature has taken evolutionary advantage of remote viewing.

I believe, like Claus, that despite your woolly words, this stuff is something that you believe in. You believe that you have evidence for your beliefs, although in this case rather tenuous, and you are entitled to hold that belief.

On this board you will be hard pressed to get support for the belief that flawed data can somehow be treated statistically to become good data.
 
I've already told you I don't use the word paranormal. It's meaningless. Don't put words in my mouth.

And I've already told you that it doesn't make a whit of difference what words you want to ignore.

You were wrong on the first bit too.

I was wrong? It was "No"?

May I remind you of your earlier posts, when I asked you whether they can, in fact, obtain information by remote viewing:

Very difficult to say from the PEAR data because of the methodological problems. Other labs have got positive results however (see earlier links).

I was referring to their lack of randomisation and lack of safeguards for fraud. Unfortunately, because of this the PEAR data cannot be regarded as evidence for remote viewing, IMO. Having said that, on the issue of randomisation of targets, the data for both volitional and instructed trials got similar positive results. We would expect the volitional trials to get far higher scores if personal bias were at play, which is interesting.

You are extremely careful with your wording, but this is as close as it gets to you being clear:

You do argue that there are positive results by remote viewing.

That means that you believe that paranormal phenomena are real.

You can call it whatever you like, it doesn't change reality.

I guess you haven't understood what I've been saying in this thread.

If I don't understand, it is certainly because of your vague and evasive replies. I - and others - have been trying to get you to state clearly what it is you mean. And believe. You have a hard time doing just that. Don't blame us for your unwillingness to speak clearly.
 
This analogy is deficient in many ways. If anything, what you have is the equivalent of one of the binary questions from the list of 30. Also, you have not included an analogous target. All you have here is the answer to one descriptor question. So, left with just this, of course there is no way to extract any information!
Sorry, but you haven't understood, or I really should have made it simpler for you. My analogy was chosen to represent PEAR's actual methodology and data collection processes in an understandable way. Underneath the larded gobbledegook of their reportage, this is the quality of their research. Other researchers have been particularly scathing for precisely these reasons. But let us push on!

Each data point (or descriptor, as you call it) is the equivalent of taking a noise measurement from your analogous radio speaker at a single point in time. So over a period of time you will generate many such measurements - the equivalent of multiple descriptors. However they will all be binary, and thus all useless, as described in posts above.

Let me extend the analogy. If you had your initial faint noise and then applied 30 different filters to it and then compared the result of each filter to a control noise source that did not contain the voice pattern, then we might have an analogy that gets near to what is going on in the PEAR experiments.
But you are not applying 30 different filters to it. You are applying only a few - reduction of all the data points to a few (usually two) levels.

Look, I'll take my analogy further for you, so you can see where it logically ends up. The PEAR report was actually an aggregation of a number of studies over 25 years, with the data massaged to make them compatible. This is the equivalent of having not just one radio making machine-gun clicks, but many of them all going simultaneously. And the analysis was the result of trying to make sense of the totality of noise they all produced at once.

And what results might we expect, logically, from this sort of exercise? Given that the most detail was in the unfiltered data, it should produce the most "positive" results. But it would also be the most likely to be affected by outside interferences - sunspots, passing interference, etc, etc. But these are very similar to the effect being sought, so it would seem reasonable to assume that any filtering would take out the good stuff too.

And when filtering WAS applied, lo and behold, the effect began to rapidly disappear. So rapidly that by the third and fourth iteration, it was gone. Did you note that PEAR reverted to a lower filtering level when re-analysing some other data later? That's why.

Once again, as urged by the science world, the obvious solution is NOT more data reduction, but better measurements and better quality control. In our analogy, this would be stuff like using high-fidelity digital recorders, working in Faraday cages, and other steps to minimise base data anomalies and maximise base data quality and reliability.

But wait, I don't understand why you have gone to such lengths to invent an inadequate analogy that just confuses the issue and also does not address my question. Why can't you explain an answer to my question by referring to the actual processes that went on? Much simpler.
Because I have answered your question in multiple ways, including an extended analogy I took the time to prepare. And yet you don't seem to have absorbed the rather obvious and salient observations.

Instead, you seem to be trying to nitpick one tiny point on the outer rim of the galaxy of this whole exercise. In what appears to be an attempt to invalidate an entire commentary based on a tiny and quite irrelevant discrepancy.

Can you not understand that the analysis results don't matter if the base data is rubbish? You can't make a nice cake from rotten eggs.




They got positive results from their binary descriptor based data sets and overall positive results!
And yet they reported they got nothing at the end. I quoted this above from the body of their report, and it's in the abstract of their own report. Perhaps you will tell us what we who have read the report many times have missed?




I agree that the possibility of collusion and bias brings serious doubt upon the data. However, the data from the PEAR paper does not support the hypothesis that the positive results from the binary data collection experiments had anything to do with the method of analysis. Any bias present when participants answered the distributive descriptor questions would be passed on to a binary treatment of the data. For example,

Let's say I get asked: Is there water in the scene?

I am faced with 10 possible answers so I give this question a 7, equivalent to "quite sure". I give this answer because I have a psychological bias towards water.

When converted to binary, this 7 will become a "yes". Clearly, the bias is still responsible for the "yes" answer.

And, as you pointed out before, the conversion of the distributive scores into binary should, according to this kind of hypothesis, increase the incidence of hits.

But the data shows that the distributive data does not produce an artifactual result when treated as binary.

Can't you see that your hypothesis does not stand up to the data?
No. For the umpteenth time. Binarising the data CHANGES the results, so it WILL produce artificial positives OR negatives.

I'm getting the impression that you somehow think it amazing that a matrix of data that is damn near all zeroes initially somehow doesn't change a lot when it is binarised...to all zeroes.
 
Can you not understand that the analysis results don't matter if the base data is rubbish? You can't make a nice cake from rotten eggs.
I like the expression GIGO: Garbage In, Garbage Out.
 
I hope that davidsmith73 is within reach. He can have his beliefs for all I care, he just has to face the fact that PEAR is not the answer to his dreams. Maybe the next study. Or the next.
 
