
The Parapsychological Experimenter Effect

She participated from the very beginning. I saw one early "Tech Report" that showed a hand-held device that Dunne was able to take home and use to produce "data".
But how does that prove that Operator 10 was Dunne? Also, where is the proof that Operator 10 engaged in fraud? According to the New Scientist article: "[Jahn] believes that the recording procedures at PEAR are unusually tight and any fiddling with results would have to be systematic because it would have to include the laboratory's computer database, the print-outs and subjects' entries in the logbook. Jahn adds that sceptics have had a longstanding invitation to check his work first-hand and the few that have dropped by seem to have left relatively impressed."
 
For starters we have this great quote:
http://www.princeton.edu/~pear/pdfs/correlations.pdf

pg 2.
The enigma of consciousness continues to interest some contemporary physicists in such contexts as the non-locality/EPR paradox/Bell's theorem debates [19], single photon interference [20], causality violations in thermodynamics [21], neurophysics [22], complexity and chaos theory [23], and numerous other aspects of quantum epistemology and measurement [24, 25], once again without much resolution. Indeed, although a myriad of theoretical and empirical attempts have been made to define the elusive concept of consciousness itself, curiously little agreement on its origins, substance, characteristics, or functions has yet been achieved.


pg 3
II. Equipment and Protocol

I note that the issues of how and where trials were set up and generated are not addressed: there is no mention of trial schedules or locations. There is a discussion of recording protocols, but no discussion of samples taken to check the performance of the number generators against baseline. It also does not appear that trials were assigned on an equal basis to all participants.

Pg. 5
Statistical data from benchmark REG experiments, listed for passive calibrations (CAL); operator high intentions (HI), low intentions (LO), and null intentions (BL); and HI–LO separations.

Parameter      CAL          HI           LO           BL          HI–LO
Nt          5,803,354    839,800      836,650      820,750     1,676,450
μ              99.998     100.026       99.984      100.013        —
st              7.075       7.070        7.069        7.074        —
ss              0.002       0.006        0.006        0.006        —
dμ             –0.002       0.026       –0.016        0.013       0.042
sμ              0.003       0.008        0.008        0.008       0.011
zμ             –0.826       3.369       –2.016        1.713       3.809
pμ              0.409*    3.77×10^–4     0.0219       0.0867*   6.99×10^–5
S.I.D.           —          0.523        0.536        0.502†      0.569
O.I.D.           —          0.623        0.473        0.593†      0.516

KEY
Nt: Number of trials (200 binary samples each)
μ: Mean of trial score distribution
st: Standard deviation of trial score distribution
ss: Measurement uncertainty (statistical) in the observed value of st; ss = s0 / sqrt(2·Nt), where s0 = sqrt(50) ≈ 7.071 is the theoretical trial standard deviation
dμ: Difference of mean from theoretical chance expectation (μ0 = 100); dμ = μ – μ0 for HI and LO; dμ(HI–LO) = μ(HI) – μ(LO) = dμ(HI) – dμ(LO)
sμ: Measurement uncertainty (statistical) in the observed value of dμ; sμ = s0 / sqrt(Nt) for HI and LO; sμ(HI–LO) = s0 · sqrt(1/Nt(HI) + 1/Nt(LO))
zμ: z-score of mean shift; zμ = dμ/sμ (calculated with full precision from raw data values, not from the rounded values presented above in the table)
pμ: One-tail probability of zμ (two-tail for CAL and BL)
S.I.D.: Proportion of series having zμ in the intended direction
O.I.D.: Proportion of operators with overall results in the intended direction
* p-values for CAL and BL are two-tailed due to lack of intention.
† BL is treated as in the intended direction when positive.

Please note the part I bolded and underlined.

Look at the standard deviation: why does the trial standard deviation come out at around 7?
And WHAT!

This is greater than the alleged effect.

Gosh Rodney, what does that mean?
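
To put numbers on that, here is a minimal recomputation sketch in Python, using only values transcribed from the table above (the LO z-score lands at –2.07 rather than the table's –2.016 because the printed dμ is rounded; the paper says the z's were computed at full precision). The point: the claimed shifts are a few thousandths of one trial's standard deviation, and only the enormous trial counts produce significant z-scores.

from math import sqrt

S0 = sqrt(50)                        # theoretical SD of one 200-bit trial (~7.07)
nt = {"HI": 839_800, "LO": 836_650}  # trial counts from the table
d_mu = {"HI": 0.026, "LO": -0.016}   # mean shifts from the table

for cond in ("HI", "LO"):
    se = S0 / sqrt(nt[cond])         # standard error of the mean
    print(f"{cond}: shift {d_mu[cond]:+.3f} = "
          f"{abs(d_mu[cond]) / S0:.4f} trial SDs, z = {d_mu[cond] / se:+.2f}")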
 
Has any skeptic discredited PEAR's methodology?

I guess it all depends upon what you mean by discredited. It has been discredited in the sense that inconsistencies have been identified. Possible sources for those inconsistencies have been suggested, but the opportunity doesn't really exist to narrow it down further than that.

Linda
 
I guess it all depends upon what you mean by discredited. It has been discredited in the sense that inconsistencies have been identified. Possible sources for those inconsistencies have been suggested, but the opportunity doesn't really exist to narrow it down further than that.

Linda
You and others here insinuate that, in the RNG experiments, Operator 10 must have committed fraud, and yet I have yet to hear any of you offer a hypothesis as to how the alleged fraud took place. Do you have a hypothesis?
 
You and others here insinuate that, in the RNG experiments, Operator 10 must have committed fraud, and yet I have yet to hear any of you offer a hypothesis as to how the alleged fraud took place. Do you have a hypothesis?

I'm saying that Operator 10's data is inconsistent with the rest of the data, and that this inconsistency accounts for the minuscule effect in the PEAR data. Whether this is due to fraud or some other abnormality in the data collection or equipment, I don't know. I can speculate, but I don't have the opportunity to investigate this. If it's fraud, then the principal investigators are in the best position to undertake that fraud.

There's still a problem with the data that needs an explanation, regardless of whether or not an explanation is forthcoming.

Linda
 
You and others here insinuate that, in the RNG experiments, Operator 10 must have committed fraud, and yet I have yet to hear any of you offer a hypothesis as to how the alleged fraud took place. Do you have a hypothesis?

Um, no fraud; sample bias is enough. Was there a handheld device? How was data transferred and recorded on it? How does that fit into a protocol? Did it have local editable storage?

Why is the 'effect' less than the standard deviation?
 
D D,
Not only was there a hand-held device that Dunne reportedly took home, but there was an earlier mechanical version of a pachinko game where plastic balls tumbled down through an array of pegs to generate a normal distribution. This device was mounted on a wall and people sat in front of it, trying to induce shifts in the distribution.
If the meta-analysis covered all the RNG studies going back to 1959, as I have seen stated elsewhere, then data from these early devices were included.
Errors in transcribing data from electromechanical counters to data analysis tables are not unheard of outside of parapsychology labs. http://www.envisionsoftware.com/Management/Rosenthal_Effect.html
 
Hmmm, then it must be hard to tell that there is an effect at all. How can you tell it from 'noise'? :)

An estimate of the standard error of the mean is calculated by:

se = sd / sqrt(n)

Where:

se = standard error of the mean.
sd = standard deviation of the sample.
n = sample size.

Thus arbitrarily small differences in the mean may be detected in noisy data by increasing the number (n) of samples taken.

Wiki (as ever) has a page on it:

http://en.wikipedia.org/wiki/Standard_error_(statistics)
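
As an illustration, here is a quick simulation sketch (Python; the 0.026 per-trial shift is borrowed from the HI column of the table quoted earlier, and the normal approximation is my shortcut, since each trial already sums 200 binary samples). The same tiny bias is invisible at a thousand trials and only emerges from the noise at PEAR-scale trial counts:

import random
from math import sqrt

random.seed(1)

MU0 = 100.0      # chance mean of a 200-bit trial
S0 = sqrt(50)    # chance SD of a 200-bit trial (~7.07)
SHIFT = 0.026    # hypothetical per-trial mean shift (HI column above)

def z_score(n_trials):
    # Approximate each trial score as normal(MU0 + SHIFT, S0)
    mean = sum(random.gauss(MU0 + SHIFT, S0) for _ in range(n_trials)) / n_trials
    se = S0 / sqrt(n_trials)   # the standard error shrinks as 1/sqrt(n)
    return (mean - MU0) / se

for n in (1_000, 100_000, 839_800):
    print(f"{n:>7} trials: z = {z_score(n):+.2f}")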
 
An estimate of the standard error of the mean is calculated by:

se = sd / sqrt(n)

Thus arbitrarily small differences in the mean may be detected in noisy data by increasing the number (n) of samples taken.

Okay so we have

SE ≈ 7 / 1580.25 ≈ 0.0044

Now the issue is that they say the effect is 4 bits out of 10,000; how does the SE figure into that? The SE is based upon a trial SD of ~7 and trial runs of 2,497,200 (the HI, LO, and BL counts combined).

So now which way do you figure the standard error? If they say 10,000 bits, that is fifty trials, so is the standard error going to be ~50 × 0.0044 = 0.22?
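
If it helps, here is the scaling worked both ways (my arithmetic, in Python, using the HI column from the table): expressed per 10,000 bits, the effect and the standard error rescale together, so the z-score comes out identical. The SE doesn't get multiplied by 50 on its own.

from math import sqrt

S0 = sqrt(50)          # SD of one 200-bit trial
N_TRIALS = 839_800     # HI trials from the table
SHIFT = 0.026          # HI mean shift per trial

# Per-trial basis
z_trial = SHIFT / (S0 / sqrt(N_TRIALS))

# Re-blocked into 10,000-bit units (50 trials per block): the shift,
# the SD, and the number of units all rescale together.
BLOCK = 50
n_blocks = N_TRIALS / BLOCK
shift_block = SHIFT * BLOCK          # 1.3 bits per 10,000
sd_block = S0 * sqrt(BLOCK)          # SD of one block score (= 50)
z_block = shift_block / (sd_block / sqrt(n_blocks))

print(round(z_trial, 3), round(z_block, 3))   # the two z-scores match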
 
Not a hurdle at all; there would be plenty of volunteers who would be very willing to participate in such a study. You would not even have to pay them the nominal fee. I assume there are many pro- and anti-psi believers who would volunteer.
Yes, I had a rethink, and 20 volunteers each putting in an hour of their time ought to show statistical significance if the effect they claim is real.
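
As a rough check on that, here is a back-of-envelope power sketch in Python (the two-trials-per-second pace is purely my assumption, since nothing in the thread gives one): the HI effect size from the table needs on the order of 284,000 trials to reach z = 1.96, which works out to roughly two hours apiece for 20 volunteers rather than one.

from math import sqrt, ceil

S0 = sqrt(50)      # theoretical SD of a 200-bit trial
D = 0.026          # claimed HI mean shift per trial (from the table)
Z = 1.96           # two-tailed 5% criterion

# Solve z = D / (S0 / sqrt(N)) for the total trial count N
n_needed = ceil((Z * S0 / D) ** 2)

VOLUNTEERS = 20
PACE = 2 * 3600    # assumed pace: 2 trials per second, per volunteer-hour
hours_each = n_needed / VOLUNTEERS / PACE
print(n_needed, "trials in total ->", round(hours_each, 1), "hours per volunteer")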
 
You and others here insinuate that, in the RNG experiments, Operator 10 must have committed fraud, and yet I have yet to hear any of you offer a hypothesis as to how the alleged fraud took place. Do you have a hypothesis?
Committing fraud would be very easy. There is a mechanism for excluding corrupt data from the database, so why couldn't the same mechanism be used to exclude unfavourable runs?

However, as I described before, you don't even have to allege fraud or even excessive carelessness to explain this result. A very slight looseness in laboratory practice would do.
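
To show how little looseness it takes, here is a toy simulation of my own construction (nothing documented about the PEAR setup): start with perfectly chance-level runs, quietly drop the worst few percent as "corrupt", and a shift of the same order as the 0.026 in the table appears out of nothing.

import random
from math import sqrt

random.seed(2)
MU0, S0 = 100.0, sqrt(50)
TRIALS_PER_RUN = 1_000
N_RUNS = 2_000

# Mean score of each run under pure chance (normal approximation)
se_run = S0 / sqrt(TRIALS_PER_RUN)
runs = [random.gauss(MU0, se_run) for _ in range(N_RUNS)]

def overall_shift(kept):
    return sum(kept) / len(kept) - MU0

# Honest analysis: nothing beyond noise
print(f"all runs kept:     {overall_shift(runs):+.4f}")

# 'Loose' analysis: quietly drop the lowest 5% of runs,
# e.g. by labelling them corrupt or aborted
kept = sorted(runs)[int(0.05 * N_RUNS):]
print(f"worst 5% dropped:  {overall_shift(kept):+.4f}")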
 
Committing fraud would be very easy. There is a mechanism for excluding corrupt data from the database, so why couldn't the same mechanism be used to exclude unfavourable runs?
Do you apply this same standard to non-psi experiments? Again, according to Jahn: "[T]he recording procedures at PEAR are unusually tight and any fiddling with results would have to be systematic because it would have to include the laboratory's computer database, the print-outs and subjects' entries in the logbook."

However, as I described before, you don't even have to allege fraud or even excessive carelessness to explain this result. A very slight looseness in laboratory practice would do.
But why would this looseness apply disproportionately to Operator 10?
 
I once tried to read one of Rupert Sheldrake's books, and he had an interesting method of massaging statistics to support his result.

Before I gave up and threw the book away in disgust, I read that the results in his "sense of being stared at" experiments were only significant if you made a second-order meta-analysis of multiple different trials. Or something. There wasn't a statistically significant result in any of the trials, but when you turned the data sideways and added it to itself, then calculated the difference in results of different trials, there was a statistically significant difference in the list of differences.

Or something. I failed statistics in high school, but even to me it seemed that if you have to massage the data so much in order to get a result, then the result you're getting is a consequence of your massaging the data, not of a real effect.
 
