
The Parapsychological Experimenter Effect


Here we go again with "According to Jahn". Right. Wake up, get out of bed, drag a comb across your head.
 
Do you apply this same standard to non-psi experiments?
Yes, of course! As I have already said more than once.

In Stats 1 at university we did a case study of an infamous meta-analysis of passive-smoking studies - the same standards get applied outside psi.
Again, according to Jahn: "[T]he recording procedures at PEAR are unusually tight and any fiddling with results would have to be systematic because it would have to include the laboratory's computer database, the print-outs and subjects' entries in the logbook."
PEAR recording procedures are unusually tight - according to the founder of PEAR. Not very impressive, is it? I recall that the GCP people said that they never adjusted event start and finish times after the start of an event, but if you read closely they describe doing just that.

This study is a post-hoc examination of 140 hours of experimentation over 12 years - are we really supposed to believe that this equipment was only ever active for 140 hours over 12 years?
But why would this looseness apply disproportionately to Operator 10?
Any special reason why it should not?

Again, bottom line, these results can be explained by any of:

1. Psychokinesis
2. Poor lab practice
3. Poor data management
4. Fraud

Is there any special reason why we should prefer 1 over 2, 3, and 4?

If it is 1, then this could be put beyond doubt with a single, rather simple experiment. When that happens I will start to be interested.
 
Here is a computer simulation of PEAR-type experiments: 800,000 trials per intention, using purely random data but losing 1% of unfavourable sessions. Looks somewhat familiar, doesn't it?
[attached image: pear.png - cumulative deviation plot from the simulation]

As I said, it doesn't take much inclusion bias.
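
For anyone who wants to reproduce it, here is a minimal Python sketch of that kind of simulation (my own reconstruction, not the code behind the plot above; the split into 8,000 sessions of 100 trials - giving the same 800,000 trials per intention - is an assumption):

import random

def simulate(n_sessions=8000, trials_per_session=100, drop_rate=0.01):
    # Fair coin flips, except ~1% of 'unfavourable' sessions
    # (those with a net deviation against the intention) quietly go missing.
    total_deviation = 0
    total_trials = 0
    for _ in range(n_sessions):
        hits = sum(random.getrandbits(1) for _ in range(trials_per_session))
        deviation = hits - trials_per_session // 2
        if deviation < 0 and random.random() < drop_rate:
            continue  # the unfavourable session is lost
        total_deviation += deviation
        total_trials += trials_per_session
    return total_deviation, total_trials

dev, n = simulate()
print(f"net deviation: {dev:+d} hits over {n} kept trials")

A single run is noisy, but repeat it a few times and the kept data drift positive on average even though every individual flip is fair - which is the whole point about inclusion bias.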
 
Has anyone heard of the Chicken-Robot Interaction experiment? I would very much like to see it replicated a few times. It looks very interesting.

Chicken-Robot Interaction

"I am much taken with a series of French experiments reported in issue 62 of Network, the journal of the Scientific & Medical Network, by Dr Peter Fenwick, a well-known London neuropathologist, whereby chickens and rabbits apparently influenced signals composed by a random-number generator for a robot close to them, and human subjects apparently influenced the movements of the robot even though its signals had been generated by a random-number computer program six months earlier.

Chicks hatched close to the robot imprinted on it as their mother and followed it about. It had a random-number generator inside it controlling its movements, which checks showed were truly random. The chicks were then removed and one placed so it could see the robot but not follow it. Under these circumstances the robot spent measurably more time close to the chick than away from it. The effect was that the chick was influencing the robot's generator.

The generator was then removed to a computer away from the experimental area. The same effect occurred. Non-imprinted chicks however had no apparent effect on the robot."


Please see page 11 of this pdf for a picture of the paths traced out by the robot mother hen.

http://www.sociology.org/content/200...thofnewton.pdf
 
Measurably more time is not the same as statistically significantly more time. Also, if the movement had been determined six months earlier, couldn't they just check whether it followed the pre-determined path?

Has this been repeated? You link to a pk website, not exactly the most unbiased source. Your second link is 404ed.
 
Has anyone heard of the Chicken-Robot Interaction experiment? I would very much like to see it replicated a few times. It looks very interesting.

Please see page 11 of this pdf for a picture of the paths traced out by the robot mother hen.

http://www.sociology.org/content/200...thofnewton.pdf
Did you actually read that whole article?

If you did, why did you ignore the part that said:

Rene Peoc'h told me in 1988 that there were some problem to repeat the experiments, because of the malfunctioning of the random event generator (tychoscope).

If the machine was malfunctioning to the degree that the experiments could not be repeated, how do you know that the machine was functioning properly during the experiments? And if one cannot repeat the experiment, how can we compare results to see whether the behavior was caused by a flaw in the programming? How do you know that the "random number generator" was truly generating random numbers? Even today, "random number generation" is usually not truly random but pseudorandom, and this "experiment" was done over 20 years ago, when the technology was far less advanced. And even if the numbers were truly random, how can you show that the observed behavior was not just a statistical anomaly? I mean, since we can't repeat the experiment and all... :rolleyes:
 
I once tried to read one of Rupert Sheldrake's books, and he had an interesting method of massaging statistics to support his result.

Before I gave up and threw the book away in disgust, I read that the results in his "sense of being stared at" experiments were only significant if you made a second-order meta-analysis of multiple different trials. Or something. There wasn't a statistically significant result in any of the trials, but when you turned the data sideways and added it to itself, then calculated the difference in results of different trials, there was a statistically significant difference in the list of differences.

Or something. I failed statistics in high school, but even to me it seemed that if you have to massage the data so much in order to get a result, then the result you're getting is a consequence of your massaging the data, not of a real effect.
Perhaps you’ve heard the old joke – the Economics professor needs to know what 2+2 is for his next lecture and goes to the Maths department to find out. The algebra lecturer says “4”, the number theorist says “it depends on what you mean by 2 and +” and the statistician checks no-one is looking and whispers “what do you want it to be?”

(As told by an Economics professor)
 
Do you apply this same standard to non-psi experiments?


Sorry, we tear into everything. You ought to see some of what passes muster and what doesn't. There is a huge amount of bad psych research out there, just amazing, I tell you. Poor design, poor controls, experimenter influence.

Take the book Emotional Intelligence: it calls itself 'research based' (and a great functional skills concept), and what did they have? Small anecdotal studies and no research base. Or Jeff's 'finger wagging' EMDR, a very popular method of treating PTSD; there is more research being done, but it is still mainly anecdotal.

Then there is the nonsense of neo-Freudian talk therapy, inner children, integration, color therapy, etc...
 
If the machine was malfunctioning to the degree that the experiments could not be repeated, how do you know that the machine was functioning properly during the experiments?

Is there a reason why it can't be done from scratch with a superior machine?
 
[digression about generating random numbers]

I've just read about a really efficient way of generating pseudo-random numbers. Here it is:

x[n+1] = a*x[n] mod 2^p

where:

a = 3 + 2^floor(p/2)

With p = 32 this can be simplified to:

x[n+1] = (3 + 2^16) * x[n] mod 2^32

which in logic (i.e., either software or hardware) can be implemented as:

x[n+1] = (x[n] + (x[n] << 1) + (x[n] << 16)) and 0xffffffff

How cool is that!?
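
Here it is as a runnable Python sketch (the constants a = 3 + 2^16 and modulus 2^32 are from the formulas above; the seed handling and test values are my own). One caveat worth noting: 3 + 2^16 = 65539, the same multiplier as IBM's infamous RANDU generator (which used modulus 2^31), so this is fast but known to be statistically poor - fitting for a thread about how much to trust random number generators.

def lcg32(seed):
    # x[n+1] = a*x[n] mod 2^32 with a = 3 + 2^16,
    # computed with two shifts and two adds instead of a multiply.
    # The seed is forced odd: an even seed leaves low bits stuck at zero.
    x = seed | 1
    while True:
        x = (x + (x << 1) + (x << 16)) & 0xffffffff
        yield x

gen = lcg32(12345)
print([hex(next(gen)) for _ in range(4)])

The "& 0xffffffff" masking is exactly the mod 2^32 reduction that 32-bit hardware would give for free.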

[/digression about generating random numbers]
 
The digression about random numbers is pretty cool.

OK, let's get this back on track: if anybody wants to discuss massively retro-temporal psycho-kinetic ability in chickens, they should start a new thread.

There are two points that need to be made about the experimenter effect:

Firstly, the experimenter effect is not inherent in other psi claims.

If somebody claims that a mind can cause fluctuations in a white noise generator, this does not imply that a mind can suppress this effect in others.

If somebody claims that we can sense images being watched by others, this does not imply that a third party should have the ability to suppress this ability.

The experimenter effect is, in fact, a completely new and even more extravagant hypothesis than the others.

The second point is that, while most claimed psi effects are slight, almost imperceptible and unreliable, my psychic ability as a sceptic to suppress other people’s psychic abilities is apparently complete and 100% reliable.
 
If somebody claims that a mind can cause fluctuations in a white noise generator, this does not imply that a mind can suppress this effect in others


I don't see how that implication can be avoided, although perhaps 'suppress' isn't the right word; 'interfere' might be a better choice.

Take two psychics and have them each focus on the same random number generator. Have one intend more zeroes, and the other intend more ones. What would happen?

Then have them both intend more zeroes together. What would happen?

If somebody claims that we can sense images being watched by others, this does not imply that a third party should have the ability to suppress this ability.


Take two psychics, have one try to sense the images being watched by someone, and have the other try to block the images. What would happen?

Then have the other try to boost the clarity of the images. What would happen?

Would anyone care to comment on this experiment?

Experimenter Effects in Parapsychology

"What are experimenter effects and why are they important? Many parapsychologists have suggested that the belief of the experimenter may influence the outcome of their study – such that sceptics tend to find what they expect, and so do believers. Indeed, some have claimed that the experimenter’s own psi may affect the outcome of the study. This is an important issue for parapsychology because without an understanding of what causes experimenter effects, parapsychologists will not be able to specify the conditions under which other scientists can replicate their findings.

How did you study this? A series of KPU studies (e.g. Watt & Ramakers, 2003) have looked at the question of experimenter effects in parapsychology. We selected a number of individuals who scored extremely high or extremely low on a paranormal belief questionnaire, and then trained them to administer a psi task to naive participants. So, the 'experimenters' were either strong believers or disbelievers in the paranormal. The psi task was a simple 'remote helping' task involving two sensorially isolated individuals - the 'helper' and the 'helpee'. The helpees sat in a sound-shielded room and were asked to focus their attention on a candle and to press a button every time they noticed they had become distracted from this focus. A computer recorded the number of self-reported distractions and the time that they occurred during the session. At the same time, in a distant room, the helper was following a randomised schedule of 'help' and 'no help' periods. During the help periods, the helper was asked to attempt to mentally assist the distant helpee to have fewer distractions on the task. Since the experimenter and the helpee did not know the times when the helper was attempting to help, one would expect there to be no systematic relationship between the helpee's distractions and whatever the helper was doing. The psi hypothesis, on the other hand, would predict that the helpee would have fewer distractions during those randomly-scheduled periods when the helper was thinking of them. The results for all sessions combined showed overall significant positive scoring on the psi task - that is, fewer distractions during help periods. More interestingly, when comparing sessions conducted by believer experimenters with sessions conducted by sceptics, the effect was entirely limited to those participants tested by believer experimenters. Participants tested by sceptical experimenters obtained chance results on the psi task.

What does this mean? The positive psi result could not be due to subtle cueing of the experimenters or helpees, because all were blind to the randomised condition manipulations that were taking place during the psi task. Sensory leakage was also ruled out by locating helpees and helpers in separate isolated rooms. Questionnaire measures suggested that participants’ expectancy and motivation were unaffected by their experimenters’ paranormal belief, raising the possibility that it was the experimenter’s psi that influenced the outcome of the study. Note, however, that other researchers have not yet attempted to replicate this finding. So, although it is statistically significant, the study's findings should be regarded as suggestive but not conclusive. If experimenter psi effects are real then this raises challenging questions not only for parapsychology but also for science in general. Traditionally the experimenter is regarded as an objective observer of the data, rather than being another participant in the study."
 
Also, please comment on this one.

The Effect of a Change in Pro Attitude on Paranormal Performance: A Pilot Study Using Naïve and Sophisticated Skeptics

Abstract

A computerized symbol-identifying experiment was conducted to test Thalbourne's (2004) concept of the "pro attitude" (an attitude towards a favorable outcome in a normal or paranormal task). Participants were required to identify the correct symbols randomly presented on computer in a run of 50 trials. Skeptics were given a second run. After each run, hit-rates were presented on screen. A subgroup of randomly selected skeptics were informed that scores, if sufficiently high or low, indicate statistical evidence of psi. It was hypothesized that news of this information (the "treatment") would alter the pro attitude of some skeptics and lead them to try to score at chance, rather than risk producing scores that might indicate psi. A significant correlation between hit-rate and belief in psi after treatment (but not before treatment) was found for "converted" skeptics (i.e., "new believers" in psi). Post hoc evidence showed a significantly high hit-rate on symbol identification after conversion (but not before conversion). These results suggest a "conversion effect" in some skeptics, thus indicating a change in pro attitude. It was concluded that further research on the pro attitude is warranted since evidence of same may help identify sources of paranormal effects.
 
Take two psychics, have one try to sense the images being watched by someone, and have the other try to block the images. What would happen?

Then have the other try to boost the clarity of the images. What would happen?
This is exactly what I am talking about. Why do you assume that the ability to see images watched by another would also include the ability to block or boost this ability in others?
Would anyone care to comment on this experiment?
Yes, I would

There were 14 participants in this study conducting 36 experiments between them. One of the participants (responsible for 3 of the experiments) was the report author.

Hmm….
 
