
Edinburgh Parapsychologist claimed to have evidence of ESP

Big Les
http://edinburghnews.scotsman.com/features/Edinburghs-answer-to-the-ghostbusters.4072400.jp

^Dr Caroline Watt of the University of Edinburgh's Koestler unit (interestingly enough, she's Richard Wiseman's partner) has had some bold claims made on her behalf by the above paper:

The world-renowned scientist has conducted experiments which indicate that people can pick up on the thoughts of others when they are being “sent” mental images by someone in another room.

Caroline, who has had more than 50 research papers published, has also conducted research studies into psychokinesis and discovered that a subject’s reactions may be linked to someone else thinking good thoughts about them.

Is anyone familiar with her work? She's clearly less sceptical than Wiseman, but has she made those claims herself? If so, on what grounds?
 
So by "indicate", they presumably mean a marginally significant statistical result of some sort in one or more of her tests. I'll try and find some reports.
 
Some years ago I was present at a World Skeptical Congress in Italy where Caroline Watt was one of the speakers. She seemed rather skeptical at the time. Her talk was about the research of the Koestler unit, where they tried to weed out the scam artists, the deluded and the insane from among those claiming parapsychological phenomena. She had apparently never encountered a genuine claim.

Given the work she has chosen to do, she cannot be blamed for looking for ESP and the like, as it is a legitimate search. The important thing is whether she applies low scientific standards in her research, and that was not my impression at the time.

But obviously, one cannot judge a person from a single speech, which may have been tailored to the skeptical audience. James Randi, however, seemed to know her and was very pleased with her.
 
Oh absolutely, I don't want to condemn the woman for a couple of suspect lines in an online article. I think it's good that there are people still investigating, on balance (though public funding has me stroking my chin). I'll always wonder why people stick at it despite the lack of any meaningful result in 100 years of trying though.

I'm just interested that she herself calls Wiseman more sceptical (and I think of him as being especially "open minded") and also what the basis is for the claims I quote above.
 


The "suspect lines" are apparently endorsed by her. She links to the article on her personal website, and would surely have corrected any misapprehensions if she felt the paper had made any.

If the article is accurate then she at least carries, for now, a bit of the mark of the woo.
 
The study I linked to above is quite interesting. They have the subject focusing on a candle, and in another room a "helper" whose aim is to help the subject focus.

At random times the helper is supposed to take a break from "helping". In the meantime, the subject presses a button every time they feel their attention wander.

Apparently there's a statistically significant increase in loss of focus when the helper is taking a break, and what's more, the effect only exists if the helper believes in psi.

No obvious flaws I could see on first look, except a typically small sample size.
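For intuition, here's a minimal sketch of what the null hypothesis looks like under this design (all numbers hypothetical, not taken from the study): attention lapses occur independently of the randomly scheduled helper breaks, so the lapse rates in the two kinds of epoch should differ only by sampling noise.

```python
import random

random.seed(1)

# Hypothetical parameters - not taken from the actual study
n_epochs = 40  # observation windows per session
helping = [random.random() < 0.5 for _ in range(n_epochs)]  # random break schedule
lapses = [random.randint(0, 3) for _ in range(n_epochs)]    # lapses, independent of schedule

n_help = sum(helping)
help_rate = sum(l for l, h in zip(lapses, helping) if h) / max(1, n_help)
break_rate = sum(l for l, h in zip(lapses, helping) if not h) / max(1, n_epochs - n_help)
print(help_rate, break_rate)  # any gap here is pure sampling noise
```

The claimed effect would be a reliable excess of lapses in the "break" epochs beyond what this kind of chance variation produces.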
 
If two people have to concentrate for the same amount of time, what are the chances that they get tired at the same time, so that one takes a break just as the other loses concentration?
 
Shouldn't matter in this case, as the "helper" breaks were randomly computer-generated.

Hmm... one thing, though. I think in the protocol the subject knows the helper, and likely knows whether the helper "believes" or not. One would think this would help a "believer" focus - the belief that they're being helped. This would mean the helpee should have fewer reported wavers of concentration, which could skew things somewhat.
 
I must be missing something important, but how do you get a significant difference out of a mean of 12.25 with a S.D. of 10.27 compared to a mean of 14.54 with a S.D. of 10.82 (n=24)?
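Taking those summary numbers at face value and treating the two conditions as independent samples (which may not match the actual within-subject design), the t statistic comes out nowhere near significance; a sketch:

```python
import math

# Summary statistics quoted above
m1, s1 = 12.25, 10.27
m2, s2 = 14.54, 10.82
n = 24

# Unpaired two-sample t from summary data (Welch-style standard error)
se = math.sqrt(s1 ** 2 / n + s2 ** 2 / n)
t = (m2 - m1) / se
print(round(t, 3))  # 0.752 -- far below any conventional significance threshold
```

If the comparison was within-subject, the test would instead use the standard deviation of the paired differences, which the summary statistics above don't reveal - a small paired SD could make the same means significantly different.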

And what's with the one-tailed testing (other than it makes it easier to obtain significant results (kinda silly when there are good arguments for raising the bar rather than lowering it when it comes to parapsychology (reduce false positives which are already likely to be overwhelming)))?

Linda
 

If they're interested in measuring an exchange of information, then wouldn't one-tailed testing be the more sensible option? Otherwise you'd have the curious situation in which a significantly negative result (ie, no information exchanged at all) would end up supporting the psi hypothesis.

Of course, there are some people who think that "psi-missing" IS evidence for the existence of psi, but they are silly people.
 
If they're interested in measuring an exchange of information, then wouldn't one-tailed testing be the more sensible option? Otherwise you'd have the curious situation in which a significantly negative result (ie, no information exchanged at all) would end up supporting the psi hypothesis.

No exchange of information should lead to chance results. A significantly negative result still implies that information is exchanged, but that its influence is different from expected.

And while I may be persuaded that one-tailed testing is reasonable when you are studying information exchange, the study in question was not about information exchange but about influence, and it was exploratory in nature - a situation where not only is there a paucity of information to guide your constraints, but where standards of significance should probably be raised in order to decrease the number of false positives.

Exploratory parapsychology research has many of the characteristics that lead to false conclusions - small study size, small effect size, large numbers of tested relationships with little selection of those relationships, flexibility in designs, definitions, outcomes, and analytical modes, and testing by several independent teams (http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pmed.0020124). One of the ways to increase the proportion of true positives is to raise the standard by lowering alpha.
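To illustrate the point about lowering alpha, here's a toy calculation (all numbers assumed for illustration, not taken from any study) of the post-study probability that a "significant" finding is a true positive, following the logic of the linked paper:

```python
# How lowering alpha raises the chance that a "significant" finding
# is a true positive. All inputs are assumed for illustration.
def ppv(prior, power, alpha):
    # P(hypothesis true | result significant), by Bayes' rule over many studies
    return (power * prior) / (power * prior + alpha * (1 - prior))

# Assumed: 1% of tested psi hypotheses are true, studies have 50% power
print(round(ppv(0.01, 0.5, 0.05), 3))   # 0.092 at alpha = 0.05
print(round(ppv(0.01, 0.5, 0.005), 3))  # 0.503 at alpha = 0.005
```

With a low prior, most "significant" results at alpha = 0.05 are false positives; a stricter alpha changes that dramatically.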

It is not sufficient that one is interested in a particular result to justify one-tailed testing - it is rarely the case otherwise. It is also necessary for it to be driven by the logic of the situation. Even in medicine, where we can reasonably expect that a particular treatment will (if anything) perform better than nothing, we still usually use two-tailed testing. And if tests of information exchange reliably and consistently showed that people performed worse when they had access to information through anomalous cognition, would that not be of similar interest? I think it would be quite unreasonable to ignore a negative relationship under these circumstances.
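A small illustration of why the choice of tails matters (the z value here is assumed, not taken from the study): the same marginal result can clear the 0.05 bar one-tailed while failing it two-tailed.

```python
import math

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

z = 1.7  # hypothetical test statistic for a marginal result
p_one = 1 - normal_cdf(z)        # one-tailed p-value
p_two = 2 * (1 - normal_cdf(z))  # two-tailed p-value
print(round(p_one, 3), round(p_two, 3))  # 0.045 0.089
```

One-tailed testing exactly halves the p-value for a result in the predicted direction, which is why adopting it amounts to lowering the bar rather than raising it.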

I admit that my criticism on this issue may be unfair, but I get the impression that parapsychologists view the standards, used to help protect us from drawing conclusions that aren't valid, as hoops to be jumped through in order to gain recognition. And this is one way to make the 'hoop' of significance testing easier to jump through. They seem more interested in the letter than the spirit of the 'laws'.

Of course, there are some people who think that "psi-missing" IS evidence for the existence of psi, but they are silly people.

It seems to me that the "psi-missing" argument is used to justify discarding negative results in order to pretend that the positive results that are left are not biased. This is different from an interest in reliable and reproducibly significant results that show a negative relationship.

On a different note, can you explain why the t-test results don't seem to make sense to me?

Linda
 
