
A parapsychologist writes about leaving parapsychology

In the unlikely event that they submit themselves to retesting, yes.

Because they know, Jeff. They know that, if they are retested with proper controls (strangely enough, the controls under which they performed so admirably turn out to be non-existent), they can't replicate their fantastic results.

They prefer to seduce the (often willingly) gullible with their one-hit wonder result. To pick one, John Edward, fooling an all-too-willing Gary Schwartz, boasts of his fantastic result in the Arizona Abominations.

They don't seek out the best researchers. They seek out those they know they can bamboozle. That's why skeptics have such a hard time getting them to be tested by skeptics: They know that there will be controls, so they can't cheat.

And then they go on the circuit and brag that they have been "scientifically tested" and therefore don't need to be tested again. And nobody can take their claim from them. Laughing all the way to the bank.

Claus, I was referring to the sincere, but statistically naive, researchers. For example, Rhine tested thousands of subjects, relegated the negative results to the famous file drawer and retested the "gifted" subjects. These were the subjects who showed what Rhine termed the "decline effect".
I do this as a class demonstration. I ask students to guess at a previously generated series of heads and tails, select out the ones with higher-than-average hits and test them again. Generally we see a "decline effect" due to statistical regression.
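The classroom demonstration described above is easy to simulate. This is just an illustrative sketch (the subject and trial counts are arbitrary): everyone guesses at random, the above-chance scorers are labelled "gifted", and on retest their mean falls straight back toward chance.

```python
import random

random.seed(1)

def guess_run(n_trials=20):
    """Score one 'subject' guessing a random heads/tails sequence.
    Both the target sequence and the guesses are random, so the
    expected hit rate is 50%."""
    target = [random.randint(0, 1) for _ in range(n_trials)]
    guesses = [random.randint(0, 1) for _ in range(n_trials)]
    return sum(t == g for t, g in zip(target, guesses))

n_subjects, n_trials = 200, 20

# First round: everyone guesses.
first = [guess_run(n_trials) for _ in range(n_subjects)]

# Select the "gifted" subjects: those who scored above chance (10/20).
gifted = [s for s in first if s > n_trials / 2]

# Retest only the gifted subjects.
retest = [guess_run(n_trials) for _ in gifted]

print(f"Gifted subjects' first-round mean: {sum(gifted) / len(gifted):.2f}")
print(f"Same subjects on retest:           {sum(retest) / len(retest):.2f}")
# The retest mean falls back toward 10 (chance): the "decline effect"
# here is nothing but regression to the mean.
```

No psi required; selecting on a lucky first score guarantees the second score will, on average, be lower.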
 
I don't have access to the journal (Perceptual and Motor Skills) these studies were published in. I also don't know how the journal or the university is regarded by most scientists, nor do I know how to find out.
So if you or anyone else reading this thread does, I would appreciate your sharing. :) Assuming no one knows, how could I go about finding out?
I have those papers in pdf, if you're interested. At least I think I do. I'll PM you tomorrow after I've checked.

As I recall, the scoring method was fairly arbitrary (with students judging Swann's notes on a seven-point scale according to how accurate they thought they were), and despite that the results were pretty near chance.
 

Thanks! I'll PM you my e-mail addy.

If you do have those papers, could you recall how you got them? IMHO, how people nail down this stuff is half the fun in reading about it.
 
To be honest, I don't think any of it is worth the amount of discussion and thought that it gets.

I begrudged writing ninety thousand words when I could have summed it up:

Everything is pointless. Parapsychology doubly so.
 
While you may feel you have wasted your time, I don't. With the wild claims put out, we need rational people to look at parapsychology to confirm that there is nothing there.
 
But Lothian, scientists have been saying the same thing for 100 years or so. Do we just continue with that attitude forever? That for as long as people have weird claims, science must expend thought upon them?
 

Some interesting language in that:

After thirty years, I have escaped from a fearsome addiction.
...
Another "psychic" turns up. I must devise more experiments, take these claims seriously. They fail - again. A man explains to me how alien abductors implanted something in his mouth. Tests show it's just a filling, but it might have been…

I'm surprised there still is a "field" and that any school gives attention to the subject as an academic subject.
 
I don't blame you for feeling frustrated right now. I can only imagine what it must feel like to have spent so much time and money getting credentials in a field you no longer respect.

I suspect that you wanted to add to the world's body of knowledge in some way. I wish you luck in finding a new area to do that in. Increasing our knowledge is not pointless!

IIRC you were interested in dream precognition? Would you be able to get grants in related areas, even very remotely (pardon the poor pun) related areas? Perhaps lucid dreaming, research into animal dreaming, or perhaps the effect of the ability to dream on human health?

You may also be in a unique position to write a[n amusing] book about how funding research in the universities really works or at least funding of parapsychology research. Assuming you can do so without getting sued or ruining your own future ability to get grants. Or perhaps you can come up with a web site or book for adults on statistics, one for mathematically handicapped types that provides an overview without intensive formulas. Clearly many people need the education, and if you can work some humor into it, all the better.

As for parapsychology, I do like to read about it sometimes and I will continue to ask questions about this area. I think it's another area of study that is very revealing, though not in the way its most ardent fans would like to think. For one thing, it reveals in yet another way what people's fears, dreams and weaknesses are.

However, I think at least a few of the questions it raises will be answered by neurobiology, not parapsychology. Especially since your experience adds to the growing pile of evidence that the research methodology most parapsychologists use is of very poor quality. What a waste.

ETA: So if anyone in addition to Ersby can help me figure out how to answer the questions I asked in post #60 -- I'd still appreciate it. ;) They were:

How do I find out what the standing of a university and of a professional journal is? Specifically, Laurentian University in Canada and the journal Perceptual and Motor Skills.

Also, does the US govt. database www.pubmed.gov have any criteria for including specific journals in its database? Thanks!
 

They can leave it to skeptics (at first). If we find something - anything, we'll be sure to tell the scientists. :)
 
But Lothian, scientists have been saying the same thing for 100 years or so. Do we just continue with that attitude forever? That for as long as people have weird claims, science must expend thought upon them?
Of course.

I am not saying that science should drop the useful stuff and throw everything after dreams, but we must put some resources there, if only to stop ignorance spreading and to make sure that the study does not take too much money and resources away from proper subjects.

With no sensible voice from the paranormal scientific community, wild claims would attract generous funds from gullible MPs. It is bad enough at the moment with MPs wasting our money on promoting homeopathy, without us having our own super secret teams of goat starers.

People like yourself are also useful in deciding what is good and bad science. When people have an interest in getting certain results, we need people who can look critically at the procedures to see if they are tight. Again, you have been working in a field where those critical skills get plenty of opportunity to be developed. Perhaps one day you will develop the quadruple-blind experiments that some of your ex-'colleagues' claim :D

Unfortunately science has developed at such a pace that normal people (like me) cannot keep up. There is so much that I rely upon that I cannot fully explain. The man in the street struggles with the difference between radio waves, microwaves and mobile phone signals. They allow him to listen, cook and call a friend. That is all he needs to know.

He has stopped asking 'how' because he doesn't understand the answers. So when some people say they can read others' brainwaves or pick up brain signals from the dead, it is just accepted. It just gets added to the list of things we don't understand.

Society needs, on his behalf, people who do understand and will look at the claims to sort the wheat from the chaff. After all, one day we may be able to read each other's minds. Although I strongly suspect that it will be with machines in a lab and not on a stage in front of a gullible audience.
 
What you mean is, you 'believe' other people that there is something paranormal going on? Let's get that right.

I believe that scientists submit their work to journals as accurately and honestly as they can. I have to. Otherwise I either have to reject all published experiments as fraud or I have to cherry pick which ones I believe are fraudulent. Those two options are not productive or fair. So I don't assume that fraud has gone on in negative or positive experiments.

I served as the Parapsychological Association programme chair in 2005. I was sent many different papers by parapsychologists around the world. I suggest you try to get that position in the future. It's very revealing seeing some of the terrible research that people spend their lives on.

I believe that!
 
This was presented at a conference of the Society of Psychical Research in 2005. Would that necessarily count it as unpublished?

That's the problem I always have with people who say "We have x number of positive experiments". I always wonder how they worked it out. Radin's meta-analyses being a case in point. On his blog I pointed out his ganzfeld m-a had no inclusion criteria - it was simply a cobbling together of previous work. He implied he did have inclusion criteria; I asked what they were, but he never answered.
 
That's a nice illustration of the kind of thinking that seems to drive parapsychology research among believers. Rather than attempting to prove themselves wrong (the basic idea behind the statistical methods/standards in general use), they are searching for a way to consistently demonstrate an effect that they think is real - looking for the signal amongst the noise.


It's the best approach that can be taken at the present time, IMO. If there were no effect, then the number of experiments with positive results would be at chance level. Although meta-analyses can't be taken as "proof" of anything, I think they do show that the number of positive experiments in certain kinds of experiments is above chance.


So it's like the "sharp-shooter" that fires a shot at the side of a barn and then draws a target around the hole. A number of studies are performed, and those that happen to fall on the upper half of the normal distribution are labelled "promising". If you do 13 studies, you even have a 50/50 chance that one will be "statistically significant". Actually, since parapsychologists are fond of one-tailed testing, you'd only need to do 6 studies to have an even chance that one study will happen to fulfill the requirements of "statistically significant".
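The arithmetic behind those numbers is easy to check. A quick sketch (assuming, as the argument above does, that one-tailed testing with the direction picked after the fact behaves like a 0.10 threshold):

```python
def p_at_least_one_hit(alpha, n_studies):
    """Chance that at least one of n independent studies of a null
    (non-existent) effect crosses the significance threshold alpha
    by luck alone: the complement of every study missing it."""
    return 1 - (1 - alpha) ** n_studies

# 13 studies at p < 0.05: roughly even odds of one chance "hit".
print(f"13 studies at alpha=0.05: {p_at_least_one_hit(0.05, 13):.3f}")

# One-tailed testing with the direction chosen post hoc effectively
# doubles alpha to 0.10, so 6 studies already give close to even odds.
print(f" 6 studies at alpha=0.10: {p_at_least_one_hit(0.10, 6):.3f}")
```

This prints roughly 0.487 and 0.469, matching the "50/50" figures in the post.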



I understand your point, Linda. But remember that the one-in-6 (or one-in-13) random successful experiment would have a p-value of 0.05 in your illustration.

If we stay with the precognitive habituation experiments, Bem's studies had a much more impressive p-value than that. Louie's successful experiment less so, but then he had a lower N.

Do you think that meta-analyses are suited to resolving this kind of issue?

Also, you have the added problem that experiments are seldom exact replications. Experimental conditions are changed, which could legitimately affect the outcome of the experiment.

For example, I still don't understand why Louie et al decided to change the image exposure to supraliminal in their follow-up PH experiment. Experiments on conventional mere exposure effects show that supraliminal exposures reduce the effect, and Bem's experiments show the same thing. This could be why they couldn't replicate their own findings: because they changed the conditions.
 
This was presented at a conference of the Society of Psychical Research in 2005. Would that necessarily count it as unpublished?

Oh OK. I didn't know that. Were Bem's papers published in a journal?

That's the problem I always have with people who say "We have x number of positive experiments". I always wonder how they worked it out. Radin's meta-analyses being a case in point. On his blog I pointed out his ganzfeld m-a had no inclusion criteria - it was simply a cobbling together of previous work. He implied he did have inclusion criteria; I asked what they were, but he never answered.

Odd. Perhaps you should ask again?


Forgot Louie's first PH study:

http://m0134.fmg.uva.nl/research/PSI research/papers/19.pdf
 
Odd. Perhaps you should ask again?
I've had a couple of brief interchanges with Radin, one on his blog and one by email, both covering the same ground. In the end I'm not a proper scientist writing in a peer-reviewed journal, so he won't answer my questions. That's fair enough, I suppose, but I'd be surprised if my point even sank in.
 
It is odd that this topic has turned into a discussion of methodology. It certainly does not merit much more discussion.

David, if you are just going to believe everything that is written in a journal or proceedings, then you are going to fall for a lot of rubbish (as I did). Don't believe everything you read.

You ask why we used a supraliminal methodology? Well, Bem used supraliminal trials and claimed an effect. Moreover, as I have said a couple of times, unless you have very expensive equipment it is difficult to ensure that presentation of stimuli is subliminal for every person. Have you ever tried to conduct any psychological research? Do you not understand this point? This is a practical issue. Even though Bem called it subliminal in his paper, we noticed when using the software that the presentation wasn't subliminal, just very fast. Sometimes people could consciously report what they'd seen!

Since Bem's program wasn't really subliminal, we used the more accurate term supraliminal.

In terms of what my 3 PH studies found, my opinion (again) is nothing. The hit rates obtained in the first study are not different from chance, and it is only with further statistical fishing that you find an 'effect'. I am not the only person to produce failed replications of this effect either. This is from the horse's mouth. I doubt you'd get many other parapsychologists being quite so honest. There are potentially lots of reasons why we didn't find an effect. Maybe I'm not psychic. Maybe my subjects weren't psychic. Maybe lots of different things. But I'm going for the simplest answer. I found nothing because there is nothing.

My papers before quitting parapsychology are positively skewed in favour of a parapsychological interpretation. Just remember that!

If you are hoping that a simple cognitive experiment might provide evidence for a paranormal effect, I think you'll be disappointed. I conducted 8 other experiments on top of the 3 PH which were very similar in nature. And found nothing. That is why I quit.

As I keep saying, it's been a long time now since I completed this PH research and I have no interest in discussing it much further. You can direct any further technical issues to one of the other authors of the paper (Mathew Smith or Chris Roe).

I'm happy to discuss my experiences and the other experiments that I conducted.
 
psp said:
It is odd that this topic has turned into a discussion of methodology.
I don't think it's so odd. If you (generic you) ponder psi experiments, you realize that the hypotheses being tested are not about an underlying theory of psi,* but instead about the statistical results. They are hypotheses about data mining and statistics. Therefore, the results are easily skewed by methodological errors (e.g., leaks) or by bad statistical analysis. It all comes down to methodology, because the experiments are about methodology.

Experiments with hypotheses derived from underlying theory have these same potential problems, but scientists can vary the protocols and analyses to wash them out, because the basic result is replicable. You can't do that with psi experiments, because they aren't replicable enough to allow it.

~~ Paul

* I realize there have been some attempts at theory.
 
