
A parapsychologist writes about leaving parapsychology

It is odd that this topic has turned into a discussion of methodology. It certainly does not merit much more discussion.

David, if you are just going to believe everything that is written in a journal or proceedings, then you are going to fall for a lot of rubbish (as I did). Don't believe everything you read.

Do you mean that some write-ups do not describe what actually went on during the experiments?

How are we able to tell which experimenters are being dishonest?

When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.
 
From the 2003 Bem paper on PH:

"At this point, I asked a skeptical colleague at Williams College, Professor Kenneth Savitsky, to try replicating the PH effect using supraliminal exposures. But I made two critical changes: First, the on-screen directions explicitly instructed the participant to “keep your eyes on the picture as it is flashed—even if it is one of the unpleasant pictures.” Second, participants
were given the option of participating in the study without the negative pictures. (There were no erotic trials in the Williams replication.) Savitsky conducted the experiment as a class exercise in a laboratory course in
experimental social psychology. Serving as the experimenter, he ran himself and the 17 students in the experiment; each student was then instructed to run 4 of his or her friends. This produced a total of 87 participants, 84 of whom experienced the negative trials. Collectively they obtained a
hit rate of 52.5% (t(83) = 1.57, p = .061) on the negative trials. More importantly, the positive correlation between hit rate and Emotional Reactivity was restored: The 32 emotionally reactive participants obtained a hit rate of 56.0%, t(31) = 2.66, p = .006. In particular, the 12 emotionally
reactive men in the sample achieved a very high hit rate of 59.7%, t(11) = 3.02, p = .006. The hit rate on the low-affect trials was at chance."


Does anyone know if this study has been published?
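For what it's worth, the quoted p-values are at least internally consistent with the quoted t statistics if the tests are one-tailed (the excerpt doesn't say, so that's an assumption on my part). A quick check:

```python
# Sanity check of the quoted statistics: convert each reported t value to a
# one-tailed p-value (one-tailed is an assumption; the excerpt doesn't say).
from scipy.stats import t

reported = [
    (1.57, 83),  # negative trials, 84 participants
    (2.66, 31),  # 32 emotionally reactive participants
    (3.02, 11),  # 12 emotionally reactive men
]

for t_value, df in reported:
    p = t.sf(t_value, df)  # survival function: P(T > t) for df degrees of freedom
    print(f"t({df}) = {t_value}: one-tailed p = {p:.3f}")
# Prints roughly 0.060, 0.006, 0.006 -- in line with the quoted .061, .006, .006.
```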
 
When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.

.....why?

If you accept their word that what they are telling you they are doing is accurate and sincere, why would you not accept their word that their conclusions or methodology are sound?

Once you take people's word for granted, you throw out everything else. You admit that you are a hardcore, blind believer.
 
From the 2003 Bem paper on PH:

"At this point, I asked a skeptical colleague at Williams College, Professor Kenneth Savitsky, to try replicating the PH effect using supraliminal exposures. But I made two critical changes: First, the on-screen directions explicitly instructed the participant to “keep your eyes on the picture as it is flashed—even if it is one of the unpleasant pictures.” Second, participants
were given the option of participating in the study without the negative pictures. (There were no erotic trials in the Williams replication.) Savitsky conducted the experiment as a class exercise in a laboratory course in
experimental social psychology. Serving as the experimenter, he ran himself and the 17 students in the experiment; each student was then instructed to run 4 of his or her friends. This produced a total of 87 participants, 84 of whom experienced the negative trials. Collectively they obtained a
hit rate of 52.5% (t(83) = 1.57, p = .061) on the negative trials. More importantly, the positive correlation between hit rate and Emotional Reactivity was restored: The 32 emotionally reactive participants obtained a hit rate of 56.0%, t(31) = 2.66, p = .006. In particular, the 12 emotionally
reactive men in the sample achieved a very high hit rate of 59.7%, t(11) = 3.02, p = .006. The hit rate on the low-affect trials was at chance."


Does anyone know if this study has been published?

It is extremely telling that you have no problems with Bem trying to shift the onus onto the skeptics, while changing the premises of the experiment with two critical changes.

Hello? That's not replication.
 
It's the best approach that can be taken at the present time IMO. If there were no effect, then the number of experiments with positive results would be at chance level. Although meta-analyses can't be taken as "proof" of anything, I think they do show that the number of positive experiments in certain kinds of experiments is above chance.

How do you know it's above chance (taking bias into consideration as well)? You don't know how many studies weren't included because they were negative that would have been included (because they would have been published) if they were positive.

I understand your point, Linda. But remember that the one random successful experiment in 6 (or 13) would only have a p-value of 0.05 in your illustration.

If we stay with the precognitive habituation experiments, Bem's studies had a much more impressive p-value than that. Louie's successful experiment less so, but then he had a smaller N.

Thousands of parapsychology experiments have been performed. The ones that get noticed are those that have "significant" p-values. It's not unexpected to come up with "one in a thousand" results out of thousands of studies. That doesn't even take into account the analytic flaws in the papers you referenced (multiple comparisons without adjustment in p-values, for example).
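To put rough numbers on that, here's a back-of-the-envelope sketch, assuming independent studies and no real effect anywhere:

```python
# Chance of at least one "significant" result among n independent studies
# when there is no real effect at all (illustrative numbers only).
def p_at_least_one(alpha, n):
    return 1 - (1 - alpha) ** n

print(p_at_least_one(0.001, 1000))  # ~0.63: a "one in a thousand" result is
                                    # more likely than not in 1000 tries
print(p_at_least_one(0.05, 20))     # ~0.64: same story at p < .05 with 20 studies
```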

Do you think that meta-analyses are suited to resolving this kind of issue?

If they have an honest denominator, and if bias is taken into consideration.

Also, you have the added problem that experiments are seldom exact replications. Experimental conditions are changed, which could legitimately affect the outcome of the experiment.

For example, I still don't understand why Louie et al decided to change the image exposure to supraliminal in their follow-up PH experiment. Experiments on the conventional mere exposure effect show that supraliminal exposures reduce the effect, and Bem's experiments show the same thing. This could be why they couldn't replicate their own findings: because they changed the conditions.

There are always excuses. Bem came up with "precognitive boredom".

Linda
 
.....why?

If you accept their word that what they are telling you they are doing is accurate and sincere, why would you not accept their word that their conclusions or methodology are sound?

Because believing that a paper accurately and truthfully represents what actually went on during the experiment is a different issue from accepting that the conclusions and methodology are valid. Someone could have accurately and truthfully written a methods and results section that contains methodological flaws and draws unjustified conclusions from accurate and truthful data.

Once you take people's word for granted, you throw out everything else. You admit that you are a hardcore, blind believer.

I know that fraud happens in science. I just don't believe in cherry picking which experiments are fraudulent. How are we to know which ones are? Independent replication sorts this out I suppose.
 
It is extremely telling that you have no problems with Bem trying to shift the onus onto the skeptics, while changing the premises of the experiment with two critical changes.

Hello? That's not replication.

How is Bem shifting the onus onto the skeptics?

I don't see how the premise of the experiment is changed at all in that replication. He simply left out the erotic pictures and tried to ensure that the participants would not look away from the horrible images. So it was replication of the effect using negative images.
 
Because believing that a paper accurately and truthfully represents what actually went on during the experiment is a different issue from accepting that the conclusions and methodology are valid. Someone could have accurately and truthfully written a methods and results section that contains methodological flaws and draws unjustified conclusions from accurate and truthful data.

This reveals how willingly gullible you are. No, David, we do not take people's word for granted, no matter what they are saying.

If they say that what they are doing is A-OK, we check. Precisely the same way we check their results and methodology.

You are being wildly inconsistent here. You trust them to do right, because you want them to do right.

I know that fraud happens in science. I just don't believe in cherry picking which experiments are fraudulent. How are we to know which ones are? Independent replication sorts this out I suppose.

Yeah. Then, explain how Bem can possibly suggest what he did. Would you call that "replication"?
 
How do I find out what the standing of a university and of a professional journal is? Specifically, Laurentian University in Canada and the journal Perceptual and Motor Skills.

You can get a good idea of a journal's standing within its field from the Science Citation Index. Unfortunately it's not freely available, but if you're affiliated with a university you can probably get access from there. Or someone else with access (I don't have it, unfortunately) could give you the index for this particular journal and some others in the same area.

Of course, a good standing within a certain field of research is no absolute guarantee of quality, but the title of this one suggests it's an empirically based field. :)
 
How do you know it's above chance (taking bias into consideration as well)? You don't know how many studies weren't included because they were negative that would have been included (because they would have been published) if they were positive.

True. We don't know how many unpublished negative studies there might be. Well, within reason of course. But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).
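One common form of that estimate is Rosenthal's "fail-safe N": the number of unpublished null studies that would be needed to pull a combined result back above p = .05. A minimal sketch, with made-up z-values purely for illustration:

```python
# Rosenthal's "fail-safe N": how many unpublished null studies (z = 0) would be
# needed to pull a Stouffer-combined result back above one-tailed p = .05.
# The z-values below are invented, purely for illustration.
def fail_safe_n(z_values, z_crit=1.645):
    z_sum = sum(z_values)
    return max(0.0, z_sum ** 2 / z_crit ** 2 - len(z_values))

published_z = [2.1, 1.8, 0.4, 2.5, 1.2]  # hypothetical published studies
print(fail_safe_n(published_z))  # ~19 "file-drawer" studies needed in this example
```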

Thousands of parapsychology experiments have been performed. The ones that get noticed are those that have "significant" p-values. It's not unexpected to come up with "one in a thousand" results out of thousands of studies.

Isn't this what meta-analysis is supposed to address?

That doesn't even take into account the analytic flaws in the papers you referenced (multiple comparisons without adjustment in p-values, for example).

Could you explain this?


There are always excuses. Bem came up with "precognitive boredom".

Eh? That term was used to describe the unexpected results of the low-affect trials with more than 8 subliminal exposures.
 
As for the file-drawer effect, I think anyone who has been even slightly involved with any kind of research should understand it, but obviously many don't. Personally I've written one small article for a scientific journal (it's about an algorithm), but for this one article about an algorithm that works, how many algorithms have I worked on that either did not work, or worked but produced results no better than what had already been published, or were simply of no general interest? Even I have no idea. OK, this field may be one extreme, with areas requiring large planned studies at the other extreme. But even in that case, we should expect that studies which don't show promise of results are much more likely to drag out over time and eventually be abandoned for lack of resources. And I wouldn't even call that scientific dishonesty. A responsible researcher should not waste money.

Instead, I think the fault is with the idea that it would be possible to perform some sort of meta-proof by aggregating results from many studies and in this manner somehow enhance their significance. That idea is completely flawed. Either we can define an effect and a replicable way to test it, in which case it will give those results consistently; or we cannot, and we are left with multiple, poorly defined effects and tests that have not been properly examined or cannot be replicated. Adding those apples and oranges into one bowl does not produce anything of value.
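To be concrete about what "aggregating to enhance significance" usually means in practice, here is a sketch of Stouffer's method of combining p-values, with invented numbers. Several individually unimpressive studies pool into a small combined p, and that pooled number only means something if the studies really did test the same well-defined effect.

```python
# Stouffer's method: combine one-tailed p-values from several studies into a
# single pooled z and p. The numbers are invented, purely for illustration.
from scipy.stats import norm

def stouffer(p_values):
    z_scores = [norm.isf(p) for p in p_values]          # each p -> one-tailed z
    z_combined = sum(z_scores) / len(z_scores) ** 0.5   # Stouffer's combined Z
    return z_combined, norm.sf(z_combined)               # back to a pooled p

p_values = [0.20, 0.15, 0.30, 0.10, 0.25]  # none individually "significant"
z, p = stouffer(p_values)
print(f"combined z = {z:.2f}, pooled p = {p:.3f}")  # ~1.95 and ~0.026
```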
 
This reveals how willingly gullible you are. No, David, we do not take people's word for granted, no matter what they are saying.

If they say that what they are doing is A-OK, we check. Precisely the same way we check their results and methodology.


You're perfectly free to do that. But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?
 
But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).

It's completely bogus, completely unreliable. To prove any sort of psi effect, we only need *one* study. But it has to be done well. The only way to ensure that it has been done well is to have other scientists replicate the same study, testing for the exact same claimed phenomenon and trying to find flaws in the method.

The method of meta-analysis is based on the flawed idea that all studies are done properly. They are not. Not just in parapsychology, but in any field. No one would care if ten or a hundred scientific studies claimed to have produced cold fusion; it would not prove that cold fusion is feasible. We would demand *one* study that can be examined by other experts, in which we can find no flaw, and which can be repeated with the same results. We should do the same for parapsychology.
 
As for the file-drawer effect, I think anyone who has been even slightly involved with any kind of research should understand it, but obviously many don't. Personally I've written one small article for a scientific journal (it's about an algorithm), but for this one article about an algorithm that works, how many algorithms have I worked on that either did not work, or worked but produced results no better than what had already been published, or were simply of no general interest? Even I have no idea. OK, this field may be one extreme, with areas requiring large planned studies at the other extreme. But even in that case, we should expect that studies which don't show promise of results are much more likely to drag out over time and eventually be abandoned for lack of resources. And I wouldn't even call that scientific dishonesty. A responsible researcher should not waste money.

Instead, I think the fault is with the idea that it would be possible to perform some sort of meta-proof by aggregating results from many studies and in this manner somehow enhance their significance. That idea is completely flawed. Either we can define an effect and a replicable way to test it, in which case it will give those results consistently; or we cannot, and we are left with multiple, poorly defined effects and tests that have not been properly examined or cannot be replicated. Adding those apples and oranges into one bowl does not produce anything of value.

Good points. But how are researchers in parapsychology supposed to respond when a critic claims that the number of positive experiments is what we would expect by chance? If meta-analyses are not to be used in any way, it seems that parapsychologists have no way to answer such a criticism.
 
But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?

I thought I'd already implied that. You can't. Somebody could literally sit there and make the data up on bits of paper and you wouldn't know. Soal's work stood for a long time. Go and read how he got found out, if you don't know.
 
You're perfectly free to do that. But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?

That's the point, David: We can't know.

So, why are you so eager to believe these people when they say that they are doing right, when you (say that you) doubt them when they report their results?

And please explain how Bem can possibly suggest what he did. Would you call that "replication"?
 
But how are researchers in parapsychology supposed to respond when a critic claims that the number of positive experiments is what we would expect by chance? If meta-analyses are not to be used in any way, it seems that parapsychologists have no way to answer such a criticism.

First of all, they should stick their necks out and make a very strong statement that, according to their research, some very well-defined effect definitely exists and can be measured through a test that they have performed and that could be repeated by other scientists. If they really are so sure, then they really need to put their scientific reputation at risk here. It's not enough to hint that 'further research is merited'.

Now, if someone does this, then we can be sure that other parapsychologists, and interested scientists from other fields - and the JREF too for that matter - will be very interested in examining those claims. When many such studies have been done and they generally confirm the results, then we can consider the findings to be confirmed.

I know of no such claim. Do you? I think we need to be specific here. Discussing the validity of PSI claims in general is like claiming that either gravitation or wormholes or cold fusion may exist.
 
True. We don't know how many unpublished negative studies there might be. Well, within reason of course. But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).

I'm suggesting that the number of necessary negative studies falls well within the realm of "plausible".

Isn't this what meta-analysis is supposed to address?

No. Meta-analysis is appropriate only in limited situations and this isn't one of them (the current fad for the use and misuse of meta-analysis notwithstanding).

Parapsychology research doesn't consist of subjects doing amazing things (like flying or making themselves invisible). It consists of subjects doing what they do normally, but at a different frequency than you'd expect by chance. But if you give yourself a thousand opportunities, sooner or later you're going to come up with something unlikely. It's unexpected if I win the lottery, but it's completely expected that somebody wins the lottery. The mistake is deciding, after the fact, that the lottery winner is the one with the magical powers.

Could you explain this?

Tests of statistical significance are based on a single comparison - a single roll of the dice. If you make more than one comparison, you are effectively giving yourself extra chances to roll an "eleven". How much do you want to bet that in addition to comparing "emotionally reactive men" with "emotionally nonreactive men", they also compared "belief in ESP" with "non-belief in ESP", "prior ESP experiences" with "no prior ESP experiences", etc.? And how much do you want to bet that if those comparisons had been "statistically significant" they would have reported on them as well as everything else? Even if you just go on what they admitted to comparing, it looks like they gave themselves a few dozen chances at rolling "eleven".
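Here's a quick simulation of that point, with hypothetical subgroup splits and pure-chance data:

```python
# Simulate the multiple-comparisons problem: no real effect anywhere, but each
# post-hoc subgroup split gets its own shot at p < .05. The subgroup labels are
# hypothetical, just to mirror the kind of splits described above.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n_participants, n_subgroups, n_simulations = 84, 12, 2000
studies_with_a_hit = 0

for _ in range(n_simulations):
    # Per-participant hit rates under the null: 20 trials at chance (50%).
    hit_rates = rng.binomial(20, 0.5, size=n_participants) / 20
    labels = rng.integers(0, n_subgroups, size=n_participants)
    for g in range(n_subgroups):
        group = hit_rates[labels == g]
        if len(group) > 2 and group.std() > 0:
            res = ttest_1samp(group, 0.5, alternative="greater")
            if res.pvalue < 0.05:
                studies_with_a_hit += 1
                break  # this simulated "study" found a significant subgroup

print(studies_with_a_hit / n_simulations)  # far above the nominal 0.05
```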


Eh? That term was used to describe the unexpected results of the low-affect trials with more than 8 subliminal exposures.

Yes, it's a way to dismiss results that would pull the overall results back toward average. PEAR likes to do that as well.

Linda
 
