It is odd that this topic has turned into a discussion of methodology.
I guess I meant specifically of precognitive habituation. It isn't really deserving of much discussion.
It certainly does not merit much more discussion.
David, if you are just going to believe everything that is written in a journal or proceedings, then you are going to fall for a lot of rubbish (as I did). Don't believe everything you read.
How are we able to tell which experimenters are being dishonest?
When I said I have to believe what is written in the journals, I meant that I have to believe that the researchers are being accurate and sincere in describing their research. I don't mean that I automatically accept their conclusions or that I don't question methodology.
From the 2003 Bem paper on PH:
"At this point, I asked a skeptical colleague at Williams College, Professor Kenneth Savitsky, to try replicating the PH effect using supraliminal exposures. But I made two critical changes: First, the on-screen directions explicitly instructed the participant to “keep your eyes on the picture as it is flashed—even if it is one of the unpleasant pictures.” Second, participants
were given the option of participating in the study without the negative pictures. (There were no erotic trials in the Williams replication.) Savitsky conducted the experiment as a class exercise in a laboratory course in
experimental social psychology. Serving as the experimenter, he ran himself and the 17 students in the experiment; each student was then instructed to run 4 of his or her friends. This produced a total of 87 participants, 84 of whom experienced the negative trials. Collectively they obtained a
hit rate of 52.5% (t(83) = 1.57, p = .061) on the negative trials. More importantly, the positive correlation between hit rate and Emotional Reactivity was restored: The 32 emotionally reactive participants obtained a hit rate of 56.0%, t(31) = 2.66, p = .006. In particular, the 12 emotionally
reactive men in the sample achieved a very high hit rate of 59.7%, t(11) = 3.02, p = .006. The hit rate on the low-affect trials was at chance."
Does anyone know if this study has been published?
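As an aside, the one-tailed p-values quoted in that passage can be roughly re-derived from the reported t statistics and degrees of freedom. A minimal sketch, assuming Python with scipy available; the numbers are copied from the quote above:

from scipy.stats import t

# (label, t statistic, degrees of freedom) as quoted in the Bem passage above
reported = [
    ("all negative trials", 1.57, 83),          # quoted as p = .061
    ("emotionally reactive subset", 2.66, 31),  # quoted as p = .006
    ("emotionally reactive men", 3.02, 11),     # quoted as p = .006
]

for label, t_stat, df in reported:
    # upper-tail (one-tailed) probability for the reported t value
    print(label, "one-tailed p =", round(t.sf(t_stat, df), 3))

The values should come out close to the ones quoted, so at least the arithmetic in the passage looks internally consistent.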
It's the best approach that can be taken at the present time IMO. If there were no effect, then the number of experiments with positive results would be at chance level. Although meta-analyses can't be taken as "proof" of anything, I think they do show that the number of positive experiments in certain kinds of experiments is above chance.
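To make "above chance" concrete: under the null hypothesis each properly run study has roughly a 5% chance of crossing p < .05, so the count of significant studies can be compared against a binomial expectation. A rough sketch of that comparison, assuming Python with scipy; the study counts are invented purely for illustration, not taken from any actual meta-analysis:

from scipy.stats import binomtest

n_studies  = 50     # hypothetical number of studies collected
n_positive = 8      # hypothetical number reporting p < .05
alpha      = 0.05   # per-study false-positive rate expected under the null

result = binomtest(n_positive, n_studies, alpha, alternative="greater")
print("chance of seeing this many positives by luck:", round(result.pvalue, 4))

Of course this only means anything if the negative studies are counted too, which is the file-drawer problem raised further down.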
I understand your point, Linda. But remember that in your illustration, the one experiment in six (or thirteen) that succeeds by chance would have a p-value of 0.05.
If we stay with the precognitive habituation experiments, Bem's studies had a much more impressive p-value than that. Louie's successful experiment less so, but then he had a smaller N.
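Just to spell out that point: with several independent experiments and no real effect, the chance that at least one of them crosses p < .05 grows quickly. A back-of-the-envelope calculation in plain Python, using no data from the actual studies:

# Probability of at least one "significant" (p < .05) result among k null experiments
for k in (6, 13):
    print(k, "experiments:", round(1 - 0.95 ** k, 2))
# roughly 0.26 for 6 experiments and 0.49 for 13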
Do you think that meta-analyses are suited to resolving this kind of issue?
Also, you have the added problem that experiments are seldom exact replications. Experimental conditions are changed, which could legitimately affect the outcome of the experiment.
For example, I still don't understand why Louie et al decided to change the image exposure to supraliminal in their follow-up PH experiment. Experiments on the conventional mere exposure effect show that supraliminal exposures reduce the effect, and Bem's experiments show the same thing. This could be why they couldn't replicate their own findings: they changed the conditions.
.....why?
If you accept their word that what they are telling you they are doing is accurate and sincere, why would you not accept their word that their conclusions or methodology are sound?
Once you take people's word for it, you throw out everything else. You admit that you are a hardcore, blind believer.
It is extremely telling that you have no problems with Bem trying to shift the onus onto the skeptics, while changing the premises of the experiment with two critical changes.
Hello? That's not replication.
Because believing that a paper accurately and truthfully represents what actually went on during the experiment is a different issue from accepting that the conclusions and methodology are valid. Someone could have accurately and truthfully written a methods and results section that contains methodological flaws and makes unjustified conclusions based on accurate and truthful data.
I know that fraud happens in science. I just don't believe in cherry picking which experiments are fraudulent. How are we to know which ones are? Independent replication sorts this out I suppose.
How do I find out what the standing of a university and a professional journal is? Specifically, Laurentian University in Canada and the journal Perceptual and Motor Skills.
How do you know it's above chance (taking bias into consideration as well)? You don't know how many negative studies were left out that would have been included (because they would have been published) had they come out positive.
Thousands of parapsychology experiments have been performed. The ones that get noticed are those that have "significant" p-values. It's not unexpected to come up with "one in a thousand" results out of thousands of studies.
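That arithmetic is easy to check by simulation. A toy sketch, assuming Python with numpy and scipy; the number of studies and trials are arbitrary round numbers, not counts of real experiments:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_experiments = 3000    # stand-in for "thousands of studies"
n_trials      = 100     # trials per experiment, 50% hit rate under the null

hits = rng.binomial(n_trials, 0.5, size=n_experiments)
# one-tailed z-test of each experiment's hit rate against 50%
z = (hits - 0.5 * n_trials) / np.sqrt(n_trials * 0.25)
p = norm.sf(z)
print("experiments reaching p < .001 by chance alone:", int(np.sum(p < 0.001)))

With no effect at all, typically a few of the simulated experiments still come out looking like "one in a thousand" results.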
That doesn't even take into account the analytic flaws in the papers you referenced (for example, multiple comparisons without any adjustment of the p-values).
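For reference, the usual fix for that particular flaw is to adjust the reported p-values for the number of comparisons made. A small illustration, assuming Python with statsmodels installed; the p-values below are invented, not taken from the papers in question:

from statsmodels.stats.multitest import multipletests

raw_p = [0.006, 0.02, 0.04, 0.061, 0.30]    # invented example values

for method in ("bonferroni", "holm"):
    reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method=method)
    print(method, [round(p, 3) for p in adj_p], "reject:", list(reject))

After correction, several of the raw "significant" values no longer clear the .05 bar, which is the point of the criticism.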
There are always excuses. Bem came up with "precognitive boredom".
This reveals how willingly gullible you are. No, David, we do not take people's word for granted, no matter what they are saying.
If they say that what they are doing is A-OK, we check. Precisely the same way we check their results and methodology.
As for the file-drawer effect, I think anyone who has been even slightly involved with any kind of research should understand it, but obviously many don't. Personally I've written one small article for a scientific journal (it's about an algorithm), but for that one article about an algorithm that works, how many algorithms have I worked on that either did not work, or worked but produced results no better than what had already been published, or were simply of no general interest? Even I have no idea.

Ok, this field may be at one extreme, with areas requiring large planned studies at the other. But even in that case, we should expect that studies which don't show promise of results would be much more likely to drag out over time and eventually be disbanded for lack of resources. And I wouldn't even call that scientific dishonesty. A responsible researcher should not waste money.
Instead, I think the fault lies with the idea that it is possible to perform some sort of meta-proof by aggregating results from many studies and in this manner somehow enhance their significance. That idea is completely flawed. Either we can define an effect and a replicable way to test it, in which case it will give these results consistently; or we cannot, and there are multiple, poorly defined effects, with tests that have not been properly examined or that cannot be replicated. Adding those apples and oranges into one bowl does not produce anything of value.
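For what it's worth, the aggregation being criticized here is typically done with something like Stouffer's method, which pools per-study z-scores. A minimal sketch, assuming Python with scipy; the z-scores are invented, and the point is only to show how several individually unimpressive results can combine into a small pooled p, which is exactly the move that becomes questionable when the studies aren't measuring the same thing:

import numpy as np
from scipy.stats import norm

z_scores = np.array([1.2, 0.9, 1.5, 1.1, 1.4])   # invented; none significant on its own
z_combined = z_scores.sum() / np.sqrt(len(z_scores))
print("combined z =", round(z_combined, 2),
      "one-tailed p =", round(norm.sf(z_combined), 4))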
You're perfectly free to do that. But if you read a paper, how would you check that the author was reporting his methods and results section accurately and truthfully when the experiment has already been performed?
But how are researchers in parapsychology supposed to respond when a critic claims that the number of positive experiments is what we would expect by chance? If meta-analyses are not to be used in any way, it seems that parapsychologists have no way to answer such a criticism.
True. We don't know how many unpublished negative studies there might be. Well, within reason of course. But it is possible to estimate how many negative studies would be needed to nullify an overall positive result of a meta-analysis (I don't know how reliable that estimation process is).
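The estimate usually referred to here is Rosenthal's "fail-safe N": the number of unpublished, zero-effect studies that would have to exist to pull a combined (Stouffer) z below the significance cutoff. A rough sketch, assuming Python with numpy; the per-study z-scores are invented for illustration, and the reliability caveat just mentioned still applies:

import numpy as np

z_scores = np.array([2.1, 1.6, 2.4, 1.9])   # invented per-study z-scores
k = len(z_scores)
z_crit = 1.645                               # one-tailed .05 cutoff

# Combined z with N extra zero-effect studies: sum(z) / sqrt(k + N) = z_crit
fail_safe_n = (z_scores.sum() ** 2) / z_crit ** 2 - k
print("fail-safe N is roughly", int(fail_safe_n))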
Isn't this what meta-analysis is supposed to address?
Could you explain this?
Eh? That term was used to describe the unexpected results of the low-affect trials with more than 8 subliminal exposures.