File Drawering on Psych Med Studies

Dancing David

I just heard on NPR that a study has been published indicating that studies with positive findings about psychiatric medication are published at a rate of about 90%, compared to studies with negative findings, which are published at a rate of about 8%.

So there is a definite bias in publishing research that supports the use of medication.

:(
 
I was listening to the same story. It surprised me only in that I thought someone on this forum had said that Big Pharma has to pre-register all med trials to avoid this.
 
Man, that's some bulging file drawer if it's true.

~~ Paul


Not really. I think the same story also said that the number of negative reports is smaller than the number of positive ones, but I will have to wait until they post it.

The main thing I want to know is how they measured the 'effect' versus placebo.

They probably used a really gross tool like the Beck Depression Inventory, which is good at detecting depression but not at grading it.
 
I was listening to the same story. It surprised me only in that I thought someone on this forum had said that Big Pharma has to pre-register all med trials to avoid this.

Yep. That's how they found out about the publication bias, isn't it? By looking at the results of the registered trials vs. the results of the published trials?
 
I was listening to the same story. It surprised me only in that I thought someone on this forum had said that Big Pharma has to pre-register all med trials to avoid this.

That might have been me.

My impression from the wording in the OP is that a "positive" paper submitted to a journal has a 90% chance of being published, while a "negative" paper has an 8% chance of being published.

As opposed to: 90% of published papers are positive, 8% of published papers are negative.
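
To make the difference concrete, here's a rough back-of-the-envelope sketch (the submission counts and rates below are invented for illustration, not taken from the story):

```python
# Hypothetical submission counts, just to show the two readings are different quantities.
submitted_positive = 30    # positive papers submitted to journals (made up)
submitted_negative = 70    # negative papers submitted (made up)

# Reading 1: acceptance rates depend on the result.
p_publish_given_positive = 0.90
p_publish_given_negative = 0.08
published_positive = submitted_positive * p_publish_given_positive   # 27.0
published_negative = submitted_negative * p_publish_given_negative   # 5.6

# Reading 2: the share of the *published* literature that is positive.
share_positive = published_positive / (published_positive + published_negative)

print(f"Chance a positive submission gets published: {p_publish_given_positive:.0%}")  # 90%
print(f"Share of published papers that are positive: {share_positive:.0%}")            # ~83%
```

The two numbers only coincide for particular mixes of submissions, which is why it matters which reading the story intended.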

I'll have to get a copy of the report the program was discussing, because I think they may be confusing publication bias with file-drawering.

Journals can't publish every study that is submitted, and there is evidence that they favour trials of new treatments that show positive results over trials of new treatments that show negative results, because a positive result is evidence that treatment may need to change, and practitioners should know about it. Sometimes an accepted treatment is tested and shown not to work; that too is new information, and is an example of a reason to publish a negative study. This is publication bias: most trials test new treatments, and research on a new treatment is more likely to be published if it's positive. That's all.

File-drawering is different. This is the tendency for a research party such as a pharmaceutical company to start research and either stop it when the results look unpromising, or to complete the research, find unsatisfactory results, and simply discard it. The file drawer problem is not just that they are unavailable in high-profile journals, but that they have not been submitted to any journal at all and are completely unavailable. In this day and age of crap journals and the internet, if you really want something to be available, it can be. File-drawering is caused by the deliberate decision to keep these results from inclusion in literature reviews.

Registration does not mean that all trials will be published in peer-reviewed journals, but it does increase the likelihood that trials which are started will be known, whether they get published or not. In principle, the trial results would still be available in the corporate archives and available for review and consideration. If not, then there's a paper trail that the company has hidden results, and it's reasonable to assume the results were negative.
 
Yep. That's how they found out about the publication bias, isn't it? By looking at the results of the registered trials vs. the results of the published trials?
Possibly, but I didn't catch it. Then again, I was busy with other things and only half-listening.

And with that, I have exceeded my expertise in this area and so bow out, with thanks to Blutoski for his post.
 
Just to give an example from the Skeptical side, we're working right now on an attempt to replicate Sheldrake's phone precognition experiment.

Now, skeptics are more likely to be biased toward expecting a negative result, so the positive-negative situation is a bit reversed for us, but the principle is the same.

If the results are positive (if the precognition hypothesis is supported), I might be tempted to pretend the whole thing didn't happen. Not sure how I'd do this, but let's say I buried the results so nobody could know about it. That'd be file-drawering.

OTOH, let's say that we got a negative result (precognition not supported) and I submitted it to a bunch of psi publications. If they rejected the submissions for reasons other than failing peer-review (ie: if they agreed the methods were sound &c, but just felt it wasn't the right time or place to publish these results), that'd be publication bias. We could always publish it in a Skeptical magazine instead, and it would be available for anybody who wanted to do a literature review.
 
If the results are positive (if the precognition hypothesis is supported), I might be tempted to pretend the whole thing didn't happen. Not sure how I'd do this, but let's say I buried the results so nobody could know about it. That'd be file-drawering.

Mostly, what I've seen happen is a bit more complicated. Let's say you do the experiment once, and the results come out positive. You think about it for a bit, locate a potential design flaw, tweak the experiment, then come out with a negative result. Your original positive result gets file-drawered, and you publish your negative result in a peer-reviewed journal. Or, perhaps you get another positive result, find another design flaw, and repeat the test again.

This is how science works, right? Continually eliminating sources of bias from experiments to get better results. But it also leads to "file drawer" results. Your preconceived notion of what the results should be dictates when you stop looking for experimental flaws. Now, a good scientist will eventually say, "Maybe my preconceived notion was wrong" in the face of repeated positive results, and an even better scientist will ask, "How might this negative result have been biased by the experimental set-up?" after a negative result, but this does not happen all the time. And in fields such as parapsychology, where the statistical difference between a positive and negative result is so small, this sort of unintentional file-drawer bias is even more likely to occur.
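
Here's a rough sketch of how that can skew the record even when every individual test is honest. It assumes there is no real effect at all (so a valid test's p-value is uniform), and the three-attempt cap and the direction of the expectation are arbitrary choices for illustration:

```python
import random

ALPHA = 0.05        # significance threshold
TRIALS = 100_000    # simulated researchers
MAX_ATTEMPTS = 3    # how many times someone might tweak-and-rerun

def publish_first_result():
    """Report the first result, whatever it is."""
    return random.random() < ALPHA  # True = 'positive' (significant)

def rerun_until_positive():
    """File-drawer runs that don't match the expectation and try again."""
    return any(random.random() < ALPHA for _ in range(MAX_ATTEMPTS))

honest_rate = sum(publish_first_result() for _ in range(TRIALS)) / TRIALS
biased_rate = sum(rerun_until_positive() for _ in range(TRIALS)) / TRIALS
print(f"Positive rate, publish first result:   {honest_rate:.1%}")  # ~5%
print(f"Positive rate, rerun until 'positive': {biased_rate:.1%}")  # ~14%
```

The same mechanism works in the other direction for someone who keeps rerunning until the result comes out negative; either way, the stopping rule tracks the preconceived notion.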

(I don't mean to pick on you, blutoski - I merely used "you" in the illustrative sense. And of course, I would never accuse skeptics of exhibiting such biases more than believers do.)
 
Mostly, what I've seen happen is a bit more complicated. Let's say you do the experiment once, and the results come out positive. You think about it for a bit, locate a potential design flaw, tweak the experiment, then come out with a negative result. Your original positive result gets file-drawered, and you publish your negative result in a peer-reviewed journal. Or, perhaps you get another positive result, find another design flaw, and repeat the test again.

This is how science works, right? Continually eliminating sources of bias from experiments to get better results. But it also leads to "file drawer" results. Your preconceived notion of what the results should be dictates when you stop looking for experimental flaws. Now, a good scientist will eventually say, "Maybe my preconceived notion was wrong" in the face of repeated positive results, and an even better scientist will ask, "How might this negative result have been biased by the experimental set-up?" after a negative result, but this does not happen all the time. And in fields such as parapsychology, where the statistical difference between a positive and negative result is so small, this sort of unintentional file-drawer bias is even more likely to occur.

(I don't mean to pick on you, blutoski - I merely used "you" in the illustrative sense. And of course, I would never accuse skeptics of exhibiting such biases more than believers do.)

I think I understand your point: researchers can file-drawer unfavourable results with sincere intentions - not just deception.

I understand this concern, but propose that if there's a real methodological flaw, the paper would be rejected by a reviewer anyway, and the impact on total knowledge is zero.

It's the rejection of valid research results that impacts science's understanding of the phenomenon in question.

Or are you saying that maybe it's common that a researcher gets an unfavourable result and then rationalizes it away with an excuse of bad methodology, even though the study may be OK? I can see how that would be a problem, as the removal of good research from the body of literature impacts understanding.

However, this problem would be significantly mitigated with the prior registration approach, since the study would be available for peers to examine and see if the researcher's methodology rejection was sensible.
 
Blutoski said:
I understand this concern, but propose that if there's a real methodological flaw, the paper would be rejected by a reviewer anyway, and the impact on total knowledge is zero.
It would be nice if they did the parapsi world a favor and published the flaw. Then maybe it wouldn't happen again.

~~ Paul
 
It would be nice if they did the parapsi world a favor and published the flaw. Then maybe it wouldn't happen again.

~~ Paul

Absolutely, so I stand corrected when I said that the impact of suppression would be zero.

One of the things about science methodology is that it is its own field of study that benefits from the discussion that follows failed experiments.
 
I understand this concern, but propose that if there's a real methodological flaw, the paper would be rejected by a reviewer anyway, and the impact on total knowledge is zero.

Hopefully! But flaws may be of the type that a reviewer would not necessarily spot. I don't know much about medical trials, but I do understand that pharmaceutical companies have a vested interest in getting positive results!

However, this problem would be significantly mitigated with the prior registration approach, since the study would be available for peers to examine and see if the researcher's methodology rejection was sensible.

The methodology in trials like this is rather fixed, I believe. However, I think the problem generally stems from the fact that a positive and negative result are not black and white. Drug trials often rely on measures of statistical significance between similar populations, right? So it's possible that one test shows a statistically significant difference, and another test won't. The drug companies aren't going to keep testing drugs after they get positive results. This is on top of any sort of intentional cover-up of bad trials.
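
A toy simulation of what I mean (the response rates and sample size are invented, not from any real trial): two identically designed trials of the same drug can land on opposite sides of p < 0.05 purely by sampling luck.

```python
import math
import random

N = 100                          # patients per arm (made up)
P_PLACEBO, P_DRUG = 0.30, 0.45   # true response rates (made up; the drug really works)

def run_trial():
    """Simulate one placebo-controlled trial and return its two-sided p-value."""
    placebo = sum(random.random() < P_PLACEBO for _ in range(N))
    drug = sum(random.random() < P_DRUG for _ in range(N))
    p1, p2 = placebo / N, drug / N
    pooled = (placebo + drug) / (2 * N)
    se = math.sqrt(pooled * (1 - pooled) * 2 / N)
    z = (p2 - p1) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approximation

for i in range(5):
    p = run_trial()
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"Trial {i + 1}: p = {p:.3f} -> {verdict}")
```

With these numbers the statistical power is only around 60%, so a run of five trials will typically include both significant and non-significant results even though the drug's effect never changes.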
 
The methodology in trials like this is rather fixed, I believe.

The only thing that's fixed is that trials must be placebo-controlled, double-blinded, and randomized.

Beyond that, methodology is very dependent on the indication and drug.



However, I think the problem generally stems from the fact that a positive and negative result are not black and white.

That's the definition of an acceptable study, though: unambiguous outcome metrics. One can quibble over binary thresholds, of course.

I wouldn't call it a 'problem,' though.




Drug trials often rely on measures of statistical significance between similar populations, right? So it's possible that one test shows a statistically significant difference, and another test won't.

Happens all the time.

Most Phase I trials come out negative, but those that come out positive go to Phase II. Most Phase II trials are negative. It follows that most Phase II trials contradict the Phase I outcome.
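
With some made-up round numbers (not real approval statistics), just to show the arithmetic:

```python
# Hypothetical figures for illustration only.
phase1_trials = 100
phase1_positive = 30                        # suppose 30% of Phase I trials are positive
phase2_trials = phase1_positive             # only the positives advance to Phase II
phase2_negative = int(phase2_trials * 0.6)  # suppose 60% of Phase II trials are negative

print(f"{phase2_negative} of {phase2_trials} Phase II trials "
      f"contradict their positive Phase I result")  # 18 of 30
```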




The drug companies aren't going to keep testing drugs after they get positive results. This is on top of any sort of intentional cover-up of bad trials.

Yes, that's the point of preregistration: to expose 'cover-ups,' and make them more or less impossible.

The other thing to keep in mind is that often enough, other parties are retesting their drugs, too. eg: competitors, academia. Nobody just takes, say, Roche's word for it.
 
That might have been me.

My impression from the wording in the OP is that a "positive" paper submitted to a journal has a 90% chance of being published, while a "negative" paper has an 8% chance of being published.

As opposed to: 90% of published papers are positive, 8% of published papers are negative.

I'll have to get a copy of the report the program was discussing, because I think they may be confusing publication bias with file-drawering.
That was my statement, not the story's or the article's. I did it deliberately to generate interest and grab attention. Shameless pandering on my part.

Mea culpa.

It is a publication bias for sure.
 
That was my statement, not the story's or the article's. I did it deliberately to generate interest and grab attention. Shameless pandering on my part.

Mea culpa.

It is a publication bias for sure.

Fair enough.

Just to repeat myself: there may be innocuous reasons for publication bias.

One of the debates a few decades ago was about the definition of 'information'. An example is: when you read a newspaper, would you expect to find a headline like "Sahara Hot and Dry?" It is hot and dry, and this information is 'submitted' to weathermen every day. But they don't publish it.

On the other hand, if it were to suddenly snow on the Nile, a headline like "Sahara Cold and Wet" would be expected. It'd be published in a New York minute. Wuxtry!

So, the point is that there doesn't have to be an intentional suppression of negative findings so much as the evaluation that their publication is not very important.

The point is that a journal gets more submissions than they can print, and they have to select them for informational value. Trials that show a new drug doesn't work may not be useful to the journal's audience, whereas a trial showing a new drug does work will be valuable - it can lead to a scramble for duplication studies and move the prospective drug forward. Some MDs may even begin to prescribe it off-label.

The situation where a negative study would be valuable is if the drug in question is already in common use, and now there's evidence it doesn't work or doesn't work well compared to another option. These are much rarer than positive results from a study of a new drug.
 
