
Comments on homeopathic paper please

RationalVetMed
As usual I'm trawling through papers which homeopaths claim support their position. I've come across this one:

Linde, K., Jonas, W.B., Melchart, D., Worku, F., Wagner, H., and Eitel, F. (1994) Critical review and meta-analysis of serial agitated dilutions in experimental toxicology. Human & Experimental Toxicology, Vol. 13, No. 7, pp. 481-92.

The abstract is here - http://www.ncbi.nlm.nih.gov/pubmed/7917505

The authors say this is an "overview and quantitative meta-analysis of all experimental literature on the protective effects of serial agitated dilutions (SADs) of toxin preparations" and, after a sifting and sorting exercise, they find: "Four of 5 outcomes meeting quality and comparability criteria for meta-analysis showed positive effects from SAD preparations... Average percent protection over controls in these preparations was 19.7 (95% CI 6.2-33.2)".

Unfortunately I don't have access to the full paper so I don't know the details, particularly what the dilutions were. It is cited in an article about hormesis so it may be that they weren't dilute enough to qualify as real homeopathy.

Any and all contributions welcome and greatly appreciated. And if anyone has a copy of the paper I'd be grateful.

Cheers,

Yuri
 
It's just a meta-analysis, with no independent experimental work involved. They looked at 105 publications about homeopathy and concluded that most of them were garbage, but that in the high-quality published studies an effect was found more often than not.

This is, obviously, explainable entirely by publication bias, so as evidence for an actual medical effect it's not very persuasive.
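To put a rough number on that, here's a toy simulation (my own invented parameters, nothing to do with Linde's actual data): the true effect is fixed at zero, only the runs that happen to come out positive and "significant" get "published", and the published subset still looks impressive.

```python
import math
import random
import statistics

random.seed(1)

def run_study(n=20, true_effect=0.0):
    """Simulate one two-arm study; return (mean difference, two-sided p, normal approx.)."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(diff / se) / math.sqrt(2))))
    return diff, p

all_studies = [run_study() for _ in range(1000)]
# Only the "positive and significant" results get written up and published.
published = [(d, p) for d, p in all_studies if d > 0 and p < 0.05]

print("true effect:                  0.0")
print("mean effect over all studies: %+.3f" % statistics.mean(d for d, _ in all_studies))
print("mean effect, published only:  %+.3f" % statistics.mean(d for d, _ in published))
print("'positive' studies published: %d of %d" % (len(published), len(all_studies)))
```

The literature you end up reading is the third line, not the second.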
 
Yuri, I have full access to this paper; inbox me your email address and I can try to send it to you.
 

It would be stronger evidence if it was shorter.

On a more serious note, I don't have access to the full paper but the abstract has some massive red flags in it. For example:
We found 105 publications...

Four of 5 outcomes meeting quality and comparability criteria

They looked at 105 publications, and found a grand total of 5 that were judged to be any good. And on top of that they go to great lengths to make it clear just how terrible the rest of them were. This doesn't say anything about whether homeopathy works, it just says how pathetic the state of research by homeopaths is.

Edit: Or more accurately, how pathetic it was. It's nearly 20 years old. I think it would be more interesting to see what a more recent analysis shows has happened since then. Although I wouldn't get my hopes up to see anything different.
 
They looked at 105 publications, and found a grand total of 5 that were judged to be any good.


I recently polled 105 people at random. I judged about 5 of them to be sane and sober based on their positive response to my question, "Am I, paiute, the Queen of England?"

After analysis of these 5, we can conclude with reasonable certainty that I am indeed Her Majesty.
 
Thought I would be able to simply comment on my feelings about driftwood washed ashore. Didn't realize there was reading involved and unrelated.

In case the above is still open for discussion...I'm in favor of it.
 
So - what is homeopathic paper, then?

I'll get my coat....
If you take a pen and write "Pulsatilla 30C" on it, it becomes homeopathic paper. You then place the piece of paper under a glass of water in order to allow the writing to vibrate healing energy into the water, then drink.

I'm not making this up. Homeopathic paper remedies are genuinely used (rarely, though, I think - this particular bit of bonkersness is, as far as I know, unlike many other bits of bonkersness, not standard for homeopaths).
 
There's quite a bit of discussion of it on this page. It seems to be one of DUllman's favourites.

That's what I was thinking of when the paper was cited by Yuri. I'll take a look and see if I can remember any extra thoughts from when it got discussed at Wikipedia.
 
Yes, I remember now. See the comments by OffTheFence under the section "Cazin (1987)". Cazin was one of the papers included in Linde's meta-analysis. DU insisted it was "high quality" because of the way Linde scored the papers. In fact it was an example of Linde using a bizarre scoring system in which studies could fail on various basic grounds of experimental design but still score highly under their scheme.
 
I never "met-an-analysis" I've liked.

Seriously, there is an inherent problem with meta-analyses, including the fact that differences in methodology between studies often force the authors to use data conventions to make the studies "fit" together. Different endpoints require, for example, the authors of the meta-analysis to change the statistics so that the different papers can be co-analyzed.

Most dubious are collections of several papers that show no benefit for a particular variable that suddenly show an effect when pooled together.

In my opinion (and many others), nothing trumps a single, adequate and well-controlled, double-blinded study with robust statistics.

~Dr. Imago
 
Thanks folks, for the help. I'm now trying to wade through the paper but it seems to me, from the discussions, that there are questions about what exactly the authors considered a good quality study, how many of the trials included were blinded or randomised, and also about the independence of the lead author and the weight of the journal. Difficult things to overlook if you are claiming that plain water counteracts lethal toxins.

Those wiki discussions hurt my head :o but thanks for the links.

Cheers,

Yuri
 
I never "met-an-analysis" I've liked.

Seriously, there is an inherent problem with meta-analyses, including the fact that differences in methodology between studies often force the authors to use data conventions to make the studies "fit" together. Different endpoints require, for example, the authors of the meta-analysis to change the statistics so that the different papers can be co-analyzed.

Most dubious are collections of several papers that show no benefit for a particular variable that suddenly show an effect when pooled together.

In my opinion (and many others), nothing trumps a single, adequate and well-controlled, double-blinded study with robust statistics.

~Dr. Imago

This is fundamentally misguided. The best possible way to figure out the answer to a question is to look at all the data, not to throw out most of the data and focus on a single study. Any time you throw away data you lose relevant information.

Also it's very, very basic statistics to understand that with a larger sample you can demonstrate the existence of a smaller effect at a given p value. There is absolutely nothing dubious whatsoever about pooling a number of studies that show no statistically significant benefit (which is vitally different from showing no benefit at all) to arrive at the conclusion that when you pool them there does turn out to be a statistically significant benefit.

This isn't an academic point, by the way, it's a matter of life and death. Lots of people have died in the past because while there was enough evidence in the world to show that a specific treatment saved lives, nobody had got all the evidence in one place yet by doing a meta-analysis. People were sitting around waiting for one great big thumping trial to decide the issue, because people who don't quite get the statistics like the simplicity of a great big thumping trial, and meanwhile people died preventably.
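If it helps, here's a rough sketch of that second point (toy numbers, not anyone's real trial data): a genuine but modest benefit, ten small studies that mostly come out "not significant" on their own, and a simple fixed-effect (inverse-variance) pooling that picks the effect up anyway.

```python
import math
import random
import statistics

random.seed(2)

def one_study(n=30, true_effect=0.2):
    """One small two-arm trial of a treatment with a genuine but modest benefit."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
    return diff, se

def p_value(z):
    """Two-sided p-value from a z score (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

studies = [one_study() for _ in range(10)]

# Individually the small studies are underpowered and mostly "not significant".
for i, (diff, se) in enumerate(studies, 1):
    print("study %2d: diff=%+.2f  p=%.3f" % (i, diff, p_value(diff / se)))

# Fixed-effect meta-analysis: weight each study by 1/variance, then pool.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print("pooled:   diff=%+.2f  p=%.4f" % (pooled, p_value(pooled / pooled_se)))
```

None of the individual trials "proves" anything, but the pooled estimate typically does, because the effect was real all along and only the power was missing.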
 
This is fundamentally misguided. The best possible way to figure out the answer to a question is to look at all the data, not to throw out most of the data and focus on a single study. Any time you throw away data you lose relevant information.

Also it's very, very basic statistics to understand that with a larger sample you can demonstrate the existence of a smaller effect at a given p value. There is absolutely nothing dubious whatsoever about pooling a number of studies that show no statistically significant benefit (which is vitally different from showing no benefit at all) to arrive at the conclusion that when you pool them there does turn out to be a statistically significant benefit.

This isn't an academic point, by the way, it's a matter of life and death. Lots of people have died in the past because while there was enough evidence in the world to show that a specific treatment saved lives, nobody had got all the evidence in one place yet by doing a meta-analysis. People were sitting around waiting for one great big thumping trial to decide the issue, because people who don't quite get the statistics like the simplicity of a great big thumping trial, and meanwhile people died preventably.

Yes. I inherently agree with what you are saying.

The problem with the meta-analysis methodology is that it can take several negative studies and find a positive result.

A single study, at least a well-controlled one, is structured to test a premise and determine, within its parameters, whether that premise is proven. When you pair studies that have different methodologies you have to make concessions that were not inherent in the original study. This further "dirties" the data, and it can make the conclusions drawn by a particular meta-analysis actually less robust, not more.

An excellent primer:

http://www.ccjm.org/content/75/6/431.full

~Dr. Imago
 
Pardon my ignorance, but I've often seen this mysterious word "meta-analysis" associated with laboratory studies of peculiar things like ESP. I've always thought there was something a bit fishy about it. Perhaps I'm misunderstanding the way it works, and if so, maybe an expert will correct me?

But as I understand it, the way it works is something like this. If 100 people independently do experiments to prove the existence of something, and all, or nearly all of them get a tiny but definite result, then it looks very much as if whatever you're looking for is very hard to detect but does exist.

If, on the other hand, 99 experiments show no evidence at all that whatever you're looking for exists, but one gives quite an impressive result, most scientists would tend to suspect that maybe that one experiment was flawed, especially if it's designed to detect something intrinsically improbable like telepathy.

However, meta-analysis of the second set of data will average out the results of those 99 unsuccessful experiments and that one suspiciously good one, and the combined result will be identical to what you'd get if every experiment had shown a very small positive result. Isn't there a risk of a small number of flawed experiments contaminating the data from a huge number of better-designed ones that gave completely different results?

And is it also the case that meta-analysis is most useful in evaluating results from experiments which for some reason cannot reliably be replicated, such as those parapsychology tests which allegedly won't work if anybody in the room doesn't believe in them strongly enough? Though I suppose that also applies to perfectly orthodox experiments with things like medicine which has to interact in a very complicated way with living tissue. However, it does seem to me that you could use this method to get some very silly results. Would anybody care to clarify whether this is true?
 
Yes. I inherently agree with what you are saying.

The problem with the meta-analysis methodology is that it can take several negative studies and find a positive result.

A single study, at least a well-controlled one, is structured to test a premise and determine, within its parameters, whether that premise is proven. When you pair studies that have different methodologies you have to make concessions that were not inherent in the original study. This further "dirties" the data, and it can make the conclusions drawn by a particular meta-analysis actually less robust, not more.

An excellent primer:

http://www.ccjm.org/content/75/6/431.full

~Dr. Imago
Re the bit I italicised - isn't that a problem with badly-conducted meta analyses rather than meta analyses per se?

See, e.g., here: http://www.cochrane-net.org/openlearning/html/mod12-2.htm

This is called Simpson's paradox (or bias), and is why we don't pool participants directly across studies.
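Just to illustrate what I mean (the numbers below are the classic kidney-stone textbook example, relabelled as two hypothetical trials, not data from any real meta analysis):

```python
# Classic kidney-stone numbers, relabelled as two hypothetical trials.
trials = {
    "trial 1": {"treatment": (81, 87),   "control": (234, 270)},
    "trial 2": {"treatment": (192, 263), "control": (55, 80)},
}

def rate(successes, total):
    return 100.0 * successes / total

for name, arms in trials.items():
    print("%s: treatment %.0f%% vs control %.0f%%"
          % (name, rate(*arms["treatment"]), rate(*arms["control"])))

# Naively pooling the raw participants across the two trials flips the result.
t_succ = sum(arms["treatment"][0] for arms in trials.values())
t_tot = sum(arms["treatment"][1] for arms in trials.values())
c_succ = sum(arms["control"][0] for arms in trials.values())
c_tot = sum(arms["control"][1] for arms in trials.values())
print("pooled : treatment %.0f%% vs control %.0f%%"
      % (rate(t_succ, t_tot), rate(c_succ, c_tot)))
# Treatment wins in both trials (93% vs 87%, 73% vs 69%) yet loses when the
# participants are lumped together (78% vs 83%) - which is why meta-analyses
# combine per-study effect estimates instead of pooling patients directly.
```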

Or perhaps I've misunderstood you and you are actually complaining about the fact that a meta analysis increases the sample size and therefore the power? (Earlier in the thread, you wrote that: "Most dubious are collections of several papers that show no benefit for a particular variable that suddenly show an effect when pooled together.") But that doesn't seem right, as that is one of the points made by the author of this excellent primer: http://www.ccjm.org/content/75/6/431.full which you link to approvingly. They see this as a positive virtue of meta analysis.

The author also writes that "A well-designed meta-analysis can provide valuable information for researchers, policy-makers, and clinicians."

And there's this in the conclusions:

Like many other statistical techniques, meta-analysis is a powerful tool when used judiciously; however, there are many caveats in its application. Clearly, meta-analysis has an important role in medical research, public policy, and clinical practice. Its use and value will likely increase, given the amount of new knowledge, the speed at which it is being created, and the availability of specialized software for performing it.

They're saying that meta analysis is a powerful tool that plays an important role, but pointing out that some caution is warranted as such analyses can be badly done - or can be inappropriate in some circumstances.

I don't think that your position (as stated earlier in the thread) that "nothing trumps a single, adequate and well-controlled, double-blinded study with robust statistics" is supported at all by anything you've written above or by the link you provided.
 
Pardon my ignorance, but I've often seen this mysterious word "meta-analysis" associated with laboratory studies of peculiar things like ESP. I've always thought there was something a bit fishy about it. Perhaps I'm misunderstanding the way it works, and if so, maybe an expert will correct me?

The key point is that if you make a study bigger, you can decrease the false negative rate, without increasing the false positive rate.

In a meta-analysis, you take multiple studies and aggregate the results so you effectively have 1 big study. This 1 big study, when analysed together, will have a lower false negative rate than a bunch of small studies. In addition, because you are now only doing 1 analysis, your chance of getting "1 or more false positives" is less.
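As a back-of-the-envelope illustration of that last point (assuming a 5% significance threshold and independent studies, nothing more):

```python
# If the treatment truly does nothing and each study is analysed separately at
# alpha = 0.05, the chance of at least one spurious "significant" result grows
# with the number of studies; a single pooled analysis keeps it at 5%.
alpha = 0.05
for n_studies in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** n_studies
    print("%2d separate analyses: %.0f%% chance of >=1 false positive"
          % (n_studies, 100 * p_any))
print("  1 pooled analysis:   %.0f%% chance of a false positive" % (100 * alpha))
```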

In your example, you could take 99 experiments that show no effect, pool them, and find that there is an effect which was too small to be detected in each study alone. As you rightly point out, however, a single strongly "positive" study can skew the results of the meta-analysis, resulting in a clearly positive overall result. You would also get the same with 100 experiments all showing a very weak result.

However, in the latter case, the result is more likely to be correct, for reason of consistency.
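A toy comparison of those two situations (completely made-up effect sizes, just to show why consistency matters):

```python
import statistics

# Scenario A: 99 null results plus one suspiciously strong positive study.
scenario_a = [0.0] * 99 + [1.0]
# Scenario B: 100 studies all showing the same small positive effect.
scenario_b = [0.01] * 100

for name, effects in (("A: 99 nulls + 1 outlier", scenario_a),
                      ("B: 100 weak positives  ", scenario_b)):
    print("%s  mean=%.3f  sd=%.3f  max=%.2f"
          % (name, statistics.mean(effects), statistics.stdev(effects), max(effects)))
# Both scenarios have the same naive mean (0.010), but the spread and the single
# extreme value give scenario A away - that inconsistency is what heterogeneity
# and outlier checks in a meta-analysis are designed to flag.
```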

A good meta-analysis will do a more in-depth analysis, looking for various sources of bias, checking for outliers (e.g. analysing the results to see if there are one or more studies that give completely different results to all the others), and checking whether there are other problems (such as publication bias - negative studies tend not to get published, just forgotten about, even if good; positive studies tend to get published, even if garbage).

Meta-analysis is most useful for studies which have been replicated multiple times, but where there does not seem to be an overall consensus, and where the individual studies are relatively small. Here, you already have a lot of data to work with (from lots of replications) and you just need to aggregate it.

The problem that crops up is that not all the studies are direct replications - often the study designers want to put their unique spin on a problem, so they modify a previous study design. This makes it difficult to aggregate the results, or can introduce additional biases.
 
