Anyone know about this homeopathy trial?

Barbrae said:
Physician assessments were also more favourable for children who had received homoeopathic treatment (4.6-2.0 and 3.9-2.7; p<0.001).
But for some reason the abstract doesn't mention the physician assessments for adult patients:
The analyses ... indicated greater improvement in patients' assessments after homoeopathic versus conventional treatment
Perhaps the children were more susceptible to "mummy'll kiss it better."
 
Mojo said:

Perhaps the children were more susceptible to "mummy'll kiss it better."

I confess I regularly use this tactic with my own children (well, "Daddy'll kiss it better", anyway).

It is extremely effective - a bit of a fuss and attention, a feeling from the child that someone they trust to help them is doing something positive to help them (even though they are not), and a bit of counterstimulus to take their mind off the problem that is causing them discomfort... Result = Magic!

Does this mean KIB ("Kiss it better") has any medical/scientifically valid effect? - No.
In fact KIB might well outperform homeopathy in a "comparative" trial.

I suggest the Department of Health immediately pumps a few £million into "KIB-integrated health care pathways". It's also cheaper than homeopathy.
 
Right, here we go. I've been on holiday for a couple of weeks and the reprint has arrived in my absence.

It is as bad as you might expect.

Intro:

"A pioneering meta analysis of 89 [placebo-controlled trials], selected on the basis of their quality, found a statistically significant overall effect in favour of homoeopathy [7]"

Guess what reference 7 is. Yep, it's Linde (1997). It's the only one they cite and they effectively imply the old lie that homeopaths tell about this paper showing a greater effect for homeopathy in the better studies, whereas in truth the opposite is true. Clearly their literature searches have not strayed far beyond the usual hom websites!



"An alternative approach is that of outcomes research, which focuses on the results of homoeopathic treatment in everyday medical practice"


Yeah, "alternative" in the sense that this is the crappy methodology that properly controlled trials superseded.


Methods:

"In this prospective, multicentre, parallel group, comparative cohort study..."

Ooh, look at all those big adjectives, just like you'd see in a real scientificky paper. Unfortunately,

"Patients were first approached at the doctor's practice and had thus already made their own choice of therapy; accordingly, the study was open and non-randomised"

which really means that we should read no further because whatever they found is going to be unreliable. But, I kept reading.

"adults...presenting with the selected chronic disorders headache, lower back pain, depression, insomnia or sinusitis, and children...presenting with bronchial asthma, atopic dermatitis or allergic rhinitis"

In other words, the usual suspects prone to false reporting and psychosomatic effects.

They did lots of statistics, so I think they must have bought a computer. See folks, this is progress; Hahnemann could never have dreamed of having such a thing to invent his results with, and had to do it all the old manual way.

Results:

Obviously they report improvements in symptoms over time. The important thing to note is that the time points chosen were 6 and 12 months for patient self-assessments and 12 months for physician assessments, which is the kind of period over which these sorts of problems might reasonably be expected to improve after an initially severe phase. They claim significant improvements overall for both conventional and hom groups over time. Specifically, they claim a relatively greater benefit with hom at both time points for adults and children on patient assessment, but only for children with the physician assessment.

Homeopathy was significantly cheaper. Well, d'uh!

Discussion:
They report that "adjusted analyses" were required for baseline differences between the cohorts, but the methods are not explained and the data are not reported.

"the design closely reflects regular clinical practice, so that outcome and cost measurements provide a more realistic picture than can be expected in a randomised trial"

We have a name for that kind of study: data dredge.

"Because conventional therapy can be viewed as an active control, there was the initial possibility that homoeopathic treatment might be found significantly inferior to conventional therapy. Thus, it is remarkable that homoeopathic treatment was never shown to be inferior in this study."

Not really. All you have to do is choose problems for which conventional medicine has no complete answers and assess your groups over long enough intervals that spontaneous improvements can be expected.
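To see how far regression to the mean alone can take you, here's a toy simulation (purely illustrative numbers, nothing from the paper): patients enrol only while their chronic symptoms are flaring, so the group's average severity score falls by follow-up even though no treatment does anything at all.

```python
import random

random.seed(1)

def simulate_cohort(n=2000):
    """Toy model: patients enrol only while their chronic symptoms flare.

    Each patient has a stable long-term severity; day-to-day scores
    fluctuate around it. Enrolling only those currently above a severity
    threshold guarantees the group's average score falls by follow-up,
    with zero treatment effect built in.
    """
    baseline, follow_up = [], []
    for _ in range(n):
        usual = random.gauss(5.0, 1.0)           # long-term mean severity
        at_entry = usual + random.gauss(0, 2.0)  # score on the day they enrol
        if at_entry < 6.0:                       # only currently-severe patients enrol
            continue
        at_12_months = usual + random.gauss(0, 2.0)  # later score, no treatment at all
        baseline.append(at_entry)
        follow_up.append(at_12_months)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(baseline), mean(follow_up)

entry_mean, later_mean = simulate_cohort()
print(f"mean severity at entry: {entry_mean:.2f}, at 12 months: {later_mean:.2f}")
# both the hom and the conventional cohort would show this same spurious "improvement"
```

Run it and the "improvement" over 12 months drops out of the selection criterion alone; neither cohort needed a working remedy to produce it.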

They then witter on for half a page about how their adjustments "allowed for self-selection by adjustment for baseline characteristics".

See, you don't need proper controls; just do your study badly and adjust the baseline afterwards.

"While the study demonstrates differences in favour of homoeopathic therapy, it cannot explain what actually 'drives' these results"

Once again...d'uh!

"Homeopathy may inherently be more effective for the diagnoses under investigation compared to conventional treatment. It may also be that...compliance...is better. Finally, a methodical limitation of our study is the unblinded severity rating that might contribute to the observed results."

And yet again...d'uh!!

Of course, that third explanation should have caused this to be dropped in the reviewer's wastepaper bin, but this is "Complementary Therapies in Medicine" and they publish rubbish like this.

One interesting feature is the level of justification they employ to get the reader to take the results at face value. The way it is written implies that these bozos really believe the stuff they are writing and that their methods are valid. So it adds some weight to the "quacks unknowing" side of the scales, i.e. they are buffoons not frauds.
 
I find this sort of rubbish desperately tedious. The problem is that the studies which do this are usually written with an eye to a good quote in the summary or conclusions, which allows homoeopathy proponents to regurgitate something impressive-sounding and beguile the unwary. Taking these studies apart is a painstaking job, as BSM has just demonstrated, and the casual enquirer is more likely to be impressed by the snappy sound-bite than the boring refutation.

In contrast, the really scientific studies are usually much more circumspect in what they claim to have demonstrated, for example the common statement that a certain study has only shown that homoeopathy is ineffective for this particular condition under these particular circumstances, and this doesn't disprove homoeopathy as a whole. Maybe this is proper practice, but to be honest it often sounds very much like a cringe - an apologetic excuse for not having confirmed the woos' beliefs, and a plea not to be treated too harshly for this.

Unfortunately once published this sort of nonsense cannot be unpublished, and no matter how many times it's shredded the original is still available for the woos to gloat over. Like Linde - we know Linde's group essentially recanted in a follow-up paper, and agreed with their critics that a proper analysis of the data showed that the better designed a study, the more likely it was to show no effect of homoeopathy. However, we never see the woos refer to that, and they just repeatedly trot out the same cherry-picked quotes from the original paper.

Once and for all - we agree that TLC and a lot of flattering and individualised attention do make people feel subjectively better ("complimentary" medicine indeed!). And we agree that standard medical care may be inadequate in this respect. And we appreciate that one thing which nearly all complementary therapies do exceedingly well is make their patients feel pampered. This is not our argument. Our argument is that the actual remedies are content-free and useless, indeed fraudulent.

Rolfe.
 
