Right, here we go. I've been on holiday for a couple of weeks and the reprint has arrived in my absence.
It is as bad as you might expect.
Intro:
"A pioneering meta analysis of 89 [placebo-controlled trials], selected on the basis of their quality, found a statistically significant overall effect in favour of homoeopathy<sup>7</sup>"
Guess what reference 7 is. Yep, it's
Linde (1997). It's the only one they cite, and they effectively repeat the old lie homeopaths tell about this paper, namely that it showed a greater effect for homeopathy in the better studies, when in truth the opposite is true. Clearly their literature searches have not strayed far beyond the usual hom websites!
"An alternative approach is that of outcomes research, which focuses on the results of homoeopathic treatment in everyday medical practice"
Yeah, "alternative" in the sense that this is the crappy methodology that properly controlled trials superseded.
Methods:
"In this prospective, multicentre, parallel group, comparative cohort study..."
Ooh, look at all those big adjectives, just like you'd see in a real scientificky paper. Unfortunately,
"Patients were first approached at the doctor's practice and had thus already made their own choice of therapy; accordingly, the study was open and non-randomised"
which really means we should read no further, because whatever they found is going to be unreliable. But I kept reading.
"adults...presenting with the selected chronic disorders headache, lower back pain, depression, insomnia or sinusitis, and children...presenting with bronchial asthma, atopic dermatitis or allergic rhinitis"
In other words, the usual suspects prone to false reporting and psychosomatic effects.
They did lots of statistics, so I think they must have bought a computer. See folks, this is progress: Hahnemann could never have dreamed of having such a thing with which to invent his results, and had to do it all the old manual way.
Results:
Obviously they report improvements in symptoms over time. The important thing to note is that the time points chosen were 6 and 12 months for patient self-assessments and 12 months for physician assessments, which is the kind of period over which these sorts of problems might reasonably be expected to improve after an initially severe phase. They claim significant improvements overall for both the conventional and hom groups over time. More specifically, they claim a relatively greater benefit with hom at both time points for adults and children on patient assessment, but only for children on the physician assessment.
Homeopathy was significantly cheaper. Well, d'uh!
Discussion:
They report that "adjusted analyses" were required for baseline differences between the cohorts, but neither the methods nor the data are reported.
"the design closely reflects regular clinical practice, so that outcome and cost measurements provide a more realistic picture than can be expected in a randomised trial"
We have a name for that kind of study: data dredging.
"Because conventional therapy can be viewed as an active control, there was the initial possibility that homoeopathic treatment might be found significantly inferior to conventional therapy. Thus, it is remarkable that homoeopathic treatment was never shown to be inferior in this study."
Not really. All you have to do is choose problems for which conventional medicine has no complete answers, and assess your groups over long enough intervals that spontaneous improvement can be expected.
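To see why long follow-up on flaring chronic complaints flatters any treatment, here's a quick toy simulation (my own sketch, nothing to do with the paper's actual data or model). Patients who enrol while their symptoms are unusually bad will, on average, look "improved" a year later even with no treatment at all, purely through regression to the mean:

```python
import random

random.seed(1)

# Toy model: each patient has a stable chronic severity plus day-to-day noise.
# Patients enrol only when flaring (observed severity above a threshold), so a
# later measurement regresses toward their chronic mean with no treatment at all.
def simulate(n=100_000, threshold=7.0):
    improvements = []
    for _ in range(n):
        chronic = random.gauss(5.0, 1.0)              # long-run severity (0-10 scale)
        at_enrolment = chronic + random.gauss(0, 2.0)  # severity on the day they enrol
        if at_enrolment < threshold:
            continue                                   # only flaring patients enrol
        at_followup = chronic + random.gauss(0, 2.0)   # 12 months later, untreated
        improvements.append(at_enrolment - at_followup)
    return sum(improvements) / len(improvements)

mean_improvement = simulate()
print(f"mean untreated 'improvement': {mean_improvement:.2f} points")
```

The untreated cohort shows a substantial average improvement, and whatever bottle of water you handed out at enrolment gets the credit.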
They then witter on for half a page about how their adjustments "allowed for self-selection by adjustment for baseline characteristics".
See, you don't need proper controls: just do your study badly and adjust the baseline afterwards.
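And here's why that adjustment is window dressing. A small toy simulation (again my own illustration, not their data): make the true treatment effect exactly zero, but let an unmeasured trait, say expectation of benefit, drive both which therapy a patient picks and how much improvement they report. The observed baseline can be perfectly balanced, so "adjusting" for it changes nothing, yet a spurious treatment effect appears:

```python
import random

random.seed(42)

# Toy model: true treatment effect is zero. An unmeasured trait (expectation of
# benefit, which also inflates reported improvement) determines BOTH which therapy
# a patient chooses and the outcome. Baseline severity is the only observed covariate.
n = 100_000
rows = []
for _ in range(n):
    expectation = random.gauss(0, 1)                   # unmeasured confounder
    baseline = random.gauss(5, 1)                      # observed, unrelated to choice
    chose_hom = expectation + random.gauss(0, 1) > 0   # self-selection into homeopathy
    # Reported improvement depends on expectation, NOT on the therapy chosen.
    improvement = 0.5 * expectation + random.gauss(0, 1)
    rows.append((chose_hom, baseline, improvement))

mean = lambda xs: sum(xs) / len(xs)
hom = [imp for c, b, imp in rows if c]
conv = [imp for c, b, imp in rows if not c]
b_hom = [b for c, b, imp in rows if c]
b_conv = [b for c, b, imp in rows if not c]

print(f"baseline, hom vs conv: {mean(b_hom):.2f} vs {mean(b_conv):.2f}")  # balanced
print(f"apparent 'effect' of homeopathy: {mean(hom) - mean(conv):.2f}")   # not zero
```

The baselines match to two decimal places, so any adjustment for them leaves the answer untouched, and yet homeopathy "wins" by a comfortable margin despite having, by construction, no effect whatsoever. You cannot adjust for what you never measured.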
"While the study demonstrates differences in favour of homoeopathic therapy, it cannot explain what actually 'drives' these results"
Once again...d'uh!
"Homeopathy may inherently be more effective for the diagnoses under investigation compared to conventional treatment. It may also be that...compliance...is better. Finally, a methodical limitation of our study is the unblinded severity rating that might contribute to the observed results."
And yet again...d'uh!!
Of course, that third explanation should have caused this to be dropped in the reviewer's wastepaper bin, but this is "Complementary Therapies in Medicine" and they publish rubbish like this.
One interesting feature is the level of justification they employ to get the reader to take the results at face value. The way it is written implies that these bozos really believe the stuff they are writing and that their methods are valid. So it adds some weight to the "quacks unknowing" side of the scales, i.e. they are buffoons, not frauds.