
Homeopathy works...

I realize we're just joking around, but can I put in a request that we stop perpetuating the idea that homeopaths have better bedside manner, or that their patients are more satisfied with the therapeutic encounter than they would be with a doctor? We've talked about this before, and as far as I can tell, the evidence suggests otherwise. This is their last remaining justification for their presence, so why concede to it?


See the Homeopathy paper referred to in the O/P, which seems to confirm your point (although that may not have been the intention behind it):
CONCLUSIONS: Placebo effects in RCTs on classical homeopathy did not appear to be larger than placebo effects in conventional medicine.
 
Rather than post the same thing in two places, I just posted this at SBM. The basic point is that I suspect any effect from the consultation, via the Hawthorne effect, is almost by definition likely to be temporary: an artefact of the study itself rather than something that can be exploited for long-term gain.
 
Homeopathic elixirs do work better than a placebo (e.g., "Sugar Pill"), but only when used to treat dehydration.

:D
 
Linda, perhaps you could help.

I suspect not. :)

I noticed that this study gave incidence rates for adverse events, and those for the homeopathy arms tot up to more than those for the placebo arms.
- Table 7.
http://rheumatology.oxfordjournals.org/content/early/2010/11/08/rheumatology.keq234.full

The authors say the differences are not statistically significant, but that may be because they only compare individual subgroups.

If one just compares those on placebo with those on a homeopathic remedy, you can see that the 32 patients on placebo experienced 95 events (2.96 per patient) while the 45 patients on a homeopathic remedy experienced 187 events (4.15 events per patient).

Now I think that may be a statistically significant difference, but I am unsure which statistical test to use to look at it (the one I tried gave p = 0.03).
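For what it's worth, here is a minimal sketch of one standard way to test it, assuming the event totals behave like independent Poisson counts (which ignores that several events can come from the same patient, so the p-value will tend to be too optimistic). Conditional on the 282 events in total, it simply asks whether the 187 that fell in the homeopathy arms is consistent with those arms' share of the 77 patients:

```python
# Sketch: conditional (binomial) test for comparing two Poisson event rates.
# Assumes events are independent, which ignores clustering within patients.
from scipy.stats import binomtest

placebo_patients, placebo_events = 32, 95   # totals from Table 7, as quoted above
homeo_patients, homeo_events = 45, 187

total_events = placebo_events + homeo_events        # 282
total_patients = placebo_patients + homeo_patients  # 77

# Under the null of equal per-patient event rates, each event falls in the
# homeopathy arms with probability equal to their share of the patients.
result = binomtest(homeo_events, total_events, p=homeo_patients / total_patients)

print(f"placebo:    {placebo_events / placebo_patients:.2f} events per patient")
print(f"homeopathy: {homeo_events / homeo_patients:.2f} events per patient")
print(f"two-sided p-value: {result.pvalue:.3f}")
```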

I'm not sure what to do with the limited information we are given. Maybe you could treat them as relative rates. The comparison you made was not one of the comparisons the authors made. When looking at the effect of homeopathic remedies they compare groups 2 and 4 vs. 3 and 5, which gives a less marked difference (3.97 events per patient instead of 4.2).
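To put a number on the relative-rates idea, here is a similar sketch that wraps an approximate 95% confidence interval around the rate ratio on the log scale (same Poisson assumption as above, so the interval is probably too narrow given that events cluster within patients):

```python
# Sketch: rate ratio of adverse events per patient, with an approximate 95% CI
# computed on the log scale (standard Poisson approximation).
import math

placebo_patients, placebo_events = 32, 95
homeo_patients, homeo_events = 45, 187

# Events per patient in each arm, and their ratio (homeopathy vs placebo).
rate_ratio = (homeo_events / homeo_patients) / (placebo_events / placebo_patients)

# Approximate standard error of log(rate ratio) for Poisson counts,
# then a 95% interval back-transformed from the log scale.
se_log = math.sqrt(1 / homeo_events + 1 / placebo_events)
lower = rate_ratio * math.exp(-1.96 * se_log)
upper = rate_ratio * math.exp(1.96 * se_log)

print(f"rate ratio: {rate_ratio:.2f}")
print(f"approximate 95% CI: {lower:.2f} to {upper:.2f}")
```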

Even if a proper test is not quite statistically significant, it does rather point against the homeopaths' claims that their remedies are completely harmless.

If you have a study which demonstrates not only that the remedy is no better than placebo, but also that it is more harmful than placebo, what possible justification can there be for making people take it?

I suspect that this result, like the reported 'positive' results, merely reflects what happens when you take advantage of post-hoc data dredging.

What is interesting is that this study was negative for an effect from the homeopathic consultation, yet it is being reported as though it were positive.

Perhaps a whole new era of homeopathic management is imminent, where patients go through the consultation procedure, and get given a plastic yellow bath duck at the end of it, rather than some magic pills. This would be so much cheaper, and the NHS might even reconsider the "cost-benefit" of the whole idiocy. It would also chop Boiron's and Nelson's and Helios's profits in one fell swoop.

Rubber ducky, you're the one...
You make bath time lots of fun...

:)

Linda
 
The question I have is: why are humans inherently predisposed to respond to a placebo?
I suppose one could argue that there is survival value in an action that convinces us we are doing something which might have a positive outcome.
Is there any evidence, though, that other animals might entertain counterfactuals, and that this has positive survival value?
 
The question I have is: why are humans inherently predisposed to respond to a placebo?
I suppose one could argue that there is survival value in an action that convinces us we are doing something which might have a positive outcome.


Or possibly it is a carry-over from "Mummy will kiss it better": babies cry when they need attention, but if they carried on crying once given attention there would be less incentive for the parents to provide it, so babies that cry but stop when comforted are more likely to receive the attention they need than babies that just cry.
 
How was the benefit to the patients measured: self-reporting of pain relief, or something more specific?
 
Or possibly it is a carry-over from "Mummy will kiss it better": babies cry when they need attention, but if they carried on crying once given attention there would be less incentive for the parents to provide it, so babies that cry but stop when comforted are more likely to receive the attention they need than babies that just cry.

In my experience that does not apply during suicide hour :(
 
I suspect not. :)

I'm not sure what to do with the limited information we are given. Maybe you could treat them as relative rates. The comparison you made was not one of the comparisons the authors made. When looking at the effect of homeopathic remedies they compare groups 2 and 4 vs. 3 and 5, which gives a less marked difference (3.97 events per patient instead of 4.2).

I suspect that this result, like the reported 'positive' results, merely reflects what happens when you take advantage of post-hoc data dredging.

What is interesting is that this study was negative for an effect from the homeopathic consultation, yet it is being reported as though it were positive.


Rubber ducky, you're the one...
You make bath time lots of fun...

:)

Linda
Ta.

You'll find a little fella who's cute and yella,
and chubby....

Rubber ducky I'm awfully fond of you!
 
How was the benefit to the patients measured: self-reporting of pain relief, or something more specific?
The paper lists several primary and secondary outcome measures under the section called "outcome assessment". I am not very familiar with them, but they appear to be well validated methods in standard use, so at least the authors didn't invent their own assessment score.
 
The paper lists several primary and secondary outcome measures under the section called "outcome assessment". I am not very familiar with them, but they appear to be well validated methods in standard use, so at least the authors didn't invent their own assessment score.


What I'm trying to pin down is how big an effect (other than self-reports of feeling better in the short term) mere talking, listening and believing can possibly have for a patient who suffers from rheumatoid arthritis or any other condition :boggled:
 
I went to add a response to the comments on Pulse, but couldn't write anything that came across as civil. All drafts were variants of "how fanarkling stupid do you have to be to misread the report that badly?", so I didn't comment at all...

KE
 
