Professor of complementary medicine

RichardR

Look at this Guardian article about a "world-class scientist who turned his back on the Viennese medical elite to become the UK's first (and only) professor of complementary medicine". Apparently this guy decided to take this job with the intention of testing alternative therapies scientifically.

A couple of quotes caught my eye:
Looking through a summary of the mountain of papers his unit has published in 10 years, a lesser mortal might feel discouraged. Most of the findings on the efficacy of therapies and treatments are either negative or inconclusive because too little research has been done for anyone to be sure.
"Too little research"? He's had ten years. Remember, he's not developing new therapies. These "successful" alternative therapies have been around for centuries – he's only testing them. Ten years of scientific testing and everything is either negative or inconclusive. Just how much testing is required before we can determine something doesn't work?
"They are not always negative results. In meta analyses [pooling the results of all available good quality studies], we generate quite a lot of positive results," he says.

Herbs such as St John's wort, which has proved effective in treating depression, have shown much promise. Kava kava also proved effective in relieving anxiety. But then evidence surfaced linking it to liver damage.
Firstly, don't the statisticians on this board criticize meta-analysis used this way? And hasn't St John's wort been shown to be indistinguishable from placebo?
 
Many "complementary/alternative" therapies may have a benefit - depends on your definition of CAM. Any "therapy" that has been shown to have benefit (aspirin, digoxin, quinine etc etc) is analysed, purified, synthesised and incorporated into conventional medicine where it can be reliably and predictably used and tested.

So some herbal remedies in current "complementary" use today may have minor therapeutic effects, but not be regarded by drug developers as having sufficient promise to be commercially developed.

(Most of it is woo-woo though, as you nicely point out.)
 
RichardR said:
Firstly, don't the statisticians on this board criticize meta-analysis used this way? And hasn't St John's wort been shown to be indistinguishable from placebo?

(1) This is the most important point about meta-analyses: if the individual studies being pooled show no clear positive result on their own, then a positive result from the subsequent meta-analysis is meaningless! Period! The only thing such an occurrence tells you is that the data from the individual studies was massaged too heavily in order to be pooled into the meta-analysis. Usually, the only meta-analyses of therapies that are routinely accepted in the scientific community are those of safety data, where there is a near-universal and systematically uniform data collection procedure.

(2) The whole St. John's wort question is still being hotly debated. The clear evidence right now, which people should know before taking it, is that it interacts with one of the key drug-metabolizing pathways in the liver (the cytochrome P450 system), potentially "speeding up" the breakdown of other drugs that are cleared by the same pathway. For someone starting St. John's wort, this can make the other drugs they are taking fall below therapeutic levels in the bloodstream. Point is, you should talk with your doctor before you start any herb.

-TT
 
Re: Re: Professor of complementary medicine

ThirdTwin said:


(1) This is the most important point about meta-analyses: if the individual studies being pooled show no clear positive result on their own, then a positive result from the subsequent meta-analysis is meaningless! Period!

Doesn't a meta-analysis just involve pooling all the results of smaller individual studies to look for an effect? If so, then this statement is wrong (provided, of course, that the pooled set only includes properly carried out studies; including studies with poor methodology is never appropriate).

My usual example is to consider a study of 10 trials, where each trial has 4 possible outcomes. In one study, outcomes A and B occur 3 times each, and outcomes C and D each occur twice. No significance.

Someone else runs the study and gets A 3 times, B and C twice each, and D 3 times. Now suppose the study is carried out 1000 times, and every time A comes out 3 times, while B, C, and D come out in some mix of 2s and 3s. Not a single individual study shows any significance. However, if you look at the total results, you have 10,000 trials in which A came up 30% of the time, when chance expectation is 25%. That's a statistically significant result. It's not a big effect, but it is certainly a real one.

And not a single trial showed anything.
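
To make the arithmetic concrete, here's a quick sketch in Python (using only the made-up numbers from my example, nothing more) showing that any single study is nowhere near significance while the pooled count is overwhelmingly significant:

Code:
# The pooling argument above, with the same made-up numbers (stdlib only).
# Each study: 10 trials, 4 equally likely outcomes, so P(outcome A) = 0.25.
# Observed: A comes up 3 times out of 10 in every study.
from math import comb, erf, sqrt

def binom_tail(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One study on its own: 3 A's in 10 trials.
print(f"one study: P(>= 3 A's) = {binom_tail(3, 10, 0.25):.3f}")  # ~0.47, nothing

# Pool 1000 such studies: 3000 A's in 10,000 trials vs. 2500 expected.
n, k, p = 10_000, 3_000, 0.25
z = (k - n * p) / sqrt(n * p * (1 - p))
p_two_sided = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal approximation
print(f"pooled: z = {z:.1f}, p = {p_two_sided:.2g}")  # z ~ 11.5, p effectively 0

Each study alone is entirely consistent with chance; the pool of 10,000 trials is not.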
 
Re: Re: Re: Professor of complementary medicine

pgwenthold said:


Doesn't a meta-analysis just involve pooling all the results of smaller individual studies to look for an effect? If so, then this statement is wrong (provided, of course, that the pooled set only includes properly carried out studies; including studies with poor methodology is never appropriate).

My usual example is to consider a study of 10 trials, where each trial has 4 possible outcomes. In one study, outcomes A and B occur 3 times each, and outcomes C and D each occur twice. No significance.

Someone else runs the study and gets A 3 times, B and C twice each, and D 3 times. Now suppose the study is carried out 1000 times, and every time A comes out 3 times, while B, C, and D come out in some mix of 2s and 3s. Not a single individual study shows any significance. However, if you look at the total results, you have 10,000 trials in which A came up 30% of the time, when chance expectation is 25%. That's a statistically significant result. It's not a big effect, but it is certainly a real one.

And not a single trial showed anything.

Sort of.

Here's the thing. If you have ten separate trials and they all show a negative result, you have to pool all of the data, positive AND negative. It's simple summation, and the resultant pooled data will still be negative; it doesn't matter what the individual numbers in each study are.

Now, if you have a few large studies conducted under AWC (adequate and well-controlled) conditions that clearly show a negative result, and you pool their data with numerous smaller studies that do not show a positive result and/or were not designed to, you may end up with the numbers skewed towards a favorable outcome, even though pivotal trials have already definitively shown there is no effect. Furthermore, even if the smaller studies appear to show a treatment effect, they were not designed to demonstrate one in the first place.
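
To illustrate with a minimal fixed-effect (inverse-variance) sketch, using invented effect sizes and standard errors rather than anything from a real trial:

Code:
# Fixed-effect (inverse-variance) pooling with invented numbers, to show how
# a pile of small, noisy studies can drag the pooled estimate away from the
# large well-controlled trials. Python stdlib only.
from math import sqrt, erf

def pool(studies):
    """studies: list of (effect, standard_error); returns pooled effect, SE, z."""
    weights = [1 / se**2 for _, se in studies]           # inverse-variance weights
    effect = sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)
    se = sqrt(1 / sum(weights))
    return effect, se, effect / se

large = [(0.0, 0.05)] * 2    # two big AWC trials: dead null, tight error bars
small = [(0.2, 0.15)] * 10   # ten small trials: each z = 1.33, p ~ 0.18, "nothing"

effect, se, z = pool(large + small)
p = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))               # two-sided normal approx.
print(f"pooled effect = {effect:.3f} +/- {se:.3f}, z = {z:.2f}, p = {p:.3f}")
# -> pooled effect ~ 0.071 +/- 0.028, z ~ 2.52, p ~ 0.012: "significant",
#    even though the pivotal trials found nothing.

None of the twelve studies shows an effect on its own, yet the pool does; that's the skew I'm talking about.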

This is one of the many pitfalls you can fall into when meta-analysis is conducted on inappropriate studies. Another is "fudging" the data to fit the pooled database. In essence, data from different studies with different measurement conventions often has to be standardized. For example, say one study used a patient weight cut-off of 200 lbs and another used 180 lbs. In the meta-analysis, you'd have to come up with some statistical convention to handle all the patients who weighed between 180 and 200 lbs in the first study, whether or not they showed a positive effect, and to reconcile them with those who were excluded from the second study.

You have to "fudge", but that doesn't necessarily render the subsequent meta-analysis meaningless, provided the "fudge" criteria accurately and consistently account for the patients who were excluded from one study and not the other. More often than not, though, these "data conventions" become very problematic and make it potentially meaningless to try to draw any firm conclusions from the meta-analysis.
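
And here's a toy version of that cut-off problem (purely hypothetical patient records, just to show that the "convention" you pick changes the answer):

Code:
# Study 1 enrolled patients up to 200 lbs, Study 2 only up to 180 lbs, so the
# 180-200 lb stratum can only come from Study 1. (All data invented.)
study1 = [(150, True), (185, True), (195, True), (170, False)]  # (weight, responded)
study2 = [(155, False), (175, True), (160, False)]

# Convention A: pool everything as-is; the 180-200 lb stratum is represented
# by one study only, so anything peculiar to it gets over-weighted.
pooled_a = study1 + study2

# Convention B: restrict both studies to the common criterion (< 180 lbs),
# throwing away real data from Study 1.
pooled_b = [rec for rec in pooled_a if rec[0] < 180]

for name, pooled in [("as-is", pooled_a), ("common cut-off", pooled_b)]:
    rate = sum(responded for _, responded in pooled) / len(pooled)
    print(f"{name}: {len(pooled)} patients, response rate {rate:.0%}")
# as-is: 7 patients, 57%; common cut-off: 5 patients, 40%.
# Same underlying studies, different convention, different answer.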

Does that make sense? I can only talk quasi-intelligently on this topic, but the gist is right. Again, I state for the record that I'm not a statistician, but every clinician ultimately thinks they are. :D

-TT
 
RichardR said:
Look at this Guardian article about a "world-class scientist who turned his back on the Viennese medical elite to become the UK's first (and only) professor of complementary medicine". Apparently this guy decided to take this job with the intention of testing alternative therapies scientifically.
It's not all bad. This guy is running a nice line in churning out null-effect homoeopathy papers: arnica, individualised therapy for asthma, there are quite a few. He does seem to be a genuine scientist, and the shaken-up water is one area where he seems to have no problem recognising the woo.

Sometimes I think he feels constrained just to say a few non-negative things (like, OK, I just proved arnica is useless for bruising, but I never actually said that meant the whole subject was invalid), so as to keep some sort of street-cred with the woo-woos he has to work with. And not upset Prince Charles, of course.

Rolfe.
 
One of the great hopes of the modern drug industry is the specific targeting of drugs to individual gene sets. It is pretty well known that drugs which help some people with condition X have no effect, or a negative effect, on others. Logically, the same may be true of non-mainstream treatments. It's possible (stress on possible) that some traditional Chinese medicine, for example, may work well on some Chinese patients and not on Westerners, because there is a subtle genetic difference between the two groups. Short of trying a drug on everyone in the world, we can never know. Then the drug may work, but in a different concentration... The combinations are endless. Could be that, after a thousand years of research, we will be able to confidently say that St. John's Wort cures bunions in 4% of left-handed Patagonian Jews named Macdonald, but only if drunk in tea on alternate Sundays. Meanwhile, the more data, the merrier.
 
Soapy Sam said:
One of the great hopes of the modern drug industry is the specific targeting of drugs to individual gene sets. It is pretty well known that drugs which help some people with condition X have no effect, or a negative effect, on others. Logically, the same may be true of non-mainstream treatments.

Not really. There are very good reasons why certain drugs have varied effects, whereas CAM therapies a) have no mechanism for how they work in the first place and b) offer no clue as to why they do not work in most patients, so comparing the two on how and why they fail is not really meaningful. For example, chiropractic may relieve back pain, but certainly not in every case. Chiropractors claim virtually a 100% success rate, much like psychics, but on closer inspection the real figure is far lower. When you discuss a drug or surgery, there are very good explanations of how the therapy works, and statistics are available on success rates and so on. Go to a CAM faith healer and you will hear a load of shiat they know isn't true. For the majority of reality-based treatments, if the treatment does not work, another will be tried, or the diagnosis can be re-evaluated, until a consensus is reached; and since the mechanisms are known, an answer is far more probable than from some kook who believes the body runs on "chi" and received their medical training over a long weekend on the internet. This is the key difference: reality-based medicine is investigated and characterized. CAM is built on the not-so-sturdy foundation of denial and lying.
 
