Phil63 said
And yes, I was using personal experience to back up my belief system - which is what I said from the beginning - That my belief is based on experience - not scientific evidence.
Phil63, this may be long, but you had _better_ be able to deal with all of this if you're thinking of getting involved with other people's health. That belief is not just affecting you now, it's going to be affecting _other_ people. If you don't have answers to the problems below, you're no better than a snake-oil salesman who doesn't care whether their remedy works or not.
This personal observation above is a great reason (the best?) to have a look at something, but it has two problems when used to answer the question "Does it work?" The first is that this kind of observation, repeated however many times, says nothing about one thing causing the other (correlation, not causation: comment 1.) The second is the placebo effect (comment 2.)
You've had an experience that seemed significant to you. Now it's time to explore it, and squash those two problems. There are lots of people in the same boat, which is part of why it's worth looking at, but that's really all those other anecdotes do. You're looking at it now, and you've already stated that you know they add nothing your experience doesn't have, so it's time to throw the personal experiences of others away and start from scratch.
How can we get around the first problem, of correlation and causation? The problem is comparing groups that differ in more than the question of interest. As much as you can manage, you need two identical populations. This can be hard, but a good start for a test of whether homeopathy helps with colds would be to take a random sample of people who have just caught a cold, and give half of them the homeopathic treatment. It gets a bit harder because you want the two groups to have behaviour that is otherwise similar: similar food/drink intake, similar exercise, etc. Showing that homeopathy, and not something else, is the cause of getting better can't be done without this. There's just no simpler way around it.
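To make the "random sample, split in half" idea concrete, here's a minimal sketch in Python of randomly assigning participants to a treatment group and a control group. The names (`assign_groups`, the `P001`-style IDs) are just mine for illustration, not taken from any real trial protocol.

```python
import random

def assign_groups(participants, seed=None):
    """Randomly split participants into two equal-as-possible groups.

    Randomisation is what makes the groups comparable on everything
    we didn't think to measure (diet, exercise, severity, ...).
    """
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "treatment": shuffled[:half],   # get the homeopathic remedy
        "control":   shuffled[half:],   # get no remedy (or, better, a placebo)
    }

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 people with fresh colds
groups = assign_groups(participants, seed=42)
print(len(groups["treatment"]), len(groups["control"]))
```

The coin-flip (or shuffle) is doing the heavy lifting: it's the only way to get groups that are alike in all the things you never thought to control for.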
How do we get around the second problem? If one group gets little pills and the other doesn't, the placebo effect means the two groups are no longer identical, and we can no longer say that the pill is what helped: it might just be the act of getting a pill. The study has to be blinded: the people with the colds cannot be allowed to know whether they got the homeopathic treatment or not. Even worse, the people interacting with the people with colds shouldn't know who is being treated either: their behaviour may differ enough that it also affects the outcome (eg more placebo effect, at a bit of a distance: "ah-hah! the doctor did this, so I must have gotten the _real_ medicine.") It really needs to be double blinded. There's just no way around this either. Any test that doesn't do all of this, despite all of the extra effort, isn't really worth any more than that single observation "I think it did something for me." This still goes for any trial (positive or negative) that anyone does. If the sampling is biased, or the blinding compromised, the study is worthless and should be immediately discarded (again avoiding the "yeah, I know it doesn't mean anything, but there's so _many_ of them that say it..." problem.)
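One common way to implement the double blinding (this is just a sketch under my own assumptions, not a claim about how any particular trial does it) is to give every participant an anonymous bottle code, hand identical-looking bottles labelled only with those codes to the staff, and keep the code-to-group key sealed with a third party until the data are in:

```python
import random

def blind_assignments(participant_ids, seed=None):
    """Return (bottle_labels, unblinding_key).

    bottle_labels : what staff and participants see -- participant -> bottle code.
    unblinding_key: bottle code -> "remedy" or "placebo", held by a third
                    party and only opened after all results are collected.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)

    bottle_labels = {}
    unblinding_key = {}
    for i, pid in enumerate(ids):
        code = f"BOTTLE-{rng.randrange(10_000, 100_000)}"
        while code in unblinding_key:            # avoid accidental code collisions
            code = f"BOTTLE-{rng.randrange(10_000, 100_000)}"
        bottle_labels[pid] = code
        unblinding_key[code] = "remedy" if i % 2 == 0 else "placebo"
    return bottle_labels, unblinding_key

labels, key = blind_assignments([f"P{i:03d}" for i in range(1, 41)], seed=7)
# Staff only ever see `labels`; `key` stays sealed until analysis time.
```

The point is simply that neither the person swallowing the pills nor the person handing them out has any way to tell the groups apart until the end.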
Finally, there's statistics games. If you were conducting this test for yourself, and had control over everything, this wouldn't be much of an issue: you aren't looking to delude yourself. Since you are unlikely to be doing this test yourself (although you could, over time, do a crude double blind experiment on your friends with colds, with their permission) any arguments you have for why homeopathy works will likely be based on someone _else's_ experiments. It then becomes necessary to examine whether the data has been misrepresented in the conclusions. The original untouched data _should_ be available, and you should be able to come to the same conclusions they did, and follow their process. The conclusion shouldn't be based solely on a few anomalous outlying points, and the data should not be "conditioned" to account for a problem that was detected after the fact. These and other statistical no-nos render the conclusion worthless, and if the untouched data is unavailable, they render the entire test useless.
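If you do get hold of someone's raw numbers, checking their headline claim needn't require anything exotic. A permutation test, for instance, asks "how often would a difference this big show up if the group labels were shuffled at random?" The sketch below runs on made-up recovery times of my own invention, not on any real trial's data.

```python
import random

def permutation_test(remedy, placebo, n_shuffles=10_000, seed=0):
    """Estimate how often randomly shuffled group labels produce a difference
    in mean recovery time at least as large as the one actually observed."""
    rng = random.Random(seed)
    observed = abs(sum(remedy) / len(remedy) - sum(placebo) / len(placebo))
    pooled = list(remedy) + list(placebo)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        a, b = pooled[:len(remedy)], pooled[len(remedy):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            hits += 1
    return hits / n_shuffles

# Made-up recovery times in days -- substitute the trial's untouched data.
remedy_days  = [6, 7, 5, 8, 6, 7, 9, 6, 5, 7]
placebo_days = [7, 6, 8, 6, 7, 5, 8, 7, 6, 7]
print(permutation_test(remedy_days, placebo_days))  # a large value => no evidence of a difference
```

The analysis aside, the point above still stands: if the untouched numbers aren't available, or only survive after "outliers" have been quietly dropped, no amount of clever re-analysis rescues the conclusion.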
So... The problem people here have is that there _aren't_ any positive studies for homeopathy that manage to address all of these problems, and the studies that (we think) do address them find nothing. Given that none of these procedures is really _all_ that complicated (like I said, you could actually try a crude version yourself), the fact that there are no good positive studies is really pretty damning. Unless there's some other reason you have, it appears that, based solely on your personal belief, you are going to go around monkeying with other people's health, and that idea makes a number of people rather understandably upset.
1) This is like a study that shows a marked increase in the risk of cancer in people who eat hotdogs, no matter how much data you collect. It may turn out that if you break up the data by income, this difference disappears. The original showed correlation, but not causation. In the case of homeopathic treatment, along with the treatment itself there are instructions such as "take the tablets with plenty of water," when just having plenty of water may be all that's needed. This is related to Rolfe's comments about people healing remarkably quickly, just on their own, and to diseases that have periodic effects. You get sick, you try something and it doesn't work, then you try a homeopathic remedy, and then you get better. There's nothing here that shows causation. Unfortunately, many people think it _does_. So someone who did get better is more likely to believe in homeopathy than someone who didn't get better. Those who believe in homeopathy, or at least a portion of them, are a self-selecting group of people who had a coincidence. The number of believers who had it "work" for them is going to be, statistically speaking, indisputably large, but absolutely meaningless.
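If it helps to see that self-selection effect in numbers, here's a toy simulation (my own made-up model and rates, not data from anywhere): the remedy does exactly nothing, most colds clear up on their own, and the people whose cold happened to clear up after taking it are far more likely to go on calling themselves believers.

```python
import random

rng = random.Random(1)
N = 100_000                # people who caught a cold and tried the remedy
RECOVERY_RATE = 0.9        # most colds clear up on their own, remedy or not
BELIEVE_IF_BETTER = 0.5    # chance of becoming a believer after "it worked"
BELIEVE_IF_NOT = 0.1       # chance of becoming a believer after it "failed"

believers_it_worked = 0
believers_total = 0
for _ in range(N):
    recovered = rng.random() < RECOVERY_RATE        # the remedy has zero effect here
    p_believe = BELIEVE_IF_BETTER if recovered else BELIEVE_IF_NOT
    if rng.random() < p_believe:
        believers_total += 1
        if recovered:
            believers_it_worked += 1

print(f"Believers: {believers_total}")
print(f"...of whom {believers_it_worked} ({believers_it_worked / believers_total:.0%}) "
      "can honestly say it 'worked for them'")
# Expect well over 90% of believers to report success -- from a remedy that did nothing at all.
```

Survey the believers and you get tens of thousands of sincere, accurate testimonials, and still learn nothing about whether the remedy does anything.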
2) The placebo effect is actually a range of things, from interpretation of effects (eg in pain relief: "I think I feel better now") to actual, observable effects (eg shorter recovery time under a "special new rehabilitation plan".) The second is the wacky one. Sometimes you really can give someone something completely worthless, tell them it will help, and it does.