
Anecdotal Evidence

Badly Shaved Monkey
Anti-homeopathy illuminati member
Joined: Feb 5, 2004
I've just posted this at SBM, but I suspect the "hard science" members of JREF might be better placed to answer my question.

I wonder whether the question should be phrased like this: given a prior probability that a medication works, do uncontrolled anecdotal accounts of positive benefit and of nil benefit have the same effect on the post-anecdote probability that the medication is beneficial?

It's just a little kite-flying exercise, so please don't be mean about it, at least not immediately.
 
Is it something like this?

It's a licensed product, so I expect it to work in, say, 80% of cases. A positive anecdote simply lands within that majority, inflated further by the various positive confounders that are always present.

No effect is supposed to occur only 20% of the time, but all the positive biases would tend to reduce the proportion of these that actually get reported as nil benefit. Perhaps for a drug that is supposed to fail only 20% of the time, you'd actually expect to be given a report of nil benefit 5% of the time. If personal experience yields a lot of nil benefit reports, I think you could begin to have reasonable doubt that the true benefit of the drug ever was 80%, even though these anecdotes are uncontrolled observations.
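The intuition above can be sketched as a simple odds-form Bayesian update. The 80%/20%/5% figures come from the post; everything else is an illustrative assumption, in particular the hypothetical rival hypothesis that the drug is inert, under which I assume roughly 40% of users would still report nil benefit even after the same positive biases:

```python
# Sketch of the scenario in the post. Assumed numbers, not data:
# under H_works the drug helps 80% of users, but positive reporting
# biases shrink reported nil-benefit anecdotes to ~5%; under a
# hypothetical H_inert, ~40% of users would still report nil benefit.

def posterior_works(prior_works, n_nil_reports,
                    p_nil_if_works=0.05, p_nil_if_inert=0.40):
    """Posterior P(drug works) after n uncontrolled nil-benefit anecdotes."""
    odds = prior_works / (1.0 - prior_works)      # prior odds
    lr = p_nil_if_works / p_nil_if_inert          # likelihood ratio per report
    odds *= lr ** n_nil_reports
    return odds / (1.0 + odds)

# The posterior falls quickly from 0.8 toward zero as nil reports pile up,
# precisely because nil reports are so strongly suppressed if the drug works.
for n in (0, 1, 3, 5):
    print(n, round(posterior_works(0.8, n), 3))
```

The point the code illustrates is that the biases themselves do the work: because a nil-benefit anecdote is much less likely to surface if the drug genuinely works, each one that does surface carries a likelihood ratio well below 1 and erodes the prior fast.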
 
If it's a commercially marketed product, I guess you have to take into account the extent to which you believe the product claims - i.e. by how much are they being hyped?
 
Also take into account the extent to which things like the manufacturer's reputation, fancy packaging, prior anecdotes of success, and how much you paid for it influence your expectations. These are all factors in a good placebo. As for how the anecdotes influence the likelihood of a double-blind study finding a real effect, I'd think they're only loosely related.

I actually brought something similar up a couple of years ago, in a thread about ethnobotany. My hypothesis at the time was that when ethnobotanists gather stories about supposed medicinal plants used by primitive peoples, those plants would be more likely to actually be medicinal than a random sample of plant life. I still think there's some potential validity to that, but when I stated it strongly I was just as strongly rebutted by folks who made some intelligent arguments.
 
Thanks for the replies, but really it's the specific comparison between apparently confirming and apparently disconfirming anecdotes that I was hoping to explore.
 
Thinking it through a bit further, I wonder whether my scenario is a greyer variant of the 'Black Swan' problem.

In that case, the prior probability ascribed to the hypothesis that all swans are white would have been close to 100%. Endless observations of white swans would have little net effect on that near-unity probability. Positively biased observations of big white seagulls and geese misidentified as white swans would have added to the total of positive reports and been buried among them.

A single observation of a black swan is only anecdotal, uncontrolled, and subject to the fallibility of human senses and reporting. But it is made in defiance of all the biases that would militate against the observer believing the evidence of his own eyes, and so it has an overwhelmingly large effect on the post-observation probability of the hypothesis that all swans are white.
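The asymmetry can be made concrete with a single-observation Bayesian update. All the rates below are illustrative assumptions: both hypotheses predict a white-swan sighting almost always, so such a sighting carries a likelihood ratio near 1; a black-swan report is assumed to have a tiny false-report rate if all swans really are white, but to be quite likely if black swans exist:

```python
# Illustrative sketch of the black-swan asymmetry. All rates are assumed.

def posterior(prior, p_obs_if_h, p_obs_if_not_h):
    """One Bayesian update of P(hypothesis) on a single observation."""
    odds = prior / (1.0 - prior)
    odds *= p_obs_if_h / p_obs_if_not_h   # likelihood ratio of the observation
    return odds / (1.0 + odds)

p = 0.99  # assumed prior that all swans are white

# One more white-swan sighting: predicted ~always under either hypothesis,
# so the posterior barely moves above 0.99.
print(posterior(p, 1.0, 0.9999))

# One black-swan report: assumed misreport rate 0.001 if the hypothesis is
# true, versus 0.5 if black swans exist. The posterior collapses well below 0.5.
print(posterior(p, 0.001, 0.5))
```

This is the greyness the post is gesturing at: what matters is not whether an observation is anecdotal, but how differently the competing hypotheses predict it, bias and all.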
 
I don't understand that at all. But I have, various times in my long life, watched certain people try to argue in such a way that they discount all observations (anecdotal evidence), which backfires on them, because they lose all credibility in doing so.

It's like telling somebody who owns a few black swans that black swans do not exist.
 
