steve74 said:
The question, in their study, was phrased rather more exactly than in your question, specifically:
“If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming that you know nothing about the person’s symptoms or signs? ____%”
Now that is valid. It states clearly that the figure being given is the "false positive rate", which is simply 100% minus the specificity (or, to be absolutely exact, the other way around: specificity is defined as 100% minus the false positive rate). So specificity is 95%. Perfectly clear.
"ASSUMING YOU KNOW NOTHING ABOUT THE PERSON'S SYMPTOMS OR SIGNS."
Exactly. State that clearly, and we understand the question. There is essentially one defensible answer: the probability of a positive result being correct is only about 2% (1.87% if the sensitivity is also taken as 95%; 1.96% if the test is assumed perfectly sensitive). We can say nothing about the probability of a negative result being right, because we have not been told the sensitivity.
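The arithmetic is just Bayes' theorem. A minimal sketch in Python (the sensitivity values are my assumptions, since the question doesn't supply one; 95% reproduces the 1.87% figure):

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(disease | positive test)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Prevalence 1/1000, specificity 95% (i.e. false positive rate 5%).
# Sensitivity is NOT given in the question; 95% is an assumption that
# reproduces the 1.87% figure, and a perfect test gives about 1.96%.
print(f"{ppv(0.001, 0.95, 0.95):.2%}")   # → 1.87%
print(f"{ppv(0.001, 1.00, 0.95):.2%}")   # → 1.96%
```

Either way, the false positives among the 999 unaffected people per thousand swamp the one true positive, which is the whole point of the exercise.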
I can see how only 18% got anywhere near. The combination of quite poor specificity (95% isn't very good) and very low disease prevalence pushes the number very low, and if you're guessing rather than working it out, it isn't at all intuitive.
Wrath's basic premise isn't so wrong. Left to themselves, without being led through the arithmetic, a lot of medics and vets do make a wrong guess. However, that is one of the things clinical pathologists are for. You ring me up and ask: do I trust this, or do I need further testing?
Or in the real world, where knowing nothing about the patient's clinical signs, or the reason the test was requested, simply isn't on the agenda, the intuition jumps to:
- a negative test in a patient who probably doesn't have the disease is probably right
- a positive test in a patient who probably doesn't have the disease is probably wrong
- a positive test in a patient who probably does have the disease is probably right
- a negative test in a patient who probably has the disease is the least firm situation (getting right over to the right-hand side of the graph isn't that common clinically); it certainly should not be taken at face value, and needs follow-up
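Those four intuitions fall straight out of the same arithmetic once the pre-test probability, which is exactly what the clinician's knowledge of signs and symptoms supplies, is allowed back in. A sketch (the 95% sensitivity and specificity are illustrative assumptions, not figures from the question):

```python
def predictive_values(pretest, sensitivity=0.95, specificity=0.95):
    """Post-test probabilities for a positive and a negative result.

    The 95% sensitivity and specificity defaults are illustrative
    assumptions, chosen only to show how pre-test probability
    dominates the predictive values.
    """
    tp = pretest * sensitivity            # true positives
    fp = (1 - pretest) * (1 - specificity)  # false positives
    fn = pretest * (1 - sensitivity)      # false negatives
    tn = (1 - pretest) * specificity      # true negatives
    ppv = tp / (tp + fp)                  # P(disease | positive)
    npv = tn / (tn + fn)                  # P(no disease | negative)
    return ppv, npv

# Sweep the pre-test probability from "almost certainly well"
# to "almost certainly diseased".
for pretest in (0.001, 0.10, 0.50, 0.90):
    ppv, npv = predictive_values(pretest)
    print(f"pre-test {pretest:5.1%}: PPV {ppv:6.1%}, NPV {npv:6.1%}")
```

At a 0.1% pre-test probability the PPV is under 2% while the NPV is essentially 100%; at 90% the situation reverses, with the NPV dropping to around two-thirds. That is the table the experienced clinician carries in their head without ever writing it down.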
Let the experienced clinician do it by true instinct, and they will be right most of the time without working it out, so long as you allow them all the information they usually have.
Ask the question in a deliberately statistical manner, without allowing a full calculation, and the intuitive answer may well be wrong.
The danger lies in drumming the bare statistics into people (as Jacobson did) without qualification. The result is often that the oversimplified position, "negative results are always right, positive results are always wrong", becomes the take-home message.
Which is where the NegTest came in.
More common is the (also false) position that negative results are always right, and positive results always have to go to the reference method. Not so bad, but it leads to unnecessary doubt being cast on the obvious true positives in clear-cut clinical cases, and to under-appreciation that even negative results can be wrong, again especially if the patient is showing clinical signs.
So without a full understanding of the complete range of possibilities, which Wrath clearly lacks, you end up with people making worse seat-of-the-pants assumptions than they would have made if you had left them alone.
Just to recap: Wrath got the question wrong. That's quite obvious. He didn't state the terms as clearly as the original did, if this is indeed the original being quoted. (And if he has an original which asks his exact question, first show me the reference, and second, it was a bad study.)
Yes, we could see what you meant. But please don't continue to assert that this was the only possible way the question could be taken.
Rolfe.