Wrath of the Swarm said:
Why the doctor performed the test has no bearing on the correct answer! Does it matter why Farmer Brown took away three apples from the box that held seven? No!
And there aren't an infinite number of answers. Without specifying different values for alpha and beta, we consider only error. Alpha and beta values follow from the overall accuracy.
Again: the hypothetical test was 99% accurate, so there was a 99% chance that any result it came up with would be correct. That tells you what the alpha and beta rates are - they're equal in this particular case.
This so confuses and conflates my two separate assumptions about what Wrath meant by "accurate" that I'm barely capable of disentangling them. It's now clear that Wrath had even less idea what he was talking about than I realised. Breathtaking.
there was a 99% chance that any result it came up with would be correct
Do you realise that you've just soundly contradicted yourself? The entire thrust of this thread was to demonstrate (correctly, for the conditions you assumed but did not state) that the chance the result in question was correct was in fact less than 10%.
Make up your mind.
There are only two ways I can see to get this "accuracy" figure.
An arithmetical mean of the sensitivity and specificity. If they were equal, then that would be right enough. But you've explicitly denied that this is how you calculate the figure.
Or the percentage of tests carried out in practice which are correct (positive or negative). This seems the more likely reading for a figure you now relabel as "error", but to calculate it you need all three of the sensitivity, the specificity and the incidence of the condition in the population being tested.
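To pin down what I mean by those two readings, in symbols (my notation, not Wrath's: Se for sensitivity, Sp for specificity, p for the incidence in the population being tested):

\[
\text{accuracy}_{\text{mean}} = \frac{Se + Sp}{2}
\qquad \text{versus} \qquad
\text{accuracy}_{\text{population}} = p \cdot Se + (1 - p) \cdot Sp
\]

The first can be quoted without any reference to the population; the second cannot even be calculated without the incidence.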
Dream of a thousand cats (with apologies to Neil Gaiman).
1000 cats. Incidence of FeLV infection 10% (for whatever reason).
FeLV test, sensitivity 98%, specificity 95%.
We have 100 infected cats, and 900 uninfected cats.
Of the 100 infected, 98 are true-positive and 2 are false-negative.
Of the 900 uninfected, 855 are true-negative and 45 are false-positive.
Total results:
143 positive, of which 68.5% are correct.
857 negative, of which 99.8% are correct.
1000 results, of which 47 are wrong; therefore 95.3% of the results on this population are correct. Note that the positives are much more likely to be wrong than the negatives, as is quite often the case (special circumstances pertaining to individuals with very pathognomonic clinical presentations notwithstanding).
And you can see that if you plug in different values for the three original variables, you can get a wide variety of different answers.
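For anyone who wants to plug in their own numbers, here's a minimal sketch in Python (my own illustration; the function name and layout are mine, not anything Wrath has offered) that reproduces the thousand-cat figures:

```python
# Minimal sketch: diagnostic test outcomes for a screened population.
# All three inputs are needed to say what proportion of results are correct.

def test_outcomes(population, prevalence, sensitivity, specificity):
    infected = population * prevalence
    uninfected = population - infected

    true_pos = infected * sensitivity        # infected cats correctly flagged
    false_neg = infected - true_pos          # infected cats missed
    true_neg = uninfected * specificity      # uninfected cats correctly cleared
    false_pos = uninfected - true_neg        # uninfected cats wrongly flagged

    ppv = true_pos / (true_pos + false_pos)  # chance a positive result is right
    npv = true_neg / (true_neg + false_neg)  # chance a negative result is right
    overall = (true_pos + true_neg) / population

    return ppv, npv, overall

# The dream of a thousand cats: 10% incidence, 98% sensitivity, 95% specificity.
ppv, npv, overall = test_outcomes(1000, 0.10, 0.98, 0.95)
print(f"Positive results correct: {ppv:.1%}")      # 68.5%
print(f"Negative results correct: {npv:.1%}")      # 99.8%
print(f"All results correct:      {overall:.1%}")  # 95.3%
```

Run as written it gives 68.5%, 99.8% and 95.3%, matching the hand calculation above; change any of the three inputs and the headline figure changes with them.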
OK Wrath. Those are two ways of calculating "accuracy" according to definitions I can comprehend. Now would you please show me the maths for your derivation of 99%?
And it's quite ridiculous to assert that, because you gave only one figure, we should assume the same figure applies to both sensitivity and specificity; that pretty much never happens in the real world. To say that, since only specificity was relevant to the question, you therefore meant "specificity" is reasonable, and it's what I originally assumed.
But if 99% is some figure calculated from sensitivity and specificity, I at least want to know how you are going to calculate it when the two values are not equal.
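To show what I mean, here's a tiny sketch (my own numbers and formula, assuming the prevalence-weighted reading of "accuracy"): with unequal sensitivity and specificity there is simply no single figure to calculate until you say what the incidence is.

```python
# Overall "accuracy" from unequal sensitivity and specificity (illustrative
# numbers only): the answer depends on the incidence in the tested population.
def overall_accuracy(prevalence, sensitivity, specificity):
    return prevalence * sensitivity + (1 - prevalence) * specificity

for p in (0.10, 0.01):
    print(f"incidence {p:.0%}: {overall_accuracy(p, 0.98, 0.95):.1%} of results correct")
# incidence 10%: 95.3% of results correct
# incidence 1%:  95.0% of results correct
```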
Rolfe.