Poll: Accuracy of Test Interpretation

Calculating? Rolfe, you don't understand. The accuracy of the test is the value we calculate other values from in the presented problem.

In any situation, the test is 99% likely to give the correct answer. That's the whole point of the question - people get confused between the idea that the test is right 99% of the time and the idea that the chance of it being accurate in a specific, particular case is only 9%.

If you're told only that the test is 99% accurate, that holds no matter what other conditions apply. From this, it follows that the alpha and beta rates are identical.

There is no information missing. You're just not capable of deriving information you weren't presented with.
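For readers following along, the two figures being argued over (99% vs roughly 9%) can be reproduced with a short Bayes calculation. A quick sketch, assuming the 1-in-1000 prevalence from the question as originally posed, and the equal 1% false-positive and false-negative rates that are the very assumption under dispute:

```python
# Probability of actually having the disease given a positive result,
# under the disputed assumption that false-positive and false-negative
# rates are both 1%.
prevalence = 1 / 1000        # 1 in 1000 people have the disease
sensitivity = 0.99           # P(positive | diseased)
specificity = 0.99           # P(negative | healthy)

p_positive = (prevalence * sensitivity
              + (1 - prevalence) * (1 - specificity))
ppv = prevalence * sensitivity / p_positive  # P(diseased | positive)

print(f"P(positive) = {p_positive:.5f}")         # 0.01098
print(f"P(diseased | positive) = {ppv:.3f}")     # 0.090, i.e. ~9%
```

The false positives from the 999-in-1000 healthy majority swamp the true positives from the 1-in-1000 sick minority, which is where the counter-intuitive ~9% comes from.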
 
Wrath of the Swarm said:
There is no information missing. You're just not capable of deriving information you weren't presented with.

Um, you did notice that she got the right answer the first time, yes?

So, what are you trying to say here? That the answer you claimed was right is not? Because, obviously, she had to derive the information to get that answer.

Or did little angels whisper in her ear? What are you trying to assert?

She not only derived the information, she then went on to explain how this information should not have to be derived, why the terms used are not applicable in the real world, exactly how those terms should be used, why the 9% figure does not apply overall (because real-world tests are not typically given willy-nilly), and several other bits of info that you did not mention and still seem unable to grasp (or unable to admit that she is right).

I'm sorry, Wrath. Your ego is so big it's blocking your view. This thread would have been over three pages ago, without bickering and with some useful statistical knowledge, if you weren't too much of an a$$ to admit someone might know more than you.
 
Wrath of the Swarm said:
Calculating? Rolfe, you don't understand. The accuracy of the test is the value we calculate other values from in the presented problem.

In any situation, the test is 99% likely to give the correct answer. That's the whole point of the question - people get confused between the idea that the test is right 99% of the time and the idea that the chance of it being accurate in a specific, particular case is only 9%.

If you're told only that the test is 99% accurate, that holds no matter what other conditions apply. From this, it follows that the alpha and beta rates are identical.

There is no information missing. You're just not capable of deriving information you weren't presented with.
Wrath, you're digging deeper and deeper into "make it up as you go along".

Nobody will ever tell you in practice that any serology test is "99% accurate", because accuracy isn't a defined term in the vocabulary of this problem. BillyJoe has kindly posted a list of (almost) all the defined terms and how they are defined, which I reproduced above. These are the words we professionals use when talking about these things.

You keep talking about "people getting confused", but from the evidence presented so far on this thread, the most confused person is yourself. I think this is because you have learned by rote a particular example case which has a non-intuitive answer, and which can thus, by careful phrasing of a trick question, be used to ambush medical types. However, you have not come anywhere close to grips with the permutations and variability that are possible unless the parameters of the question are extremely carefully nailed down in advance.

Since you made the mistake of not nailing down the parameters of the question when you originally posed it, you are coming up against aspects of the problem you never even thought about.

Now, can I ask again. What do you mean by "alpha and beta" rates? I'm assuming this is another way of saying sensitivity and specificity, but I've never encountered this before, so would you mind confirming that this is the case, and revealing which is which?

And about that "accuracy" figure again. When I asked how I would calculate it, I meant: if I were characterising an assay, how would I derive the figure from the raw data? The same way BillyJoe detailed how you would derive figures like the specificity or the positive predictive value from a set of raw data.

Raw data for characterisation of the test is quite easy.

Take a group of patients, x% of whom have the disease (prevalence, which as I keep saying is a mobile and artificial concept), and where you already know by some other means (reference method) which is which. Test them all. Some of the affected patients will test positive (TP), and some will test negative (FN). Some of the unaffected patients will test negative (TN), and some will test positive (FP). This is your starting point. I can derive all the defined characteristics of any assay from these numbers - I have to, it's all you're going to get. BillyJoe showed you how this is done. I want the same level of understanding from you about how you derive your "accuracy".
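The derivation from raw counts that Rolfe describes can be sketched in a few lines. The four counts below are invented purely for illustration, and "accuracy" here is one plausible reading of the disputed term (overall fraction of results that are correct), since the thread never pins it down:

```python
# Hypothetical raw counts from characterising a test against a
# reference method (numbers invented for illustration only).
TP, FN = 95, 5      # diseased patients: tested positive / negative
TN, FP = 900, 100   # healthy patients: tested negative / positive
total = TP + FN + TN + FP

sensitivity = TP / (TP + FN)   # P(positive | diseased) = 0.95
specificity = TN / (TN + FP)   # P(negative | healthy)  = 0.90
ppv = TP / (TP + FP)           # P(diseased | positive) ~ 0.49
npv = TN / (TN + FN)           # P(healthy | negative)  ~ 0.99
prevalence = (TP + FN) / total

# One candidate definition of "accuracy": fraction of all results correct.
accuracy = (TP + TN) / total

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
print(f"PPV={ppv:.2f} NPV={npv:.2f} accuracy={accuracy:.3f}")
```

Note that the "overall fraction correct" figure depends on the prevalence in the characterisation sample, which is exactly why it is not among the standard defined terms BillyJoe listed.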

Rolfe.
 
Wrath of the Swarm said:
If you're told only that the test is 99% accurate, that holds no matter what other conditions apply. From this, it follows that the alpha and beta rates are identical.

Once again, this is more information than is available in the original question.
 
Originally posted by ceptimus
If Wrath had made this explicit in his question - say he had put, "The test is always 99% accurate, regardless of what percentage of those tested have the disease", then we wouldn't be having this discussion.
If the test is always 99% accurate, wouldn't the accuracy of your positive result be 99%?

That was my initial thought, because the question didn't specify false positives or false negatives explicitly. I ASSUMED an equal distribution between false positives and negatives though, which eventually led me to the 10% figure, but obviously that's a big assumption to make, seeing how wildly the results can vary depending on how you interpret that "accuracy" value of 99%.
 
Wrath of the Swarm said:
Neither.

I finally found the sources that duplicated the question (I even pointed them out, remember?).

The original source was a professor of mine, many years ago.

Your attempts at evasion would be funny if they weren't so wearisome. You posted three links:

Link 1

Link 2

Link 3

Your original post read:


Let's say that I went for an annual medical checkup, and the doctor wanted to know if I had a particular disease that affects one out of every thousand people. To check, he performed a blood test that is known to be about 99% accurate. The test results came back positive. The doctor concluded that I have the disease.
(My emphasis)

Nowhere in any of those three links is the term accuracy or accurate used in the way you use it. By contrast all three do use the terms false positive and false negative.

Again, I remind you that you claimed:

Point 1: The question, as I presented it, is the same question that was used in research with doctors.

Yet you have still not cited a source for this question. If the question was not copied directly from a study, then fine, just admit this and the question can be discussed on its own (de)merits. But don't lie and say it was a question posed to doctors in a study, when you are unable to cite this study.

Edited fur speelin mistuks
 
Originally posted by Wrath of the Swarm
Let's say I flip a coin (heads or tails) and then someone guesses which side came up.

I can specify an accuracy of the guessing without stating alpha and beta rates because they're the same.
How do you get a false positive when flipping a coin? How exactly does that work? I would really like to know.
 
Huntsman said:
Um, you did notice that she got the right answer the first time, yes?
Yes, but she claimed she could only get the right answer by assuming information that wasn't presented to her. That is a lie.

The question doesn't ambush anyone. Yes, the information given to doctors is generally more complex than that. That makes the question easier to answer, not harder.

Unless of course you've learned to get the answer by rote, in which case changing the presentation screws everything up.
 
Originally posted by Wrath of the Swarm
But you couldn't say that you had a 60% chance of being correct in all circumstances, could you now?
Hang on, are you saying that something which has a 60% chance of occurring does not have a 60% chance of occurring under all circumstances? And where exactly did you study statistics? Did your degree come in a box of cheerios too?
 
I will admit that I can't find a source that used precisely the same question. That is the question I remember being given (quite vividly), but I can't demonstrate that it was ever used in a study.

I retract the claim and admit I was wrong to make it.

Posted by exarch: Hang on, are you saying that something which has a 60% chance of occurring does not have a 60% chance of occurring under all circumstances? And where exactly did you study statistics? Did your degree come in a box of cheerios too?
You ignorant prat.

If the coin has a 60% chance of coming up heads, that does not mean that in a sample of tosses, heads will occur 60% of the time.

Anyone with even the slightest knowledge of statistics would know that. Hell, the majority of high school students would know that.

How old are you, again? Are you sure you're permitted on the net without parental supervision?
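The point being argued here, that a 60%-biased coin will not land heads exactly 60% of the time in any finite sample, can be illustrated with a quick simulation. The sample size and seed below are arbitrary choices:

```python
import random

random.seed(1)   # arbitrary seed, purely for repeatability
p_heads = 0.60   # the coin's true long-run chance of heads
n_tosses = 100

# Count how many of 100 tosses come up heads.
heads = sum(random.random() < p_heads for _ in range(n_tosses))
print(f"observed proportion of heads: {heads / n_tosses:.2f}")
# The observed proportion scatters around 0.60 from sample to sample;
# only the long-run frequency converges to it.
```

Re-running with different seeds gives different observed proportions, which is the distinction between a probability and a sample frequency.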
 
Originally posted by Wrath of the Swarm
The hypothetical test given originally has a 99% accuracy, no matter what subjects are fed to it. That's what accuracy means - it's no good using a concept that depends on the population distribution if you don't know what that is, and we can't presume beforehand that a doctor will face any particular population.
But this is the exact opposite of what you were saying earlier. If the population fed to the test are all healthy people, an accuracy of 99% means 1% false positives. How can this same test also be 99% accurate when a 50/50 sample is fed? By chance, in this case, you'd get 0.5% false positives and 0.5% false negatives. But with other "accuracies", the numbers don't work out that nicely.

When determining the odds of a particular positive being a false positive, the overall accuracy is meaningless unless you also specify the odds of a false positive occurring.

In other words, a test that is 95% accurate for an illness with a 1/1,000,000 occurrence cannot be equally 95% accurate when testing only sick people, only healthy people, and a random population sample.

Accuracy as you describe it simply means that 1 out of every 100 tests is defective and gives the opposite result to the truth. This type of logic works just dandy in the case of puzzles, in the puzzle section, but it simply doesn't apply to reality, which, no matter how hard you want to deny it, is the not-so-well-hidden ulterior motive of this poll: pointing out that evil doctors are stupid for trusting test results, and you are not :rolleyes:
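Both halves of this dispute can be checked numerically. Writing alpha for the false-positive rate and beta for the false-negative rate (the definitions given later in the thread), the overall fraction of correct results is a prevalence-weighted average, so it depends on the population tested exactly when the two error rates differ. The rates below are illustrative:

```python
def overall_accuracy(prevalence, alpha, beta):
    """Fraction of correct results over a tested population.
    alpha = false-positive rate, beta = false-negative rate."""
    return prevalence * (1 - beta) + (1 - prevalence) * (1 - alpha)

# Unequal error rates: accuracy drifts with the population mix.
for p in (0.0, 0.5, 1.0):
    print(p, overall_accuracy(p, alpha=0.02, beta=0.00))
# prints 0.98, 0.99, 1.00 as prevalence goes from 0 to 1

# Equal error rates: accuracy is 0.99 for every population mix.
for p in (0.0, 0.5, 1.0):
    print(p, overall_accuracy(p, alpha=0.01, beta=0.01))
```

This is the substance of Wrath's "alpha = beta" claim, and equally the substance of exarch's objection: a single population-independent accuracy figure is only coherent when the two error rates coincide.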
 
Originally posted by Wrath of the Swarm
And this "expert" is making an invalid objection. You can't just assert that the objection is valid because she's an "expert" - that's one of those nasty arguments from authority, remember?
Actually, it's not an "argument from authority" fallacy when the authority referenced really *IS* an authority on evaluating clinical test results.

I think your denial of her authority on this subject could in fact be seen as an ad hom :)
 
That's not the opposite of what was said. It's what was said.

The test's accuracy does not depend on the population incidence. If all sick people are tested, 1% of the responses will be wrong. If all healthy people are tested, 1% of the responses will be wrong. It doesn't matter.

We need to specify alpha and beta rates only if the chance of 'positive' being wrong is different from the chance of 'negative' being wrong regardless of what the population is.
 
Originally posted by Wrath of the Swarm
Exarch: you've claimed that the population distribution was irrelevant to the question.
Strawman.

Edited to elaborate before Woo of the Swarm jumps on this:
I said the distribution of the disease among the population didn't matter right after I stated that the "accuracy" of the test might simply apply to false negatives, in which case all positives are always positives, and the doctor is always right in assuming the test result to be correct.

I also stated that if the test itself is always 99% accurate, then your positive result, no matter how big or small the population, is always correct 99/100 times. Again, distribution among the population is not relevant in this case. But this situation doesn't occur in reality, only in theoretical puzzles. Since this is a theoretical puzzle though, you should have clarified.
 
But she's generalizing from analyzing results given in a particular manner to a completely different type of question, and she's failing.

She may be an expert in an extremely narrow band of competence, but she can't do basic statistics when taken outside of that narrow band.

If the accuracy of the test can be established without referencing alpha and beta rates, then it follows that the alpha and beta rates are equal. This is a very simple and basic point that Rolfe does not seem to understand.
 
exarch said:
Well Ceptimus, if you put the question like that, this means that of every 100 positive results you get, only 1 will be wrong, and as such, your chances of being the false positive are 1/100. The distribution of the affliction among the population isn't even relevant any more.

You're a liar, exarch. 'Strawman' indeed. You didn't even do the arithmetic correctly - if ceptimus phrased the question as he did, your conclusion is wrong.
 
Wrath, you are a total fool.

The incidence of disease X in the general population is W.

The incidence of disease X in a population with symptoms A, B, and C is W + Y.

The incidence of disease in those 2 groups is NOT the same.
 
Come on chaps - isn't it time to quit?

Wrath can retire (with some egg-on-face, having been so keen to show that all doctors are fools that he forgot sometimes others, himself included, can be shown to be fools too), and Rolfe can calm down and accept that although she is the queen of predictive values, her considerable talents would be better directed at woo-fighting than semantic arguments with Wrath.
 
yersinia29 said:
Wrath, you are a total fool.

The incidence of disease X in the general population is W.

The incidence of disease X in a population with symptoms A, B, and C is W + Y.

The incidence of disease in those 2 groups is NOT the same.
Who mentioned symptoms A, B, and C? Checking for those symptoms is another test - and we've already established that combining two tests gives a much more accurate result than either separately.
 
Deetee said:
Wrath can retire (with some egg-on-face, having been so keen to show that all doctors are fools that he forgot sometimes others, himself included, can be shown to be fools too)
There's just three problems with this.

1) The point is to show that people are really bad at statistics, including doctors.

2) According to the presented arguments, Rolfe is the one who's wrong.

3) According to the presented arguments, Rolfe is the one who's wrong.

Now, I realize that this is only two problems, but the second is so important I thought it should be mentioned twice. I'll say it again:

If it's possible to establish an accuracy for the test as a whole, then the alpha and beta rates (chances of false positive and false negative, respectively) MUST BE EQUAL.

There's no other possibility. No one needs to 'assume' they're the same - it is mathematically impossible for it to be otherwise. If this is not the case, then the accuracy of the test becomes a variable dependent on the population given to it, and it is then impossible to consider how good the test is independent from a particular sample. That is fairly useless, and so it's not used when talking about the test itself.
 
