bpesta22
Cereal Killer
- Joined: Jul 31, 2001
- Messages: 4,942
Crap, I hadda open my big mouth.
This stuff can be resolved only by appealing to signal detection theory (a method perfected by communication engineers working for the military in WWII; they used it to help radar operators tell whether a blip on the screen was just noise or an actual enemy plane).
SDT can be used in any situation where two states of reality are possible (you have cancer / you don't), and you have some way (a test) of identifying which is which on any given trial (here, a trial is a single patient that you test).
So, as many posted earlier, on any one trial (i.e., patient tested) there are four possibilities.
The "response" comes from the test (positive or negative); the "signal" is reality, i.e., the patient: God knows (sorry about that) whether he actually has the disease or not. Crossing the two gives the four outcomes: hit (test positive, disease present), miss (test negative, disease present), false alarm (test positive, no disease), and correct rejection (test negative, no disease).
So, this is a detection task. Based on this test, can we detect the presence or absence of disease in our patients?
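If it helps to see the bookkeeping, here's a minimal sketch (Python, with made-up names, not anything from an actual stats package) that maps one trial's test result and true state onto those four outcomes:

```python
def classify_trial(test_positive: bool, has_disease: bool) -> str:
    """Map one trial (test result vs. true state) to its SDT outcome."""
    if has_disease:
        return "hit" if test_positive else "miss"
    return "false alarm" if test_positive else "correct rejection"

# Example: the test says positive, but the patient is actually healthy.
print(classify_trial(test_positive=True, has_disease=False))  # -> "false alarm"
```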
Detection tasks depend on two issues:
1) Sensitivity (unfortunately, this is not the "sensitivity" people are referring to above). Sensitivity in this example gets at the test's validity. A valid test is more sensitive than an invalid one (in the graph below, the bigger the distance between the means of the two bell curves, the greater the test's sensitivity / validity). If a test had zero validity (or reliability), the two distributions would be right on top of each other.
Finally, sensitivity in this sense is often referred to as d' (d prime).
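For anyone who wants a number, the usual equal-variance formula is d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch, assuming you've already tallied those two rates (the example rates here are invented):

```python
from scipy.stats import norm  # norm.ppf is the inverse of the standard normal CDF

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Equal-variance Gaussian d': z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# A test that catches 90% of diseased patients but also flags 20% of healthy ones
print(d_prime(0.90, 0.20))  # ~2.12 -- the farther apart the two curves, the bigger d'
```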
2) Decision Making Criterion. This is up to the tester. At what point (test score) do I conclude the patient has the disease, and at what point (test score) do I assume the person does not have the disease?
This is called beta, and it's represented by the black vertical line in the graph. Any observed test score higher than beta, and the doctor concludes the patient is positive; any test score below beta, and the doctor concludes the patient is negative.
But where do you put beta? Ideally, to be unbiased, you should put it exactly at the point where the two curves intersect (in the graph above, beta looks like it's about a centimeter left of optimal).
I'm pretty sure that at this point, specificity = sensitivity (with sensitivity defined the way other posters have been using it).
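Here's a rough simulation of the idea (the means, spreads, and sample sizes are made up purely for illustration): two overlapping bell curves and a criterion you slide around. Notice that at the middle setting, halfway between the means where equal-variance curves cross, the hit rate matches the correct-rejection rate, which is the specificity = sensitivity point above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative score distributions: healthy patients (noise) vs. diseased patients (signal)
healthy = rng.normal(loc=0.0, scale=1.0, size=10_000)
diseased = rng.normal(loc=1.5, scale=1.0, size=10_000)

def rates_at(criterion: float):
    """Hit and false-alarm rates when every score above the criterion is called positive."""
    hit_rate = np.mean(diseased > criterion)
    fa_rate = np.mean(healthy > criterion)
    return hit_rate, fa_rate

# Slide beta around: shifted left, unbiased (where the curves cross), shifted right
for c in (0.0, 0.75, 1.5):
    h, f = rates_at(c)
    print(f"beta={c:4.2f}  hit rate={h:.2f}  false-alarm rate={f:.2f}")
```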
But, there are times when you want to be biased one way or the other.
If the detection task is hearing the linebacker husband come home while you are diddling his wife, you might want to avoid misses at all costs (which will, though, result in making lots of false alarms). So, you would set beta way to the left (so that any remotely loud noise will force you to get up and investigate).
In this scenario, you will make lots of hits, but also lots of false alarms. To the best of my understanding, this is where beta placement matters: shifting beta trades sensitivity (hits / (hits + misses)) against specificity (correct rejections / (correct rejections + false alarms)); move it left and sensitivity rises while specificity falls.
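In case the formulas get lost in the prose, here's the bookkeeping with some invented counts for a beta shoved way to the left:

```python
def sensitivity_specificity(hits, misses, false_alarms, correct_rejections):
    """Standard definitions: sensitivity = hits / (hits + misses),
    specificity = correct rejections / (correct rejections + false alarms)."""
    sensitivity = hits / (hits + misses)
    specificity = correct_rejections / (correct_rejections + false_alarms)
    return sensitivity, specificity

# Invented counts: almost nothing gets missed, but lots of healthy folks get flagged
print(sensitivity_specificity(hits=95, misses=5, false_alarms=40, correct_rejections=60))
# -> (0.95, 0.6): high sensitivity bought at the price of specificity
```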
If the detection task were deciding whether an accused person is guilty, well, then we have to set beta way to the right (given our very high standard of proof: beyond a reasonable doubt).
So, although we lower the incidence of putting innocent people in jail (i.e., false alarms), we raise the incidence of letting the guilty go free (misses).
Now, imagine testing not just one patient but hundreds or thousands. The results can be graphed as an ROC curve, which lets you see the validity of the test along with its sensitivity and specificity.
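As a teaser, here's roughly how the points on an ROC curve get generated (again with made-up distributions): sweep beta across the whole range of scores and record the hit rate and false-alarm rate at each setting. The area under that curve (0.5 = worthless test, 1.0 = perfect test) is one summary of validity.

```python
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 10_000)   # noise distribution (illustrative)
diseased = rng.normal(1.5, 1.0, 10_000)  # signal distribution (illustrative)

# Sweep the criterion from far right to far left; each setting gives one point on the ROC curve
criteria = np.linspace(4, -4, 50)
fa_rates = [np.mean(healthy > c) for c in criteria]
hit_rates = [np.mean(diseased > c) for c in criteria]

# Area under the curve as a single-number summary of the test's validity
auc = np.trapz(hit_rates, fa_rates)
print(f"approximate area under the ROC curve: {auc:.2f}")
```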
But, that's a post for a later date!
B