
Polygraphs: The evidence

In a word: Yes.

But that's OK. "Polygraphs perform better than chance", so you have nothing to worry about. Nothing!!

It could be interesting to hear from those who are so impressed with the evidence just what they think the polygraphs can be used for.

So, Claus. Do polygraphs perform better than chance?
 
I'm not disputing that you can correlate deviation from the baseline with deception at better-than-chance accuracy in a laboratory setting when you know what the truthful answers are.

What I'm asking is: how do you distinguish between a nervous reaction and deception in a real-life situation when you don't know the truthful answers?

I really don't understand why you keep asking this. If something gets 70% correct when you know what the answers are, what makes you think it would be any different when you don't? To go back to your psychic example, if someone can guess which way up a coin will land 70% of the time under certain conditions, that means that in the future, under the same conditions, if they guess a coin toss they have a 70% chance of getting it right. That's what 70% accuracy means.

To apply it to this topic, why does it matter if you know they are lying or not? You ask someone some questions and they answer, and you decide whether they are lying or not without knowing what the answer actually is. Afterwards, you go back and check the actual answers. It turns out that out of 10 answers you said were lies, 7 actually were, i.e. you were 70% accurate. Now go and ask them another question. If your machine says they are lying, what is the chance it is right? 70%. Whether you know the real answer or not.
 
What you're missing in your conclusion is that you're ignoring the base rate of deception in the population you're testing. Within your hypothetical 80% 'deception indicated' group, you could only conclude that more than half were lying if the base rate of deception were greater than 50%. If the base rate of deception is low, then the majority of your 80% 'deception indicated' group would be false positives...

The bolded statement would only be true if we assume a priori that the polygraph doesn't work - i.e. it provides no information about the actual individuals being tested - which is what we are meant to be investigating. It is equivalent to saying: IF the real deception rate in the tested subjects is low AND the polygraph's positive rate is high THEN most of the positives are false - a completely contentless statement.

Continuing with the hypothetical example: I've forgotten what detection and false positive rates we're assuming (and can't be bothered trawling the entire thread to find out). Let's be conservative and say 70% detection and 10% false positive. We want to know the proportion of liars in the group that has tested positive. You are right that we also need to know the proportion of liars in the population under consideration – say 20%. Then:

Of 100 representative subjects:
20 are liars
80 are truthful

Of the 20 liars:
70% = 14 will be correctly detected
6 will not be detected

Of the 80 truthful subjects:
10% = 8 will be incorrectly 'detected'
72 will be found truthful

Summary:
True Negatives = 72
True Positives = 14
False Negatives = 6
False Positives = 8

Therefore, of the group that tests positive, 8 out of 22 (i.e. 36%) are false positives (innocent test failures), and not 'the majority'.
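For anyone who wants to play with these numbers, here is a minimal sketch of the arithmetic above in Python (the function name is mine, and the 20% / 70% / 10% rates are just the assumptions of this example, not established polygraph figures):

[code]
# Illustrative sketch of the worked example above.
def positive_breakdown(n, base_rate, detection_rate, false_positive_rate):
    liars = n * base_rate
    truthful = n - liars
    true_pos = liars * detection_rate            # liars correctly detected
    false_neg = liars - true_pos                 # liars who slip through
    false_pos = truthful * false_positive_rate   # truthful subjects who fail
    true_neg = truthful - false_pos
    false_pos_share = false_pos / (true_pos + false_pos)
    return true_pos, false_neg, false_pos, true_neg, false_pos_share

print(positive_breakdown(100, 0.20, 0.70, 0.10))
# -> (14.0, 6.0, 8.0, 72.0, 0.3636...): 8 of the 22 positives, about 36%,
#    are false positives, matching the figures above.
[/code]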

Imagine you run this test with 100 subjects every day for several years, with an average of 22 positives per day, and on one particular day 80 of the subjects test positive. That would certainly be anomalous.
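To put a number on 'anomalous': if the long-run positive rate really were 22 per 100, the chance of seeing 80 or more positives in a single batch of 100 is effectively zero. A rough check (purely illustrative, standard binomial arithmetic):

[code]
# How likely are 80+ positives out of 100 if the true positive rate is 22%?
from math import comb

def binomial_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(binomial_tail(100, 0.22, 80))  # on the order of 1e-34 - chance alone can't explain it
[/code]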

So, reviewing the assumptions:
1) The detection rate (70%),
2) The false positive rate (10%),
3) The proportion of liars in the test's base population (20%),
4) That the group that scored 80% positive is representative of that population.

Plainly, at least one of the above is wrong, but the numbers themselves cannot tell us which. If we got a batch of results like this in any kind of medical screening test, we would begin by suspecting a malfunction in the assay or machine, or some administrative error. Failing that, we would consider the possibility that this batch of subjects was atypical in some way, and investigate whether or not this invalidated the results. (In the atheists example, the explanation could indeed be that professed atheists are several times more likely to lie than the general population – that would be quite consistent with the figures.)

My point is that it would be quite invalid to conclude that an anomalously high result, well above the assumed base rate, means that the subjects must be truthful and the polygraph incorrect.


If you knew that 80-90% of the people you were going to test were actually guilty, how hard do you think it would be to identify the guilty just through good interrogation alone?
As you point out, the accuracy of any test (false-positive and detection rates) is completely independent of the frequency of the condition (guilt and deception) in the test population. If you know that 80-90% of the subjects are guilty then yes, you could use interrogation techniques such that 80-90% confess, or a polygraph threshold such that 80-90% test positive, but that does not 'identify the guilty'. If the test has no discriminatory power then you will still have equal detection and false positive rates - i.e. the test performs no better than chance. (This may be clear to you but it probably wasn't clear to most people reading this thread.)
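To make that concrete, here is a small sketch in Python (illustrative numbers and function name only): if the detection rate and the false positive rate are equal, the probability that someone who tests positive is actually guilty is just the base rate you started with, so the test has added nothing.

[code]
# If a test 'flags' guilty and innocent subjects at the same rate,
# a positive result tells you nothing beyond the base rate.
def positive_predictive_value(base_rate, detection_rate, false_positive_rate):
    true_pos = base_rate * detection_rate
    false_pos = (1 - base_rate) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# No discriminatory power: 85% of the guilty AND 85% of the innocent test positive.
print(positive_predictive_value(0.85, 0.85, 0.85))  # 0.85 - just the base rate
# A test with some discrimination (70% detection, 10% false positives):
print(positive_predictive_value(0.85, 0.70, 0.10))  # ~0.975
[/code]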


But that's OK. "Polygraphs perform better than chance", so you have nothing to worry about. Nothing!!

It could be interesting to hear from those who are so impressed with the evidence just what they think the polygraphs can be used for.
The polygraph doesn't show that 80% of the subjects were lying. It shows that 80% of the subjects had a deviation from the baseline. To repeat and rephrase my earlier question, which hasn't been answered yet (not directed particularly at you, but at the proponents in general):

How can you tell the difference between a lie and a nervous reaction to a question when you don't know what the truthful answer is?

CFLarsen, I see you have shifted your ground, and are no longer claiming that the basic concept is 'pseudoscience'. Good.

I don't think there are any 'proponents' of polygraphy here. My view is that there will probably never be any justification for routine use of the polygraph, and there is probably no justification for any (non-research) use at all at the moment. My motivation (and, I guess, skeptigirl's and drkitten's) was simply to correct a misconception - rather carelessly propagated by the anti-polygraph movement, and the 'skeptical' movement in general - that polygraph technology is bogus like Rife cancer treatment machines and electronic exercise belts and diagnostic dowsing machines etc. The claim is often made by the 'anti' campaigners that the results are entirely due to a kind of reverse-placebo effect, and the actual discrimination is performed by the trained tester using information completely separate from the machine readings. The point is that this is the wrong argument, and can easily be demolished by polygraph enthusiasts (for example, by reference to the NA report, especially the results of laboratory studies). The right argument is that the accuracy of the unaided technology is far below what is claimed (and commonly believed), and that it is therefore neither safe nor effective for any current or proposed use.

How could we use the polygraph in a real-world setting when we don't know the 'correct' result? .13., the important point to grasp is that the polygraph is no different in this respect from any imperfect test – a medical screening test, for example. We do the basic research, conduct studies to determine the scope of validity and obtain the 'calibration' data. When we have enough confidence in the test we introduce it in the field, including QA to monitor and improve the test's performance.

As to distinguishing a nervous response from a guilty one, we assume that there are in principle some detectable differences between the two types of response, and try to refine the test to amplify these differences. As digithead suggests, there are theoretical reasons to suggest that the problem will be reduced by using GKT-type questions rather than CQT – I don't know how well this has been tested.
 
I really don't understand why you keep asking this. If something gets 70% correct when you know what the answers are, what makes you think it would be any different when you don't? To go back to your psychic example, if someone can guess which way up a coin will land 70% of the time under certain conditions, that means that in the future, under the same conditions, if they guess a coin toss they have a 70% chance of getting it right. That's what 70% accuracy means.

Is it a fair coin?

To apply it to this topic, why does it matter if you know they are lying or not? You ask someone some questions and they answer, and you decide whether they are lying or not without knowing what the answer actually is. Afterwards, you go back and check the actual answers. It turns out that out of 10 answers you said were lies, 7 actually were, i.e. you were 70% accurate. Now go and ask them another question. If your machine says they are lying, what is the chance it is right? 70%. Whether you know the real answer or not.

I bolded the important part. You can't check your answers in real-world conditions. In such a case, how can you determine whether this particular person is lying or just had a nervous reaction to the question?
 
CFLarsen, I see you have shifted your ground, and are no longer claiming that the basic concept is 'pseudoscience'.
No, I haven't.
CFLarsen, what is your unorthodox definition of 'pseudoscience', that includes a technology that performs at well above chance levels in laboratory tests?

Your assertion that the results have nothing to do with the technology, but are a fraud based on additional cues, trickery and intimidation, has been thoroughly debunked in this thread. For example:
People,

You are missing the point.

You can, in fact, use polygraphs to tell if people lie or not. What you can't use polygraphs for is to tell if people lie or not by reading the output.

Readings from polygraphs do not show if people lie or not. Using polygraphs puts people in a state of stress so they get confused, so they mess up their confession. That's what interrogations also do: You ask the same questions, again and again, in order to find out if the suspect has his story straight. Using a polygraph, you merely introduce a technobabble factor, to impress the lesser informed.

A polygraph is an intimidation tool.
No, CFLarsen, you are missing the point that the National Academies report examines all the data (from a literature search by the authors) that has been produced under laboratory conditions, and this data clearly shows that polygraph tests in these studies performed well above chance levels.

This research data is from studies in which:
the questions were of no real-world significance
and
the results had no consequences for the subjects
and
the testers did not know the answers.

Therefore the results can have nothing whatever to do with technobabble, intimidation, fear or gullibility.

Note that I am making no claims about the utility of polygraphy as a forensic technique. My post here explains all of this in detail.


Here are some questions you have missed (any chance of an answer?):
So:
Cervical smear tests are pseudoscience because they don't reliably detect all cancers and ignore the healthy condition.
Academic examinations are pseudoscience because they don't reliably grade candidates into discrete categories of knowledge and ability.
Post mortems are pseudoscience because they don't reliably distinguish between death by natural and unnatural causes.

Do you begin to see the logical fallacy?

Do you seriously believe that any test (forensic, medical, whatever) that doesn't have false positive and false negative rates close to zero is 'pseudoscience'?
...

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_2550477e34c28029c.jpg[/qimg]

Do you understand what the ROC figure is telling us?
...
Almost the entire body of published data suggests that, in laboratory conditions, polygraph tests give results that are better than chance. How on earth can you (or anyone) suggest that this is in any way comparable to studies of homeopathy?
I'm not convinced that the polygraphs work.
You have been asked this question a number of times in this thread, and have not given a satisfactory answer:

What do you mean by 'work'?

If you are still claiming that the basic method (using blinded interpretation of the physiological measurements only) cannot be shown to produce significantly better than chance results in laboratory conditions, then that is plain wrong. There really is no debate here: the laboratory data and analysis in the NA report is conclusive on this point. Or are you saying that the accuracy rate is low enough that polygraphy is not (and perhaps never can be) safe and effective in real-world applications? If so, do you see that this is a completely different claim?
...
Polygraphy is one of many policy matters that require the public and its representatives to have a good grounding in the basic scientific issues, and the ability to disentangle them from the social/political ones. You (and many of us here) wish to educate the public in this kind of thinking, but do you acknowledge that we can only be effective by taking every opportunity to discuss and explain the relevant science?
 
CFLarsen: Who on earth do you think you're impressing with this foolery?

Lucky,

You really, really need to read the report.


I have read the main scientific and analytical sections – (1) the Executive Summary, (3) The Scientific Basis for Polygraph Testing, (4) Evidence from Polygraph Research: Qualitative Assessment, (5) Evidence from Polygraph Research: Quantitative Assessment, and (8) Conclusions and Recommendations – in detail, and skimmed the rest. Plainly, you have not (you seem not to have read my posts, either). It is utterly impossible to make a serious study of the report (in particular (3) The Scientific Basis for Polygraph Testing) and honestly maintain that polygraphy is a 'pseudoscience'.

So, point me to the important features of the report that you think I've missed, and I will be happy to discuss them with you.

Perhaps I overwhelmed you with my list of questions, so I'll condense it:

1) Do you agree or disagree with the NA report's statement that:
"features of polygraph charts and the judgments made from them are correlated with deception in a variety of controlled situations involving naïve examinees untrained in countermeasures: for such examinees and test contexts, the polygraph has an accuracy greater than chance. Random variation and biases in study design are highly implausible explanations for these results"?

2) Do you acknowledge that, if this statement is correct, polygraphy cannot be a 'pseudoscience'?

3) Do you agree or disagree with my view that, although polygraphy is based on plausible principles, and is supported by laboratory studies, its level of accuracy in real-world situations is likely to be so low that it will never be safe and effective for routine use (and that its utility has been grossly exaggerated by its proponents)?
 
Lucky,

Their conclusion about the scientific evidence for the polygraph was that one cannot generalize from results obtained from subjects in lab or field studies, because there is no good scientific theory of why the CQT polygraph works the way it does in the lab. Additionally, there are so many potential confounders (i.e. conditions that produce a physiological response similar to deception) for the CQT polygraph that haven't been properly studied that it is unknown how those confounders affect whatever lab accuracy the CQT polygraph may have shown.

Is lying correlated with nervousness and emotional response? Yes, but it is not a one-to-one correlation. CQT polygraphy, in my opinion, commits the fallacy of composition. It is pseudoscientific to assume that all liars are nervous and can be detected by physiological response because some liars are nervous and can be detected by physiological response.

Additionally, when I worked in environmental science, we had immunoassay soil test kits that could detect certain chemicals down to the ppm level (pentachlorophenol I think, but it's been 15 years). One of the caveats was that the test did not work in soils with high levels of certain heavy metals (iron I think), as they rendered the test useless. Would it be scientific to use that test anyway in an environment with high concentrations of the confounding element and report the results, despite the fact that we knew they were useless? No, it would be pseudoscientific to use it that way, and that's why the CQT polygraph is considered pseudoscientific. It's how and when it's used, not whether it might be able to detect lies above chance levels in the lab.

Now I've gotta get back to my other stuff. School started yesterday and I've got lectures to prepare. I've also neglected my dissertation for a week and I got the evil eye from my chair this afternoon...

But I have been given food for thought on my paper on the polygraph and how to better argue against its use in sex offender treatment...

That's why I love being on the JREF forum. When the dialogue and arguments are respectful, one can learn a lot from some very smart people, even if there's disagreement...

Regards...
 
Lucky,

You say polygraphs are not pseudoscience. I disagree. People can make up their own minds.

Since you think polygraphs are not pseudoscience, how should we use them?
 
I bolded the important part. You can't check your answers in real-world conditions. In such a case, how can you determine whether this particular person is lying or just had a nervous reaction to the question?

That's the whole point. You work out the accuracy under known conditions. Once you know it will correctly identify 70% of liars, you can say that anyone flagged as a liar has a 70% chance of actually being one. I really don't see how this can be hard to understand.
 
I don't think there are any 'proponents' of polygraphy here.

Then I had misunderstood drkitten and skeptigirl. I doubt it, so I'll wait until either of them corrects me on their position.

My view is that there will probably never be any justification for routine use of the polygraph, and there is probably no justification for any (non-research) use at all at the moment.

I mostly agree with you.

The claim is often made by the 'anti' campaigners that the results are entirely due to a kind of reverse-placebo effect, and the actual discrimination is performed by the trained tester using information completely separate from the machine readings. The point is that this is the wrong argument, and can easily be demolished by polygraph enthusiasts (for example, by reference to the NA report, especially the results of laboratory studies).

That's not what I have claimed.

The right argument is that the accuracy of the unaided technology is far below what is claimed (and commonly believed), and that it is therefore neither safe nor effective for any current or proposed use.

I haven't studied the statistics but if this is accurate then it sure is a good argument against the use of polygraph. But this isn't my argument either.

How could we use the polygraph in a real-world setting when we don't know the 'correct' result? .13., the important point to grasp is that the polygraph is no different in this respect from any imperfect test – a medical screening test, for example. We do the basic research, conduct studies to determine the scope of validity and obtain the 'calibration' data. When we have enough confidence in the test we introduce it in the field, including QA to monitor and improve the test's performance.

But there is a difference. You can't verify the polygraph results. If you could, you wouldn't need the polygraph in the first place.

Let's consider a medical test as an example: you test a patient for some viral disease. You get a negative result and send the patient home. Next morning he comes back showing symptoms. Now you know that your test was wrong.

Now consider this somewhat facetious example: You perform a polygraph test on an employee. He passes it. Next morning he comes back looking guilty: "I lied in my polygraph yesterday." And now you know your test was wrong.

:)

As to distinguishing a nervous response from a guilty one, we assume that there are in principle some detectable differences between the two types of response, and try to refine the test to amplify these differences. As digithead suggests, there are theoretical reasons to suggest that the problem will be reduced by using GKT-type questions rather than CQT – I don't know how well this has been tested.

Is that a valid assumption? Which measurement could potentially show this difference?

But in any case, regardless of whether it could be done in the future or not: surely the machine can't do that with current technology?
 
That's the whole point. You work out the accuracy under known conditions. Once you know it will correctly identify 70% of liars, you can say that anyone flagged as a liar has a 70% chance of actually being one. I really don't see how this can be hard to understand.

It was claimed that the machine can tell the difference between a nervous reaction to a question and a lie. Argue that if you agree with the claim.

My earlier question, repeated:
I really don't understand why you keep asking this. If something gets 70% correct when you know what the answers are, what makes you think it would be any different when you don't? To go back to your psychic example, if someone can guess which way up a coin will land 70% of the time under certain conditions, that means that in the future, under the same conditions, if they guess a coin toss they have a 70% chance of getting it right. That's what 70% accuracy means.

Is it a fair coin?
 
That's the whole point. You work out the accuracy under known conditions. Once you know it will correctly identify 70% of liars, you can say that anyone flagged as a liar has a 70% chance of actually being one. I really don't see how this can be hard to understand.

How do you know the conditions are known outside the lab?
 
