
Polygraphs: The evidence

Lucky,

You say polygraphs are not pseudoscience. I disagree. People can make up their own minds.

Since you think polygraphs are not pseudoscience, how should we use them?
Here's a question then: If a polygraph operator were to apply for the million saying that he can, under controlled laboratory conditions, identify falsehoods at a rate of 75-80%, would that qualify for the challenge in your opinion? I recognize that Randi's opinion may differ, and he has the final say, but I am interested in your opinion.
 

You'll have to ask Randi about that.
 
I am asking for your opinion. If you were running the prize, would it qualify? IF Skeptic Report had its own challenge, on the same rules as the JREF prize (but with a smaller prize, I am assuming) would this qualify?
 
It was claimed that the machine can tell the difference between a nervous reaction to a question and a lie. If you agree with that claim, argue for it.

My earlier question, repeated:

Is it a fair coin?

How do you know the conditions are known outside the lab?

Why are you both babbling about irrelevant nonsense? It was stated that if something is shown to be 70% accurate, then when you run it once, there is a 70% chance that it is correct. .13. questioned this. I pointed out that actually, yes, that is entirely true, and is in fact what 70% accurate means. It doesn't matter if you're talking about polygraphs, tossing coins, predicting the weather or anything else. If you are right 70% of the time, then any single outcome has a 70% chance of being right. End of story.

You can argue about whether any particular thing, like the polygraph, really is 70% accurate, and I haven't said anything about that. The fact remains that if they are 70% accurate, then they are 70% accurate. The more you argue with this, the more it looks like you either can't understand infant-level maths or are just trolling because you can't make a real argument.
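That equivalence is easy to check numerically. Here's a minimal sketch in Python; the 70%-accurate "detector" is a hypothetical stand-in (a weighted coin flip), not a model of any real polygraph:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def detector(is_lie):
    """Hypothetical detector: returns the correct verdict 70% of the time."""
    correct = random.random() < 0.70
    return is_lie if correct else not is_lie

trials = 100_000
# Half the statements are lies; count how often the detector's verdict is right.
hits = sum(detector(is_lie) == is_lie
           for is_lie in (random.random() < 0.5 for _ in range(trials)))
print(hits / trials)  # ≈ 0.70: the long-run rate and the single-run probability agree
```

Each single call to `detector` has, by construction, a 0.70 probability of being right, and the observed frequency over many trials converges to the same number; that is all "70% accurate" means.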
 
That is the sore spot, isn't it?

What sore spot? I have asked you a straightforward question about six times now, and you refuse to answer. I wonder why.

Could it be because if you admit that they perform better than chance (which, I admit, I have no idea about), you will be in the impossible predicament of having to subsequently admit that they are not woo?
 
Why would he have to ask Randi about your opinion?

I can't say whether it would qualify or not. That is entirely up to Randi.

If you were running the prize

I'm not.

IF Skeptic Report had its own challenge

It doesn't.

Why are you both babbling about irrelevant nonsense? It was stated that if something is shown to be 70% accurate, then when you run it once, there is a 70% chance that it is correct. .13. questioned this. I pointed out that actually, yes, that is entirely true, and is in fact what 70% accurate means. It doesn't matter if you're talking about polygraphs, tossing coins, predicting the weather or anything else. If you are right 70% of the time, then any single outcome has a 70% chance of being right. End of story.

You can argue about whether any particular thing, like the polygraph, really is 70% accurate, and I haven't said anything about that. The fact remains that if they are 70% accurate, then they are 70% accurate. The more you argue with this, the more it looks like you either can't understand infant-level maths or are just trolling because you can't make a real argument.

You can create an artificial situation in the lab and test just about anything. But you can't automatically extrapolate that to the real world, because the test isn't applicable if the situation is different.

That's why it is highly misleading to throw out the generalized "polygraphs perform better than chance" quip. It is certainly not what the report says.

Do you think I can create an artificial situation in the lab where I show that astrologers "perform better than chance"?
 
What sore spot? I have asked you a straightforward question about six times now, and you refuse to answer. I wonder why.

Could it be because if you admit that they perform better than chance (which, I admit, I have no idea about), you will be in the impossible predicament of having to subsequently admit that they are not woo?

See my post above.
 
You can create an artificial situation in the lab and test just about anything. But you can't automatically extrapolate that to the real world, because the test isn't applicable if the situation is different.

That's why it is highly misleading to throw out the generalized "polygraphs perform better than chance" quip. It is certainly not what the report says.

Do you think I can create an artificial situation in the lab where I show that astrologers "perform better than chance"?

And I really don't care. I haven't said a single thing about whether polygraphs are accurate or not. All I did was try to clarify something for .13.. He asked if something being 70% accurate really means that any single output has a 70% chance of being correct. The answer is yes. Your bizarre rants are utterly irrelevant to anything I have said. That said, .13.'s recent posts give the impression that he is also being deliberately obtuse rather than actually being interested in any answer, so there doesn't seem much point in being here at all.
 

We are not talking about any experiment. We are talking about an experiment where people's ability to lie is tested.

You have to make a lot of assumptions when you perform a polygraph test. E.g., one is that the subjects don't have the skills to cheat the polygraph.

How do you know this? Ask them, and expect them to tell you the truth? In a lie detector test?
 

Ahh, yes. In a lab test, it's entirely impossible to ask pre-arranged questions where the real answers are known after the blinding is removed. Weak sauce, Claus.

Why is it surprising that there could be a correlation between polygraph test results and lies? On its own, this says little about real world utility: statistical correlation and a couple bucks will buy you a nice coffee in a cheap cafe.
 
I think polygraphs are nice and that it's true because it feels like it is. Atheist logic (some atheists). Sigh, 11 more stupid posts to go before url time.
 

This has been a productive thread, and has nothing to do with atheism.

Stop trolling.

:mad:
 
Ahh, yes. In a lab test, it's entirely impossible to ask pre-arranged questions where the real answers are known after the blinding is removed. Weak sauce, Claus.

Hence the point I made in post 247.

Why is it surprising that there could be a correlation between polygraph test results and lies? On its own, this says little about real world utility: statistical correlation and a couple bucks will buy you a nice coffee in a cheap cafe.

It is just as surprising that there could be a correlation between astrology and people's lives.
 
Lucky,

Their conclusion about the scientific evidence for the polygraph was that one cannot generalize from any results obtained from subjects in lab or field studies because there is no good scientific theory of why the CQT polygraph works the way it does in the lab. Additionally, there are so many potential confounders (i.e. conditions that have a similar physiological response as deception) for the CQT polygraph that haven't been properly studied that it is unknown how those confounders affect whatever lab accuracy CQT polygraph may have shown.

Is lying correlated with nervousness and emotional response? Yes, but it is not a one-to-one correlation. CQT polygraphy, in my opinion, commits the fallacy of composition. It is pseudoscientific to assume that all liars are nervous and can be detected from physiological response because some liars are nervous and can be detected by physiological response.

Additionally, when I worked in environmental science, we had immunoassay soil test kits that could detect certain chemicals down to the ppm level (pentachlorophenol, I think, but it's been 15 years). One of the caveats was that the test did not work in soils with high levels of certain heavy metals (iron, I think), which rendered the test useless. Would it be scientific to use that test anyway in an environment with high concentrations of the confounding element and report the results despite the fact that we knew they were useless? No, it would be pseudoscientific to use it that way, and that's why the CQT polygraph is considered pseudoscientific. It's how and when it's used, not whether it might be able to detect lies above chance levels in the lab.

Now I've gotta get back to my other stuff. School started yesterday and I've got lectures to prepare. I've also neglected my dissertation for a week and I got the evil eye from my chair this afternoon...

But I have been given food for thought on my paper on the polygraph and how to better argue against its use in sex offender treatment...

That's why I love being on the JREF forum. When the dialogue and arguments are respectful, one can learn a lot from some very smart people, even if there's disagreement...

Regards...

digithead, I am enjoying this discussion, too. I appreciate your posts, which have given me a lot of information and also prompted me to do some further reading and thinking. My interest in the subject of polygraphy is not with its application, but its relevance to the issues of validity and accuracy in different kinds of population testing.

My main disagreement with you is over the lack of a precise understanding of the causes of the response patterns, and whether this is important. I would say that it is a side-issue, and the real objection to use of the polygraph is operational, having to do with the poor results we can expect in real-world applications. However, in itself low accuracy need not rule out useful applications – the reasons why I believe that polygraphy is a special case, where low accuracy can be expected to rule out any valid, widespread application, are a subtle mix of the scientific and the social.

It is very instructive to compare polygraphy with a medical screening test, say a serum test for a tumour marker. These typically have high detection rates but also fairly high false-positive rates, so can't be used as diagnostic tests – a positive result will be followed by a biopsy or equivalent. There is nothing at all unusual about medical screening tests with ROC results similar to those for the polygraph. Nor is it necessary to have an evidence-supported theory of how/why the test works. (In some cases the analyte's usefulness as a marker is a chance finding, and we may have no knowledge of the underlying mechanism.)

Here's a typical study of a potential multi-marker urine screening test for various cancers. There isn't a 1:1 correlation between tumour stage and marker levels, and the pattern of levels in different cancers and at different stages is not understood. But the technique is likely to be useful.

Generally, in a screening application we are using a cheap method of limited accuracy to reduce a large population to a small group that requires further, more expensive investigation. An important corollary is that we are screening people in (for further attention) rather than out (rejecting them).

There will be both false positives and false negatives, which can have bad consequences for both the individual and the screening programme itself. The consequences of a false negative to an individual should be no worse than not taking the test, and the non-negligible rate of detection failures must not invalidate the programme. The consequences of a false positive will tend to afflict individuals rather than the screening programme itself, and they are likely to be worse than not taking the test (unnecessary anxiety, at least).

The general principle must be that the sum of the beneficial effects to the subjects and to the organisation doing the testing (in relation to the purpose of the programme, e.g. public health) must outweigh the sum of the harmful effects to the subjects, and to the organisation, from incorrect results. Also, the test must work better than any cheaper alternative (or, conversely, be cheaper than a more accurate alternative).

I'd say the difference between low-accuracy medical screening and low-accuracy polygraphy is simply that these conditions are frequently fulfilled in medicine, because screening is cheaper than offering a diagnostic test to everyone and better than doing nothing, but they are unlikely to be fulfilled by the polygraph. First, it is not cheap. Second, the problem of false negatives may invalidate the programme, especially because confounding factors are likely to be severe (there may be no solution to the problem of criminals being trained to fool the test). Third, the consequence of a false positive to the individual is likely to be very severe.
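To put numbers on why lab accuracy need not survive a screening application, here is a quick Bayes' rule sketch. The sensitivity, specificity, and 1% prevalence of deceivers are illustrative assumptions of mine, not figures from the report:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(actually deceptive | flagged deceptive), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers only: an "80% accurate" test screening a population
# in which 1 person in 100 is actually deceptive.
ppv = positive_predictive_value(0.80, 0.80, 0.01)
print(round(ppv, 3))  # 0.039: roughly 24 out of 25 people flagged are false positives
```

Screening people "in" to a small group for cheap further attention can tolerate a positive predictive value that low; treating each flag as a verdict, as real-world polygraph use tends to, cannot.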

The other point I would make is that invalid application, exaggerated claims, error, and even fraud, do not constitute pseudoscience (which is a useful term if we don't broaden it to include any type of poor science).

I think it is incorrect to label any technology a 'pseudoscience' if it can be shown to have some area of validity, even if the area is limited and the accuracy is low, and even if we don't understand how it works - and I would suggest you don't use the term in your paper. As an interested reader from a field with some similarities, it would jar, and perhaps make me think you have an axe to grind.

I am grateful to you for prompting me to develop and set down my ideas on the utility of screening programmes (and, I have to admit, to CFLarsen for starting the thread).

I'd be very interested to read your paper when it's finished.
 
Do you think I can create an artificial situation in the lab where I show that astrologers "perform better than chance"?

I doubt it. Not without cheating or inadequate blinding controls. But if you could... congratulations on your million.

Given a test consistent with MDC rules and protocols mutually agreed upon by claimant and JREF, a pseudoscientific polygraph claimant has a good chance of getting the million, right?

You know that under controlled laboratory conditions, using certain types of "falsehoods", the polygraph is likely to perform better than chance. That's why you would not allow this challenge if you were administering the challenge. That's why Randi would probably not allow this as a challenge, because it is not pseudoscience.

If this is, in your opinion, not the case, then please submit your critique of the methods and analysis of some of the studies used in the meta-analysis, or the methods of the meta-analysis itself.

I think we're getting confused by mixing and confounding discussions of 1) the phenomenon in question - whether polygraphs perform better than chance at determining falsehood and truth in controlled conditions, 2) the mechanism that might affect or cause such a phenomenon, if it existed (stress, colds, etc.), and 3) the utility and ethics of applying any technology based upon the phenomenon.
 
Hence the point I made in post 247.

The last paragraph of #247 might have been your best writing in this thread. It mostly said what I thought drkitten, lucky (and probably some others who gave up earlier) were saying: in the lab environment, there is apparently an effect, and it's not safe to extrapolate that to real world usage. I may have missed a post which would lead me to think otherwise, but I don't think anyone was saying anything more than that.

Even that post still seemed a bit off. The report certainly shows that polygraphy makes above chance distinctions, under some circumstances. It does not show that polygraphy makes above chance distinctions in real world circumstances. Given the former, the phrase "polygraphs perform better than chance" is supported, even if the latter means there should probably be an added caveat, depending on the situation and audience.

It is just as surprising that there could be a correlation between astrology and people's lives.

Or hot dog consumption and general health. I thought there were astrological correlations, something to do with the fact that your birth date affects your age in school, and older kids did slightly better on average. I think the idea that there are detectable differences between when someone intentionally lies and when they don't is less surprising than the idea that being born a Pisces makes you like swimming. And, yes, trying to do something with that correlation can lead you badly astray if you're later sampling from a different population than the one which generated the correlation.
 
You know that under controlled laboratory conditions, using certain types of "falsehoods", the polygraph is likely to perform better than chance.

That's why you would not allow this challenge if you were administering the challenge. That's why Randi would probably not allow this as a challenge, because it is not pseudoscience.

No?

What is the difference between that and astrology?

If this is, in your opinion, not the case, then please submit your critique of the methods and analysis of some of the studies used in the meta-analysis, or the methods of the meta-analysis itself.

I have yet to see the scientific community's consensus that polygraphs are not pseudoscience.
 
No?
What is the difference between that and astrology?
Astrology has never predicted, at a rate better than chance, anything. And, no, predicting that a Sagittarian will suffer "a cold wind on your next birthday" is not astrology by any stretch.

I have yet to see the scientific community's consensus that polygraphs are not pseudoscience.

Nor have I seen the scientific community's consensus that "polygraphs are pseudoscience", whatever that means. It is clearly a subject of much scrutiny, as it should be. From what I can tell, however, the consensus is that the lab studies generally demonstrate detectable differences in physiological response.

But enough with the appeals to authority, already. Let's use our own brains and look at some data. Have you looked at the data?

I'll try it one more time: Do you dispute that in lab conditions, polygraph tests identify liars at a rate greater than chance? It is a very simple question.
 
