Could you tell us a little more about how adjustments are made for joke (or otherwise problematic) answers?
Sadly only a little bit, because my reporting is based on conversations with pollsters, not on having done the analysis myself. The analysis I'm most familiar with doesn't have to deal with it, and in those conversations the surveyors mentioned that some techniques are proprietary, so I have to fill in the blanks with speculation. The context of the conversations was my effort to understand apparently high rates of affirmative responses in telephone and online polls to questions like, "Do you believe in ghosts?" or "Do you believe the Moon landings were faked?" The answers I got from surveyors included statements like, "We believe a certain number of people give outrageous answers to controversial questions just to be jerks."
The assertion, "We know what percentage of answers are jokes," is one I'm reporting as hearsay, not one I can support with data that I have in hand. Since the statement came from someone who makes his living in survey research, I would like to hope it came from some empirical exercise. What to do with that figure, I'm less sure about. Normally in such a case you would assume that insincere answers are given in both directions; correcting in only one direction, without evidence, would skew the results rather than clean them. Maybe there's a statistical basis for assuming that joke answers tend in one direction rather than another, but I'm just speculating.
I'm pretty sure Pew, for example, handles the insincerity proportion symmetrically -- i.e., by not attempting to weight insincerity in one direction over the other. Without a strong statistical basis for such a weighting, it would introduce an obvious bias. In that case, say you have N=1000 respondents, which is about what a U.S. national poll needs to achieve a 3% margin of error at 95% confidence. Say you also know from prior testing that 2% of your respondents will give insincere answers in a telephone poll. You keep the proportion of responses the same, but decrease your effective N by 2% when doing your significance computations. So you use N=980 to estimate significance, and report the slightly larger margin of error that results. You're effectively throwing out some percentage of the answers as jokes without regard to how they answered. Does that sound defensible?
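To make the arithmetic concrete, here is a minimal sketch in Python. The formula is just the standard worst-case margin of error for a simple random sample -- I'm not claiming this is Pew's actual procedure, and the 2% insincerity rate is the hypothetical figure from above:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case margin of error for a simple random sample of size n.
    p = 0.5 maximizes the variance p*(1-p); z = 1.96 gives a 95% CI."""
    return z * math.sqrt(p * (1 - p) / n)

n_raw = 1000                  # respondents actually surveyed
insincere_rate = 0.02         # assumed fraction of joke answers (from prior testing)
n_eff = round(n_raw * (1 - insincere_rate))  # effective N after discounting jokes

print(f"MOE at N={n_raw}: {margin_of_error(n_raw):.4f}")  # ~0.0310 (3.10%)
print(f"MOE at N={n_eff}: {margin_of_error(n_eff):.4f}")  # ~0.0313 (3.13%)
```

Discounting N from 1000 to 980 nudges the margin of error from about 3.10% to about 3.13% -- a small, direction-neutral penalty, which is the whole point of handling insincerity symmetrically.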
A related assertion, "Our surveys have internal consistency checks," was similarly cryptic and also hearsay. But it alludes to a very common practice in psychometric and sociometric instruments: questionnaires often ask a series of questions that are worded differently but address the same underlying phenomenon. A sincere respondent is expected to answer consistently across all of the congruent questions, which frustrates attempts to fudge the test in one direction or another. It allows the researcher to score an individual's set of responses for internal consistency, and to reject inconsistent data. Keep in mind this is not consistency as reckoned against some desired outcome, but consistency as reckoned between questions designed to discover the same thing.
In that vein, a survey might ask, "How strongly do you believe in ESP?" on a 5-point scale, and later ask, "Are some people psychic?" on the same scale. While my hastily contrived example is probably too transparent to work, the gist is that if a respondent answered 4 to the first question and 1 to the second, the surveyor might throw the whole subject out as having answered inconsistently. Note that it doesn't matter whether he picked a 1 or a 5 on the scale, just that he answered the "duplicate" questions consistently.
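Here's a sketch of how that rejection rule might be scored. The question IDs, pairing, and tolerance are all invented for illustration; real instruments use validated item pairs and more sophisticated reliability measures:

```python
# Hypothetical consistency screen: each pair of question IDs below is meant
# to probe the same construct on a 1-5 scale. A respondent whose paired
# answers differ by more than the tolerance is flagged as inconsistent.

CONGRUENT_PAIRS = [("esp_belief", "psychic_people")]  # invented question IDs
TOLERANCE = 1  # paired answers may differ by at most one scale point

def is_consistent(responses, pairs=CONGRUENT_PAIRS, tol=TOLERANCE):
    """True if every congruent pair was answered within tolerance."""
    return all(abs(responses[a] - responses[b]) <= tol for a, b in pairs)

print(is_consistent({"esp_belief": 4, "psychic_people": 4}))  # True: keep
print(is_consistent({"esp_belief": 4, "psychic_people": 1}))  # False: reject
```

A respondent answering 1 to both questions would pass just as easily as one answering 4 to both, which matches the point above: the screen cares about agreement between the duplicate questions, not about which answer was given.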