
IQ Tests

Complexity · Philosopher · Joined Nov 17, 2005 · Messages: 9,242
This thread has been diverted from a pre-existing thread over in 'The Amazing Meeting! and other Skeptical Events' called Can a Skeptic Believe in God? Responses to Panel Discussion, as it has developed away from the original topic into a discussion of IQ tests. The quote by 'WonderfulWorld' cited here can be found in that older thread, here.
Replying to this modbox in thread will be off topic. Posted by: kiless



This is a straw-man, the result of popular misunderstanding about what "IQ" is. IQ (or "g", or whatever) is not measured by a single test. There are dozens of "IQ" tests, and they don't all measure the same thing -- some are broad-based, some look at the ability to see spatial relationships, some look at analogies, etc. All can be "normed" -- averages and standard deviations can be calculated, and an IQ assigned to any particular result -- but a single person can get a wide range of IQs by taking various tests. So, IQ is not a "single metric" in the sense you appear to mean. Perhaps the terminology — the way the term "IQ" is commonly used looks like a "single metric" — is partly to blame for the misunderstanding. Well, not "perhaps". Certainly.
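The "norming" described above is just a z-score transformation onto a fixed scale. A minimal Python sketch, with made-up numbers (a real test's mean and SD come from a large standardization sample, and differ from test to test):

```python
def norm_to_iq(raw_score, sample_mean, sample_sd):
    """Map a raw test score onto the usual IQ scale (mean 100, SD 15)."""
    z = (raw_score - sample_mean) / sample_sd  # standard score
    return 100 + 15 * z

# A raw score one SD above the norming sample's mean:
print(norm_to_iq(42, sample_mean=35, sample_sd=7))  # -> 115.0
```

Because each test is normed against its own sample and its own item pool, the same person can land on different points of the scale on different tests, which is part of what the thread goes on to argue about.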

Beg to differ.

Whatever the variety of IQ tests, the results get boiled down to a single number.

If someone gets tested and is told that his IQ is 89, or 104, or 128, or 205, he usually interprets this as his pecking order in human intelligence.

This number may, on occasion, be useful - an IQ of 45 suggests that there are problems.

In general, however, I think that IQ testing is wrong. Intelligence is a multifaceted phenomenon and the estimation of its components deserves more than its reduction to a single number.

I recommend The Mismeasure of Man by Stephen Jay Gould. He does a wonderful job of demolishing your position.
 


I have a little-known, self-made rule for the JREF: if you cite Mismeasure to discredit IQ testing, you lose the argument just as surely as if you had made an analogy to Hitler.

Let's work based on your premise that IQ is multifaceted, so it's wrong to reduce it to a single number.

If so, a person's (single) IQ score shouldn't predict anything. The research shows, however, that the single number predicts everything (with significant practical value, though far from perfectly).

In fact, a person's general intelligence (g-- the single number derived from most IQ tests) is often the single best predictor of whatever social outcome you're trying to predict.

There's so much literature out there showing this that ignorance is no excuse, especially for anyone claiming to be a skeptic.

From an article that just happens to be sitting on my desk:

"research evidence for the validity of [g] for predicting job performance is stronger than that for any other method...literally 1000s of studies have been conducted over the last nine decades...because of its special status, [g] can be considered the primary personnel measure for hiring decisions"

Rynes et al., 2002, Academy of Management Executive.

I'm sure I do it for other areas, but as a skeptic, there's no excuse for forming a strong opinion of something without having examined the evidence. Citing Mismeasure shows that you really, really haven't examined the evidence.
 
I have a little-known, self-made rule for the JREF: if you cite Mismeasure to discredit IQ testing, you lose the argument just as surely as if you had made an analogy to Hitler.

How about "Multiple Intelligences" by Howard Gardner?

An IQ test is only somewhat useful because it is culturally biased.
 

Show me a measure of any of his multiple intelligences that doesn't also measure g (positive manifold). If it doesn't also measure g (I'm guessing musical ability), then it doesn't have any criterion validity (i.e., it doesn't predict anything important).

If it does predict, it's probably because it also happens to measure g.

Show me any evidence that g-loaded IQ tests are culturally biased (meaning unfair to minorities because they create slope or intercept bias). I'm not talking about giving an American IQ test to the bush bunnies of the Sahara -- show me any data that IQ tests meet the legal or psychometric definition of bias with American test takers.
 
I have a little-known, self-made rule for the JREF: if you cite Mismeasure to discredit IQ testing, you lose the argument just as surely as if you had made an analogy to Hitler.

You do know what you can do with your self-made rules, don't you?

I seem to have found a True Believer in IQ. Sad.

Provide evidence for your assertions, if you are able.

Are there particular statements in The Mismeasure of Man that you can falsify?

Let's work based on your premise that IQ is multifaceted, so it's wrong to reduce it to a single number.

If so, a person's (single) IQ score shouldn't predict anything. The research shows, however, that the single number predicts everything (with significant practical value, though far from perfectly).

In fact, a person's general intelligence (g-- the single number derived from most IQ tests) is often the single best predictor of whatever social outcome you're trying to predict.

I never said that an IQ or g could not have some predictive value.

I said that it wasn't an adequate measure of intelligence. Nothing you have said provides evidence against this.

There's so much literature out there showing this that ignorance is no excuse, especially for anyone claiming to be a skeptic.

I have far better and more interesting things to do than wallow through a pile of "literature" in order to counter your silly claims.

I'm satisfied that intelligence is more complex than you believe it to be.

You can regard me as inexcusably ignorant or unskeptical, as you wish.

I have yet to find any reason to take you seriously.

From an article that just happens to be sitting on my desk:

"research evidence for the validity of [g] for predicting job performance is stronger than that for any other method...literally 1000s of studies have been conducted over the last nine decades...because of its special status, [g] can be considered the primary personnel measure for hiring decisions"

Rynes et al., 2002, Academy of Management Executive.

I'm sure I do it for other areas, but as a skeptic, there's no excuse for forming a strong opinion of something without having examined the evidence. Citing Mismeasure shows that you really, really haven't examined the evidence.

By the way, I did my doctoral work from 1987 - 1990 in AI with an emphasis on automated reasoning. I've been interested in, reading about, and thinking about intelligence for more than 30 years. I haven't formed my strong opinion without the benefit of time, reading, or thought.

I could be wrong, of course - I'm not claiming that I'm an authority. But your snide comments are simply wrong.

Again - care to cite evidence against Gould's book?

I have a lot of strong opinions, bud. I sure as hell have more confidence in them than I do in you.
 
I'm interested in debates about IQ tests - would participants mind if I created a new thread to host these posts?
 
Sure, start a thread. I'll try not to be as snide.

I just re-read what I wrote-- I guess my reply did seem like a personal attack. It was not my intention, though I admit to sometimes being a **** starter.

So, just some info on why I posted, if it matters:

We've probably had half a dozen long debates on IQ here going back three or so years. Every debate has skeptics on one side with strong opinions about IQ, having read little on the subject besides Mismeasure or various attacks on The Bell Curve.

My degree's in cognitive psych, which to me is the study of intelligence. As such, I take offense when people dismiss my field by citing crap (imo) like Mismeasure. Why it's crap is something we can talk about in kiless's thread, if anyone's still interested.

I think you'd be hard pressed to find another area of scientific study that is so misrepresented by people outside the field. Citing Mismeasure as the authority, in my opinion, indicates that one is being misled (perhaps through no fault of their own).

Gould was a brilliant biologist, but a crappy psychometrician. It's hard to find current sources debunking Mismeasure because it was so irrelevant to modern psychometrics that the field has reacted the way it should have to the work -- by ignoring it.

That said, if interested, read Arthur Jensen's reply to Mismeasure here:

http://www.debunker.com/texts/jensen.html

Just to give a quick sense, here's a short excerpt:

"Of all the book's references, a full 27 percent precede 1900. Another 44 percent fall between 1900 and 1950 (60 percent of those are before 1925); and only 29 percent are more recent than 1950. From the total literature spanning more than a century, the few "bad apples" have been hand-picked most aptly to serve Gould's purpose. Yet what relevance to current issues in mental testing are the inadequacies and errors of early anatomical studies by Samuel Morton (who died in 1851) or Paul Broca (who died in 1880) concerning racial variation in cranial capacity (to which Gould devotes the better part of two chapters)? Who now wishes to resurrect Lombroso's (1836-1909) theory of physical criminal types; Cyril Burt's 1909 report (his very first publication) of social class differences in intelligence; Goddard's account of the Kallikak family (1912) and the long since discredited theory of "feeblemindedness" as a simple Mendelian character; Terman's pronouncements in 1916 about eugenic measures to reduce the incidence of mental retardation; the primitive 1917 army mental tests; or the U.S. Congress's 1924 Immigration Restriction Act, which cited the 1917 army test data? These antiquated topics, which occupy most of Gould's book, can in no way serve to undermine or discredit current work in physical anthropology, psychometrics, differential psychology, behavioral genetics, and sociobiology. Readers expecting to find a forthright critique of the present status of issues and controversies in these fields are in for disappointment. The closest thing they will find to criticism of contemporary mental testing is the insinuation of its guilt through remote historic lineage."

I think the way Jensen trashes Gould is inspiring. It's the epitome of good science.

Again, jmo, and I did not mean to single you out personally as being a woo, based just on your post above re IQ.

But, the issue of IQ is an empirical one, and I think it's been answered...
 
I'm interested in debates about IQ tests - would participants mind if I created a new thread to host these posts?
I think you'll be quite impressed by what Pest can bring to bear on the matter. I mean...I would rather chew off my own leg than agree with a cognitive psychologist, but I have read Pest's previous threads on the topic, and it was I who nudged him to look at this thread. I, for one, will be reading the new thread.
 
Show me a measure of any of his multiple intelligences that doesn't also measure g (positive manifold).

Show me any evidence that 'g' actually exists instead of being a statistical artefact created by factor loading.
 
I wonder, what do you think of a kid, say, age 14, who takes an IQ test in Florida in the school system in the mid-80s, takes it four times, and scores, in order, 145, 85, 166, and 100? Same test. Further, what do you think of said kid predicting, before entering the test, what his scores would be, give or take 10 points?

Kid finally took a 'covert IQ test' in high school and scored 142, which was used as his base IQ test when he went to join the military.

And why did the school system hide this IQ test from me, and its results?

Thank you, O major in cognitive psy...
 
I am also rather embarrassed now (hey, I did always have a little thought in the back of my head going 'this could be garbage. I mean, this 'nature intelligence???') about incorporating some of Gardner's strategies in my class. In fact, Athon should have half a dozen things sent by me, including a 'Bloomgarde' strategy for the classroom, books on multiple intelligence... bugger. Sorry A. :(

Eh, at least some of the stuff on Philosophy in the Classroom appears to work... and mindmapping et al is quite fun. But I had a professor who always criticised DeBono's work (I put a link up about that on the forums once) and I had the opportunity this January to attend a unit on Emotional Intelligence.

I didn't sign up because the lecturer had previously taught a class on 'Intelligence and Emotional Learning' the year before and I found his rather wishy-washy attitude (didn't like my raising questions in class... 'seen and not heard' syndrome) frustrating and after reading a few critical reviews about EI, became even less keen about going.

Is it possible that in the educational field we are being taken advantage of by these theories that supposedly give credit for 'intelligences' that may not actually be realistic representations of what they are doing?

(Hades, please don't let us head down the path of 'THEY'RE ALL INDIGO CHILDREN!!' :( )
 

From my lay perspective, as a parent of school-aged children, this is precisely the path we're heading down. Education 'experts' are becoming so afraid of buckling down and calling idiots idiots that they're having to invent and re-invent new forms of 'intelligence' and 'genius' to fend off the bad test results of prior years.

Sometimes, you just have to tell the parent that their kid is dumb... point blank. It's ugly, it's antisocial, it's anti-PC... but it's the Bob-awful truth.
 


I don't think it's specifically an 'educational' problem: I am fighting it in my workplace. All managers have to take a series of courses in Emotional Intelligence, and what it is, really, is what I'd call 'social awareness' or 'community standards'.

The prime mover of this calamity is a dubious community of psychologists, pushing their half-baked theories.

There is a sort of 'sour grapes' softness in the public, as half the population scores below the median on IQ (by definition), so there's a market for consolation. It's nice to learn that pattern-puzzle-solving is not a very important life skill after all, and they are told they have high EQ scores to compensate, which boosts their egos.

What is left out of the discussion is research showing that IQ correlates to social awareness and other relevant proxies.
 
I wonder, what do you think of a kid, say, age 14, who takes an IQ test in Florida in the school system in the mid-80s, takes it four times, and scores, in order, 145, 85, 166, and 100? Same test. Further, what do you think of said kid predicting, before entering the test, what his scores would be, give or take 10 points?

Kid finally took a 'covert IQ test' in high school and scored 142, which was used as his base IQ test when he went to join the military.

And why did the school system hide this IQ test from me, and its results?

Thank you, O major in cognitive psy...


I must come across as pretty arrogant in my posts, since I keep getting good dings like this one. That's cool / it's all good.

Er, I can't explain this. The tests have reliabilities in the upper .90s, so your score bouncing around by that much suggests something very wrong happened.

Perhaps the administrator was incompetent, I dunno-- but obviously, the test wasn't a valid indicator of your IQ for whatever reason (I wonder if it's the test's fault, though)?
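For what it's worth, classical test theory gives a way to quantify how much bounce a reliable test should show: the standard error of measurement, SEM = SD * sqrt(1 - reliability). A rough sketch, using the upper-.90s reliability figure mentioned above (the specific numbers are illustrative):

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from classical test theory."""
    return sd * math.sqrt(1 - reliability)

# On the usual IQ scale (SD = 15), a reliability of .95 implies
# retest scores should wobble by only a few points:
print(sem(15, 0.95))  # about 3.35

# A 95% band around an obtained score of 120 is roughly +/- 6.6 points,
# nothing like the 80-point swings in the anecdote above.
half_width = 1.96 * sem(15, 0.95)
print(120 - half_width, 120 + half_width)
```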

Person-who examples are likely irrelevant for discrediting IQ testing (or discrediting anything else). My uncle smoked 4 packs a day and lived til 90. What's up with that?!

*

To Dr. K:

I know very little about physics (as such, I will gladly defer to the experts here) but aren't there things in physics that we don't completely have defined / figured out, but which we can nonetheless measure very accurately?

Is gravity a good example? We can measure it reliably and validly, even though we don't completely understand what it is?

Is this analogous to g? We can measure it with high reliability; we can show that it has criterion validity (it predicts the things that an IQ test ought to) and construct validity:

-- it doesn't predict things it ought not to -- like personality

-- it correlates with basic mental processes like the speed with which neurons in our brain fire, individual differences in myelination, working memory capacity, etc.

But, we don't completely understand what it is.

Nonetheless, how much explanatory precision does one require before accepting a construct as real / scientifically valid?

I'm ok with saying-- as a scientific hypothesis-- that g is some basic mental process (or combination of mental processes) like speed of processing, or working memory capacity or the ability to think in the face of distraction.

Can I point to where in the brain g is? No. (It may not be somewhere but everywhere -- in that it's the overall information processing capacity of one's brain.) However, we can measure all kinds of psych constructs (personality, attitudes, job satisfaction), and critics don't typically ask "where in the brain is job satisfaction?"

I think throwing out 90 years of data and prediction and utility until we biologically reduce g would indeed be throwing the baby out.

Finally, again, part of my arrogance here comes from me trying to defend my field. We've made fun of IT people before here, and there was a thread where IT people stepped up, offended, and defended themselves. No different here I think.

I don't have all the answers. I'm not even sure I'm an expert in the study of intelligence (if by expert one means publishes in A journals in that specific area).

I am somewhat current on the literature here, though, and I am consistently surprised by how big the divide is between the IQ scientists and people who are not (and it especially irks me when skeptical people form strong opinions about it without bothering to read up on it).

It often reminds me of creationism versus evolutionism. The creationists just refuse to look at and accept the data for the validity of evolution.

The Gouldians do the same with regard to the importance of g to humanity.

The analogy even has an interesting parallel:

Biologists were so frustrated at how the general public misunderstands evolution that they signed a petition about it (there's a thread here with a link to it).

Psychometricians were so frustrated at how the public reacted to The Bell Curve that they signed a petition on it.

http://www.lrainc.com/swtaboo/taboos/wsj_main.html

It's only 40 names, but they are the front runners in the field (many of them were co-authors on the very fair APA task force article about IQ...)
 
Show me any evidence that 'g' actually exists instead of being a statistical artefact created by factor loading.

Throw out factor analysis, and you still have the fact that if you give a battery of cognitive tests to lots of people, the between-person variance will be much larger than the within-person variance.

Factor analysis just tries to take the correlation matrix containing tons of correlations and reduce it to a smaller, simpler set of "factors" that best explains the complex pattern in the matrix.

It's used in many fields besides IQ. IIRC, even real scientists (chemists!) have used it with great success.
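As an illustration of the positive-manifold point, here is a toy numpy sketch -- not a real factor analysis, which uses more careful estimation, and the correlations are invented -- showing how a single factor can soak up most of the variance in an all-positive correlation matrix:

```python
import numpy as np

# Invented correlations among four subtests (vocabulary, matrices,
# arithmetic, spatial) -- all positive, i.e., a "positive manifold".
R = np.array([
    [1.00, 0.55, 0.50, 0.45],
    [0.55, 1.00, 0.60, 0.50],
    [0.50, 0.60, 1.00, 0.55],
    [0.45, 0.50, 0.55, 1.00],
])

# Use the first principal component as a crude stand-in for a general
# factor. np.linalg.eigh returns eigenvalues in ascending order.
eigvals, eigvecs = np.linalg.eigh(R)
share = eigvals[-1] / eigvals.sum()  # variance captured by the first factor
loadings = eigvecs[:, -1]            # every subtest loads in the same direction

print(f"first factor captures {share:.0%} of the variance")
print("all loadings share a sign:",
      bool(np.all(loadings > 0) or np.all(loadings < 0)))
```

With all-positive correlations like these, the first factor carries well over half the variance and every subtest loads on it with the same sign; whether that factor is a real mental process or an artifact is exactly what the next posts dispute.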
 

It is indeed. But the "real scientists" who have used it have a tendency to check their models against reality after they do the analysis, just to make sure that they're not reifying statistical artifacts.

Nonetheless, how much explanatory precision does one require before accepting a construct as real / scientifically valid?

Explanatory precision? That's the wrong standard entirely. The explanatory precision of the Ptolemaic epicycles, for example, is nearly limitless -- and yet, they don't exist. It's perfectly possible to have a perfect match between your construct and the high-level real-world observations and still have an utterly incorrect set of "constructs" that do not correspond in any way to reality.

I can produce a set of Ptolemaic epicycles that describe the meanderings of a drunk on his way home. I can produce an underlying factor that describes, with high precision, the relationship between the salaries of Protestant ministers in London, the price of rum in Havana, and the number of patents granted by the USPTO.

That's why it's important to reality-check the "factors" obtained by factor analysis and see whether there's any basis to believe they exist. Otherwise, you're simply pursuing an extended "correlation is the same as causation" fallacy.

Throw out factor analysis, and you still have the fact that if you give a battery of cognitive tests to lots of people, the between people variance will be much larger than the within person variance.

Similarly, throw out factor analysis and you will still have the fact that the between-year variance in rum pricing and clerical salary is much larger than the within-year variance. This doesn't imply that there's a genuine underlying "factor" relating the two.
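The rum-and-salaries point is easy to demonstrate: two series that share nothing but a time trend will correlate strongly, so a "factor" extracted from them would be pure artifact. A quick sketch, with all numbers invented:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(100)

# Two causally unrelated series that both happen to drift upward.
rum_price = 10 + 0.5 * years + rng.normal(0, 3, size=100)
salaries = 200 + 4.0 * years + rng.normal(0, 25, size=100)

r = np.corrcoef(rum_price, salaries)[0, 1]
print(f"correlation of two unrelated trending series: r = {r:.2f}")
# The shared trend, not any common underlying "factor", drives the high r.
```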
 
OK Bpesta, I wasn't trying to 'ding' you. I was just curious what you thought.

I can tell you, I remember thinking how stupid the tests were. I knew exactly what they were asking for, but felt strangely belligerent. I put the beads on in the exact reverse order they asked for. I came up with morbid or silly pattern associations, then justified them. I'd complete sequences incorrectly (based on what the test wanted to see), then would explain how my answer was also valid. It wasn't until high school that I was apathetic enough to not bother... and they concealed this IQ test as a 'Pre-SAT cognitive awareness analysis'... or something like that. I think I was vaguely aware of what they were doing, but that was close to 1990, and wasn't there something of an uprising against IQ tests in the end of the '80s?
 
It is indeed. But the "real scientists" who have used it have a tendency to check their models against reality after they do the analysis, just to make sure that they're not reifying statistical artifacts.

I've always felt this was key to IQ's link to g: the nature of g, per se.

Two pieces of the puzzle I find important are:

  • population IQ has increased over the past century - education's influence on IQ is a pretty good explanation
  • individual IQ changes over time as a result of focused training

The question isn't whether IQ is reified - it's real in the sense that it measures the subject's ability to score on IQ tests.

The question isn't even whether it's mapped to a trait that looks like g. The tests look like they're testing for intelligence. That can't be dismissed as a coincidence.

The important question is whether g is a static trait, or a malleable mental feature, because this is where policy hangs its hat.
 
Ok, so explanatory precision isn't proof of causality, I'd agree.

But, it's important, I think nonetheless.

I think your challenge to me is to prove that g exists independent of factor analysis?

But, just to clarify:

Are you arguing there is not some common "mental process" that drives most of the variance on different types of IQ tests? That g is just a "statistical artifact," and if only psychologists (those who invented factor analysis) would do it right, they'd see they're like the archeologist studying his shovel?

Or are you arguing that g exists, but is not general intelligence (e.g., that it's test bias or cultural bias -- rather than innate ability-- that explains test scores)?

The criterion validity of IQ for things like educational achievement or job performance is among the most replicated effects in all of psychology. And, when one partials g out of the IQ test, the validities always crash.

Similarly, if one partialed out how mental speed moderates the relationship between paper and pencil IQ tests and job performance, the validities crash.

Isn't that a reasonable test of reality? We suspect g might be a basic mental process like speed. When we control for mental speed, IQ tests no longer predict anything. With speed in, they predict everything.
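The "partialing out" operation referred to here is just the standard first-order partial correlation. A sketch with invented numbers (not from any actual study) showing how a .50 validity can collapse once a third variable is controlled:

```python
import math

def partial_r(r_xy, r_xz, r_yz):
    """Correlation of x and y with z partialed out (standard formula)."""
    return (r_xy - r_xz * r_yz) / (
        math.sqrt(1 - r_xz ** 2) * math.sqrt(1 - r_yz ** 2))

# Invented example: IQ predicts job performance at r = .50, and suppose
# mental speed correlates .80 with IQ and .60 with performance.
print(round(partial_r(0.50, 0.80, 0.60), 2))  # -> 0.04
```

With speed partialed out, almost nothing of the .50 validity remains -- the "validities crash" pattern the post describes.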

But, mental speed is correlated with working memory capacity which may be the same thing as how much information one can process at once.

This seems like good science to me. We can control for conscientiousness, and nothing happens to the validity of IQ as a predictor of job performance (for example).

So, seems like IQ's measuring something much closer to mental speed than to the personality trait known as conscientiousness.

Incidentally, the same factor analysis techniques have been used to show conclusively that there are only 5 basic personality traits (5 g's so to speak).

Are these traits statistical artifacts too? How come the personality people can't find a general factor of personality, yet the IQ people can't avoid one-- all of them use factor analysis. If the technique were bogus for IQ, wouldn't it produce the same types of results for personality?

Why haven't we reified personality (telling someone they're not dependable, responsible or hardworking-- not conscientious-- is almost as insulting as telling someone they're low in IQ)?
 
