• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

IQ tests

Paul C. Anagnostopoulos said:
Pesta, can you explain this reliability measure? I don't understand what a "test's error" is.

~~ Paul

Hey Paul.

There are textbooks on reliability, but basically, the concept gets at how consistently the test items measure one and only one thing.

You can think of a person's observed test score (i.e., the score he gets on the test) as being caused by two things:

his true score

his error score.

the TS is what God knows the person's IQ really is-- it's reality (sorry for the God reference!)

the Observed score is our best guess as to what the person's true score is.

the error score is any variance in a person's observed score not caused by his true score.

The correlation between OS and TS is the measure of the test's reliability.

If the test and the test-taking situation had no error, the correlation would be perfect-- the test would be perfectly reliable. Every time you took the test, you would get the exact same score.

The more error in the test, the weaker the correlation between true and observed scores. And with error, scores will bounce around if you take the test multiple times.
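
If it helps to see it with numbers, here's a minimal simulation of that true-score model; the SDs below are invented for illustration:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_score = rng.normal(100, 15, n)   # the score "God knows" -- reality
error      = rng.normal(0, 5, n)      # noise from the test and the situation
observed   = true_score + error       # the score the person actually gets

print(np.corrcoef(observed, true_score)[0, 1])   # ~0.95 here
print(np.var(true_score) / np.var(observed))     # ~0.90: share of observed variance that is true variance
# (Classical test theory reports the variance ratio as the reliability
# coefficient; the observed-true correlation is its square root.)
# Inflate the error SD and both numbers drop, and retest scores bounce around more.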

But, one could put confidence intervals around an observed IQ score based on the test's error.

So, if a person scores 100 on an IQ test, there's a 95% chance his true score lies between, say, 97 and 103.

And, there's a 99% chance his true score lies between 95 and 105.
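
For anyone who wants to see where bands like those come from, the textbook recipe is the standard error of measurement, SEM = SD * sqrt(1 - reliability). The sketch below uses illustrative values; the exact width depends on the test's reported reliability:

import math

def iq_confidence_interval(observed, reliability, sd=15, z=1.96):
    # Band around an observed IQ: observed +/- z * SEM.
    sem = sd * math.sqrt(1 - reliability)
    return observed - z * sem, observed + z * sem

print(iq_confidence_interval(100, 0.95))           # roughly (93.4, 106.6) at 95%
print(iq_confidence_interval(100, 0.95, z=2.58))   # a wider band at 99%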

IQ tests given in standardized settings have reliabilities in the .90s.

Now, if you take an IQ test off the net, or the back cover of Cosmo, or from someone incompetent, or in non-standardized settings, the error of the test increases, and we can be less confident that the observed test score is a good reflection of the person's true test score.

There are also person factors that one would want to consider when giving an IQ test-- examples: a vocab IQ test wouldn't work well for someone who's not a native English speaker. Speeded IQ tests will underestimate an elderly person's IQ. Certain learning disabilities in kids may mask their true IQ score (but there are other tests one can give that measure the LD).

But, someone competent and licensed to give an IQ test would be aware of these.

That's why I think it's unfair to bash IQ tests based on someone's online experience with them.

The standardized ones (e.g., the SB, the WAIS, the Wonderlic) kick major ass in terms of psychometric properties.

B
 
What were the odds on this--the radical behaviorist posting to agree with a cognitive psychologist? Nahh...well, yeah.

There are good and bad IQ tests--Pest is talking about the good ones. Very strong reliability, strong predictive validity when used to predict the sorts of things they are meant to predict. If you use IQ inappropriately, don't be surprised when it is a lousy predictor. My ball peen hammer makes a lousy screwdriver.

There are some terrible IQ tests out there on the web. (There are worse personality tests, but that is another thread) Even a good IQ test (I have some in my office) must, as Pest notes, be administered properly.

This is not to say that they are perfect, or that we all agree on what "intelligence" is, to measure on one test or another. The WAIS defines it one way, Howard Gardner defines it quite another. But for the appropriate definition of intelligence, and for the appropriate uses, IQ tests are much better than the critics make them out to be.
 
Anyone truly interested in IQ testing needs to read Stephen J. Gould's The Mismeasure of Man.

As a bonus, it thoroughly trashes the argument of The Bell Curve, and it was written well before. The latest editions have specific criticism of The Bell Curve.
 
Mercutio said:
What were the odds on this--the radical behaviorist posting to agree with a cognitive psychologist? Nahh...well, yeah.


I can explain it! It's the Rescorla-Wagner model of classical conditioning:

Delta Vx = alpha * beta * (Lambda - (Va + Vx))

Delta Vx = change in how well the CS signals the US
Alpha = CS salience
Beta = US intensity
Lambda = the theoretical limit on conditioning.
Va = how well other cues present predict the US
Vx = previous conditioning with the CS in question.
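
A toy sketch of that update, with invented parameter values (the function and numbers are just for illustration):

def rescorla_wagner(Vx, Va, alpha=0.3, beta=0.5, lambda_=1.0):
    # One trial's change in CS x's associative strength, per the formula above.
    return alpha * beta * (lambda_ - (Va + Vx))

Vx, Va = 0.0, 0.0
for _ in range(10):
    Vx += rescorla_wagner(Vx, Va)
print(round(Vx, 3))   # climbs toward lambda_, fastest on the early trials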

My clever posts are changing your opinion of us cognitive guys (inhibitory conditioning)

US = cognitive psychologist
UR= contempt
CS = My purty face
Lambda = putting me in your will

Pretty soon, you'll be drooling over the distinction between implicit and explicit memory, and you'll be starting bitter philosophical arguments with Jeff Corey.

:)
 
Pesta's rule of internet debate:

Mentioning Mismeasure to discuss modern IQ tests is like mentioning Hitler for any other topic. Debater loses.

Mismeasure is such a piece of crap, the worst thing that could ever happen to a scientific "contribution" has happened to it: It's been ignored.

Michael Redman said:
Anyone truly interested in IQ testing needs to read Stephen J. Gould's The Mismeasure of Man.

As a bonus, it thoroughly trashes the argument of The Bell Curve, and it was written well before. The latest editions have specific criticism of The Bell Curve.
 
Interesting. As a professional psycholinguist, I find that Dr. Gould's comments in _Mismeasure of Man_ are much more accurate than _The Bell Curve_. I would be interested to see why you think it's "a piece of crap."

Far from being ignored, it's got a wide (and
increasing) collection of citations since its original publication --- more than 50/year in publications tracked by the SSCI, which is not at all what I would consider "being ignored."
 
drkitten said:
Interesting. As a professional psycholinguist, I find that Dr. Gould's comments in _Mismeasure of Man_ are much more accurate than _The Bell Curve_. I would be interested to see why you think it's "a piece of crap."

Far from being ignored, it's got a wide (and
increasing) collection of citations since its original publication --- more than 50/year in publications tracked by the SSCI, which is not at all what I would consider "being ignored."

Hi Kit.

I'd submit Gould's book is very popular among lay people.

I meant ignored by people who advance the science-- those who publish in peer reviewed journals related to IQ.

Look at the Neisser et al. APA task force article, two years after the Bell Curve (there's a link to it here somewhere). Not a single mention of Gould. That's amazing, in my mind.

Also, see, the link "Arthur Jensen replies to SJ Gould"

IMO, Jensen (who IS an IQ scientist) devastates Mismeasure.

In a nutshell, Gould totally misinterprets the logic behind factor analyses, and criticizes IQ by attacking (IIRC) the Army Alpha (used in World war I!!!) and phrenology.

The analogy: it's like rejecting modern medical science because doctors in the past used leeches to cure disease.

If you're interested in a history of iq testing and some of the pitfalls the research area has faced (and overcome) Gould is a good read. If you want anything relevant to what people researching this stuff are doing today, Gould is a waste of time.

Plus, if mismeasure is so solid, then why does it contradict 100's of studies and 1000's of data points showing the validity of iq-- g-- as a predictor of success in life (see anything by Schmidt and Hunter as examples).

Hey, are you related to the other Kitty's on here???

B
 
High IQ may enable one to take employment as say a scientist on £20-£30k pa,...or, one could leave school at 16 with a couple of mediocre CSE's and become a Plant Operator on an oil refinery earning £50k+, hmmm tricky...



Intellect is mostly overrated
 
bpesta22 said:
Pesta's rule of internet debate:

Mentioning Mismeasure to discuss modern IQ tests is like mentioning Hitler for any other topic. Debater loses.
Now that's ironic!
Plus, if mismeasure is so solid, then why does it contradict 100's of studies and 1000's of data points showing the validity of iq-- g-- as a predictor of success in life (see anything by Schmidt and Hunter as examples).
I'm confused. First, we were looking for a measure of intelligence, and now we're talking about predicting success in life? Isn't IQ supposed to be an objective measure of intelligence?

I'm not an academic, but I wouldn't think Gould's book, written for laypeople, and not primary research, would be cited in many peer-reviewed papers. Do researchers often cite such for-the-masses work?
 
Michael Redman said:
Now that's ironic!
I'm confused. First, we were looking for a measure of intelligence, and now we're talking about predicting success in life? Isn't IQ supposed to be an objective measure of intelligence?


Hey Mike

I guess the issue here is criterion validity-- a test is valid to the extent it correlates with the criteria it's supposed to relate to.

So, I think it'd be reasonable to expect that a valid IQ test would predict GPA or years of education. And, they do (.5 and .55, respectively).

Also, I think it's reasonable to expect that how smart you are (as measured by an IQ test) might affect how fast you learn your job, and how much you learn about your job, which then affects your job performance. And, IQ predicts about .5 for job performance and .55 for success in training.

And, I also think it's reasonable that if you believe IQ (g) to be some basic mental process, then scores on a full blown paper and pencil IQ test should correlate with basic reaction time / speed of processing measures. And, they do...
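
To put a rough number on what a .5 validity buys: with standardized scores, simple regression predicts the criterion z-score as validity times the predictor z-score. A small sketch (the numbers are illustrative, not from any particular study):

def predicted_criterion_z(iq, validity=0.5, mean=100, sd=15):
    # Best linear guess at a standardized criterion (e.g., job performance).
    return validity * (iq - mean) / sd

for iq in (85, 100, 115, 130):
    print(iq, round(predicted_criterion_z(iq), 2))
# 85 -0.5, 100 0.0, 115 0.5, 130 1.0 -- better than chance, far from certainty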

I'm not an academic, but I wouldn't think Gould's book, written for laypeople, and not primary research, would be cited in many peer-reviewed papers. Do researchers often cite such for-the-masses work?

This is a good point-- one I didn't consider. If there's an objective way to measure "influence" on a field, besides citation indices, my guess is Mismeasure would score real low. I guess the best example I can give is Jensen's reply to gould. Jensen is an IQ guy, who's devoted his life to studying just this topic. Gould is an anthropologist...

Perhaps it's appeal to authority, but if you also consider jensen's arguments, I think the appeal is valid.

B
 
Suezoled said:
what is their importance? Why are they around?

I have taken 4 so far: I scored a 120, 114, 138, and an 89.

Okay, I admit I was only trying when I got the 138, and I screwed off when I got the 89. Plus, they were timed tests, and I don't do well on timed tests anyway.

But what is their importance?

One girl I know got a 145, and she lords it over everyone else who scored beneath her.
Another girl I know got a 168, but she works hard and studies just like any serious student would.
Does 138 decide my destiny? Should I give up going for my undergrad so I can try for my PhD in the future? What does it all mean? Can you tell I'm having an indecisive moment in my life?

If it's of any help, I know of four Nobel laureates who scored less than 140. :D

James Watson - Nobel in medicine - 115 IQ
Richard Feynman - Nobel in physics - 126 IQ
Luis Alvarez - Nobel in physics - under 140 IQ
William Shockley - Nobel in physics - under 140 IQ

When the co-discoverer of DNA (Watson), the co-inventor of quantum electrodynamics (Feynman) and the co-inventor of the transistor (Shockley) all don't appear too hot on IQ tests, we have to ask some major questions about the things. ;)

Don't ask if you have the IQ, ask if you have the interest, passion and motivation for what you want to do. Good luck. :p
 
Pesta, I still don't understand how the error is measured. Don't you need to have an independent way to measure IQ so that you can see how consistent two methods are with each other? How can you measure the consistency of just one method without a benchmark?

~~ Paul
 
Re: Re: IQ tests

wipeout said:


If it's of any help, I know of four Nobel laureates who scored less than 140. :D

James Watson - Nobel in medicine - 115 IQ
Richard Feynman - Nobel in physics - 126 IQ
Luis Alvarez - Nobel in physics - under 140 IQ
William Shockley - Nobel in physics - under 140 IQ

When the co-discoverer of DNA (Watson), the co-inventor of quantum electrodynamics (Feynman) and the co-inventor of the transistor (Shockley) all don't appear too hot on IQ tests, we have to ask some major questions about the things. ;)

Don't ask if you have the IQ, ask if you have the interest, passion and motivation for what you want to do. Good luck. :p


These seem like "person who" statistics. We could just as easily find people with off the chart IQ's who were absolute failures in life.

But, plug a .5 validity into a utility formula, and see -- as one example-- how *much* money a company can save by using IQ to select people (even though there'll be some people who....)
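
For anyone curious what such a utility formula looks like, the Brogden-Cronbach-Gleser model is one standard version; every figure below is invented, just to show the shape of the calculation:

def selection_utility(n_hired, years, validity, sdy, mean_z_hired,
                      n_applicants, cost_per_applicant):
    # Dollar gain from selecting on a valid predictor, minus testing costs.
    gain = n_hired * years * validity * sdy * mean_z_hired
    return gain - n_applicants * cost_per_applicant

# 50 hires kept 5 years, validity .5, SD of yearly job value $10,000,
# hires averaging 1 SD above the applicant mean, 500 applicants tested at $20 each:
print(selection_utility(50, 5, 0.5, 10_000, 1.0, 500, 20))   # 1,240,000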


To Paul:

Good question. I'll give you an example later tonight once I get home from work!

B
 
Re: Re: Re: IQ tests

bpesta22 said:
These seem like "person who" statistics. We could just as easily find people with off the chart IQ's who were absolute failures in life.

But, plug a .5 validity into a utility formula, and see -- as one example-- how *much* money a company can save by using IQ to select people (even though there'll be some people who....)

But equally, that's a warning not to trust it too much.

In the common analogy of height and basketball ability, if you chose only the very tallest basketball players from history for an imaginary team you'd miss out on below NBA average height players like Michael Jordan.

The temptation to choose only the highest IQ applicants in job selection means you could miss out on someone special there too.
 
Re: Re: Re: Re: IQ tests

wipeout said:


But equally, that's a warning not to trust it too much.

In the common analogy of height and basketball ability, if you chose only the very tallest basketball players from history for an imaginary team you'd miss out on below NBA average height players like Michael Jordan.

The temptation to choose only the highest IQ applicants in job selection means you could miss out on someone special there too.

I agree wipeout, but the point people seem to be missing is that prediction significantly better than chance is a very useful thing.

The .5 validity of an IQ test means that, if you had to use one thing to predict whether an applicant was qualified, your best bet would be the IQ test (it will predict significantly better than chance and, as it turns out, significantly better than any other single measure in existence).

There will be cases where the qualified guy wasn't hired and the unqualified guy was, but ironically, using the IQ test will minimize those cases.
 
Paul C. Anagnostopoulos said:
Pesta, I still don't understand how the error is measured. Don't you need to have an independent way to measure IQ so that you can see how consistent two methods are with each other? How can you measure the consistency of just one method without a benchmark?

~~ Paul

Hey PA!

This is sorta why I mentioned earlier there were whole textbooks on reliability. None of it's rocket science, but there are a lot of different ways to do it.

I just gave a 40 item multiple choice test to my students. I want to see if it's reliable.

The most straightforward way to do it is called test-retest: I simply give each student the exact same test on two occasions.

If the test was reliable-- consistently measuring one thing-- there'd be a strong correlation between students' test scores at time 1 and time 2 (it'd be odd, for example, to flunk it today yet ace it tomorrow, if it were reliable).

There are obvious problems with this approach. It takes more time and money to give 2 tests, and students have memories. The second time they take it, they will have looked up the answers to the ones they were unsure of. This mucks things up.

So, a better way to go is "parallel forms". Two different tests-- each has different content items-- but supposedly each measures the same construct. Now you give Form A and Form B (different but parallel tests) to the same people and do the correlation to see if it's reliable.

I'm pretty sure IQ tests more often than not will use this type of reliability.

But, you're still giving two tests, and, there are strict statistical assumptions that you have to meet before you can claim two forms are parallel.

So, the best way to go (finally!) is internal consistency reliability. The issue here is do my 40 test items all consistently measure 1 thing (hopefully knowledge of the course material).

So, I give the test only once to students, but "pretend" like it's two tests by splitting it in half.

I might calculate each student's score (out of 20) for all the odd items on the test, and then correlate them with his/her score on the 20 even items.

A strong correlation indicates a reliable test (it'd be odd to flunk the even numbered items, yet ace the odd ones, if the test were indeed reliable).

You may ask: there are many different ways to split a 40-item test into two halves (e.g., odd versus even items, first 20 versus last 20, etc.), so how do I decide which way to split it?

The answer's provided by two famous formulas: Cronbach's coefficient alpha (when the test answers are continuous, like for Likert scales) and Kuder-Richardson 21 (for when the test answers are dichotomous-- e.g., right versus wrong).

These formulas actually calculate the average of all possible split-half correlations and spit out a number that nicely captures the test's reliability.
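
For the curious, here's a minimal sketch (with made-up data) of the split-half and alpha calculations just described; alpha computed this way equals KR-20 when the items are scored right/wrong:

import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0, 1, (200, 1))                                 # 200 simulated students
items = (ability + rng.normal(0, 1, (200, 40)) > 0).astype(float)    # 40 right/wrong items

# Split-half: correlate odd-item and even-item totals, then step the
# half-test correlation up to full length (Spearman-Brown).
odd, even = items[:, 0::2].sum(axis=1), items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha from the item variances and the total-score variance.
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))

print(round(split_half, 2), round(alpha, 2))   # the two should come out similar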

Just wondering if anyone has read to this point:)
 
bpesta22 said:
So, I think it'd be reasonable to expect that a valid IQ test would predict GPA or years of education. And, they do (.5 and .55, respectively).
OK, then it isn't measuring intelligence, it's measuring success. Those are clearly not the same thing. You can't say that a test for intelligence is accurate if it predicts success. You are talking about a test for success, then.
Also, I think it's reasonable to expect that how smart you are (as measured by an IQ test) might affect how fast you learn your job, and how much you learn about your job, which then affects your job performance. And, IQ predicts about .5 for job performance and .55 for sucess in training.
In which jobs? How do you measure job performance? It seems to me that you are using a very narrow definition of intelligence, that fits what you're trying to measure. Not intelligence, but something else. Or, at least, only one aspect of intelligence.

Let's look at lawyers. (no comments, please.) To become a lawyer, you have to do well enough on standardized tests and in high school performance to get into college, perform well enough in college and do well enough on another standardized test to get into law school, and then perform well enough in law school and do well enough on another standardized test to be admitted to the bar. So there you have a group of fairly high IQ people, on the whole fairly successful.

However, lawyers generally recognize that the lawyers with the highest test scores and academic performance are not the best lawyers, even if they can learn the law better than others. There are qualities which make good lawyers that none of these previous steps have even attempted to measure, and yet which are unquestionably qualities of mental ability. (creativity, knowing how to persuade, how to adapt to changing circumstances, etc.) The most academically successful law students often don't have what it takes to even practice law in the real world. However, as it is not easy to quantify, and nearly impossible to test, the intelligence required to be a good and successful lawyer is different from intelligence as measured by an IQ test.

If you want to say that these tests measure certain aptitudes for academics and certain jobs, I don't doubt that that's true. But if you want to claim that these tests are a fair and accurate measure of intelligence, I just don't see it.

(Besides, how do you know the causation involved in a correlation between success and IQ score? Couldn't other factors be responsible?)
 
