Woot; atheists are smarter than agnostics

I have an IQ around 156; it was tested when I was in early 7th grade. I do not consider myself "intelligent" by any means. The IQ test is meaningless. Intelligence cannot be measured by a mere test; it has to be judged by how fully a person actually uses it.

That's not true either. Someone can be a genius and not apply themselves. You just substituted one flawed test of intelligence for another.

Just because you don't consider yourself intelligent doesn't mean you are not.

IQ tests measure certain aspects of intelligence. No one is claiming they are the be-all and end-all of what makes someone smart. They are a good measure of certain mental skills, though.
 
That is most definitely not an IQ test question, since it relies on learned knowledge, not brainpower. If the test consisted of 100 questions like that then it was a completely useless way to measure IQ.

IQ tests should measure cognitive reasoning abilities, recall, pattern recognition, and logic. They should never contain a question that relies on learned knowledge.

I distinctly remember the question, but either my memory's playing tricks on me, or the school psychologist was administering more than one test in the same session.
 
Vocabulary tests are among the best measures of intelligence and are featured on many standardized IQ tests.
 
Vocabulary tests are among the best measures of intelligence
Bollocks

;)

and are featured on many standardized IQ tests.

Maybe... but that don't mean the tests are worth more than Jacques Schitt

http://www.iqtest.com/whatisaniqscore.html
What is an IQ Score?

Originally, IQ, or Intelligence Quotient, was used to <snip/> detect children of lower intelligence in order to place them in special education programs. <snip/>

Today IQ testing is used not primarily for children, but for adults. Today we attempt to write tests that will determine an adult's true mental potential, unbiased by culture, and compare scores to the scores of other adults who have taken the same test.
<snip/>

Defining Intelligence

Most people have an intuitive notion of what intelligence is, and many words in the English language distinguish between different levels of intellectual skill: bright, dull, smart, stupid, clever, slow, and so on. Yet no universally accepted definition of intelligence exists, and people continue to debate what, exactly, it is. Fundamental questions remain: Is intelligence one general ability or several independent systems of abilities? Is intelligence a property of the brain, a characteristic of behavior, or a set of knowledge and skills?

The simplest definition proposed is that intelligence is whatever intelligence tests measure. But this definition does not characterize the ability well, and it has several problems. First, it is circular: The tests are assumed to verify the existence of intelligence, which in turn is measurable by the tests. Second, many different intelligence tests exist, and they do not all measure the same thing. In fact, the makers of the first intelligence tests did not begin with a precise idea of what they wanted to measure. Finally, the definition says very little about the specific nature of intelligence.
 
Bollocks

;)

Maybe... but that don't mean the tests are worth more than Jacques Schitt

http://www.iqtest.com/whatisaniqscore.html

Six, you really should try looking at some primary sources; you may be surprised-- embarrassed, even-- by the ignorance displayed in your post.

Here's just one example:

Mill Hill Vocabulary scale
A Dictionary of Psychology | Date: 2001

Mill Hill Vocabulary scale n. An intelligence test designed to measure verbal intelligence, consisting of a list of 88 words divided into two sets of 44, published by the English psychologist John C(arlyle) Raven (1902–70) in its original form in 1944 as a companion to Raven's Progressive Matrices. The respondent's task is to explain the meanings of the words or (in an alternative form of presentation) to select the correct synonym for each word from a list of six alternatives provided. Most children with a mental age of 5 years can explain the meanings of the first few words (cap, loaf, unhappy), and between the ages of 5 and 16 children of average intelligence can usually define about three additional words per year. The most difficult words in the list, which only a small minority of adults can define, include recondite, exiguous, and minatory.

The correlation between scores on the Mill Hill Vocabulary scale and Raven's Progressive Matrices is approximately 0.75, notwithstanding the utterly different ways in which the two tests measure intelligence. MHV abbrev.[Named after the Mill Hill Emergency Hospital in London, the wartime location of the Maudsley Hospital, where it was developed]


The Raven's is vastly different from a vocab test. It has no words and involves only figuring out which pattern is next in a sequence. It is perhaps the purest measure of g in existence. Scores on it, though, correlate so well with vocab that the two types of tests are packaged together as "companions" for sale.
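
If it helps to see how that works, here is a minimal sketch (Python with numpy; the loadings are invented for illustration, not taken from the actual tests) of how two very different-looking tests can still correlate around 0.75 when both draw on a shared factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
loading = np.sqrt(0.75)  # assumed loading of each test on the shared factor

# A hypothetical shared factor ("g") plus test-specific noise for each test.
g = rng.normal(size=n)
vocab    = loading * g + np.sqrt(1 - 0.75) * rng.normal(size=n)
matrices = loading * g + np.sqrt(1 - 0.75) * rng.normal(size=n)

# Each test correlates ~0.87 with the factor, so with each other ~0.87**2 = 0.75.
print(round(np.corrcoef(vocab, matrices)[0, 1], 2))  # ≈ 0.75
```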
 
Six, you really should try looking at some primary sources; you may be surprised-- embarrassed, even-- by the ignorance displayed in your post.
OK... if you want to alleviate (some of) my ignorance, please describe (i) what you infer from the term 'intelligence' and (ii) how vocab tests are worth more than Jacques Schitt in measuring it
 
What he said: how can the mean of all people's IQs be more than 100?
When you exclude the mentally disabled and people who are illiterate, you can clearly get averages above 100. That doesn't negate the findings, since you are comparing two groups, not declaring that one group is better than average.
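
A quick numeric sketch of that point (Python with numpy; simulated scores, not the actual NLSY data): drop only the low tail of a normal(100, 15) distribution and the mean rises, while trimming both tails symmetrically leaves it at about 100.

```python
import numpy as np

rng = np.random.default_rng(1)
iq = rng.normal(loc=100, scale=15, size=100_000)  # simulated IQ scores

print(round(iq.mean(), 1))                            # ≈ 100.0, full sample
print(round(iq[iq >= 70].mean(), 1))                  # ≈ 100.8, low tail excluded: the mean rises
print(round(iq[(iq >= 70) & (iq <= 130)].mean(), 1))  # ≈ 100.0, both tails trimmed symmetrically
```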

As for confounding factors, those also might not be relevant as long as your sampling included a proper cross-section of "white adolescent Americans," which is the group identified in the study title. Nothing there said the conclusion applied to all atheists or all agnostics, though the abstract suggests the authors think it applies more broadly. Without reading the entire publication, it's hard to tell whether they are only describing a correlation in this study group or also claiming it applies to a broader population.

Here is a link describing the study population identified in the abstract. Again, without the complete paper we don't know if they took a random sample from this group or surveyed the whole group. Nor can I tell from this link where/how the IQ scores were obtained.

I'm not saying the study proves anything. But dismissing it because the IQs don't average 100 is pretty weak reasoning.
 
Perhaps more to the point, how were they measuring the religious beliefs of the participants? If it was self-identification, then the study is fairly seriously flawed to begin with.

And what about those of us who are agnostic atheists?
Well, I would say you have a less common understanding of the two terms. If you surveyed people on the definitions of those two words, then despite any technical truths in the definitions, I think most people understand an 'atheist' to be someone who believes there are no gods and an 'agnostic' to be someone who is unsure whether there are or are not.

Not that people don't interpret those words differently or incorrectly. I posted about one woman who wrote in her blog that she was an atheist because she believed in a god but not in any particular religion. So you are correct that the accuracy of the survey depends on how consistently the survey questions are interpreted.
 
Having "whites only" will still mean that the average score should be 100.

Unless it is argued that "whites" consistently score above 100, of course.
Average IQs are determined by standardized testing, but the groups tested vary. The test is then given to people beyond the original "standardization" group, so any later group is very likely to have a slightly different mean than the original norming group. In fact, it would be highly unlikely to match the original standardization group exactly.
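
To make the norming point concrete, here is a rough sketch (Python with numpy; the raw scores are made up): raw scores are scaled so that the norming sample averages 100 with an SD of 15, and any later group scored against those same norms can legitimately average above or below 100.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical raw test scores for the norming sample and for a later group.
norming_raw = rng.normal(loc=40, scale=8, size=5_000)
later_raw   = rng.normal(loc=43, scale=8, size=1_000)  # a group that happens to score a bit higher

mu, sigma = norming_raw.mean(), norming_raw.std()

def to_iq(raw):
    """Map raw scores onto the scale fixed by the norming sample (mean 100, SD 15)."""
    return 100 + 15 * (raw - mu) / sigma

print(round(to_iq(norming_raw).mean(), 1))  # ≈ 100.0 by construction
print(round(to_iq(later_raw).mean(), 1))    # ≈ 105: 100 is only guaranteed for the norming sample
```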

And regardless of genetics and intelligence potential, a lack of exposure to a nurturing learning environment does have an impact on measured IQ, despite the attempt not to measure 'learning'. You cannot get around it completely. Nor can you get around the fact that IQ tests don't apply equally to all cultural groups. So whites can have higher IQ scores. That doesn't mean they are genetically more intelligent as a whole. But it can reflect things like poorer nutrition in utero and in early childhood.

According to the information on the population the study sample was derived from (link is 2 posts up):
Three subsamples make up the NLSY79:
• A cross-sectional sample of 6,111 youths designed to be representative of noninstitutionalized civilian youths living in the United States in 1979 and born between January 1, 1957, and December 31, 1964 (ages 14 to 21 as of December 31, 1978)
• A supplemental sample of 5,295 youths designed to oversample civilian Hispanic, black, and economically disadvantaged nonblack/non-Hispanic youths living in the United States during 1979 and born between January 1, 1957, and December 31, 1964
• A military sample of 1,280 youths designed to represent the population born between January 1, 1957, and December 31, 1961 (ages 17 to 21 as of December 31, 1978), and enlisted in one of the four branches of the active military forces as of September 30, 1978
I have to assume they did not use the entire population in this sample.
 
Well, the IQ test has been in development now for 100 years, with hundreds if not thousands of academics researching the topic as well.

Trying not to open a can of worms here (see any older thread for the full debates on this), but I think the literature clearly shows that the following are facts:

1) Mental tests-- from vocabulary to block design-- are positively correlated, meaning people who score high on vocab also tend to score high on block design. This finding is so well replicated-- it has never failed to replicate-- that it's referred to as the law of the positive manifold.

2) We can extract the thing common to all these mental tests (i.e., the thing that produces the positive manifold) and measure it, just as surely as we measure any latent construct in science (a toy sketch of this extraction appears after the list).

3) This thing common to all the tests we call "g," and it's what I'd define as intelligence or general mental ability. If that sounds circular, put it this way: I think intelligence is a relative index of the efficiency with which brains process information (both how much and how fast), and these individual differences manifest themselves as differences in g.

4) You'd be hard-pressed to find important variables not predicted by g, from education to health to income to crime rates to job performance to body symmetry and brain size and religiosity. No other variable yet discovered in social science-- hands down-- possesses the criterion validity that g does. Further, g often explains (statistically) why two other variables co-vary. For example, the relationship between race and IQ scores is completely attenuated when controlling for g.

5) There are more specific / narrow aspects of intelligence, but all of them correlate with g and none of them predict better than g.

6) You can rank mental tests based on how purely they measure g (versus other specific mental abilities and error). Vocabulary is one of the most g-loaded tests out there.

7) So, IQ measured with a vocabulary test predicts a very long list of important outcomes (never perfectly; rarely explaining more than 50% of criterion variance, but almost always emerging as the single best predictor).

It is the most powerful variable in social science.
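
Since point 2 is essentially a factor-analysis claim, here is the toy sketch it refers to (Python with numpy; three made-up tests built from one common factor, with the first principal component standing in for g). This is a stripped-down illustration under those assumptions, not how any published battery is actually scored.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# One hypothetical common factor plus test-specific noise for three "tests".
g = rng.normal(size=n)
tests = np.column_stack([
    0.8 * g + 0.6 * rng.normal(size=n),   # vocabulary-like test
    0.7 * g + 0.7 * rng.normal(size=n),   # block-design-like test
    0.6 * g + 0.8 * rng.normal(size=n),   # arithmetic-like test
])

# Every pairwise correlation comes out positive: the "positive manifold".
print(np.round(np.corrcoef(tests, rowvar=False), 2))

# Extract the first principal component as a stand-in for g and check how
# well it recovers the factor the scores were built from.
centered = tests - tests.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
g_estimate = centered @ vt[0]
print(round(abs(np.corrcoef(g_estimate, g)[0, 1]), 2))  # high, roughly 0.85-0.9
```

Real batteries use proper factor models rather than a bare principal component, but the logic (one common dimension behind positively correlated tests) is the same.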
 
SG, I know hardly anything about sampling. My understanding is that they took some type of sectional sample from the entire data set, where each data point used represented 100 cases in the NLSY.

I have no idea whether an expert would quarrel with how they did it, but their n sizes are still pretty large, and I can't imagine it wouldn't represent the NLSY population well.
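
I don't know the NLSY's actual weighting scheme, but the general idea of sampling weights is straightforward; a minimal sketch (Python with numpy, made-up numbers, where each retained record stands in for some number of people in the full population):

```python
import numpy as np

# Hypothetical subsample: five records, each representing `weight` people.
iq            = np.array([102.0, 98.0, 110.0, 95.0, 104.0])
equal_weights = np.array([100, 100, 100, 100, 100])   # flat 1-per-100 sampling
mixed_weights = np.array([50, 150, 100, 100, 100])    # some records stand in for more people

# With equal weights the weighted mean is just the plain mean; unequal weights
# pull the estimate toward the records that represent more of the population.
print(np.average(iq, weights=equal_weights))  # 101.8
print(np.average(iq, weights=mixed_weights))  # 101.4
```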
 
Mass posting-- I agree, SG; I'd be surprised if environment had no effect on IQ. I do think there are clear links between nutrition levels and IQ, and maybe slightly less clear links between prenatal development and IQ.

I'm still trying to get my head around twin studies and how one controls for the prenatal environment. I think the fact that identical twins' IQs correlate much more strongly than fraternal twins' do rules out prenatal development as the only explanation.
 
When you exclude the mentally disabled and people who are illiterate, you can clearly get averages above 100. That doesn't negate the findings, since you are comparing two groups, not declaring that one group is better than average.

Why would you, as a matter of good practice, remove people with scores several standard deviations below the norm from your data without also removing people several standard deviations above the mean?


A score three standard deviations above the mean is as much an outlier as one three below.
 
When we compare groups who like various types of ice cream, it doesn't matter if we exclude those who like rum-raisin, and compare the two groups who like chocolate and vanilla. Chocolate lovers are no better than vanilla lovers.

If, however, we compare groups who have different IQs, it matters a great deal if we remove the dumbest group. To claim it doesn't negate the findings is simply fraud: That's equivalent to removing no-hit results from studies of psychics, and claiming that psychics score better than chance.

The whole point of IQ is to find out which group is better than the other. We don't segregate people by a trait unless one group is supposed to be better than the other, and that is exactly what happens when we segregate people based on IQ.

If anyone wants to argue that the high IQ group is not better than the low IQ group, then why segregate the two groups in the first place?
 
Why would you, as a matter of good practice, remove people with scores several standard deviations below the norm from your data without also removing people several standard deviations above the mean?


A score three standard deviations above the mean is as much an outlier as one three below.
I have no idea who they used in their sample since I only have the abstract. All I was saying was that you don't always have the full range of intelligence in a study. Why would you need to include developmentally disabled outliers in a study comparing IQs and religious beliefs? It seems absurdly obsessive-compulsive to worry about whether or not you had people with IQs in the 50s in your sample in this study.

What I was saying still holds. The fact that you don't have an average IQ of 100 in the study group in no way discredits the study. There may be many other limitations in this study; we can't tell from the abstract. As I noted, there are good reasons why an average IQ of 100 is not a necessary expectation.
 
...
If, however, we compare groups who have different IQs, it matters a great deal if we remove the dumbest group. To claim it doesn't negate the findings is simply fraud: That's equivalent to removing no-hit results from studies of psychics, and claiming that psychics score better than chance....
Emphasis mine.

So you really think this study needed kids with IQs below 50?
An IQ of 50 or below. This is the threshold below which most adults cannot cope outside of an institution. They can typically be taught to read at a 3rd or 4th grade level. However, they cannot normally function in the customary classroom setting, and they require special training programs.

How about those with IQs below 30? Those would be people so severely disabled they can't live outside of an institution. How would you categorize them, Claus? Atheist or theist? :rolleyes:
 
Why would you, as a matter of good practice, remove people with scores several standard deviations below the norm from your data without also removing people several standard deviations above the mean?


A score three standard deviations above the mean is as much an outlier as one three below.
Because people with IQs below 50 aren't likely to be capable of even knowing what god beliefs are. :rolleyes:

Have you and Claus never seen people who are severely mentally disabled?
 
Yeah, see, it's not arbitrary. There's a reason the line is where it is, even if you refuse to accept it.
 
Yeah, see, it's not arbitrary. There's a reason the line is where it is, even if you refuse to accept it.


We do not know, do we? I thought there was not enough information to show which groups, if any, had been excluded from the sample. For example, skeptigirl quoted a bit which said that the main sample only included "non-institutionalised" youths living in the United States. That would not necessarily exclude the mentally disabled, but it might exclude those attending very expensive boarding schools. So I do not see how your point applies here.
 
