Are We Getting Cleverer?

In the mid-1980s, a phenomenon now known as the ‘Flynn Effect’ was noticed: IQ scores appeared to have been rising steadily since the 1930s, when large-scale IQ testing began. On closer inspection, this rise turned out to be artificial: people were getting better at taking IQ tests, not actually becoming more intelligent. Below is an explanation of how IQ tests work, why the Flynn Effect is accepted not to reflect a real rise in intelligence, and the most likely reason for the rise in IQ scores.

How IQ tests work

IQ scores are calculated from a series of questions thought to tap into ‘general intelligence’; that is, they are designed to measure raw ability without being affected by the amount or type of formal education a person has had. A statistical process known as standardisation is used to make scores on IQ tests comparable, because the actual score on an IQ test is affected by a number of factors, such as age. To remove this effect, a calculation converts the actual score (the raw score) into a ‘standardised score’, so that, for example, the raw intelligence of an eight-year-old can be compared with that of an eighty-year-old. Every few years, a large random sample of IQ scores is collected and compared so that the calculation used to standardise scores can be adjusted, keeping the ‘average’ score at 100. This process also allows scores from different IQ tests to be compared. For example, if two people took two different IQ tests, one might have a raw score of 86 and the other 39; on their own, these numbers cannot be compared and are meaningless. But if the average raw scores of most people on those same two tests are also 86 and 39, both people would be said to have an IQ of 100 – the average IQ. Thus standardisation allows comparison not only across ages but also across different tests.
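To make the idea concrete, here is a minimal sketch of how a raw score might be converted into a standardised score, assuming the common convention of rescaling to a mean of 100 and a standard deviation of 15. The exact formula and the norming figures vary between tests, so treat the numbers below as illustrative rather than as the procedure of any particular test:

```python
def standardise(raw_score, group_mean, group_sd):
    """Convert a raw test score into a deviation IQ.

    group_mean and group_sd come from a large, recent norming sample for
    the test-taker's age group (hypothetical values here). The result is
    rescaled so that the norming sample averages 100 with an SD of 15.
    """
    return 100 + 15 * (raw_score - group_mean) / group_sd

# Two people take two different tests and score the average for each test:
print(standardise(86, group_mean=86, group_sd=12))  # 100.0 -> average on test A
print(standardise(39, group_mean=39, group_sd=7))   # 100.0 -> average on test B
```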

All IQ scores can then be said to fall somewhere on what is known as the ‘normal distribution’. This means that, if plotted on a graph, an individual IQ score can be compared with the average IQ score, not just for the person’s age, gender and culture but for the whole population. In this way it is possible, for example, to test a child and ascertain whether they are above or below average, which is highly useful in identifying children who are likely to need extra help at school. An IQ of 100 is considered average, an IQ of 70 or below is indicative of a learning disability, and an IQ of 120 or above is considered ‘superior’. Einstein is thought to have had an IQ of around 160, although he never actually took a formal IQ test. Plotted across the whole population, IQ scores form a bell-shaped ‘normal distribution’, with the percentages under the curve showing the proportion of people falling within each band of scores.
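Assuming the usual mean of 100 and standard deviation of 15, those cut-offs can be turned into rough population proportions using the normal distribution’s cumulative function. The sketch below simply shows that arithmetic:

```python
import math

MEAN, SD = 100, 15  # the common deviation-IQ convention (an assumption here)

def proportion_below(iq):
    """Fraction of the population expected to score below `iq`,
    assuming IQ is normally distributed with the mean/SD above."""
    z = (iq - MEAN) / SD
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"Below 70 (learning disability range): {proportion_below(70):.1%}")      # ~2.3%
print(f"Above 120 ('superior' range):         {1 - proportion_below(120):.1%}")  # ~9.1%
```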

IQ Scores Rising!

In 1987, James Flynn collated data from 14 countries across the developed world in order to examine the raw scores on IQ tests. He took the unstandardised scores of people who had taken tests between the 1930s and the early 1980s and compared them. The scores reported over the years had always been the standardised scores, but when Flynn looked at the raw scores he found that they had actually shown a steady and significant increase over time (Flynn, 1987). The use of standardised scores had masked this increase, because a score that was average for a particular year would always be converted to 100. What was actually happening, Flynn noted, was that the average raw score was creeping up by a point or two every few years. To return to our earlier example: say the person who scored 86 took the test in 1985, and that 86 was the average raw score for that year; as before, they would receive a standardised IQ score of 100. According to the Flynn Effect, however, the average raw score on this imaginary test in 1935 might have been only 76. When raw scores are compared, this person appears to be 10 points more ‘intelligent’ than an average person in 1935. Similar increases were found on several different IQ tests in 14 different countries, and from this it appeared that in the space of 50 years the average IQ had risen by between 5 and 25 points. One of two conclusions must be drawn from this: either the human race is rapidly becoming more intelligent, or IQ tests do not actually measure intelligence.
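Carrying on with the hypothetical test from the example above, the following sketch shows how the same raw score looks average against its own year’s norms but inflated against older norms. All figures, including the spread of 12 raw points, are made up purely for illustration:

```python
def standardise(raw_score, year_mean, year_sd=12):
    """Deviation IQ relative to the norming sample for a given year
    (mean 100, SD 15 convention; all inputs here are hypothetical)."""
    return 100 + 15 * (raw_score - year_mean) / year_sd

raw_score = 86                                 # an average test-taker in 1985
print(standardise(raw_score, year_mean=86))    # 100.0 against 1985 norms
print(standardise(raw_score, year_mean=76))    # 112.5 against (hypothetical) 1935 norms
```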

So are we getting cleverer?

When this phenomenon was first noticed, in 1987, it caused a great deal of excitement. Of course, it made people feel good about themselves. Many explanations for this apparent increase in intelligence were proposed, from better parenting and education to improvements in nutrition. However, each of these explanations was gradually ruled out by experimental studies. Even the most intensive early parenting programs, providing stimulation to children from a very young age, produced maximum improvements in IQ score of only about five points. Comparisons of the IQs of children who had received no education at all with those receiving a typical ‘Western’ education could also only account for IQ differences of three to five points, and this effect would be even smaller in the population under study, as most of the people tested in the 1930s would have attended school. Finally, very little evidence for an effect of nutrition could be found, other than a circumstantial similarity between increases in IQ and increases in growth.

Furthermore, in today’s society there seems to be little evidence to suggest that we are intellectually superior to our grandparents’ generation. If we really are ‘getting cleverer’, and have been steadily doing so since the beginning of the 20th century, where is the evidence of this? You might think that the developments in technology seen over this period, from the cinema to the television to hand-held viewing devices, or from record players to tape players to iPods, are a reflection of our increased intelligence. You would be wrong. In fact, 40% fewer new inventions were registered in the 1980s than in the 1960s, suggesting that fewer new inventions are being created, not more. Similarly, there has been no rise in the production of masterpieces in art or literature, another development that would be expected as a direct result of such vast increases in intelligence. Indeed, you might also expect children’s grades at school to be improving, since, according to their raw IQ scores, they are between 5 and 15 points more intelligent than the previous generation of school children. However, examination of SAT scores (tests of educational performance) in a large group of American school children suggests that this is not the case. In fact, these children were performing worse on SAT tests than the generation before them. This does not necessarily suggest they are getting ‘stupider’, but it clearly does not support the premise that they are getting cleverer!

What are IQ tests measuring?

It is clear that none of the evidence points towards an actual increase in intelligence over the time period in question, so why were IQ scores rising so dramatically? The only conclusion left is that IQ tests do not in fact measure intelligence. When this was first realised it was a difficult fact to accept, made all the more difficult because standardised IQ scores do predict how well a person is likely to perform in education. So IQ tests must measure some kind of ability that is ‘better’ in more intelligent people and ‘worse’ in less intelligent people, but that also improved in everyone at the same rate over the course of 50 or so years. This is a difficult concept, and is perhaps best explained through an analogy. Say we accept as a fact that tall people are better runners than short people. In a large group of people you would then expect a large number of people of average height who were also average runners, a few very tall people who were very good at running, and a few very short people who were very bad at it. Now imagine that for many years it had been generally accepted that height was a measurement of running ability. Suppose that in 1930 the average height was 160cm, that by 1980 it had increased by 5cm to 165cm, and that the very tall and very short people were also around 5cm taller than their equivalents in 1930: the whole population had got taller. If, however, the average-height people were actually no better at running than they had been in 1930, and neither were the tall or short people, your assumption that height was a measure of running ability would no longer stand. Yet height would still be able to predict who was the best, worst and average runner. At this point you would have to accept that while height was related to running ability, it was not the same thing. Something similar had to be accepted about IQ: an IQ score can predict who will be best, worst and average at tasks that require intelligence, but it is not a direct measure of intelligence.
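The analogy can be made concrete with a toy simulation (all numbers are invented): height predicts running ability in both years, and the whole population gets 5cm taller, yet nobody actually runs any better and the predictive relationship is unchanged.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Simple Pearson correlation; enough for this toy illustration."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

# 1930: running ability is partly related to height (an invented relationship).
ability = [random.gauss(0, 1) for _ in range(10_000)]
height_1930 = [160 + 6 * a + random.gauss(0, 4) for a in ability]

# 1980: everyone is 5cm taller, but running ability is unchanged.
height_1980 = [h + 5 for h in height_1930]

print(round(statistics.mean(height_1930)), round(statistics.mean(height_1980)))   # ~160 vs ~165
print(round(pearson(height_1930, ability), 2), round(pearson(height_1980, ability), 2))  # identical correlation
```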

So why are we better at taking IQ tests?

Most researchers now accept this, and rather than looking for explanations for an ‘actual’ increase in intelligence, they began to look for reasons why the population as a whole seemed to have got better at taking IQ tests. The explanation that made the most sense was that IQ tests are biased towards people with better visual processing and analysis skills. In an attempt to measure ‘raw intelligence’ rather than learning or language skill, most widely used IQ tests rely heavily on visual tasks, such as block or shape rotation, or finding the pattern in a series of shapes.

It was this that led researchers to what is likely to be the single largest explanatory factor in the rise in raw IQ scores: we are better visual analysers than ever before. The period over which IQ scores rose also saw the greatest leaps in visual technology: the cinema gave way to the home television set; colour and animated film were introduced; film and photography were digitised, allowing visual media to be manipulated; optical illusions and special effects became increasingly popular; the first three-dimensional films were made; and finally came the home computer and the increasing availability of devices for playing and recording digital media. With all of these technologies increasing our exposure to fast and complex visual stimuli, it is hardly surprising that we became better analysers of visual material than people who rarely saw a motion picture, and then only in black and white, in a cinema. So we are scoring higher on IQ tests not because we are more intelligent but because we are better visual analysers. And because exposure to visual media has been so widespread, the population as a whole has improved by roughly the same amount: an average visual analyser is still an average IQ test taker and is likely to achieve average educational attainment, yet that person is still very likely to have a raw IQ score several points higher than that of his grandfather, even if his grandfather was also of average intelligence.

So what does this all mean?

Intelligence is a broad and highly complex concept: it includes learning ability, potential for abstract thought, emotional understanding, the ability to exercise self-control, the planning of action, and much, much more. IQ tests do not measure all, or even most, of these abilities. In fact, what IQ tests do measure is unclear. All we know is that IQ scores roughly predict an individual’s ability to learn, but we don’t really know why. The attempt to make IQ tests independent of verbal ability has meant that they rely heavily on visual processing ability, so IQ scores are biased towards better visual thinkers. For some reason, these better visual thinkers also seem to do better at school than weaker visual thinkers; perhaps this highlights a weakness in our education system. With education in its present state, and no other established measures of ‘intelligence’, IQ tests are still useful in identifying children who might need extra help at school. However, when it comes to labelling ‘genius’ or ‘learning disability’, IQ tests should be used with caution, as they capture only a small slice of an individual’s full potential.
