Assessing intelligence has traditionally focused on multiple forms of test data (T data). Standard measures of intelligence typically attempt to gather information on a wide variety of traits considered to be at the core of general mental ability. However, numerous efforts have been made to move beyond traditional assessment approaches. These have included measures of specific cognitive abilities, intellectual interests, and self-report measures of intelligence.
Testing cognitive abilities has traditionally drawn on a variety of measurements and techniques, such as problem-solving tasks, assessments of school performance, information acquisition tasks, and matrix problems that require highly abstract conditional discriminations. Such varied techniques succeed in tapping general cognitive ability largely because general mental ability permeates all learning, reasoning, and problem solving. Further, aggregations of measures of spatial skills, verbal reasoning, and quantitative abilities measure general mental ability more efficiently than aggregations of information items, because the reasoning problems used in these measures typically capture a greater degree of the common-factor variance associated with g (Gustafsson, 2002). Consequently, the most popular measures of general mental ability include a variety of assessments designed to tap several broad domains highly related to general mental ability, such as verbal, quantitative, reasoning, and visuospatial skills.
The search for alternative methods of measuring general mental ability more purely has often led to the use of elementary cognitive tasks (ECTs) that measure processing speed and working memory (Jensen, 1998). These tasks highlight the hierarchical nature of intelligence and our earlier point that assessments across different levels of abstraction typically constitute related but different methods. ECTs have proved to be a popular alternative methodology for measuring general mental ability because such tasks avoid the bias that may be introduced in measurement by prior training and experience. It also is argued that basic cognitive mechanisms underlie all thinking, reasoning, and decision-making processes, and therefore such mechanisms would be substantially related to general mental ability (Kyllonen & Christal, 1990).
Interestingly, Carroll's (1993) analysis of the structure of general mental ability showed that tasks measuring reaction time, inspection time, and discrimination ability were only weakly related to general mental ability. Indeed, early skepticism regarding the efficacy of such measures as indicators of general mental ability was the result of their being used in isolation. However, it has been demonstrated that scores on such experimental tasks can be aggregated to form a reasonable representation of general mental ability if enough tasks are sampled across a variety of cognitive domains (Green, 1978). It has been noted that correlations between combined reaction time scores from a number of ECTs and general mental ability approach the size typically seen with psychometric power tests (Jensen, 1998). Further, the combined scores from a number of ECTs can be used to predict upward of 70% of the heritable part of the variance in general mental ability. For the purposes of experimentation, it should be noted that aggregations of ECTs form two general factors, perceptual speed and working memory (Ackerman, Beier, & Boyle, 2002). As a result of aggregation, both factors are highly related to general mental ability, with working memory the more highly related to g of the two (Ackerman et al., 2002).
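The aggregation principle at work here can be sketched with a small simulation. All numbers below are illustrative assumptions, not values from Carroll, Green, or Jensen: each ECT is modeled as loading weakly (0.3) on a latent general factor, and averaging more tasks cancels the task-specific variance, so the composite's correlation with the latent factor climbs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000                 # simulated examinees
g = rng.normal(size=n)   # latent general ability
loading = 0.3            # assumed weak loading of any single ECT on g

def composite_validity(k):
    """Correlation of the mean of k simulated ECT scores with latent g."""
    # Each task = weak g signal + task-specific noise.
    tasks = loading * g[:, None] + np.sqrt(1 - loading**2) * rng.normal(size=(n, k))
    return np.corrcoef(tasks.mean(axis=1), g)[0, 1]

for k in (1, 4, 16):
    print(f"{k:2d} tasks: r = {composite_validity(k):.2f}")
```

With these assumed loadings a single task correlates only about .3 with the latent factor, while an average of 16 tasks approaches the validities of psychometric power tests, mirroring the pattern reported for aggregated ECTs.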
Another approach to measuring general mental ability has been to use self-reports of intelligence or intellectual engagement (Paulhus & Harms, 2004). This approach has been much maligned by intelligence theorists because self-report intelligence measures rarely exceed validities of .50 with typical maximal-performance tests of cognitive ability (Paulhus, Lysy, & Yik, 1998). Nonetheless, the search for better self-report measures has persisted because of the interest in finding a nonstressful and easily administered technique for obtaining performance information.
One of the more comprehensive and successful self-report measures of intelligence has been the Typical Intellectual Engagement (TIE) scale developed by Goff and Ackerman (1992). The premise behind this scale is that knowledge is accumulated over time through effort and motivated engagement in learning. It is therefore believed that this measure will better reflect daily behavior because it constitutes a test of typical intellectual performance. This is distinguished from a test of maximal intellectual performance, such as the SAT, on which individuals can be assumed to bring their full cognitive resources to bear in order to attain the best possible outcome.
The TIE scale has been instrumental in integrating measures of the components of Ackerman's PPIK theory, a multimethod approach to understanding intellectual functioning that integrates intelligence-as-process, personality, interest, and intelligence-as-knowledge (Rolfhus & Ackerman, 1999). By assessing each of these domains, Rolfhus and Ackerman attempted to get a better approximation of the contribution of each to scores on knowledge and intelligence tests. Participants' general mental ability was assessed using a composite of verbal, mathematical, and spatial abilities. Their personalities and interests were assessed using standard measures of the Big Five personality traits, interests, and typical intellectual engagement. Subjects also completed a battery of tests measuring their knowledge in a wide variety of domains including humanities, sciences, civics, and mechanics. This study demonstrated that a substantial higher-order Knowledge factor emerges from factor analysis of the knowledge domains, accounting for approximately 50% of the variance in domain knowledge. Further analyses showed that this general factor was significantly correlated with crystallized intelligence, which was represented by a composite of verbal ability tests. This suggests that the general knowledge factor is highly related, but not identical, to crystallized intelligence. These findings also suggest that a substantial part of the variance in knowledge test performance remains to be predicted by more domain-specific influences, such as interests and personality. For instance, Extraversion was shown to be negatively related to all but one of the domain knowledge tests, with Openness to Experience and Typical Intellectual Engagement demonstrating significant positive relationships across the knowledge domains. Measures of interests also proved to be related to domain knowledge scores, but were more specific with regard to matching content domains.
Realistic interests were related to mechanical knowledge domains, Investigative interests were mostly related to science domains, and Artistic interests were most highly related to knowledge domains that reflected the humanities.
As with the domain of motives, one finds that combining tests of cognitive ability with measures taken from other domains, and thus other methods, maximizes our ability to predict important outcomes. One of the best multimethod studies relating multiple measures of intelligence, knowledge, interests, and personality to real-world performance outcomes was Project A (Campbell, 1985). Borman, White, Pulakos, and Oppler (1991) analyzed data from 4,362 first-term soldiers in nine U.S. Army jobs. Subjects were assessed for cognitive ability using the ASVAB, as well as with job knowledge, dependability, and achievement orientation measures developed for the study. To assess performance, hands-on proficiency measures and supervisory ratings were taken, and the number of disciplinary actions and awards received were recorded. Path modeling demonstrated that although achievement orientation and dependability made independent, albeit small, contributions to supervisory ratings, the effect of general mental ability on supervisory ratings of job performance was completely mediated by job knowledge, whose effect was in turn mediated by task proficiency. Further, dependability was positively related to job knowledge and negatively related to disciplinary actions. Achievement orientation was positively related to the number of awards a soldier received. The resulting model shows that although general mental ability has a strong impact on job knowledge, and job knowledge is substantially related to task proficiency, it is by no means the largest contributor to supervisors' ratings of job performance. Personality factors, and outcomes associated with them, also make significant direct contributions to supervisory ratings.
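The full-mediation pattern in the Project A path model can be illustrated with a toy simulation. The path coefficients below are hypothetical, chosen only to reproduce the qualitative structure (ability → job knowledge → task proficiency → ratings), not Borman et al.'s estimates: ability predicts ratings on its own, but its regression coefficient shrinks toward zero once the mediators are held constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000  # simulated soldiers (sample size is arbitrary)

# Hypothetical causal chain with assumed path weights.
ability = rng.normal(size=n)
knowledge = 0.6 * ability + 0.8 * rng.normal(size=n)
proficiency = 0.6 * knowledge + 0.8 * rng.normal(size=n)
ratings = 0.5 * proficiency + 0.9 * rng.normal(size=n)

def beta(y, predictors):
    """OLS coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# Total effect of ability on ratings, ignoring the mediators ...
b_total = beta(ratings, [ability])[0]
# ... versus its direct effect with both mediators controlled:
b_direct = beta(ratings, [ability, knowledge, proficiency])[0]
print(f"total effect {b_total:.2f}, direct effect {b_direct:.2f}")
```

The direct effect lands near zero while the total effect does not, which is the regression signature of complete mediation described in the study.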
There are many different approaches to the study and measurement of general mental ability. The most successful, and consequently the most widely used, draw on measures from across content domains to gain a fuller representation of the cognitive functioning required in reasoning, decision making, and other thought processes. Alternative approaches such as information-processing techniques using elementary cognitive tasks have proved successful as indicators of general mental ability, but only when they are assessed and aggregated across modalities, content domains, and tasks. Other alternatives, such as self-report measures of intelligence and intellectual interest, have shown promise as indicators of general mental ability, but may be best suited to offering a more integrated picture of how basic brain processes, working memory, and personality may be related to real-world outcomes in intellectual functioning.