The Organization Of Cognitive Abilities

There are literally hundreds of ability tests (Carroll, 1993; Cattell, 1971; Jensen, 1980, 1998; Sternberg, 1994),¹ and a framework is needed to organize them. Over the years, proposals have been made to organize cognitive abilities within 120 categories (Guilford, 1967), seven primary dimensions (Thurstone, 1938), and one dominant dimension (Spearman, 1904). The former two abstract multiple abilities at uniform levels of molarity. However, as Snow (1986) has pointed out, as empirical evidence accrued, a clear but different picture emerged: The dimensionality and organization of human cognitive abilities is neither unitary (Anderson, 1983; Spearman, 1904, 1927) nor a collection of specific modules (Fodor, 1983; Gardner, 1983, 1993; Guilford, 1967). Rather, cognitive abilities are organized hierarchically, and tests designed to measure individual differences in cognitive abilities—when applied to a wide range of talent—have replicated this idea repeatedly (Carroll, 1993; Gustafsson, 2002; Snow & Lohman, 1989). With respect to the psychological import of dimensions within this hierarchy, dimensions at the highest level of generalization have the most referent generality (Coan, 1964), that is, the greatest breadth and depth of external relationships, whereas more molecular dimensions are relevant to fewer psychological phenomena (Brody, 1992; Cronbach & Snow, 1977; Gustafsson, 2002; Jensen, 1980, 1998).

Support for this article was provided by a Templeton Award for Positive Psychology, NICHD Grant P30HD15052 to the John F. Kennedy Center at Vanderbilt University, and a Cattell Sabbatical Award. Earlier versions of this manuscript profited from many excellent suggestions by Camilla P. Benbow.

¹ Given the scope of phenomena surrounding ability tests, there are several topics that interested readers may wish to pursue that space limitations preclude. Following the publication of Herrnstein and Murray's (1994) The Bell Curve, for example, misinformation on all sides of the debate motivated the American Psychological Association (APA) to assemble a task force and issue a report, "Intelligence: Knowns and Unknowns" (Neisser et al., 1996). In addition, Intelligence published a special issue entitled "Intelligence and Social Policy" (Gottfredson, 1997), and two special issues of Psychology, Public Policy, and Law also appeared (Ceci, 1996; Williams, 2000). Sternberg's (1994) Encyclopedia of Intelligence is an excellent resource on tests, the history of testing, and creators of major advances; Thorndike and Lohman (1990) provide an excellent treatment of the history of ability testing.

The most definitive treatment of the hierarchical organization of cognitive abilities is Carroll's (1993), wherein he reviews (and reanalyzes) over 460 factor-analytic data sets collected over most of the past century. Carroll's (1993) hierarchical (three-stratum) model contains about 60 first-order stratum I factors, eight stratum II group factors, and one general factor or general intelligence ("g") at its vertex, stratum III (see Carroll, 1993, Figure 15.1, p. 626). Snow (Gustafsson & Snow, 1997; Marshalek, Lohman, & Snow, 1983; Snow & Lohman, 1989) and his students have corroborated this hierarchical structure through a more parsimonious radex scaling model (see Figure 8.1a). A complexity dimension (general intelligence, "g," or intellectual sophistication) is found at the core, and three content domains (or more specific abilities)—quantitative/numerical, spatial/mechanical, and verbal/linguistic—surround this general dimension. In Snow's radex model, two bits of information are required to conceptualize and locate a test in two-dimensional space: complexity and content. Content and complexity are inextricably intertwined in all cognitive tests.
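To make the hierarchical claim concrete, the sketch below simulates the kind of data that produces it. Everything in it is hypothetical: the test names, content domains, and loadings are illustrative choices, not values taken from Carroll or Snow. Each simulated test score reflects a general factor, whose loading stands in for complexity, plus one of three content factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # simulated examinees

# One general factor (g) and three content factors (all hypothetical).
g = rng.standard_normal(n)
content = {c: rng.standard_normal(n) for c in ("quant", "spatial", "verbal")}

# Hypothetical tests: (name, content domain, g loading, content loading).
# Larger g loadings stand in for greater complexity.
tests = [
    ("Q1", "quant", 0.8, 0.3), ("Q2", "quant", 0.5, 0.5),
    ("S1", "spatial", 0.8, 0.3), ("S2", "spatial", 0.5, 0.5),
    ("V1", "verbal", 0.8, 0.3), ("V2", "verbal", 0.5, 0.5),
]

scores = np.column_stack([
    bg * g + bc * content[dom]
    + np.sqrt(1 - bg**2 - bc**2) * rng.standard_normal(n)  # unique variance
    for _, dom, bg, bc in tests
])

R = np.corrcoef(scores, rowvar=False)
print([name for name, *_ in tests])
print(np.round(R, 2))
# All correlations are positive (the positive manifold), pairs sharing a
# content domain correlate beyond what g alone predicts, and the
# higher-complexity tests (Q1, S1, V1) correlate most strongly with
# everything else -- the hierarchical pattern described in the text.
```

Factoring a correlation matrix with this structure yields content group factors plus a general factor above them, which is the three-stratum arrangement in miniature.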

Figure 8.1a illustrates three different types of tests (viz., "A," "B," and "C"); the letters denote similarity in content, and the subscripts denote degree of complexity (larger numbers indicate more complex tests). Figure 8.1b illustrates the parallel between the radex and hierarchical factor-analytic solutions quantitatively, whereas Figure 8.1c illustrates these parallels structurally. This model is useful for conceptualizing and organizing the overwhelming number of ability tests because it helps explain why tests covary or are psychologically close: they share content, complexity, or both.
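A small worked example, using the hypothetical loadings from the simulation above, shows how shared complexity and shared content each contribute to covariation: two tests that share only their g loadings of .8 correlate .8 × .8 = .64, whereas two tests with g loadings of .8 and .5 that also share a content factor (loadings .3 and .5) correlate .8 × .5 + .3 × .5 = .55. Either kind of overlap pulls tests closer together in the radex.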

To the extent that tests are highly correlated, they are found in close proximity in this two-dimensional space (Figure 8.1a); the smaller the distance between any two tests, the higher their correlation. Complex tests are found near the center (or centroid) of the radex, whereas less complex tests occupy the periphery. Geometrically, the radex is formed by a series of simplexes and circumplexes. First, tests located on or near lines running from the origin of the radex to its periphery form simplexes: arrays of tests sharing similar content but differing in complexity, on which correlations between tests decrease as the tests become farther apart. Second, circular bands at uniform distances from the centroid define tests of comparable complexity that vary in content; these bands form circumplexes: circles on which tests may be arrayed and on which correlations likewise decrease as the tests become farther apart. Hence, knowing a test's complexity and content (quantitative, spatial, verbal) locates it within the radex.
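The geometry just described can be illustrated with a small scaling exercise. The sketch below uses a hypothetical correlation matrix (the pattern implied by the earlier simulation, not the Marshalek et al. data), converts correlations to distances, and applies metric multidimensional scaling; Marshalek et al. (1983) used nonmetric scaling, but the logic is the same: high correlations become short distances, so complex tests gravitate toward the centroid while content domains fan out around it.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical correlation matrix for six tests in three content domains
# (Q = quantitative, S = spatial, V = verbal); subscript 1 = more complex.
labels = ["Q1", "Q2", "S1", "S2", "V1", "V2"]
R = np.array([
    [1.00, 0.55, 0.64, 0.40, 0.64, 0.40],
    [0.55, 1.00, 0.40, 0.25, 0.40, 0.25],
    [0.64, 0.40, 1.00, 0.55, 0.64, 0.40],
    [0.40, 0.25, 0.55, 1.00, 0.40, 0.25],
    [0.64, 0.40, 0.64, 0.40, 1.00, 0.55],
    [0.40, 0.25, 0.40, 0.25, 0.55, 1.00],
])

# Highly correlated tests should sit close together, so convert
# correlations to distances before scaling.
D = np.sqrt(2 * (1 - R))

# Metric MDS on the precomputed distances yields a 2-D configuration
# analogous to the radex.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

centroid = coords.mean(axis=0)
for name, xy in zip(labels, coords):
    print(f"{name}: distance from centroid = {np.linalg.norm(xy - centroid):.2f}")
# The more complex tests (Q1, S1, V1) should land nearer the centroid,
# with the simpler tests toward the periphery, grouped by content domain.
```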

Figure 8.2 is an empirical example of a radex scaling of a number of ability tests and composites formed by various aggregations of ability tests (Marshalek et al., 1983); well-known clusters of fluid abilities (Gf), crystallized abilities (Gc), and spatial visualization (Gv) are readily identified.

Figure 8.1. Parallelism between the radex and the hierarchical factor model. (a) Radex scaling for 10 tests.
