Ability Tests

David Lubinski

Annually, millions of military personnel, students, and workers are evaluated with the aid of ability tests for educational opportunities, differential training, and promotion. Yet the attributes assessed by these instruments, and the extent to which they are distinguished from other assessments (e.g., achievement tests and measures of more circumscribed competencies), have been a source of confusion and contention ever since the advent of ability testing (Campbell, 1996; Cleary, Humphreys, Kendrick, & Wesman, 1975; Cronbach, 1975; Thorndike & Lohman, 1990). In addition, there are hundreds of measures purporting to assess human abilities, and although the magnitude of redundancy in this area has been acknowledged for over 75 years (Kelley, 1927) and continues to receive attention (Lubinski, 2004), each distinct measure typically carries a unique name and is often welcomed as bringing a fresh approach to ability testing. Most recently, new formulations of emotional, multiple, and practical intelligence have added complexity to this state of affairs. Happily, however, modern methods and findings can bring considerable clarity and parsimony to ability, achievement, and competency testing. They are the topic of this chapter.

This chapter is organized into four sections: (a) the organization of cognitive abilities and measures thereof, (b) evaluating the constructs assessed by ability tests, (c) approaches to validation, and (d) augmenting the construct validation process through similar and different modalities. Across these sections, convergent and discriminant validity are stressed as tools for isolating common and distinct constructs. In addition, two complementary albeit underappreciated concepts are underscored: extrinsic convergent validity (Fiske, 1971) and incremental validity (Sechrest, 1963). The former is especially useful for ascertaining when two measures are conceptually equivalent and empirically interchangeable (reducing scale redundancy), whereas the latter is particularly helpful for evaluating whether an innovative measure captures criterion variance not already accountedted for by existing instruments (constituting a genuine scientific advance).
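To make the latter concept concrete, one standard formalization (sketched here for illustration; the notation is not from the original chapter) quantifies incremental validity as the hierarchical-regression gain in squared multiple correlation when a candidate measure $x_{\text{new}}$ is added to an established battery $x_1, \dots, x_k$ predicting a criterion $y$:

$$
\Delta R^{2} \;=\; R^{2}_{y \cdot x_{1} \dots x_{k},\, x_{\text{new}}} \;-\; R^{2}_{y \cdot x_{1} \dots x_{k}} .
$$

On this reading, a new measure constitutes a genuine advance only when $\Delta R^{2}$ is reliably greater than zero; conversely, a purportedly novel measure that adds essentially nothing ($\Delta R^{2} \approx 0$) while sharing the external correlates of an existing scale is, by the logic of extrinsic convergent validity, a candidate for treatment as empirically interchangeable with it.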
