The term cognitive task analysis (CTA), sometimes referred to as cognitive job analysis, has been defined in various ways and is associated with numerous methodologies. Generally, CTA refers to a collection of approaches that purport to identify and model the cognitive processes underlying task performance (Chipman, Schraagen, & Shalin, 2000; Shute, Sugrue, & Willis, 1997), with a particular focus on the determinants of expert versus novice performance for a given task (Gordon & Gill, 1997; Means, 1993). Although the term CTA first emerged in the late 1970s, the field has grown substantially in the last decade, and some authors seem to have forgotten that most methodologies are adapted from the domain of cognition and expertise (see Olson & Biolsi, 1991, for a review of knowledge representation techniques in expertise). Instead, CTA is sometimes treated as if it evolved entirely on its own (Annett, 2000). The value added by CTA is not that it represents a collection of new activities for analyzing performance, but that it represents the application of cognitive techniques to the determination of expert versus novice performance in the workplace, with the goal of facilitating high levels of knowledge and skill (Lesgold, 2000).
CTA is often contrasted with behavioral task analysis. Whereas the former seeks to capture the unobservable knowledge and thought processes that guide behavior (i.e., how people do their jobs), the latter seeks to capture observable behavior in terms of the actual task activities performed on the job (i.e., what people do on their jobs). Proponents of CTA claim that due to the increasing use of technology in the workplace, jobs are becoming increasingly complex and mentally challenging, necessitating a more cognitive approach to the analysis of job tasks (e.g., Gordon & Gill, 1997; Ryder & Redding, 1993; Seamster, Redding, & Kaempf, 2000); thus, it is believed that behavioral task analysis methodologies may be inadequate for capturing how people perform in jobs that require cognitive skill. However, separating the unobservable cognitive functions of a job from the observable behavioral functions may limit the usefulness of the overall analysis, and both types of information are often necessary for a complete understanding of the tasks involved (Chipman et al., 2000; Gordon & Gill, 1997; Shute et al., 1997). Therefore, rather than being considered a replacement for behavioral task analysis, CTA should be considered a supplement, because neither method alone may be able to provide all of the information necessary for analyzing how an individual performs his or her job (Ryder & Redding, 1993).
At the same time, situations probably exist in which CTA is not necessary for fully understanding task performance. Because approaches to CTA are generally time-consuming, labor-intensive, and expensive endeavors (Potter, Roth, Woods, & Elm, 2000; Seamster et al., 2000), it would be wise to first consider the nature and purpose of the analysis before choosing a CTA methodology over a different job analysis methodology. Although most examples of CTA have been conducted for highly complex jobs (e.g., air traffic controllers, air force technicians; Means, 1993), some investigations have been conducted for more commonplace jobs outside of the military domain (e.g., Mislevy, Steinberg, Breyer, Almond, & Johnson, 1999, for dental hygienists; O'Hare, Wiggins, Williams, & Wong, 1998, for white-water rafting guides; Hoffman, Shadbolt, Burton, & Klein, 1995, for livestock judges). It is easy to imagine the application of CTA techniques to any job that requires some degree of decision-making or cognitive skill; again, however, such analysis may not be necessary in order to gain an understanding of what constitutes effective performance.
As with traditional types of job analysis, CTA methodologies abound, and although they share the common goal of understanding the cognitive processes that underlie performance, there is little comparative information available as to which methods are appropriate under different circumstances and for different job settings (Chipman et al., 2000). (Seamster et al., 2000, do provide suggestions for which methods are appropriate for different skill domains.) In addition, there appears to be no evidence that any single approach is useful across all domains (Schraagen, Chipman, & Shute, 2000), or that different methods will result in the same data (Gordon & Gill, 1997); thus, the use of multiple approaches with multiple experts would likely yield the most meaningful information (Potter et al., 2000). Chipman et al. (2000) suggest that the following issues should be taken into consideration when choosing a CTA methodology: the purpose of the analysis, the nature of the task and knowledge being analyzed, and the resources available for conducting the analysis, including relevant personnel.
Some of the more common CTA techniques include PARI (prediction, action, results, interpretation), DNA (decompose, network, and assess), GOMS (goals, operators, methods, and selection), and COGNET (cognition as a network of tasks). Examples of techniques borrowed from the domain of expertise include interviews and protocol analysis. Information on these and other procedures is available in Hoffman et al. (1995); Jonassen, Tessmer, and Hannum (1999); Olson and Biolsi (1991); and Zachary, Ryder, and Hicinbothom (1998).
Because the use of CTA as a job-analytic technique is relatively recent, a number of issues have yet to be resolved. First, for someone new to the field of CTA, there is little documented information available concerning how to actually perform the different techniques, making replication difficult (Shute et al., 1997). In addition, the procedures are somewhat complex and difficult (Gordon & Gill, 1997), are not refined to the extent that standardized methods exist (Shute et al., 1997), and require that the analyst become familiar with the technical details of the particular domain being studied (Means, 1993). Thus, the amount of time and effort required of each individual involved in the analysis, together with the lack of information on how to conduct a CTA, potentially limits the usefulness of the procedures in operational settings. This limitation is evidenced by the small number of CTAs being performed, by a relatively small number of persons who are generally experienced in the domain of cognitive science (Seamster et al., 2000).
Second, there is little information available on how to use the data collected during a CTA—specifically, on how to go from the data to a solution, such as the design of training programs or other systems within organizations (Chipman et al., 2000; Gordon & Gill, 1997). The large quantity of data generated by a CTA makes development of a design solution even more difficult (Potter et al., 2000).
Third, there is a lack of information on the quality of the data gleaned from CTA techniques. Researchers therefore need to assess the relative strengths and weaknesses of the different techniques, to determine the conditions under which the use of each technique is optimal, and, finally, to assess the reliability and validity of the different techniques. Reliability could be assessed by comparing the results of different analysts using the same procedures, and validity assessment would involve comparing the results of multiple experts using multiple procedures (Shute et al., 1997). The lack of this kind of information is probably a result of the intensive nature of the data collection process.
To conclude, CTA represents an intriguing way of analyzing jobs. However, the lack of information available concerning the relative merits of different methodologies for conducting CTA limits applicability at present. An interesting area that is gaining in study is the application of CTA methodologies to team tasks and decision making to determine the knowledge shared by team members and how it is used to elicit effective performance (e.g., Blickensderfer, Cannon-Bowers, Salas, & Baker, 2000; Klein, 2000).