Conclusions

Computerized assessment has progressed greatly in the past few decades. Dumb terminals connected to a mainframe computer have been replaced by multimedia personal computers with tremendous computational power, high-resolution color monitors, stereo sound, and full-motion video. These capabilities allow test developers to assess individual differences in ways that were impossible just a few years ago. The innovative assessments described in this chapter provide an indication of the variety of new tests. Certainly, the years ahead will see a proliferation of innovations.

Olson-Buchanan et al.'s (1998) Conflict Resolution Skills assessment provides an example of how computerized assessment has progressed. Their initial version of the test, created in the early 1990s, required a laserdisc player and IBM's M-Motion Video Adapter board, both of which cost more than $1,000 and were awkward to use. A skilled programmer spent months writing a Pascal program to play the video clips. The second version of the assessment replaced the laserdiscs with CDs, which had become standard by the mid-1990s. A critical issue, however, was whether a computer's video adapter card was fast enough to play full-motion video; some were, some were not. This version was developed with a specialized computer-based training package, which took months to learn and proved to be very unstable. The third and current version is a Microsoft Access application; the program was written in a few days. Access is a very stable platform: the frequent crashes of the previous version have given way to a program that runs reliably. Moreover, virtually all currently available computers have CD drives and video adapter cards that play video smoothly.

There are many directions for future work in computerized assessment. Many new computerized tests will be developed to assess individual differences that are difficult to measure with paper-and-pencil multiple-choice items. As software tools improve, assessments using virtual reality may become widespread. The limits of assessment may come to lie in the imaginations of test developers rather than in computer hardware and software.

Concomitantly, work on scoring computerized assessments will be critical. Interestingly, researchers investigating computerized scoring of essays (Powers, Burstein, Chodorow, Fowles, & Kukich, 2002) recently challenged skeptics to beat their algorithm. A professor of computational linguistics provided the most successful entry; his bogus essay fooled the software into giving a spuriously high score. In general, however, the computer software was surprisingly effective in producing scores similar to those of human raters.
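To make the idea of machine-human agreement concrete, the sketch below computes two simple agreement indices, a Pearson correlation and an exact-agreement rate, for a set of essay scores. The data and the 1-6 rubric are hypothetical, invented for illustration; this is not the scoring algorithm evaluated by Powers et al. (2002).

```python
# A minimal sketch of quantifying agreement between automated and human
# essay scores. All scores below are hypothetical; this is not the
# algorithm evaluated by Powers et al. (2002).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores on a 1-6 essay rubric.
human   = [4, 5, 3, 6, 2, 4, 5, 3]
machine = [4, 5, 4, 6, 2, 3, 5, 3]

exact = sum(h == m for h, m in zip(human, machine)) / len(human)
print(f"Pearson r: {pearson_r(human, machine):.2f}")
print(f"Exact agreement: {exact:.0%}")
```

High values on indices of this kind are what is meant by the software producing scores similar to those of human raters; operational essay-scoring systems are evaluated against large samples of human ratings rather than the eight essays shown here.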

In sum, computerized assessment has made great strides during the past four decades. Computer hardware and software can now implement the creative visions of test developers. Many new advances seem likely in the near future.

ures" (p. 357). Such equivalence is important when a test or scale was validated using paper-and-pencil administration samples: If the computer version does not produce scores that are equivalent, it must be revalidated for this administrative medium.

Many studies have examined the equivalence of computerized tests and their paper-and-pencil counterparts. From these studies we know that the nature of the test and the features of the administration format can threaten measurement equivalence. In a meta-analysis of cognitive ability tests, Mead and Drasgow (1993) showed that computer-based speeded tests were not equivalent to their paper-and-pencil counterparts, whereas carefully developed power tests were. Similarly, Richman, Kiesler, Weisband, and Drasgow's (1999) meta-analysis suggests that test format and test environment affect responses to noncognitive tests. For example, when respondents lack anonymity, cannot revise their responses, and are tested in a group setting, they are more likely to distort their responses on the computer version in a socially desirable direction. These results suggest that the more similar a computer-based test is to its paper-and-pencil counterpart, the more likely it is that scores are equivalent across media.
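As a concrete illustration, one common first step in an equivalence check is to compare score distributions across the two administration media with a standardized mean difference (Cohen's d), one of the effect sizes aggregated in meta-analyses of this kind. The sketch below uses hypothetical samples; a real equivalence study would also examine reliabilities, correlations with external criteria, and item-level statistics.

```python
# A minimal sketch of a cross-medium equivalence check using a
# standardized mean difference. The score samples are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    """Pooled standard deviation of two independent samples."""
    va = sum((x - mean(a)) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mean(b)) ** 2 for x in b) / (len(b) - 1)
    n = len(a) + len(b) - 2
    return (((len(a) - 1) * va + (len(b) - 1) * vb) / n) ** 0.5

def cohens_d(a, b):
    """Standardized mean difference between two samples."""
    return (mean(a) - mean(b)) / pooled_sd(a, b)

computer = [52, 48, 55, 61, 47, 53, 58, 50]  # hypothetical test scores
paper    = [51, 49, 54, 60, 48, 52, 57, 51]

print(f"Cohen's d = {cohens_d(computer, paper):.2f} "
      "(values near zero are consistent with equivalence)")
```

A d near zero does not by itself establish equivalence, but a substantial d, like those Mead and Drasgow (1993) found for speeded tests, is clear evidence against it.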

Some individuals may be disadvantaged by the use of computers as an administration medium. Anxiety has been shown to be related to a broad range of performance criteria, and some respondents experience greater anxiety during a computer-administered test (Llabre et al., 1987), as do individuals who are unfamiliar with computers (Rosen & Maguire, 1990). Research has been inconclusive, however, as to whether computer anxiety affects performance on computer-based tests (Dimock & Cormier, 1991; Shermis & Lombard, 1998). As computers become ever more prevalent in society, lack of computer familiarity may become a nonissue.
