Techniques Against Common Errors and Problems

Every coin has two sides: the great advantage of making assessment materials available to a large worldwide audience via the Internet also means that the collected information may become accessible to many people. There is evidence that confidential data are often openly accessible (an estimate runs at 25%-33%, which is a cause for concern) because of configuration errors on the part of the researcher that are easily made in certain operating systems (Reips, 2002b). Several measures help avoid this problem: (a) choosing the right (secure) combination of operating system and Web server, (b) using a pretested system to develop and run the Web-based assessment, and (c) having people with good Internet knowledge test the Web-based assessment for security weaknesses.
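One such security test can be partly automated. As a minimal sketch (the directory layout and file names are assumptions, not part of the measures described above): on POSIX systems, a common configuration error is leaving collected data files world-readable inside the Web server's document tree, and a small script can flag such files before a study goes online.

```python
import os
import stat

def world_readable_files(directory):
    """Return the names of regular files in `directory` whose permission
    bits grant read access to 'other' users (a common misconfiguration
    when data files live inside a Web server's document tree)."""
    exposed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            mode = os.stat(path).st_mode
            if mode & stat.S_IROTH:  # world-readable bit set
                exposed.append(name)
    return exposed
```

Such a check covers only one class of configuration error; testing the running Web server itself (e.g., for directory listings of the data folder) remains necessary.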

In dealing with multiple submissions, which may become a problem in highly motivating study scenarios (see the description of game-based Web experiments in Three Web-Based Assessment Methods, this chapter), one can use techniques for avoiding them and techniques for controlling the respondents' behavior (Reips, 2002c). Avoidance of multiple submissions can be achieved, for instance, by limiting participation to members of a group known to the researcher, such as a class, an online participant pool, or an online panel (Göritz, Reinhold, & Batinic, 2002), and by working with a password scheme (Schmidt, 1997). A technique that helps control multiple submissions is the sub-sampling technique (Reips, 2000, 2002b): For a limited random sample drawn from all data sets, every possible measure is taken to verify the participants' identity, resulting in an estimate of the total percentage of multiple submissions. This technique can also help estimate the number of wrong answers by checking verifiable responses (e.g., age, sex, occupation). Applications for Web-based assessment may include routines that check for internal consistency and search for answering patterns (Gockenbach, Bosnjak, & Göritz, 2004). Overall, it has repeatedly been shown that multiple submissions are rare in Internet-based research (Reips, 1997; Voracek, Stieger, & Gindl, 2001), and that data quality may vary with a number of factors (e.g., whether personal information is requested at the beginning or end of a study, Frick et al., 2001; information about the person who issues the invitation to the study, Joinson & Reips, in press; or whether scripts are used that do not allow participants to leave any items unanswered and, therefore, cause psychological reactance, Reips, 2002c).
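The sub-sampling technique and a simple answering-pattern check can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the `verify` function stands in for the exhaustive identity checks described above, and straight-lining (the same response to every item) is used as one example of a pattern an automated consistency routine might search for.

```python
import random

def estimate_multiple_submissions(records, verify, sample_size, seed=42):
    """Sub-sampling estimate: draw a random subsample of the data sets,
    verify each sampled participant's identity exhaustively (delegated
    here to the caller-supplied `verify` placeholder), and use the rate
    of failed verifications as an estimate for the whole data set."""
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    flagged = sum(1 for record in sample if not verify(record))
    return flagged / len(sample)

def is_straight_lined(answers):
    """Flag an answer vector in which every item received the same
    response, one pattern an internal-consistency routine might check."""
    return len(answers) > 1 and len(set(answers)) == 1
```

Because only the subsample is verified, the effort stays bounded while still yielding an estimate of the overall rate; the precision of that estimate naturally depends on the subsample's size.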
