Reporting a Response

The final step in making a self-reported judgment is to communicate that judgment to the investigator. Anything that impedes the accuracy of this communication will affect the validity of the report. For instance, when reporting on illegal or other socially undesirable behaviors, participants may simply decide not to tell the truth. Alternatively, when asked to respond using a Likert response scale, respondents may attempt to provide an accurate response, but different respondents may use the scale differently, resulting in unwanted method variance. As with all the other steps in the process of constructing a self-reported judgment, we must first ask what can go wrong when communicating a response. We can then go on to investigate the evidence that such errors do occur and the impact that these errors have on the validity of self-report measures.

Perhaps the most widely studied issue in the communication of self-reported judgments is the extent to which socially desirable responding distorts the validity of self-report measures. At its simplest, socially desirable responding can be defined as the tendency to endorse items that others would consider to be positive. Early work in the area focused on social desirability both as a property of items or scales and as an individual difference variable (Edwards, 1957; Messick, 1960; Wiggins, 1964). Edwards (1953, 1957), for instance, demonstrated that the probability that respondents would endorse an item could be predicted by the degree to which the trait or characteristic in the item was socially desirable. Researchers used this finding to argue that participants were not responding to the content of the items, but rather to the desirability of the items (see Hogan & Nicholson, 1988; Nicholson & Hogan, 1990, for a discussion). An alternative possibility, of course, is that desirable characteristics are, in fact, more common than undesirable ones (Edwards, 1953).

What is more troublesome for researchers interested in self-report methodology is that the tendency to endorse socially desirable responses varies across individuals, and this individual difference tends to correlate moderately to strongly with measures of adjustment. Messick (1960), for instance, showed that the tendency to respond in a socially desirable manner was reliably correlated with several clinical and personality scales. Such findings have led to the question of whether individual differences in personality and adjustment scales reflect individual differences in socially desirable responding to a greater extent than they reflect the content the scale developers intended to measure.

Attempts to understand and control for social desirability are complicated by the fact that most modern researchers believe that social desirability is not a single, unidimensional construct. Instead, most current models focus on a two-factor structure that may underlie the various measures of social desirability (Paulhus, 1984). The first of these factors reflects an intentional attempt to present oneself in a favorable light. Paulhus labeled this individual difference impression management. He contrasted individual differences in this conscious process with individual differences in self-deception. According to Paulhus, self-deception is a largely unconscious process that reflects respondents' belief that they are better than objective information would suggest.

Several theorists have offered suggestions on how to deal with the unwanted variance that socially desirable responding adds to scale scores (Block, 1965; Edwards, 1957; Nederhof, 1985; Paulhus, 1981). These suggestions vary depending on which aspect of social desirability one wants to control. For instance, some researchers have noted that socially desirable responding seems to be more pronounced in face-to-face interviews than in mail surveys or other more anonymous formats (e.g., Richman, Kiesler, Weisband, & Drasgow, 1999; Strack, Schwarz, Chassein, Kern, & Wagner, 1990). If so, the impact of social desirability may be reduced by ensuring anonymity. However, this strategy may work better for the more conscious process of impression management than for the more unconscious process of self-deception.

In addition, there are various statistical techniques and questionnaire construction techniques that researchers can use to limit the effect of social desirability. Paulhus (1981) organized these methods into three categories: rational, covariate, and factor-analytic techniques. Rational techniques focus on developing scales in which it is difficult to determine which items or responses are socially desirable or in which all items are matched for desirability (e.g., forced choice items in which respondents are asked to choose between two equally desirable responses can lessen the impact of social desirability).

The second strategy for dealing with social desirability is the use of covariation techniques (Paulhus, 1981). These methods require the administration of some measure of socially desirable responding in addition to the content scales of interest. If social desirability adds unwanted variance to a measure, then it should act as a suppressor variable. Thus, by first controlling for the effects of social desirability, the correlation between a self-report and an outcome or criterion variable should increase (Paulhus, 1981). However, the usefulness of this technique may vary depending on which aspect of social desirability one is measuring. A number of researchers have argued that the self-deception aspect of social desirability is related to measures of adjustment, and controlling for individual differences in self-deception may remove valid variance (McCrae & Costa, 1983; Paulhus, 1984).
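The suppressor logic behind the covariation technique can be illustrated with a small simulation. The sketch below is purely hypothetical: it assumes social desirability adds only contaminating variance to the self-report (the condition under which partialing helps), and all variable names and parameter values are illustrative, not drawn from any cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical generative model: a latent trait plus an independent
# social-desirability bias that contaminates the self-report.
trait = rng.normal(size=n)
sd_bias = rng.normal(size=n)                 # measured SD scale (assumed pure bias)
self_report = trait + sd_bias                # report contaminated by SD
criterion = trait + rng.normal(size=n)       # external criterion (e.g., spouse report)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Zero-order validity of the contaminated self-report
r_raw = corr(self_report, criterion)

# Covariation technique: residualize the self-report on the SD measure
beta = corr(self_report, sd_bias) * self_report.std() / sd_bias.std()
corrected = self_report - beta * sd_bias
r_corrected = corr(corrected, criterion)

print(r_raw, r_corrected)  # corrected validity is higher under these assumptions
```

Note that the correction improves validity here only because the simulated SD measure carries no trait-relevant variance. If, as McCrae and Costa (1983) and Paulhus (1984) argue for self-deception, the SD measure overlaps with substantively meaningful variance, the same residualization would remove valid variance and could lower the corrected correlation instead.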

To test this possibility, McCrae and Costa (1983) compared corrected and uncorrected self-reports of personality with the external criterion of spouse reports. If social desirability distorts test scores, then the corrected self-reports should correlate more strongly with the spouse reports than the uncorrected self-reports. However, their results indicated that correcting for social desirability failed to improve the validity of self-reported personality. Instead, McCrae and Costa sometimes found lower correlations between corrected self-reports and the criterion variables. This pattern of findings suggests that controlling for social desirability may remove meaningful variance from test scores.

The third approach to dealing with social desirability is useful when extracting factors from an item (or scale) correlation matrix (Paulhus, 1981). Early research on social desirability focused on the extent to which the factors that emerged when a broad array of personality and adjustment scales were factor analyzed represented content factors versus social desirability (e.g., Block, 1965; Messick, 1991). Paulhus (1981) argued that because socially desirable responding will affect most items, the first unrotated factor that emerges from a factor analysis will reflect social desirability (Paulhus gives strategies for verifying this). If so, the first factor could be dropped, the item communalities adjusted, and the remaining factors rotated in any way that the researcher feels is appropriate. Presumably, this would result in factors that are free from the influence of socially desirable responding.
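The drop-the-first-factor strategy can be sketched with a toy correlation matrix. The construction below is an assumption for illustration only: six hypothetical items share a general social-desirability loading plus two content factors, and extraction is done by simple eigendecomposition rather than any particular factor-analysis package.

```python
import numpy as np

# Hypothetical loadings: a general social-desirability factor touching every
# item, plus two content factors that each touch half of the items.
sd_load = np.full(6, 0.6)
content = np.zeros((6, 2))
content[:3, 0] = 0.5
content[3:, 1] = 0.5
L = np.column_stack([sd_load, content])

# Implied item correlation matrix (unit variances on the diagonal)
R = L @ L.T
np.fill_diagonal(R, 1.0)

# Principal-axis-style extraction: eigendecompose R, largest factors first
vals, vecs = np.linalg.eigh(R)          # eigh returns ascending order
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]
loadings = vecs * np.sqrt(vals)

# The first unrotated factor loads positively on every item, consistent
# with Paulhus's candidate social-desirability factor.
first = loadings[:, 0] * np.sign(loadings[:, 0].sum())
print(np.all(first > 0))  # True in this construction

# Drop the first factor; the remaining factors would then be rotated
# and interpreted as content factors.
content_loadings = loadings[:, 1:3]
```

In this toy case the uniform positive loadings on the first unrotated factor are built in by construction; with real data, Paulhus's verification strategies would be needed before equating that factor with social desirability.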

There are two major types of effects that researchers examine when looking at the role of social desirability in self-reported assessment: the effect of social desirability on the criterion validity of a measure and the effect of social desirability on the underlying factor structure. Researchers have debated the pervasiveness and importance of these effects for decades (see, e.g., Block, 1965; McCrae & Costa, 1983; Messick, 1991; Rorer, 1965; Smith & Ellingson, 2002). However, in a recent series of studies within the organizational literature, Ellingson and her colleagues provided evidence that neither of these two types of effects tends to be large. Ellingson, Sackett, and Hough (1999) asked participants to complete personality inventories under two separate instructions, an honest condition and a "fake-good" condition. Ellingson et al. then corrected the faked scores for social desirability and compared corrected reports with the honest reports. They found that the corrected mean scores on the personality scales were closer in value to the honest scores, but that the validity of the scales (as indicated by the correlation between the corrected and honest scores) was not improved after correction. In addition, when examining the implications for selection procedures in an organizational context, they concluded that "applying a correction made little difference in the proportion of correct selection decisions across various selection scenarios" (p. 163). Ellingson, Smith, and Sackett (2001) also examined the effects of social desirability on the factor structure of personality scales by using multigroup confirmatory factor analysis across groups of high and low socially desirable responders. Social desirability had very little effect on the factor structure of the measures (although other studies have found such effects; see Ellingson et al., 2001, for a review).

Social desirability is not the only process that can affect the communication of self-reported judgments. Researchers have also focused on such response styles and response sets as acquiescence (the tendency to answer "true" or "yes"), deviance (the tendency to give strange or unusual responses), or extreme responding (the tendency to use extreme numbers). Anastasi (1988) noted that, like research on social desirability, debate about these response sets and styles has focused on the extent to which these individual differences reflect irrelevant versus meaningful trait variance. Although debate about the pervasiveness of these response processes continues, researchers should be aware that these effects may influence the communication of self-reports and take steps to avoid them or measure their impact. Of course, as the other chapters in this volume make clear, multimethod research is one of the best ways to overcome the problems associated with communicating self-reported judgments.
