Self-Report Methods Need to Be Supplemented With Other Methods

Some fields in the behavioral sciences rely almost exclusively on global self-report surveys of respondents. Skeptics point to instances where self-report instruments have gone wrong and decry the numerous studies in which a bevy of self-report scales are merely correlated with one another. The proponents of the self-report technique cite the validity of self-report measures in some studies, as well as the virtue that people themselves often know information that cannot be obtained by other methods. Both views are correct—the measures are flawed and also have utility—meaning that self-report measures should be used in many studies, but must be supplemented with other types of measures.

An example of self-reports of grades, weight, and height, from a study conducted in Ed Diener's laboratory by Frank Fujita and Heidi Smith, is illustrative of the strengths and weaknesses of the method. We asked a group of 222 undergraduate respondents for their height in inches, their weight in pounds, and their grade-point averages at the university. They did not know that we would also acquire external objective measures of these variables (from a measuring tape, a balance scale, and their college transcripts) and correlate the two types of measures. How accurate were people? Respondents overestimated their height by 1.36 inches on average, overestimated their grades by .58 points on a 4.0 scale, and underestimated their weight by 6.5 pounds. The correlations between the self-reported scores and the objective indicators were extremely high for height, r = .96, and weight, r = .94, and moderate for grades, r = .41. Note that although the weight correlation is extremely high, most people underestimated their weight. Furthermore, the self-reported weight, despite its accuracy at the level of the cross-person correlation, was far off the mark for some subjects. Eight respondents out of the 146 for whom we had objective weight data underestimated their weight by more than 20 pounds! One respondent overestimated his height by 7 inches, and 11 individuals out of 197 overestimated their grade-point average by 1.5 points or more, over a third of the full range of the grade scale, which runs from 0 to 4.0! Two subjects misreported their grades by more than 2 points, equivalent to reporting an A average when one's grades are really Cs. Thus, the degree of accuracy appears to be relatively high when examining the correlations, but not so high when examining absolute accuracy or the accuracy of specific individuals.
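
As a purely illustrative sketch (the numbers below are invented and are not the study's data), this is how the two summary statistics used above, the mean signed error and the Pearson correlation between self-reported and objective scores, can be computed:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired weights (NOT the study's data): each position is one
# respondent's self-reported and objectively measured weight in pounds.
self_report = np.array([150.0, 128.0, 180.0, 142.0, 205.0, 117.0, 163.0, 171.0])
objective   = np.array([158.0, 130.0, 188.0, 149.0, 224.0, 120.0, 166.0, 180.0])

# Mean signed error: a negative value means people underestimated on average.
bias = np.mean(self_report - objective)

# Pearson correlation: indexes relative agreement across people and is
# insensitive to a constant bias shared by everyone.
r, p = pearsonr(self_report, objective)

print(f"mean signed error = {bias:+.1f} lb, r = {r:.2f}")
```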

Another interesting finding is that the underestimations and overestimations across the three domains were not correlated significantly with one another; the Pearson correlations between the three misreporting scores ranged from .05 to .11. In addition, none of the three misreporting scores came close to correlating significantly with scales of social desirability such as the Marlowe-Crowne, the Balanced Inventory of Desirable Responding scales, or the Edwards Social Desirability Scale (see Paulhus, 1991, for a review of these scales); these correlations ranged from -.02 to .13. Because the different misreporting scores did not correlate with each other, it is not surprising that they also failed to correlate with the social desirability scales, suggesting that misreporting might be particular to the domain and situation rather than a general characteristic. The best predictor of accuracy in grade estimation was having a high GPA, r = .67. In other words, people with high grades misreported their grades less than did people with low grades.
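
To make the notion of a "misreporting score" concrete, the sketch below forms a signed self-report-minus-objective difference for each of three domains and correlates those differences pairwise; the values are simulated placeholders with independently drawn errors, not the data from our study:

```python
from itertools import combinations
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 200

# Simulated reporting errors for three domains (NOT the study's numbers).
# A misreporting score is simply self-report minus objective value, so these
# arrays play that role directly.
errors = {
    "height": rng.normal(+1.4, 0.8, n),   # inches over-reported
    "weight": rng.normal(-6.5, 8.0, n),   # pounds under-reported (negative)
    "gpa":    rng.normal(+0.6, 0.5, n),   # grade points over-reported
}

# Correlate the misreporting scores pairwise across domains.
for a, b in combinations(errors, 2):
    r, _ = pearsonr(errors[a], errors[b])
    print(f"r({a}, {b}) = {r:+.2f}")
```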

The lessons from Fujita's study are manifold. First, whether self-reports are considered accurate or inaccurate will depend on the self-report content (height was more accurately reported than grades), on the purposes of the study (e.g., whether one needs a precise measure or only a general estimate), and on whether one needs an absolute measure or a relative measure. Another clear lesson is that high correlations between self-report and objective measures, even in the .90s, do not mean that the score is necessarily accurate for all purposes or for all individuals. Because the weight of college students varies greatly between individuals (from 99 to 233 pounds in our sample), even misestimates of 20 pounds might not lower the correlation between the self-report and the objective measure, because the correlation is driven by the variability of scores between individuals. Likewise, a consistent tendency toward underestimation leaves a large correlation between the two scores intact. Nonetheless, underestimations of weight by over 25 pounds could be extremely important in many situations (think of a wedding dress fitted to a body 25 pounds lighter). In addition, people's degree of exaggeration is inconsistent from one domain to another and is not necessarily predicted by scales of social desirability. We can take from this study that self-reports can be accurate or inaccurate, depending on the researcher's domain and purpose, and that much can be learned from augmenting self-reports with other types of data.
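
A small simulation (again with synthetic numbers, not our data) makes the statistical point concrete: when true scores vary widely across people and nearly everyone shades a report downward by a roughly constant amount, the self-report/objective correlation stays in the .90s even though some individual reports are off by more than 25 pounds:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 150

# Synthetic "true" weights with wide between-person variability (roughly the
# 99-233 lb range mentioned above).
true_weight = rng.uniform(99, 233, n)

# Everyone underreports by about 6.5 lb on average, with individual noise,
# and a handful of people misreport by a very large amount.
self_report = true_weight - 6.5 + rng.normal(0, 5, n)
self_report[:5] -= 25   # a few large underestimates

r, _ = pearsonr(self_report, true_weight)
worst = np.max(np.abs(self_report - true_weight))
print(f"r = {r:.2f}, largest individual error = {worst:.0f} lb")
# The correlation remains in the .90s even though some reports are off by
# more than 25 lb, because r reflects relative ordering, not absolute agreement.
```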

We extensively use self-report data in our own research, and yet we warn readers that additional forms of measurement are almost always desirable when it is possible to obtain them. Of course, the validity of self-report is not an either-or question, because validity is likely to vary across domains and across the question of "Validity for what?" We know from the work of Schwarz and Strack (1999) and others that self-report is not a simple process in which respondents generate a simple answer to a unitary question; rather, self-report involves complex inferences and mental processes. The point we will make later is that self-report responses, like the responses to all measures, need to be embedded in a theory that includes predictions of how the self-reports are generated.
