Multimethod Assessment

Because, as social psychologists, we often focus so much of our research energy on the laboratory experiment, we may be especially attuned to between- and within-method replications of our independent variables. However, we also accept the more typical conception of multimethod research, which involves tapping into a dependent variable in multiple ways. Recognizing that any single method of measurement carries its own sources of bias and error, researchers ideally attempt to converge on the "truth" of a construct by assessing it through multiple measures, each with a different set of possible biases and errors. When an independent variable produces similar effects across, say, a self-report, behavioral, and physiological measure, we can be more confident of the validity and generalizability of our conclusions than if only one dependent measure were used.
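The logic of convergence can be illustrated with a simple simulation (our hypothetical sketch, not from this chapter): three measures of one latent construct, each contaminated by its own independent method-specific bias and error, track the construct better when combined than any one of them does alone.

```python
# Hypothetical simulation: three measures of a single latent construct
# (think self-report, behavioral, physiological), each with independent
# method bias and error. Averaging them converges toward the "truth".
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
construct = rng.normal(size=n)  # the latent "truth"

# Each measure = construct + method-specific bias/error (independent
# across methods, unit variance).
measures = [construct + rng.normal(scale=1.0, size=n) for _ in range(3)]

single_r = [np.corrcoef(m, construct)[0, 1] for m in measures]
composite_r = np.corrcoef(np.mean(measures, axis=0), construct)[0, 1]

print(f"single-measure r ~ {np.mean(single_r):.2f}")  # ~0.71
print(f"composite r      = {composite_r:.2f}")        # ~0.87
```

Because the biases are independent across methods, they partially cancel in the composite; a bias shared by all three measures would not cancel, which is precisely why method diversity, not mere measure multiplicity, is the point.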

This logic has been endorsed repeatedly and enthusiastically in methodology texts in social psychology since the publication of Campbell and Fiske's (1959) seminal article (Aronson et al., 1990; Brewer, 2000; Cook, 1993; Houts, Cook, & Shadish, 1986; West, Biesanz, & Pitts, 2000). In practice, however, there is a regrettable overreliance on self-reports in social psychology (Diener & Scollon, 2002) stemming from a combination of factors, ranging from the economy and ease of use of self-report measures to inertia and satisfaction with such measures.

The poverty of dependent variable choices evident in much social psychological research is particularly unfortunate given that the majority of the measurement approaches described in this volume are well suited for tackling social psychological hypotheses. In the section that follows, we present an overview of important methodological features of these methods as well as an assessment of their appropriateness for and typical use within social psychology.

Table 26.1 presents a critical overview of the research methods covered in this volume. We offer this table as a way for researchers to compare easily the advantages and disadvantages of a given method, in addition to pointing interested readers to examples of recent social psychological studies using a method. For each method, we offer our opinion of the primary strengths and weaknesses of the methodology and present our sense (based on up-to-date review papers where available or based on estimates taken from research laboratories prominently identified with the methodology that have provided recent psychometric data) of its typical reliability and validity.

Next, we offer our subjective assessment of three important features of each method that might affect researchers' decisions to use it: (a) the directness of the inference afforded by the method, (b) the reactivity of the method, and (c) the ease of data collection using the method. By the directness of inference, we mean essentially the tightness of the conceptual link between the dependent measure provided by a method and the hypothetical construct of interest. A direct measure is one in which there are few, if any, plausible explanations for scoring high on the measure other than the participant actually having high standing on the hypothetical construct. An indirect measure is one in which a high score can mean other things besides high standing on the construct.

All other things being equal, a method that offers a more direct inference is usually a more valid measure, but in social psychology, all other things are often not equal. One of the disadvantages of more direct methods is that the directness of inference is often inversely related to the reactivity of the measurement (Webb, Campbell, Schwartz, Sechrest, & Grove, 1981). Thus, we offer also our assessment of each method's reactivity, defined as the extent to which the methodology raises participants' awareness of the construct being assessed and consequently their ability to modify their responses. When the construct of interest involves socially undesirable behaviors (e.g., prejudice), reactivity of measurement opens the real possibility of data distorted by response biases. In such cases, researchers might reasonably opt for a messier, less-direct form of measurement than a distorted—albeit direct—answer to their questions.

When interpreting our reactivity ratings, it is important to keep in mind that the judgment of reactivity refers to whether the dependent measure creates participants' awareness of the construct of interest and hence enables them to modify their responses with respect to that construct. Thus, a measure can be "reactive" in the sense of being obvious or intrusive yet still be considered "nonreactive" if the participants are unable to modify their behavioral response despite their awareness of the measurement process.

In some cases, a given method's variations are methodologically similar enough to each other that a single judgment of reactivity can be confidently offered for the method as a whole, as with global self-assessment methods (see Lucas & Baird, chap. 3, this volume). In other cases (Bakeman & Gnisci, chap. 10, this volume), though, the method category consists of a broad range of disparate measures, some of which are highly reactive but some of which may be completely nonreactive. For categories such as these, then, we are able merely to conclude somewhat lamely that the reactivity of measurement "varies."


Testing the tripartite structure of attitudes. The measurement of attitudes is one area of research in which multimethod assessment has proven especially useful. A long-standing claim about attitudes is that they have a tripartite structure consisting of affective, behavioral, and cognitive components. Breckler (1984) pointed out that using a single method, such as self-reports, to distinguish the three attitude components may produce overestimated correlations between components, simply because of shared method variance. Thus, measuring the three components using just self-reports may mask the presence of a robust tripartite structure.
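Breckler's point about shared method variance can be made concrete with a simulation (our hypothetical illustration, not from the chapter): two attitude components with a modest true correlation appear substantially more correlated when both are assessed with the same method, because the shared method factor contributes common variance to both scores.

```python
# Hypothetical simulation: shared method variance inflates the observed
# correlation between two attitude components measured the same way.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent affective and cognitive components, truly correlated at r = .30.
true_r = 0.3
cov = [[1.0, true_r], [true_r, 1.0]]
affect, cognition = rng.multivariate_normal([0, 0], cov, size=n).T

method = rng.normal(size=n)       # method factor shared by both self-reports
e1, e2 = rng.normal(size=(2, n))  # independent measurement error

# Both components measured with the SAME method (e.g., self-report),
# so the method factor loads on both scores.
sr_affect = affect + 0.8 * method + 0.5 * e1
sr_cog = cognition + 0.8 * method + 0.5 * e2

observed_r = np.corrcoef(sr_affect, sr_cog)[0, 1]
print(f"true r = {true_r:.2f}, observed r = {observed_r:.2f}")  # ~0.30 vs ~0.50
```

With the loadings assumed here, the expected observed correlation is (0.3 + 0.64) / 1.89, or about .50, well above the true .30; measuring each component with a different method removes the shared factor and lets the correlation recover its true value, which is the logic behind multitrait-multimethod designs.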
