
ture. Also, theoretically, there is little reason to assume that people's attitudes are only a function of processes captured by self-report. Nonverbal measures of physiological responses and overt behavior may tap other aspects of a person's attitude. Furthermore, each of the three components is a hypothetical, unobservable construct, and as such "no single measure can be assumed to capture its full nature" (p. 1193). The more the assessment of each component is achieved through multiple and maximally distinct methods, the more measurement errors will cancel out. As measurement method overlap increases, measurement error can accrue, producing a misleading picture of the attitude construct.

The two studies reported by Breckler (1984) took especially effective advantage of a multimethod approach. In both studies the attitude object was the domain of snakes. In the first study, participants completed four measures of affect (a Thurstone affect scale, the positive affect and negative mood subscales of the Mood Adjective Check List, and heart rate) while in the presence of an actual, live snake. The behavioral component was first measured by asking participants to engage in a series of increasingly close physical contacts with the snake. They were also shown a series of slides of various snakes and asked how close they would be willing to get to each type of snake, as a behavioral intention. Finally, they completed a Thurstone scale that was adapted to tap behavioral intentions. The cognitive component was measured with a Thurstone cognition scale, a semantic differential, and a participant-coded listing of favorable and unfavorable thoughts. Covariance structure analysis favored the tripartite model: with the exception of heart rate, all the measures loaded most highly on their respective factors. Furthermore, the three factors were correlated with each other, but only moderately so. Using multiple methods to tap each component reduced the likelihood that overlapping measurement error would exaggerate the sense that the three components were highly correlated. Thus, the independence between components was given a better chance to emerge.

Breckler took further advantage of multimethod assessment strategies in Study 2. He tested his supposition that measuring the three attitude components using only one type of measure would reduce the apparent independence between components. He reasoned that using a paper-and-pencil measure for all components would enhance the likelihood that all responses, even to behavioral questions, would actually be determined by participants' "verbal knowledge system." Participants in Study 2 were asked to imagine the presence of a live snake (rather than responding to an actual snake) and completed verbal report versions of the nonverbal measures used in Study 1 (in addition to the other verbal measures). Covariance structure analysis again suggested that the three-factor model was superior to the one-factor model; however, compared to Study 1, the magnitude of this difference was small. Breckler (1984) argued that using the same method to measure the three components, as well as an imagined stimulus, "lead to an overestimate of correlations among affect, behavior, and cognition" (p. 1202). In Study 1 the average correlation among the components was .55, whereas in Study 2 it was .83.
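The inflation Breckler describes follows directly from shared method variance. A minimal simulation can make the logic concrete (the sample size, loadings, and variance values below are illustrative assumptions, not Breckler's data): three component scores driven by one latent attitude become more highly intercorrelated once a common method factor, such as all-verbal self-report, is added to each score.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
attitude = rng.standard_normal(n)  # one latent attitude drives all three components

def average_intercorrelation(shared_method):
    """Average pairwise correlation among affect, behavior, and cognition
    scores. Each score = latent attitude + unique error; when shared_method
    is True, a single common method factor (e.g., all verbal self-report)
    is also added to every score."""
    method = rng.standard_normal(n)
    scores = [attitude + rng.standard_normal(n) + (method if shared_method else 0.0)
              for _ in range(3)]
    r = np.corrcoef(scores)
    return (r[0, 1] + r[0, 2] + r[1, 2]) / 3

print(round(average_intercorrelation(shared_method=False), 2))  # ~0.50
print(round(average_intercorrelation(shared_method=True), 2))   # ~0.67
```

With equal unit variances, the expected correlation rises from .50 (attitude variance over attitude-plus-error variance) to .67 once the method factor adds shared covariance, mirroring in miniature the .55 versus .83 contrast between Breckler's two studies.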

Using an actual snake versus an imagined snake also appeared to affect participants' responses. This was especially evident in participants' responses to the negative and positive mood scales in the two studies. In Study 1, where participants reacted to an actual snake and thus were probably reporting something closer to what they would actually feel about snakes, the correlation between positive and negative mood was only -.13 (ns). In Study 2, however, where participants imagined how they would react to a snake, the correlation was -.42 (p < .01). It seems reasonable to suppose that the latter correlation reflects participants' theories of how they would react more than how they actually would react.

Breckler's two studies show what a multimethod approach to tackling a question can yield. Study 1, using multiple types of measures for each attitude component, was able to show clearly that the measures designed to tap a particular component were more highly correlated with each other than with measures designed to tap the other components. And yet, each component was correlated enough with the others to suggest that each was sufficiently linked to a broader construct of a general attitude. The importance of multiple types of measures was highlighted by the contrasting pattern of results in Study 2, which used only one type of measure, verbal self-report. The three components, under this single-method approach, seemed much less independent than in the multimethod Study 1. Interestingly, the variability in the degree of independence between the attitude components across the two studies became a springboard for understanding the nature of the attitude construct more fully. Breckler speculated about a number of factors beyond measurement overlap that might make each component associated with a distinct or similar response system, such as the degree to which a person's behavioral response toward the object is voluntary and consistent with the other components. A multimethod approach also leads naturally to the suggestion that future research should involve conceptual replication using attitude domains other than snakes. Presumably, the tripartite model would emerge across attitude domains, and variations in intercomponent consistency would hardly be a problem if the underlying reasons for such variability could be systematically tracked or introduced. Domains in which people have much more experience, those that are more concrete, and those whose responses are mediated by more than one response system are possibilities, each of which could be tested using a multimethod approach.

The measurement of prejudice. Some attitudes are more subject to socially desirable responding than others, and in such cases the high reactivity of self-report measures becomes an even greater problem. Not only are people sometimes motivated to misrepresent their attitudes for self-presentational reasons, but they may also be unaware of their true attitudes (Greenwald & Banaji, 1995). Prejudicial attitudes are prime examples. Thus, social psychologists have searched for methods besides self-report to measure such attitudes more accurately (Devine, 1989; Dovidio, Kawakami, Johnson, Johnson, & Howard, 1997; Fazio, Jackson, Dunton, & Williams, 1995; Greenwald, McGhee, & Schwartz, 1998). The most recent of these techniques is the Implicit Association Test (IAT; Greenwald et al., 1998), mentioned earlier, which aims to measure unconscious attitudes by tapping automatically evoked negative and positive associations with attitude objects. The IAT holds much promise because its procedure, which is based on reaction times, appears impervious to self-presentational motives. In addition, because the measure seems connected to a response system distinct from self-report measures, it may "reveal unique components of attitudes that lie outside conscious awareness and control" (Cunningham, Preacher, & Banaji, 2001, p. 163). The IAT, therefore, promises to measure attitudes in a less reactive way than traditional self-report measures while tapping aspects of attitudes that other measures could not capture even if they were free of reactivity concerns.
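The reaction-time logic behind the IAT can be sketched simply: responses are faster when the category pairings match a respondent's associations and slower when they conflict. The following is a deliberately simplified, hypothetical scoring function in that spirit; the actual procedure in Greenwald et al. (1998) also trims extreme latencies, log-transforms within prescribed bounds, and handles error trials, all of which this sketch omits.

```python
import numpy as np

def iat_effect(compatible_rt_ms, incompatible_rt_ms):
    """Simplified IAT effect: mean log-latency in the incompatible pairing
    block minus the compatible block. A positive value indicates slower
    responding when the pairing conflicts with the respondent's
    associations (i.e., a larger implicit preference)."""
    compat = np.log(np.asarray(compatible_rt_ms, dtype=float))
    incompat = np.log(np.asarray(incompatible_rt_ms, dtype=float))
    return incompat.mean() - compat.mean()

# Hypothetical latencies (ms): responses roughly 150 ms slower in the
# incompatible block yield a positive effect.
print(iat_effect([650, 700, 620, 680], [800, 850, 770, 830]) > 0)  # True
```

Because the score is built from latency differences rather than endorsed statements, a respondent has little opportunity to adjust it strategically, which is the basis for the claim that the IAT resists self-presentational motives.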

Recent research using the IAT has taken advantage of multimethod approaches as researchers compare and contrast the reliability and validity of the IAT with other measures. A study by McConnell and Leibold (2001) is an interesting example. Participants completed the IAT and explicit (self-report, semantic differentials, and feeling thermometer) measures of prejudice and then later met with a White and then a Black experimenter in a structured social interaction. Videotapes of these interactions were coded for a number of specific prejudicial behaviors. In addition to these codings, each experimenter also made global ratings of the participants' prejudicial behavior.

Unlike in some previous work (Greenwald et al., 1998), the IAT and the explicit measures were moderately correlated with each other (r = .42, p < .01). Both types of measures were correlated with prejudiced reactions, but the IAT was correlated with both the experimenters' global ratings and the coders' ratings, whereas the explicit measures were correlated only with the experimenter ratings, even though coder ratings and experimenter ratings were correlated with each other.

The multimethod approach taken in this study allowed a number of important points to be made. First, the moderate correlation between the implicit and the explicit measures suggested that they measure overlapping but distinct constructs. This picture was further reinforced by the pattern of correlations between these two measures and the multiple measures of prejudiced reactions. Both implicit and explicit measures were correlated with experimenter ratings, but only the implicit measure was correlated with coder ratings as well, suggesting that the IAT can predict prejudiced reactions in a way that explicit measures cannot. Prior research by Dovidio et al. (1997) indicates that only implicit measures of prejudice correlate with the kind of nonverbal behavior coded for in the present study. Nonverbal behaviors are under less conscious control than verbal speech (Babad, Bernieri, & Rosenthal, 1989; Ekman & Friesen, 1969), so it makes sense that the IAT, billed as a measure more closely linked to unconscious processes, should correlate with nonverbal behaviors. Only by including both implicit and explicit measures of attitudes, and both nonverbal codings and global ratings of prejudicial behavior, could this more complex sense of how prejudicial attitudes operate and predict behavior have emerged.

The virtues of multimethod approaches are also evident in what was not done in the McConnell and Leibold study. Of course, any study has limitations, and McConnell and Leibold listed a number of features of their procedure that call for replicating the results in a way that rules out alternative explanations or adds to their generalizability. The design of the study had participants interact with the Black experimenter close on the heels of completing the measures of prejudicial attitudes, making conscious racial attitudes more accessible and enhancing the likelihood of attitude-behavior consistency (e.g., Fazio, Powell, & Williams, 1989). McConnell and Leibold (2001) speculated that their procedure made it more likely that the Black experimenter would be categorized as "Black," also making participants' racial attitudes more predictive of their behavior toward the Black experimenter (Smith, Fazio, & Cejka, 1996). Replicating the study without this proximity between attitude measurement and the measurement of prejudicial reactions would test these possibilities. McConnell and Leibold also speculated that one reason their study found a correlation between the IAT and the explicit measures is that their participants completed the IAT after completing the explicit measures. Prior work by Greenwald et al. (1998), in which no correlation emerged, placed the IAT before the explicit measures. The IAT probably sensitized participants to the issue of racial attitudes and thus may have heightened self-presentational motives. Examining this issue more systematically through a replication that varies the order of completing these measures is clearly a necessary step. We can add another multimethod suggestion: the McConnell and Leibold procedure might also have benefited from stimulus sampling. Only one White experimenter and one Black experimenter were used, and thus it is quite possible that idiosyncratic features of either or both experimenters introduced confounds.
