Correlation and Association Models

The convergence of methods can be assessed by correlating the methods that are supposed to measure the same trait. Campbell and Fiske (1959) extended this idea by defining an MTMM correlation matrix. This matrix contains one indicator for each trait-method unit and allows a thorough analysis of convergent and discriminant validity by comparing several correlation coefficients (e.g., the correlations between different methods measuring the same trait versus those between different methods measuring different traits). Schmitt (chap. 2, this volume) describes this approach in more detail. Campbell and Fiske's criteria for evaluating an MTMM matrix offer researchers a valuable and widely used approach to multimethod research. The interpretation of the MTMM correlations, however, is difficult when there are differences in the reliabilities of the measures because the reliabilities limit the sizes of the correlations (Millsap, 1995b). Therefore, analyzing and interpreting latent MTMM correlations that can be estimated by latent variable models is recommended (Eid, Lischetzke, & Nussbeck, chap. 20, this volume; Rost & Walter, chap. 18, this volume).
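For readers who want to see the structure of such a matrix concretely, the following Python sketch builds an MTMM correlation matrix from simulated data with three traits and two methods. The variable names, effect sizes, and sample size are purely illustrative assumptions, not values from the sources cited above.

```python
import numpy as np
import pandas as pd

# Hypothetical illustration: three traits (T1-T3), each measured by two
# methods (M1 = self-report, M2 = peer report), one indicator per
# trait-method unit.
rng = np.random.default_rng(0)
n = 500
traits = rng.normal(size=(n, 3))    # latent trait scores
methods = rng.normal(size=(n, 2))   # shared method influences

data = {}
for t in range(3):
    for m in range(2):
        # observed indicator = trait + method influence + measurement error
        data[f"T{t + 1}_M{m + 1}"] = (traits[:, t]
                                      + 0.6 * methods[:, m]
                                      + 0.5 * rng.normal(size=n))

# MTMM correlation matrix: one row/column per trait-method unit.
mtmm = pd.DataFrame(data).corr().round(2)
print(mtmm)
# Monotrait-heteromethod cells (e.g., T1_M1 with T1_M2) index convergent
# validity; heterotrait cells index discriminant validity.
```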

The MTMM matrix was the basis for the refinement of correlation models testing specific hypotheses about the way trait and method influences are connected. Building on Campbell and O'Connell's (1967) idea that the size of the correlations between traits depends on the similarity of the methods used to measure the different traits, Swain (1975) developed a direct product model in which the correlations of an MTMM matrix are supposed to be the product of two correlations indicating convergent (correlation between methods) and discriminant (correlation between traits) validity. According to this model, the correlation Cor(Yik, Yjl) between an observed variable Yik measuring the trait i with the method k and an observed variable Yjl measuring the trait j with the method l can be decomposed in the following way: Cor(Yik, Yjl) = Cor(Ti, Tj) × Cor(Mk, Ml). The correlation Cor(Ti, Tj) represents the association between the two traits (discriminant validity); the correlation Cor(Mk, Ml) denotes the correlation between the two methods (convergent validity). Hence, the correlation between two observed variables measuring two different traits depends not only on the correlation of the traits but also on the correlation of the methods. If the same method is used to measure the two different traits, the correlation of the observed variables equals the correlation of the two traits. If the two traits are measured by two different methods, the correlation of the traits is attenuated by the correlation of the methods. Thus, the smaller the correlations between the methods, the smaller are the expected correlations of the observed variables measuring the traits by these methods.
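This decomposition implies that the complete MTMM correlation matrix can be written compactly as a Kronecker product of a method correlation matrix and a trait correlation matrix. The following sketch illustrates this for two traits and two methods; the matrices P_T and P_M and all numerical values are hypothetical choices for illustration.

```python
import numpy as np

# Hypothetical trait and method correlations for two traits and two methods.
P_T = np.array([[1.0, 0.4],   # Cor(T1, T2) = .40 (discriminant validity)
                [0.4, 1.0]])
P_M = np.array([[1.0, 0.7],   # Cor(M1, M2) = .70 (convergent validity)
                [0.7, 1.0]])

# Direct product model: Cor(Yik, Yjl) = Cor(Ti, Tj) * Cor(Mk, Ml), so the
# complete MTMM correlation matrix is the Kronecker product of P_M and P_T.
mtmm = np.kron(P_M, P_T)
print(mtmm)
# Row/column order: Y11, Y21, Y12, Y22 (first index = trait, second = method).
# For example, Cor(Y11, Y12) = 1.0 * 0.7 = .70 (monotrait-heteromethod) and
# Cor(Y11, Y22) = 0.4 * 0.7 = .28 (heterotrait-heteromethod).
```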

The Campbell and Fiske (1959) criteria can be evaluated by comparing different correlations of the direct product model (Browne, 1984; Cudeck, 1988; Marsh & Grayson, 1995). Considering the simplest example with two traits and two methods, these criteria can be evaluated as follows: (a) The correlation between the two methods, Cor(M1, M2), should be large, indicating convergent validity. (b) The monotrait-heteromethod correlations [e.g., Cor(Y11, Y12)] should be higher than the heterotrait-heteromethod correlations [e.g., Cor(Y11, Y22)]. Expressed in terms of the direct product model: Cor(T1, T1) × Cor(M1, M2) > Cor(T1, T2) × Cor(M1, M2). Because Cor(T1, T1) = 1, this criterion is fulfilled whenever the correlation between the traits is smaller than 1. (c) The monotrait-heteromethod correlations [e.g., Cor(Y11, Y12)] should be higher than the heterotrait-monomethod correlations [e.g., Cor(Y11, Y21)], that is, Cor(T1, T1) × Cor(M1, M2) > Cor(T1, T2) × Cor(M1, M1). Because Cor(T1, T1) = Cor(M1, M1) = 1, this requirement is fulfilled when Cor(M1, M2) > Cor(T1, T2) and, more generally, when the between-method correlations are larger than the between-trait correlations. (d) The pattern of trait interrelationships should be the same across the submatrices of the MTMM matrix (see Schmitt, chap. 2, this volume), comparing all possible method combinations. This requirement is always fulfilled when the direct product model is appropriate for the data because, in a heteromethod block, all trait correlations are weighted by the same method correlation, for example, Cor(M1, M2), ensuring that the ratio of two trait correlations is the same across all mono- and heteromethod blocks taken into consideration.
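Using the same hypothetical values as in the sketch above, criteria (b) and (c) can be verified directly from the decomposition:

```python
# Checking criteria (b) and (c) with the hypothetical values used earlier.
cor_T12 = 0.4   # Cor(T1, T2)
cor_M12 = 0.7   # Cor(M1, M2)

monotrait_heteromethod = 1.0 * cor_M12        # Cor(Y11, Y12) = .70
heterotrait_heteromethod = cor_T12 * cor_M12  # Cor(Y11, Y22) = .28
heterotrait_monomethod = cor_T12 * 1.0        # Cor(Y11, Y21) = .40

assert monotrait_heteromethod > heterotrait_heteromethod  # criterion (b)
assert monotrait_heteromethod > heterotrait_monomethod    # criterion (c),
# fulfilled here because Cor(M1, M2) = .70 > Cor(T1, T2) = .40
```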

Browne (1984) extended the direct product model to the composite direct product model, which also takes measurement error influences into account. Wothke and Browne (1990) have shown how this model can be formalized as a model of confirmatory factor analysis (see also Dumenci, 2000). The direct product models are attractive because their parameters are closely linked to Campbell and Fiske's (1959) criteria. Their application is most useful when the expected MTMM correlations follow the proposed structure. They are, however, also limited. For example, they do not imply a partition of the variance into separate trait and method portions (Millsap, 1995b). Moreover, the models assume that the correlations between traits are the same for all monomethod blocks. This means, for instance, that the correlations between traits measured by self-report must equal the correlations between the same traits assessed by peer report, which is a restrictive assumption. Finally, these models are based on single indicators for each trait-method unit, making the appropriate determination of reliability difficult.
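As a rough illustration of how measurement error enters, the following sketch attenuates the direct product correlations by assumed indicator reliabilities. It conveys the idea behind the composite direct product model but is not meant to reproduce Browne's (1984) exact parameterization; all reliability values are assumptions.

```python
import numpy as np

P_T = np.array([[1.0, 0.4], [0.4, 1.0]])   # hypothetical trait correlations
P_M = np.array([[1.0, 0.7], [0.7, 1.0]])   # hypothetical method correlations
latent = np.kron(P_M, P_T)                 # error-free direct product part

# Assumed reliabilities of the four indicators (order: Y11, Y21, Y12, Y22).
rel = np.array([0.90, 0.80, 0.85, 0.75])
h = np.sqrt(rel)                           # attenuation factors

observed = np.outer(h, h) * latent         # attenuated off-diagonal correlations
np.fill_diagonal(observed, 1.0)            # unit diagonal of a correlation matrix
print(observed.round(2))
```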

Correlation methods are most appropriate for metrical variables because the convergence between two methods can be represented by a single value. For categorical (nominal, ordinal) variables, other coefficients are needed that take the categorical nature of the data into account, because the convergence between methods can differ across the single categories of a variable. Consider, for example, two raters: There might be high agreement for one category (i.e., whenever one rater chooses this category, the other rater chooses the same category) but low agreement for other categories. Because researchers are often less familiar with association models for categorical data than with classical correlation analysis, these methods are explained in more detail in Fridtjof Nussbeck's chapter (chap. 17, this volume). He shows how association coefficients for categorical data can be defined and how loglinear modeling can be used to test specific hypotheses about association and agreement with respect to categorical data.
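The point that agreement can vary across categories can be illustrated with a small sketch. The ratings below are invented, and the category-specific agreement index shown is only one simple possibility; proper association coefficients and loglinear models are discussed in Nussbeck's chapter.

```python
import numpy as np
import pandas as pd

# Hypothetical categorical ratings of ten targets by two raters.
rater1 = np.array(["low", "low", "mid", "high", "mid",
                   "low", "high", "mid", "low", "mid"])
rater2 = np.array(["low", "low", "mid", "mid", "mid",
                   "low", "mid", "mid", "low", "high"])

# Cross-classification (agreement) table of the two raters.
print(pd.crosstab(rater1, rater2))

# Category-specific agreement: of all targets that either rater placed in a
# category, the proportion that both raters placed there.
for cat in ["low", "mid", "high"]:
    both = np.sum((rater1 == cat) & (rater2 == cat))
    either = np.sum((rater1 == cat) | (rater2 == cat))
    print(cat, round(both / either, 2))
```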
