Multiple Observations Versus Multiple Methods

The issues we have raised so far are not unique and are unlikely to take the field of I/O psychology by storm. Others have made similar points without lasting effects (e.g., Dunnette, 1966). Much research in organizations is simply too constrained by realities of working in the field to take full advantage of conducting research according to Campbell and Fiske's (1959) approach. Therefore, we would like to reconsider some of the field's chronic measurement problems from a slightly different angle.

To begin, let us consider the correlation, r12, between scores on a single assessment administered at two different time points (Time 1 and Time 2).

This correlation is affected by four factors. First, the longitudinal stability of the measured construct influences the observed correlation; this stable portion is the true score in classical test theory. Second, systematic error variance shared between Times 1 and 2 influences the observed correlation. Both of these factors cause the observed correlation to be high. The MTMM approach attempts to reduce the artificial inflation of r12 caused by systematic, correlated error variance. Through the use of multiple methods, it triangulates on true construct variance and, ideally, reduces correlated error variance to zero.

Two other factors act to reduce r12: random error and dynamic construct variance. Constructs are assumed to be stable in classical test theory; any fluctuation is assumed to be due to random error, and dynamic construct variance is lumped into the error term along with it.3 This thought experiment illustrates that if constructs vary systematically and meaningfully across time, meaningful variance is being ignored. Further detail on other measurement issues appears in a preceding chapter (Khoo, West, Wu, & Kwok, chap. 21, this volume).
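The four-factor decomposition can be illustrated with a small simulation (a hypothetical sketch with assumed variance components, not an analysis from the chapter). Scores at two occasions are generated from a stable true score, a systematic error component shared across occasions, a dynamic construct component that differs between occasions, and random error. Shared systematic error inflates the observed r12, whereas dynamic construct variance and random error attenuate it; removing the shared error, as MTMM ideally does, lowers r12 toward the correlation implied by stable construct variance alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated respondents

true_score = rng.normal(0, 1.0, n)        # stable construct variance (true score)
shared_err = rng.normal(0, 0.6, n)        # systematic error shared across occasions
dyn1, dyn2 = rng.normal(0, 0.5, (2, n))   # dynamic construct variance (changes over time)
e1, e2 = rng.normal(0, 0.5, (2, n))       # random error at each occasion

x1 = true_score + shared_err + dyn1 + e1  # observed score at Time 1
x2 = true_score + shared_err + dyn2 + e2  # observed score at Time 2
r12 = np.corrcoef(x1, x2)[0, 1]           # inflated by correlated systematic error

# Same scores with the shared systematic error removed, as MTMM ideally achieves:
y1 = true_score + dyn1 + e1
y2 = true_score + dyn2 + e2
r12_clean = np.corrcoef(y1, y2)[0, 1]     # lower: only stable construct variance is shared

print(round(r12, 2), round(r12_clean, 2))
```

Under these assumed variances the expected values are 1.36/1.86 with the shared error and 1.0/1.5 without it, so the observed stability coefficient overstates construct stability whenever systematic error is correlated across occasions.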

We propose that the field of I/O psychology needs to reduce the influence of systematic error variance, but it should also concern itself with dynamic construct variance, which artificially lowers observed correlations among constructs. Dynamic construct variance is traditionally combined with random error variance, but if constructs do change across time, ignoring this variance excludes interesting phenomena from study.

For example, one of I/O psychology's most popular constructs, job satisfaction, is typically assessed at one time and correlated with variables collected at the same or different times. Researchers conclude that job satisfaction is systematically related to other variables and constructs. However, this research enterprise assumes that job satisfaction is a stable construct that does not vary appreciably across time. Evidence suggests that this is not a safe assumption. The first study that directly questioned this assumption asked whether individuals in positive moods reported higher job satisfaction than those in neutral moods. Results showed that individuals who were placed into positive moods at the time they took a job satisfaction survey scored significantly higher on it than those whose moods were not so manipulated (Brief et al., 1995). Further evidence indicates that individuals' levels of satisfaction vary widely across times of the day and days of the week when they are asked repeatedly in a diary design (Ilies & Judge, 2002). Both studies support the conclusion that satisfaction cannot be assumed to remain stable as events and feedback from behaviors impinge on it across time.

3Most researchers estimate reliability using coefficient alpha. When reliability is computed in this way, dynamic construct variance is assigned to true variance, assuming individual items covary positively over time. To the extent that average item intercorrelation (and hence coefficient alpha) is high because items covary across time rather than across persons, dynamic construct variance will be assigned to true variance.

This does not deny that job satisfaction has a stable portion of variance, but unless this portion is large relative to its total variance, we cannot ignore dynamic fluctuations. It strains credulity to assume that the construct of job satisfaction does not vary across time and that any fluctuations are error variance.

The published stability coefficients for well-constructed measures of job satisfaction suggest, at first glance, that these measures are indeed stable across time. Coefficients vary across measures, studies, and time intervals, but stability coefficients ranging from .70 to .85 have been obtained (e.g., Smith, Kendall, & Hulin, 1969). These indicate acceptable levels of stability for traitlike measures and suggest that statelike fluctuations are minor. However, it is likely that much of the variance in job attitude scores that is treated as stable is actually systematic response variance associated with personality, concerns about the confidentiality of attitude scale responses, and other stable response artifacts. All forms of stable variance are lumped into construct variance and may significantly inflate the stability estimates of these measures.

One might argue that job satisfaction is a poor example compared to something more traitlike and stable, such as personality. However, evidence suggests that all of the Big 5 personality dimensions vary as much across time as they do across individuals (Fleeson, 2001). These estimates of within- versus between-person variance are at odds with T1-T2 reliabilities of .80 to .85. Thus, not even "traits" such as personality are safe from assumptions about stability.
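The within- versus between-person comparison behind this claim can be sketched in a short simulation (hypothetical parameters, chosen only to mirror the Fleeson-style finding that the two sources of variance are comparable in size). Each person has a stable trait level, and repeated state ratings fluctuate around it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_occasions = 500, 20

person_mean = rng.normal(0, 1.0, n_people)           # between-person (trait) differences
state = rng.normal(0, 1.0, (n_people, n_occasions))  # within-person (state) fluctuation
scores = person_mean[:, None] + state                # repeated ratings per person

# Between-person variance: variance of each person's average rating.
between_var = scores.mean(axis=1).var(ddof=1)
# Within-person variance: average variance of each person's ratings over occasions.
within_var = scores.var(axis=1, ddof=1).mean()

print(round(between_var, 2), round(within_var, 2))
```

With equal trait and state standard deviations, the two variance components come out roughly equal, which is exactly the pattern that sits uneasily beside test-retest reliabilities of .80 to .85: a single-occasion score is as much a snapshot of a state as a reading of a trait.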

This is not a trivial consideration. Our past theories about what was important to study and our past methods were, more or less, in alignment. We had theories about assumed static constructs, and we used methods best suited to studying static variables and constructs. It is not clear which came first; reciprocal influences are likely. However, it appears clear from both theoretical and empirical perspectives that although we have learned much about individuals in organizations, there is much that our methods have relegated to the trash bin of error variance that deserves to be resurrected and analyzed for lawful and consistent antecedents and consequences.
