Overreliance on self-report data potentially biases our results and may lead us to overestimate the construct validity of our measures. Although a construct such as organizational commitment may be efficiently and accurately measured by asking respondents a series of questions about how committed they are to their organization, sole reliance on such measures restricts our ability to ensure we have captured the underlying construct. Further, we compound this reliance on a narrow range of assessments by using analyses that cannot distinguish construct, trait, or true variance from systematic error variance due to methods. When estimating the stability or reliability of our measures, we tend to lump all nonrandom error into construct variance.

Decades ago there were calls to use multiple methods to achieve construct validity in psychological research. I/O psychology struggles with these calls because much research is done in the field, where it is difficult to gain access to employees to assess their characteristics and responses even once, let alone in multiple ways. As a consequence, we argue that the field has evolved its theories to study only those variables and constructs that are measurable using self-report data gathered in surveys.

Efforts to expand measurement beyond single methods have met resistance; it is difficult to obtain low monomethod-heterotrait correlations. We suggest that this problem is somewhat intractable as long as human raters are involved, and they will be involved in I/O research for the foreseeable future.

Given this situation, what can we expect about the nature of I/O research? We account for only a portion of the total variance in our constructs, but the amount of covariance that we do account for may overestimate the true state of affairs: while we are assessing manifestations of a portion of the total construct space, we are using methods that potentially share much correlated method variance. This tends to generate strong correlations among what few multiple measures of a construct we do employ.

It seems clear there are direct and mutual influences between theory and methods. Theories that cannot be tested with available methods are given little credence and are not studied. Our methods do a very good job of assessing static traits or aggregating observations over arbitrary temporal intervals. This leads us to focus our theoretical and conceptual efforts on theories that address static questions and static analyses. A consideration of these issues suggests we need to modify both our theories and our methods: our theories should generate hypotheses about dynamic states, episodic behaviors, and fluctuations in patterns of individuals' responses, and our methods should allow us to address such variation.

As a potential solution, we propose that researchers expand their use of longitudinal designs, in particular short-term longitudinal designs that tap dynamic constructs. We also urge that within-person variance in important constructs be analyzed to determine possible implications for our theories. Our recommendations are not panaceas, but they do open possibilities for better understanding change in constructs across time. We argue that it is only through studying and understanding change that our field can be freed from the confines of static research that is adept at documenting relationships but relegates process theory to the introduction and discussion sections. Without methods that directly address process, it is difficult to parse the many possible process explanations for any given observed static relationship.
