Development of Clinical Psychology Measures

Many strategies have been advanced for the development of reliable and valid measures and for the evaluation of these properties of existing measures. These strategies include content validation, exploratory and confirmatory factor analysis, item performance characteristics and item-response theory strategies, internal consistency, temporal stability, and convergent and discriminant validity, among others (e.g., Haynes & O'Brien, 2000, Table 11-1).

Inferences about the reliability, validity, and item-level performance of measures used in clinical psychology are often based on estimating the degree to which variance in the measure of interest is associated with variance in another measure (e.g., shared variance with a gold-standard measure of the same construct, variance of an item with an aggregation of items measuring the same construct, or variance of a measure with itself across time, settings, and sources). These strategies provide information, usually in the form of correlations or estimates of shared variance, about how much confidence we can have that the measure truly measures what it is supposed to measure (Messick, 1995): for example, how much confidence can we have in using the measure to make clinical judgments about the characteristics and causes of a client's problems or the effectiveness of the client's treatment?
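One of the variance-based strategies mentioned above, internal consistency, can be made concrete with a small simulation. The sketch below is hypothetical (the data, scale size, and noise levels are illustrative assumptions, not drawn from this chapter): each item is modeled as a person's true score plus item-specific error, and Cronbach's alpha is computed from the item variances relative to the total-score variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_items = 500, 10

# Hypothetical item responses: each item = true score + item-specific noise.
true_score = rng.normal(size=(n_people, 1))
items = true_score + rng.normal(scale=1.0, size=(n_people, n_items))

def cronbach_alpha(x):
    """Cronbach's alpha: internal consistency of a set of items.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
    """
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

With these assumed noise levels, alpha comes out high (around .90), illustrating how shared variance among items aggregating the same construct is summarized as a single reliability estimate.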

These estimates of shared variance can be difficult to interpret, especially when the two measures share the same method of measurement. If both measures use the same method (e.g., two self-report measures of depression), the shared variance reflects both construct variance and method variance. It is then impossible to know whether a correlation of .90 between the first and second self-report measure of depression reflects strong convergent validity, strong method effects, or some combination of the two. Interpretation of shared variance between monomethod measures is particularly difficult when the measures contain semantically similar items; for this reason, monomethod research provides limited information about the construct validity of measures.
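The confound described above can be demonstrated with a second small simulation (again hypothetical; the construct, method, and error weights are illustrative assumptions). Two self-report scores are generated from a true depression construct plus a shared method factor; the observed correlation between them is high even though much of the shared variance comes from the method, not the construct.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

construct = rng.normal(size=n)   # true depression level
method = rng.normal(size=n)      # shared self-report method effect

# Both measures load on the construct AND on the common method factor.
measure_a = construct + 0.8 * method + rng.normal(scale=0.5, size=n)
measure_b = construct + 0.8 * method + rng.normal(scale=0.5, size=n)

r = np.corrcoef(measure_a, measure_b)[0, 1]
shared_variance = r ** 2
print(f"r = {r:.2f}, shared variance = {shared_variance:.2f}")
```

Under these assumptions the correlation lands near .87, yet the construct accounts for only about half of each measure's variance (1 unit of construct variance out of 1 + 0.64 + 0.25 = 1.89 total). Nothing in the correlation alone distinguishes the construct contribution from the method contribution, which is precisely why monomethod correlations are ambiguous evidence of convergent validity.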
