Self Report

Many organizational research results are based on data collected from employees who complete structured, self-report, paper-and-pencil or online surveys.¹ To the extent that attitude, personality, opinion, and interest constructs are best measured with self-reports (and they often are), this is an acceptable way to obtain data. As a field, however, we are limited by what appears to be an overreliance on self-reports.

Self-reports are a particular concern in organizational studies because surveys are commonly administered with management's endorsement. Employees may distort their responses not only out of ordinary social desirability, but also out of concern for confidentiality: no one wants to be caught reporting that the boss is an idiot or that colleagues are clueless. Unless a survey is clearly anonymous or its confidentiality is fully guaranteed, participants have an incentive to distort.

This creates a dilemma for researchers who attempt to diversify methodologically. Collecting data beyond self-reports requires that participants be identifiable; one cannot administer anonymous surveys if responses must be matched to data collected by other methods. The best way to minimize response manipulation is to make surveys anonymous, but anonymity makes such matching impossible.

In attempts to create alternative methods, researchers occasionally word questions somewhat differently or use different response scales (e.g., Likert, yes/?/no, or other verbal response formats). This, however, does not generate different methods: the resulting measures still share an unfortunately large number of facets. Correlations among them are artificially inflated by the common measurement operations the response formats share, and the method variance contained in each measure gets treated as construct or trait variance. In addition, self-reports tap only a portion of the construct space, the verbally accessible and socially acceptable part. The remainder of the construct space is seldom measured. This matters if the portion that is measured is not representative of the whole space; indicators of a construct may then be seriously deficient.

¹ We exclude cognitive ability tests from our analyses and restrict ourselves to self-report surveys and scales that assess preference, opinion, attitude, personality, or interest constructs. Ability assessments have measurement problems that are quite different.
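The inflation described above can be illustrated with a small simulation (a sketch with hypothetical loadings, not drawn from any real survey data): two true constructs are generated as independent, yet their self-report measures correlate because both load on a shared method factor, such as a common response style.

```python
import random

random.seed(42)
N = 10_000  # simulated respondents

def observed_scores():
    """Simulate two self-report measures that share a method factor."""
    pairs = []
    for _ in range(N):
        trait_a = random.gauss(0, 1)  # true construct A
        trait_b = random.gauss(0, 1)  # true construct B, independent of A
        method = random.gauss(0, 1)   # shared method factor (hypothetical)
        # Loadings of 0.6 on the method factor are illustrative assumptions.
        x = trait_a + 0.6 * method + random.gauss(0, 0.5)
        y = trait_b + 0.6 * method + random.gauss(0, 0.5)
        pairs.append((x, y))
    return pairs

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
    sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
    return sxy / (sx * sy)

r = corr(observed_scores())
print(round(r, 2))  # well above 0, although the true constructs are independent
```

The observed correlation here is entirely method variance: remove the shared `method` term and the two measures are uncorrelated, which is the sense in which method variance masquerades as trait variance.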
