Critique of Clinical Judgment

A strength of clinical judgment is that mental health professionals can make use of a wide range of information. Automated assessment programs and present-day statistical prediction rules generally make use of limited information, for example, results from a single psychological test. In contrast, mental health professionals can make judgments after reviewing all of the information that is normally available in clinical practice. As noted earlier, in seven of the eight studies that found clinicians to be substantially more accurate than mechanical prediction rules (Grove et al., 2000), clinicians had more information available than did the mechanical prediction rules.
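To make the contrast concrete, the sketch below shows what a mechanical prediction rule can look like in its simplest form: a fixed formula or cutoff applied to a small, predetermined set of inputs. The test name, cutoff, and scores are hypothetical and chosen only for illustration; they are not taken from any published rule.

```python
# Illustrative sketch of a mechanical (actuarial) prediction rule.
# All values are hypothetical; the point is only that the rule uses a fixed,
# limited set of inputs and maps them to a decision the same way every time.

def mechanical_rule(test_score: float, cutoff: float = 65.0) -> str:
    """Classify a case from a single test score using a fixed cutoff."""
    return "predicted impairment" if test_score >= cutoff else "no predicted impairment"

# The rule never consults interview notes, history, or behavioral observations,
# whereas a clinician reviewing a case may weigh any of that information.
print(mechanical_rule(72.0))  # -> predicted impairment
print(mechanical_rule(58.0))  # -> no predicted impairment
```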

Mental health professionals can make reliable and valid judgments if they are careful about the information they use, if they avoid tasks that are so difficult that reliable and valid judgments cannot be made, and if they are careful in how they make their judgments (Garb, 1998). For example, they can make reliable and valid diagnoses if they adhere to diagnostic criteria. Similarly, they can make moderately valid predictions of violence.

The focus of this section is on the limitations of clinical judgment. Results from empirical studies reveal that it can be surprisingly difficult for mental health professionals to learn from clinical experience. That is, a large body of research contradicts the popular belief that the more experience clinicians have, the more likely it is that they will make accurate judgments. Numerous studies have demonstrated that when different groups of clinicians are given identical sets of information, experienced clinicians are no more accurate than less experienced clinicians (Dawes, 1994; Garb, 1989, 1998; Garb & Boyle, in press; Garb & Schramke, 1996; Goldberg, 1968; Wiggins, 1973; also see Meehl, 1997). Remarkably, these results even extend to comparisons of mental health professionals and graduate students in mental health fields. These results, along with results on the value of training, will be described first; afterward, the reasons clinicians have trouble learning from experience will be discussed.

Experience and Validity

The validity of judgments will be described for presumed expert versus nonexpert clinicians, experienced versus less experienced clinicians, clinicians versus graduate students, and graduate students followed over time. Also described will be research on illusory correlations. Results from all of these studies describe the relations among presumed expertise, experience, and validity.

For the task of interpreting objective and projective personality test results, alleged experts have been no more accurate than other clinicians, and experienced clinicians have been no more accurate than less experienced clinicians (Graham, 1967; Levenberg, 1975; Silverman, 1959; Turner, 1966; Walters, White, & Greene, 1988; Wanderer, 1969; Watson, 1967). In these studies, all of the clinicians were given the same assessment information. For example, in one study (Turner, 1966), expert judges were "25 Fellows in the Society for Projective Techniques with at least 10 years of clinical experience with the Rorschach" (p. 5). In this study, different groups of judges were to use Rorschach results to describe the personality functioning of clients. Not only were the presumed expert judges no more accurate than a group of recently graduated psychologists (PhDs) and a group of graduate students in clinical psychology, they were not even more accurate than a group of "25 undergraduate psychology majors who were unfamiliar with the technique" (p. 5). In another study (Graham, 1967), one group of PhD-level psychologists had used the MMPI much more frequently than a less experienced group of psychologists. Also, the experienced group, but not the inexperienced group, demonstrated a broad knowledge of the research literature on the MMPI. In this study, as in the others, judgmental validity was not related to experience and presumed expertise.

The relation between experience and validity has also been investigated among psychiatrists. Results indicate that experience is unrelated to the validity of diagnoses and treatment decisions, at least under some circumstances (Hermann, Ettner, Dorwart, Langman-Dorwart, & Kleinman, 1999; Kendell, 1973; Muller & Davids, 1999). For example, in one study (Muller & Davids, 1999), psychiatrists who described themselves as being experienced in the treatment of schizophrenic patients were no more adept than less experienced psychiatrists when the task was to assess positive and negative symptoms of schizophrenia. In another study (Hermann et al., 1999), the number of years of clinical experience was negatively related to validity. Hermann et al. found that "psychiatrists trained in earlier eras were more likely to use ECT [electroconvulsive therapy] for diagnoses outside evidence-based indications" (p. 1059). In this study, experienced psychiatrists may have made less valid judgments than younger psychiatrists because education regarding the appropriate use of ECT has improved in recent years. If this is true, then years of clinical experience did not compensate for a lack of up-to-date training.

Results have been slightly different in the area of neuropsychology. Neuropsychologists with national reputations did better than PhD psychologists when using the Bender-Gestalt Test to diagnose organic brain damage (Goldberg, 1959) and when using the Halstead-Reitan Neuropsychological Test Battery to describe neurological impairment (Wedding, 1983). Otherwise, results in the area of neuropsychology have been similar to results obtained in the areas of personality assessment and diagnosis. For example, neuropsychologists with the American Board of Professional Psychology (ABPP) diploma have generally been no more accurate than less experienced and presumably less qualified doctoral-level neuropsychologists (Faust et al., 1988; Gaudette, 1992; Heaton, Smith, Lehman, & Vogt, 1978; Wedding, 1983).

One of the neuropsychology studies will be described. In this study (Faust et al., 1988), 155 neuropsychologists evaluated results from several commonly used neuropsychological tools (including the Halstead-Reitan Neuropsychological Test Battery). The judgment task was to detect the presence of neurological impairment and describe the likely location, process, and etiology of any neurologic injury that might exist. Clinicians' levels of training and experience were not related to the validity of their judgments. Measures of training included amount of practicum experience in neuropsychology, number of supervised neuropsychology hours, relevant coursework, specialized neuropsychology internship training, and the completion of postdoctoral training in neuropsychology. Measures of experience included years of practice in neuropsychology and number of career hours spent on issues related to neuropsychology. Status in the ABPP was used as a measure of presumed expertise. The results indicated that there is no meaningful relationship between validity, on the one hand, and training, experience, and presumed expertise, on the other.

An assumption that is frequently made, often without our even being aware that we are making it, is that clinical and counseling psychologists are more accurate than psychology graduate students. However, with few exceptions, this assumption has not been supported. In empirical studies, psychologists and other types of mental health professionals have rarely been more accurate than graduate students, regardless of the type of information provided to clinicians. This has been true when judgments have been made on the basis of interviews (Anthony, 1968; Grigg, 1958; Schinka & Sines, 1974), case history information (Oskamp, 1965; Soskin, 1954), behavioral observations (Garner & Smith, 1976; E. Walker & Lewine, 1990), recordings of psychotherapy sessions (Brenner & Howard, 1976), MMPI protocols (Chandler, 1970; Danet, 1965; Goldberg, 1965, 1968; Graham, 1967, 1971; Oskamp, 1962; Walters et al., 1988; Whitehead, 1985), human figure drawing protocols (Levenberg, 1975; Schaeffer, 1964; Stricker, 1967), Rorschach protocols (Gadol, 1969; Turner, 1966; Whitehead, 1985), screening measures for detecting neurological impairment (Goldberg, 1959; Leli & Filskov, 1981, 1984; Robiner, 1978), and all of the information that clinical and counseling psychologists normally have available in clinical practice (Johnston & McNeal, 1967).

Although mental health professionals have rarely been more accurate than graduate students, two exceptions can be described. In both instances, the graduate students were just beginning their training. In the first study (Grebstein, 1963; reanalyzed by Hammond, Hursch, & Todd, 1964), the task was to use Rorschach results to estimate IQ. Clinical psychologists were more accurate than graduate students who had not yet had practicum training, although they were not more accurate than advanced graduate students. In a second study (Falvey & Hebert, 1992), the task was to write treatment plans after reading case histories. Certified clinical mental health counselors wrote better treatment plans than graduate students in master's degree programs, but half of the graduate students had not yet completed a single class related to diagnosis or treatment planning.

Although mental health professionals were sometimes more accurate than beginning graduate students, this was not always the case. In one study (Whitehead, 1985), psychologists, first-year clinical psychology graduate students, and fully trained clinical psychology graduate students were instructed to make differential diagnoses on the basis of Rorschach or MMPI results. For example, one task they were given was to differentiate patients with schizophrenia from those with bipolar disorder. The first-year graduate students had received training in the use of the MMPI, but they had not yet received training in the use of the Rorschach. For this reason, the only Rorschach data given to beginning graduate students were transcripts of the Rorschach sessions. In contrast, the Rorschach data given to psychologists and fully trained graduate students included transcripts, response location sheets, and Rorschach scores (using the Comprehensive System Structural Summary; Exner, 1974). In general, all three groups of judges were able to make valid judgments (accuracy was better than chance), although they were significantly less accurate when the Rorschach was used as the sole source of data. A repeated measures analysis of variance indicated that accuracy did not differ across the three groups of judges, for either the Rorschach data or the MMPI data.

To learn about the relation between experience and validity, one can conduct a longitudinal study. In one study (Aronson & Akamatsu, 1981), 12 graduate students made judgments using the MMPI before and after they completed a year-long assessment and therapy practicum. All of the students had already completed a course on MMPI interpretation. To determine validity, graduate students' judgments were compared with criterion ratings made on the basis of patient and family interviews. Results revealed that validity increased from .42 to only .44 after graduate students completed their practicum. The practicum experience did not serve to improve accuracy significantly.

Studies on illusory correlations (Chapman & Chapman, 1967, 1969; Dowling & Graham, 1976; Golding & Rorer, 1972; Kurtz & Garfield, 1978; Lueger & Petzel, 1979; Mowrey, Doherty, & Keeley, 1979; Rosen, 1975, 1976; Starr & Katkin, 1969; R. W. Waller & Keeley, 1978) also demonstrate that it can be difficult for clinicians to learn from clinical experience (for a review, see Garb, 1998, pp. 23-25). An illusory correlation occurs when a person believes that events are correlated even though they really are not.

In a classic study that established the paradigm for studying illusory correlations, Chapman and Chapman (1967) hoped to learn why psychologists use the sign approach to interpret the draw-a-person test despite research that reflects negatively on its validity (Groth-Marnat & Roberts, 1998; Joiner & Schmidt, 1997; Kahill, 1984; Lilienfeld, Wood, & Garb, 2000, 2001; Motta, Little, & Tobin, 1993; Swensen, 1957; Thomas & Jolley, 1998). The sign approach involves interpreting a single feature of a drawing (e.g., size of figure, unusual eyes). It can be contrasted to the global approach, in which a number of indicators are summed to yield a total score. The global approach has a stronger psychometric foundation than the sign approach (e.g., Naglieri, McNeish, & Bardos, 1991).
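The difference between the two approaches can be illustrated with a small sketch. The indicator names and values below are invented for the example and are not taken from any published scoring system.

```python
# Hypothetical drawing indicators (1 = sign present, 0 = absent).
drawing_signs = {
    "unusual_eyes": 1,
    "large_figure": 0,
    "heavy_shading": 1,
    "missing_hands": 0,
}

# Sign approach: a single indicator is interpreted on its own.
if drawing_signs["unusual_eyes"]:
    print("Sign approach: 'unusual eyes' taken to indicate suspiciousness")

# Global approach: many indicators are summed into a single total score,
# which is then compared with norms rather than read sign by sign.
total_score = sum(drawing_signs.values())
print(f"Global approach: total score = {total_score} (interpreted against norms)")
```

The contrast matters because, as noted above, the summed score has a stronger psychometric foundation than any single sign read in isolation.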

In their study, Chapman and Chapman (1967) instructed psychologists to list features of drawings (signs) that are associated with particular symptoms and traits. They then presented human figure drawings to undergraduates. On the back of each drawing was a statement that described a trait or symptom that was said to be descriptive of the client who had drawn the picture. Undergraduates were to examine each drawing and then read the statement on the back. Afterwards, they were to describe signs that were associated with the traits and symptoms. The undergraduates were unaware that the experimenters had randomly paired the drawings and the statements on the back of the drawings. Remarkably, the undergraduates reported observing the same relations that had been reported by the clinicians.
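A small simulation makes clear why the undergraduates' reports reflect an illusory correlation: because drawings and statements were paired at random, the true association between any sign and any symptom is essentially zero. The base rates below are hypothetical; only the random-pairing logic mirrors the design of the study.

```python
import random

random.seed(0)
n = 10_000

# Each "drawing" either shows the sign (e.g., unusually drawn eyes) or not,
# and is paired at random with a statement that is suspicious or not.
sign = [random.random() < 0.3 for _ in range(n)]
symptom = [random.random() < 0.3 for _ in range(n)]

# Phi coefficient (the correlation between two binary variables).
a = sum(s and y for s, y in zip(sign, symptom))          # sign and symptom both present
b = sum(s and not y for s, y in zip(sign, symptom))      # sign only
c = sum(not s and y for s, y in zip(sign, symptom))      # symptom only
d = sum(not s and not y for s, y in zip(sign, symptom))  # neither
phi = (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
print(f"Correlation between sign and symptom under random pairing: {phi:.3f}")  # ~0.00
```

However large the sample, the computed association stays near zero; any relation observers report seeing in such data reflects their prior expectations rather than the data themselves.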

The results of the Chapman and Chapman (1967) study indicate that clinicians respond to the verbal associations of human figure drawings. For example, both clinicians and undergraduates reported that there is a positive relation between unusually drawn eyes and watchfulness or suspiciousness.

The results from the Chapman and Chapman study help to explain why clinicians continue to interpret specific drawing signs even though the overwhelming majority of human figure drawing signs possess negligible or zero validity. Psychologists believe they have observed these relations in their clinical experience, even when they have not. Along with results from other studies on illusory correlation, the results from the Chapman and Chapman study show that clinicians can have a difficult time learning from experience.

Unanswered questions remain. Do psychologists who interpret projective drawings know the research literature on the validity of specific drawing signs? Would they stop making invalid interpretations if they became aware of negative findings or would they weigh their clinical experiences more heavily than the research findings? Research on experience and validity is important because it helps us understand the problems that can occur when psychologists ignore research findings and are guided only by their clinical experiences.

Training and Validity

Empirical results support the value of training. In some, but not all, studies, clinicians and graduate students were more accurate than lay judges. In other studies, mental health professionals with specialized training were more accurate than health professionals without specialized training.

When the task was to describe psychopathology using interview data, psychologists and graduate students outperformed undergraduate students (Grigg, 1958; Waxer, 1976; also see Brammer, 2002). However, for a similar task, they did not outperform physical scientists (Luft, 1950). Additional research needs to be done to clarify whether psychologists and graduate students did better than undergraduates because of the training they received or because they are more intelligent and mature.

When asked to describe psychopathology on the basis of case history data, clinicians outperformed lay judges when judgments were made for psychiatric patients (Horowitz, 1962; Lambert & Wertheimer, 1988; Stelmachers & McHugh, 1964; also see Holmes & Howard, 1980), but not when judgments were made for normal participants (Griswold & Dana, 1970; Oskamp, 1965; Weiss, 1963). Of course, clinicians rarely make judgments for individuals who are not receiving treatment. As a consequence, clinicians may incorrectly describe normals as having psychopathology because they are not used to working with them.

In other studies, judgments were made on the basis of psychological test results. Psychologists were not more accurate than lay judges (e.g., undergraduates) when they were given results from projective techniques, such as Rorschach protocols (Cressen, 1975; Gadol, 1969; Hiler & Nesvig, 1965; Levenberg, 1975; Schaeffer, 1964; Schmidt & McGowan, 1959; Todd, 1954, cited in Hammond, 1955; C. D. Walker & Linden, 1967). Nor were they more accurate than lay judges when the task was to detect brain impairment using screening instruments (Goldberg, 1959; Leli & Filskov, 1981, 1984; Nadler, Fink, Shontz, & Brink, 1959; Robiner, 1978). For example, in a study on the Bender-Gestalt Test (Goldberg, 1959) that was later replicated (Robiner, 1978), clinical psychologists were no more accurate than their own secretaries! Finally, positive results have been obtained for the MMPI. In several studies on the use of the MMPI, psychologists and graduate students were more accurate than lay judges (Aronson & Akamatsu, 1981; Goldberg & Rorer, 1965, and Rorer & Slovic, 1966, described in Goldberg, 1968; Karson & Freud, 1956; Oskamp, 1962). For example, in a study that was cited earlier, Aronson and Akamatsu (1981) compared the ability of graduate and undergraduate students to perform Q-sorts to describe the personality characteristics of psychiatric patients on the basis of MMPI protocols. Graduate students had completed coursework on the MMPI and had some experience interpreting the instrument. Undergraduates had attended two lectures on the MMPI. Validity was determined by using criterion ratings based on family and patient interviews. Validity coefficients were .44 and .24 for graduate and undergraduate students, respectively. Graduate students were significantly more accurate than undergraduates.
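For readers who want to see the mechanics behind a statement like "graduate students were significantly more accurate," the sketch below compares two independent validity coefficients using Fisher's r-to-z transformation. The group sizes are placeholders, not the actual ns from Aronson and Akamatsu (1981), so the output illustrates the procedure rather than reproducing the original analysis.

```python
from math import atanh, erf, sqrt

def compare_independent_correlations(r1, n1, r2, n2):
    """Return (z, two-tailed p) for the difference between two independent
    Pearson correlations, using Fisher's r-to-z transformation."""
    z1, z2 = atanh(r1), atanh(r2)               # Fisher transform of each r
    se = sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
    z = (z1 - z2) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))  # normal approximation
    return z, p

# Validity coefficients from the text (.44 vs. .24); group sizes are hypothetical.
z, p = compare_independent_correlations(0.44, 200, 0.24, 200)
print(f"z = {z:.2f}, two-tailed p = {p:.3f}")
```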

The value of specialized training in mental health has also been supported. For example, neuropsychologists are more accurate than clinical psychologists at detecting neurological impairment (e.g., S. G. Goldstein, Deysach, & Kleinknecht, 1973), psychologists with a background in forensic psychology are more accurate than other psychologists when the task is to detect lying (Ekman, O'Sullivan, & Frank, 1999), and psychiatrists make more appropriate decisions than other physicians when prescribing antidepressant medicine (e.g., making sure a patient is on a therapeutic dose; Fairman, Drevets, Kreisman, & Teitelbaum, 1998).
