When one is asked to complete an exercise such as this, acceptance is always accompanied by a sense of the ridiculous, even arrogance. To write about the whole field of intellectual disabilities and the evidence base for its treatments in one chapter, by two individuals, cries out for better judgement. Treatments for the range of problems associated with any generic client group - autistic spectrum disorders, behavioural difficulties, anxiety, depression, substance abuse, aggression, sexual problems, relationship and social difficulties - should all be addressed. Specific treatment approaches such as pharmacological treatments, the cognitive therapies, psychotherapy, social skills training, social problem solving, educational approaches, the vast range of behavioural therapies and the newer alternative therapies are all subsumed within the generic topic of intellectual disabilities. Intellectual disability is not equivalent to anxiety or depression, nor even to schizophrenia, in that no one is kidding anyone that they can find 'a cure'. Therefore no one is trying - the notion is ridiculous. Clinicians are directed instead to concomitant difficulties, some of which have an increased incidence in intellectual disabilities, others of which occur just as they would in other populations. One wonders whether it would be seen as transparently ridiculous to write a chapter on the evidence base for treatments developed for members of MENSA, the society for those with superior intellect. (Do we detect one or two of you raising an eyebrow at the possibility of a new research field?) Similarly, no one is looking for a cure for giftedness. One is left with two options: either refuse in a fit of pique, in which case intellectual disabilities are not represented in the volume, or accept, with the accompanying feelings of arrogance and grandiosity. Needless to say, we have taken the latter course.

Handbook of Evidence-based Psychotherapies: A Guide for Research and Practice. Edited by C. Freeman & M. Power. Copyright © 2007 John Wiley & Sons, Ltd.

When the notion of evidence-based practice was first introduced, it was treated with a very cynical eye by the authors. As aspiring clinical psychologists in the late 1960s and 1970s, we regarded the notion of linking treatment approaches to experimental methods as axiomatic. Therefore, we felt that the whole essence of clinical practice within psychology was fundamentally based upon experimentally validated approaches, and that the notion of evidence-based practice was both belated and irrelevant. The first issues of the essential therapeutic journals contained evidence-based work on intellectual disabilities. Wolf, Risley & Mees (1964) used operant conditioning treatments to improve sleep problems, temper tantrums and other behaviour problems in an autistic child; the improvements were still evident at a six-month follow-up assessment. Berkowitz, Sherry & Davis (1972) taught self-feeding skills to 14 profoundly disabled boys aged between nine and 17 years; using reinforcement and fading procedures they successfully taught the boys, who were still feeding themselves independently 41 months later. Miller, Patton & Henton (1972) used behaviour modification techniques to improve independent standing and self-feeding in a profoundly retarded child. Behaviour therapies have always insisted on establishing the effectiveness of procedures using a variety of increasingly ingenious experimental designs. In this study, Miller et al. used a simple reversal design to demonstrate that the procedures employed made a genuine difference to the child: when the behavioural procedures were removed, the improvements disappeared. Therefore the therapeutic procedures promoted his feeding and standing. As we shall see later, behavioural therapies have increased exponentially in scope and complexity, but the science underpinning them has remained as good as that in any therapeutic field and better than most.

Had the behavioural therapies not been able to validate themselves with a sound scientific base, they would (rightly) have been dismissed. An evidence base is essential to the investigation of any therapy. However, looking across therapies for this client group, one quickly realises that this view is smug. Therapies spring up right, left and centre, and can be carried forward more effectively by evangelical zeal than by scientific rigour. Indeed, when dealing with a class of individuals who are generally devalued, as the current chapter does, one becomes aware that they may be vulnerable to anyone who takes a genuine interest in them, however idiosyncratic. It has therefore become extremely important to review the evidence base for the range of mainstream and alternative therapies currently used in the field of intellectual disabilities.

Evidence-Based Practice (EBP)

There is a tension between EBP and tailored psychological treatment, a tension at the heart of many psychological therapies including the most scientifically validated group of therapies - the behaviour therapies. Good psychological treatment is tailored to individual requirements. Individual assessments and a functional analysis unique to that person will produce hypotheses about the personal, environmental and societal influences that both cause and maintain the problem. Therefore, ideally, it should be impossible to group individuals into a generic class suffering from, say, anxiety or aggression. Behavioural therapies and some cognitive therapies in intellectual disabilities have developed on the basis of individual case studies. On the other hand, EBP requires a formatted protocol for treatment that can be used across a group of individuals randomly selected from the population of individuals who have that problem. This formatted approach can then be tested against a control group of similarly randomly selected individuals. Clearly this is the antithesis of idiographic formulation and treatment, and it is an essential contradiction in much of the scientific literature on treatment effectiveness.

Deductive and Inductive Science

Most psychologists are brought up on one kind of scientific method: hypothetico-deductive science. This approach is characterised by the development of null and alternative hypotheses, which are used to predict future results. Hence, hypothetico-deductive science begins with theory and then proceeds to data. For example, in the case of outcome studies we usually hypothesise that there will be a difference between the treatment and other groups on some measure, and then inspect the data after they have been collected to test our theory. This approach to science is drilled into psychologists from the first year of undergraduate study and is the dominant approach to science. Indeed, the double-blind cross-over placebo trial is viewed by many as the sine qua non of experimental approaches to treatment evaluation. But it is not the only approach. Several sciences are not experimental: astronomy and palaeontology have a hard time manipulating independent variables.

An alternative approach is inductive science. Inductive science is characterised by beginning close to the data. Inductive science makes observations - many observations - and then manipulates independent variables that might influence the phenomenon of interest. It repeatedly asks the question, 'I wonder what would happen if...' (Chiesa, 1994, p. 153). Induction also refers to making statements of generality based on many specific instances. For example, observations of the effects of withdrawing a reinforcer from a previously reinforced behaviour in a pigeon, a rat, a marmoset and a human, and of withdrawing a reinforcer from previously reinforced human play, writing, talking and crying, might together permit one to induce some generalisation about the nature of extinction. This might constitute a general law of science (Chiesa, 1994).

Group and Small N Designs

The methods most commonly associated with hypothetico-deductive science are group experiments and analyses of variance. Great weight is placed on statistical significance as the arbiter of which observations are important. These approaches were originally developed for use in agricultural research, but are they good for research with people? Increasing the weight of an average potato is an important outcome for a farmer and for someone selling fertiliser, but who is interested in the average person?

Group designs have important limitations. First, the logic of group designs requires that samples are drawn randomly from a population. The results of the sample are then generalised to the population of interest. However, most group research neither defines the population of interest nor draws randomly from that population and hence is unable to generalise the results to the undefined population. Instead, evaluating psychological treatment by group designs is hobbled by sequences of studies drawn from many different samples of convenience that are representative of no population of interest. When effects are not replicated between experiments, it is unclear if the results reflect differences in samples, variable outcome of treatment or interactions between treatment and sample characteristics.

Group designs are limited in a more serious way. Enslaved to statistical significance and the mythical average subject - a client none of us has ever worked with - group designs often ignore the clinical or practical significance of the treatment outcome. Statistical significance is relatively easy to achieve with large groups; indeed, trivial and unimportant differences can easily reach statistical significance if the group size is large enough and the measures reliable. Yet behind the differences between average subjects lie large individual differences in treatment outcome, and interactions between individual differences and treatment effects. Although the scores of the majority of subjects may increase, others may not change and still others may decrease. Hence, a statistically significant F test is of little comfort to either the client or the therapist if the person seeking help responds minimally, not at all, or adversely to the best available treatment. Clients and therapists alike are not truly interested in statistically significant changes in average subjects; they are interested in large and meaningful changes in multiple outcome measures in the client sitting in the office today!
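The point about statistical versus clinical significance is easy to demonstrate. The following simulation is a hypothetical sketch (all numbers are invented, and a simple two-sample z-test stands in for the usual F test): two groups differ by a clinically meaningless 0.1 points on a 100-point scale, yet the p-value collapses as the group size grows.

```python
import math
import random

random.seed(0)

def z_test(a, b):
    """Two-sample z-test (normal approximation; adequate for large groups)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    z = (ma - mb) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return ma - mb, p

# A clinically trivial true effect: 0.1 points on a 100-point scale (sd = 10).
for n in (25, 10_000, 1_000_000):
    treated = [random.gauss(50.1, 10) for _ in range(n)]
    control = [random.gauss(50.0, 10) for _ in range(n)]
    diff, p = z_test(treated, control)
    print(f"n per group = {n:>9,}: mean difference = {diff:+.3f}, p = {p:.4f}")
```

With a million subjects per group the trivial difference reliably reaches 'significance', even though no client would notice it; in small groups the same true effect is typically undetectable.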

Single-subject research designs involve a different approach to science. Rather than pursuing statistical significance, single-subject research seeks to demonstrate a replicable and consistent functional relationship between an independent variable and client behaviour. If a therapist or experimenter can turn a behaviour on and off by systematically applying and withdrawing an independent variable, and can observe a systematic change in the client's behaviour, then we can say that we have truly identified a controlling independent variable (Baer, Wolf & Risley, 1968). Hence, reversal, multiple baseline and other single-subject designs can show causal relationships between independent and dependent variables. Single-subject experimental designs can be clearly differentiated from case studies, including case studies with data: single-subject designs demonstrate a functional relationship between treatment and outcome, whereas case studies do not.
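The on-off logic of a reversal design can be sketched in a few lines of code. The session counts below are entirely hypothetical (not drawn from any study cited here), but they show how an A-B-A-B sequence supports a causal claim.

```python
import random

random.seed(1)

def sessions(mean_rate, n=5):
    """Simulate n sessions of a hypothetical target behaviour (responses per session)."""
    return [max(0, round(random.gauss(mean_rate, 1))) for _ in range(n)]

# A = baseline (independent variable withdrawn), B = treatment (applied).
phases = [
    ("A (baseline)", sessions(mean_rate=2)),
    ("B (treatment)", sessions(mean_rate=9)),
    ("A (withdrawal)", sessions(mean_rate=2)),
    ("B (reinstatement)", sessions(mean_rate=9)),
]

means = []
for label, data in phases:
    mean = sum(data) / len(data)
    means.append(mean)
    print(f"{label:<18} {data}  (mean {mean:.1f})")

# The reversal logic: the behaviour rises in each B phase and falls back in
# each A phase, so the treatment, not time or maturation, is responsible.
functional_relationship = means[1] > means[0] and means[2] < means[1] and means[3] > means[2]
print("Functional relationship demonstrated:", functional_relationship)
```

This is exactly the logic Miller et al. used: removing the procedures reversed the gains, and reinstating them restored the behaviour.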

Single-subject experimental designs by themselves do not directly address the social significance of behaviour change: it would be possible to have experimental control over an effect of trivial magnitude. Single-subject designs do address social significance through a variety of methods known as social validity (Wolf, 1978). Social validity can be demonstrated using ratings of the importance of behaviour change from the client and from significant others in the environment. Sometimes comparative data from the behaviour of other typically functioning people can be used to evaluate the social significance of change. For example, if one wanted to increase the time on task of children with mild intellectual disabilities, one might observe the on-task behaviour of typical children in the classroom and use their data to indicate the range of typical performance in that environment. If the intervention results in children with intellectual disabilities spending either less or more time on task than typical children, then one would not be satisfied with the treatment outcome.
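The normative-comparison approach to social validity described above might be sketched as follows (all on-task percentages are invented for illustration):

```python
# Hypothetical on-task percentages observed for six typical peers in the classroom.
typical_on_task = [62, 71, 55, 68, 74, 60]
lo, hi = min(typical_on_task), max(typical_on_task)

def socially_valid(score, lo, hi):
    """A socially valid outcome falls within the typical range - not merely 'more'."""
    return lo <= score <= hi

for label, score in [("pre-treatment", 20), ("post-treatment", 65), ("over-corrected", 95)]:
    print(f"{label:<15} {score}% on task -> within typical range ({lo}-{hi}%): "
          f"{socially_valid(score, lo, hi)}")
```

Note that a score above the typical range also fails the test: spending far more time on task than one's peers is as socially anomalous as spending far less.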


Given the caveats noted above - and with apologies to the host of outstanding researchers and therapists in intellectual disabilities who would have done it differently or better - we will proceed with our review of EBP in intellectual disabilities. This chapter first reviews the results of consensus panels on therapies with people with intellectual disabilities. The next section reviews the results of several generic meta-analyses. The subsequent sections review the evidence base for four commonly used psychological therapies for people with intellectual disabilities: behaviour therapy, cognitive therapy, counselling and sensory therapies.
