A risk procedure should, amongst other things, minimise the opportunities for the misuse and misunderstanding of risk factors. For example, it might be more appropriate to adopt the image of a 'filter', rather than a checklist, when using risk factors. Only those pieces of information, identified by risk factors as being relevant to the feared (or sought) outcomes, should be allowed into the risk assessment. If the information is purely repetitive, for example a reminder to consider any criminal history when the decision-maker has already done so, then it should not alter the assessment any further. And, when extra information is added to the risk assessment, its significance should be appreciated. We need to emphasise the quality of the information in the risk assessment, not simply the quantity.
Imagine that a risk decision has been taken, say, to grant an offender parole. No harm has resulted. Therefore it must have been a good decision. No, that does not follow! A poor decision may have been made but, fortunately, no harm has resulted. That is an example of good fortune rather than good decision-making. If we are going to justify risk-taking then we need to examine the process of decision-making, not just the product. A decision may have been made well, even when examined in retrospect with additional time and resources, but nevertheless have led to harm. Without more, that appears to have been a justifiable decision. (However, those involved may have been criticised because of the harmful outcome, before a proper assessment of the decision and decision-making process was undertaken.) By way of contrast, a decision may have been made poorly but, nevertheless, not have resulted in harm. Because no harm has resulted nobody is likely to complain. Indeed, nobody may even notice that it was a poor decision rather than a good, justifiable one.
Unfortunately, legal practice does not help here. Nobody can sue for negligence if no one has suffered a loss which can be compensated. Thus poor risk-taking practice may be overlooked. Indeed, many people will make the erroneous assumption that 'no loss' means 'no negligence' means 'good decision'. And risk-taking practice can be corrupted as people work to outcomes rather than processes, to the avoidance of harm rather than to the use of good processes.
Employers and professional bodies are, however, entitled to take action against their employees and members, respectively, for poor professional practice. But, once again, they can only do this if somebody notices that a poor process was used. So if we are to improve the quality of risk-taking decisions then we need to develop procedures that encourage good practice. And that must involve some system of feedback. We need systematic knowledge about how decisions are being taken, not just about the outcome of some decisions. This is another area for potentially productive collaboration between psychologists and lawyers. Such procedures may appear superfluous, certainly in harried practice, given the pressures of work. But they should quickly come to constitute standards of professional practice. Thus, if they are followed (provision always being made for regular improvement as we learn more), they will help to prevent litigation because the professional standards will be clearer.
The quality of the information relied upon is relevant to risk management as well as risk assessment. For example, risk assessors may conclude that they have poor quality information, or may not know how significant a particular piece of information, say gender, is in this particular case. They may have to accept that they cannot obtain more or better information, or that it is inappropriate to spend more time or other resources in obtaining it. Thus they have to take a decision. But they can, and should, take their relative ignorance (no pejorative associations intended) into account when they devise and implement a risk management plan. If they know that they lack key information then they should account for that in how they implement the decision. That a risk assessment is based upon poor quality information may be an unavoidable feature of the case, and need not imply anything critical about the quality of the assessment; but it should lead to a more tightly controlled risk management plan. Risk assessment and risk management should be related. Good risk management can justify taking a risk decision even when the risk assessment, on its own, suggested it should not have been taken.
Extensive studies have repeatedly shown that humans are poor decision-makers in many circumstances (e.g. Janis and Mann, 1977; Rachlin, 1989; Slovic, 2000). Such research is particularly pertinent to risk decisions. Key reasons for poor performance have been identified. These include problems in perception (e.g. Slovic and Monahan, 1995). For example, we tend to overestimate the likelihood of rare events. Thus we overestimate the likelihood of homicides, but underestimate that of suicides, committed by people with mental disorders. Also, when we know that something has happened, say someone has been killed, we overestimate what we would have predicted was the likelihood of that homicide occurring before we knew that it had. This is known as the hindsight error (for a discussion of some legal implications see Wexler and Schopp, 1989). This is very important because our courts, and tribunals of inquiry, work retrospectively and use hindsight. They may be aware of the problem. They may declare the importance of not relying on the benefits of hindsight, but do we, do they, know what allowance should be made for it? A risk procedure could reduce the likelihood, and/or the seriousness, of such errors of perception. For example, a risk procedure should require that decision-makers are familiar with base rate likelihoods. At the very least, should not those concerned about the dangerousness of a person with a mental disorder know, or have easy access to, data on the base rates for homicides and suicides by people with and without mental disorders? Experience suggests that if lawyers were to ask such people such questions, when they are acting as expert or professional witnesses, then the court would, currently, be met with an embarrassed silence and/or an erroneous answer.
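The importance of base rates can be made concrete with a short, purely illustrative calculation. All of the numbers below are hypothetical, invented for the sketch and not drawn from any study: the point is only that even a fairly accurate assessment which flags someone as 'high risk' for a rare outcome still implies a low absolute likelihood once the base rate is taken into account.

```python
# Illustrative only: base_rate, sensitivity and false_positive_rate
# are hypothetical values, not real epidemiological data.

def posterior(base_rate: float, sensitivity: float,
              false_positive_rate: float) -> float:
    """P(outcome | flagged 'high risk'), via Bayes' rule."""
    p_flagged = (sensitivity * base_rate
                 + false_positive_rate * (1 - base_rate))
    return sensitivity * base_rate / p_flagged

# A rare outcome (base rate 1 in 10,000) flagged by a fairly good
# instrument (90% sensitivity, 10% false positives) remains very
# unlikely in absolute terms.
print(round(posterior(0.0001, 0.90, 0.10), 4))  # → 0.0009
```

A decision-maker unfamiliar with base rates might read the instrument's 90% sensitivity as something close to a 90% likelihood of harm; the calculation shows why that reading is mistaken.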
And we make poor decisions when we have too much information. We cannot simultaneously think about each piece of information we have, its relative importance and its accuracy. Think of all the pieces of information relevant to a decision whether to risk buying a particular bottle of wine. There is colour, grape variety, country of origin, area of production, alcoholic content, price, age (if relevant), and more, plus the relative importance of each of those points to us, and to anyone else we contemplate enjoying the bottle with. Compare that risk with having to decide whether to release an offender on parole. The importance of the decision is so very much greater. Either we make decisions on only some of the information, for example the price and alcoholic content of the wine, or we develop procedures to cope with more complexity. The latter will involve reducing at least some of the information to paper (or equivalent) and concentrating on part of the problem at a time. It will often be possible to break a decision down into smaller parts, for example benefits and harms. Provided that both the analysis and the synthesis are appropriate, the information may be worked on sequentially rather than attempting to process it all simultaneously. Here is another area for urgent inter-disciplinary collaboration. Otherwise, in order to undermine or mock an expert witness, all that a cross-examining lawyer needs to do is demonstrate that the witness has claimed a super-human feat in working on lots of different pieces of information at the same time.
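The idea of reducing the information to paper and working on it sequentially can be sketched very simply. The factors, scores and weights below are entirely hypothetical, chosen only to illustrate the form such a decomposition might take; they are not a proposed parole instrument.

```python
# Hypothetical sketch: decomposing a risk decision into separately
# scored, separately weighted factors, so each piece of information
# is considered one at a time rather than all at once in the head.

factors = {
    # factor name: (score 0-1, weight) -- illustrative values only
    "criminal history":        (0.7, 0.4),
    "response to supervision": (0.4, 0.3),
    "stable accommodation":    (0.2, 0.2),
    "employment prospects":    (0.3, 0.1),
}

def weighted_score(factors: dict) -> float:
    """Combine the per-factor judgements into one overall score."""
    total_weight = sum(weight for _, weight in factors.values())
    return sum(score * weight
               for score, weight in factors.values()) / total_weight

print(round(weighted_score(factors), 2))  # → 0.47
```

The value of such a record is less the final number than the audit trail: each score and weight is written down and can be examined, challenged and improved separately, which is precisely what a cross-examining lawyer cannot do with an unrecorded, simultaneous mental weighing.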
Another feature of risk-taking, which a risk procedure needs to address, is the arrangements for communicating effectively. Risk involves variables: degrees of outcome and of likelihood. The words we use to describe these variables are vague and ambiguous. With reference to outcomes, how serious is 'serious'; how important is 'important'? It is often easier to make the point by reference to likelihood. In terms of percentages we may agree that 'certain' means 100% and 'impossible' means 0%. But what do the other words, which refer to degrees of likelihood, mean? Is, for example, something described as 'likely' expected to occur more or less often than half of the time? There is no rule, other than courtesy and the desirability of communication, that obliges us to use words in particular ways. Opinions differ. A nurse might advise a doctor that something about a patient is 'likely', implicitly meaning 75% likely, whilst the doctor 'hears' the word as meaning only 25% likely. Neither nurse nor doctor needs to be mistaken or acting in bad faith for the patient to be injured by a subsequent decision based on that information. And yet such professionals regularly communicate about risk in such terms.
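The nurse-and-doctor mismatch can be made explicit if each person's usage of a word is written down as a numeric range. The ranges below are hypothetical, standing in for the 75% and 25% readings in the example above; the sketch only shows that the two usages do not even overlap.

```python
# Hypothetical sketch: two professionals attach different numeric
# ranges to the same likelihood word. A risk procedure might require
# such ranges to be stated so that mismatches are caught.

nurse_meaning  = {"likely": (0.65, 0.85)}   # nurse means roughly 75%
doctor_meaning = {"likely": (0.15, 0.35)}   # doctor hears roughly 25%

def ranges_overlap(a: tuple, b: tuple) -> bool:
    """True if the two (low, high) ranges share any common ground."""
    return a[0] <= b[1] and b[0] <= a[1]

word = "likely"
print(ranges_overlap(nurse_meaning[word], doctor_meaning[word]))  # → False
```

A `False` here means the two professionals, while using the same word, are not communicating at all: exactly the failure the text describes.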
Is there a point to spending time and money on quality risk assessments if the conclusions are going to be communicated in such a manner? Once again it will prove very easy for a lawyer to point out, even dramatically, that two 'professionals' apparently communicating about risk in fact did not do so. Even a simple failure to check roughly how each person used and understood such vague expressions is going to appear incompetent, and negligent. The House of Lords, in Bolitho v. City and Hackney Health Authority ([1997] 3 WLR 1151), noted that courts concerned with questions of professional negligence would usually adopt and apply the standards of the profession concerned. But it reserved a right to impose its own standards if it considered the profession's standards were 'illogical':
In the vast majority of cases the fact that distinguished experts in the field are of a particular opinion will demonstrate the reasonableness of that opinion. In particular, where there are questions of the assessment of the relative risks and benefits of adopting a particular medical practice, a reasonable decision necessarily presupposes that the relative risks and benefits have been weighed by the experts in forming their opinions. But if, in a rare case, it can be demonstrated that the professional opinion is not capable of withstanding logical analysis, the judge is entitled to hold that the body of opinion is not reasonable or responsible. (p. 1160)
Whilst 'logical' might be an unfortunate choice of expression, it is submitted that a failure to ensure effective communication about likelihood could, and should, fit within this category. It is not an answer for the experts to say that they do not know, or cannot be sure about the particular likelihood. That is understandable. The complaint is not that risk inevitably involves degrees of uncertainty. The complaint is that one may be thinking: 'My best estimate of likelihood is 75%, however I am sure it will fall within 65% to 85%', whilst the other professional hears 'About 25%'. Being unsure of your knowledge may be inevitable given the state of the science, and therefore be understandable. Failing to communicate what you mean, even if you mean to be vague, is not justifiable. People can communicate about risk in better ways (e.g. Heilbrun et al., 1999). Particularly in the future, when lawyers are better educated about risk and how decisions can be taken well or poorly, it will be negligent to fail to do so.
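One such better way is simply to require that any verbal label be accompanied by an explicit best estimate and interval, of the kind given in the 75% example above. The sketch below is hypothetical; the `LikelihoodEstimate` record is invented for illustration, not drawn from any published communication protocol.

```python
# Hypothetical sketch: a likelihood is reported as a best estimate
# plus an explicit interval, rather than as a bare word like 'likely'.

from dataclasses import dataclass

@dataclass
class LikelihoodEstimate:
    best: float   # best estimate, as a proportion
    low: float    # lower bound the assessor is confident about
    high: float   # upper bound the assessor is confident about

    def __str__(self) -> str:
        return (f"best estimate {self.best:.0%}, "
                f"within {self.low:.0%}-{self.high:.0%}")

# The assessor is unsure of the science, but precise about that unsureness.
print(LikelihoodEstimate(0.75, 0.65, 0.85))
```

Note that the estimate remains uncertain; what has changed is that the uncertainty itself is now communicated precisely, which is all the text's complaint demands.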