Knowledge Representation And Inference In Diagnostic Expert Systems

As previously discussed, a diagnostic expert system contains an internal store of facts about diseases, including (1) the prevalence of each disease, (2) their findings, and (3) the statistical relationships between the findings and the diseases. We call this collection the knowledge base. In addition to the knowledge base, a diagnostic expert system contains an inference engine, which performs diagnostic reasoning (Figure 13.9).

To demonstrate how a diagnostic expert system uses Bayes rules to compute a differential diagnosis, we created mini-BOSSS, a tiny version of BOSSS. In particular, mini-BOSSS has a knowledge base with only two diseases and two findings. The diseases are foot and mouth disease (FMD) and mad cow disease (MCD). The findings are drooling of saliva and whether more than one animal in the herd is affected.
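To make the structure concrete, here is a minimal sketch of how the mini-BOSSS knowledge base might be represented as plain Python dictionaries. All probabilities below are illustrative placeholders, not the values used elsewhere in this chapter:

```python
# A toy knowledge base: the prior probability of each disease plus the
# conditional probability of each finding given the disease.
# All numbers here are illustrative placeholders.
KNOWLEDGE_BASE = {
    "FMD": {  # foot and mouth disease
        "prior": 0.01,
        "findings": {
            "drooling saliva": 0.95,
            "more than one animal affected": 0.90,
        },
    },
    "MCD": {  # mad cow disease
        "prior": 0.001,
        "findings": {
            "drooling saliva": 0.001,
            "more than one animal affected": 0.05,
        },
    },
}
```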

5.1. Probabilistic Knowledge Bases

We note that there are several kinds of expert systems, including the probabilistic diagnostic expert systems that we have been discussing, and rule-based expert systems. We discuss rule-based expert systems later in this chapter.

A probabilistic knowledge base uses probabilities to represent the prevalence of disease and the relationships between findings and diseases. A development team creates a knowledge base by a labor-intensive literature review supplemented by interviews with experts. Increasingly, developers use large data sets collected by hospital information systems (especially for disease prevalence) to develop knowledge bases. The process that developers use to elicit knowledge from experts and to convert information in the literature into a knowledge base is referred to as knowledge acquisition or knowledge engineering (Feigenbaum, 1977).

figure 13.8 A mapping tool that displays geographic locations of reported cases.

figure 13.9 Components and process flow in an expert system. The inference engine uses both patient data and medical knowledge to compute a differential diagnosis and to generate suggestions for additional collection of data. The inference engine obtains data from an inbound data-acquisition interface and it outputs the differential diagnosis and suggestions for additional data through an outbound results interface. In a freestanding diagnostic expert system such as Iliad or BOSSS, both the data-acquisition and results interfaces are screens that a physician interacts with. In an embedded system like Antibiotic Assistant, the data-acquisition interface is with hospital information systems (and to some extent with the user, but only for selected items of information that the user wishes to provide). The results interface could also be with another computer system such as a point-of-care system, which would present the differential diagnosis and suggestions through its own screens.

5.1.1. Prior Probabilities (Disease Prevalence)

The prior probability is the prevalence of disease in the population. We represent prior probabilities using the notation P(Disease). For example, P(FMD) represents the prior probability of foot and mouth disease. Table 13.1 shows the prior probabilities of FMD and MCD that we use in our example. Table 13.1 also shows the prior odds of the diseases in mini-BOSSS. Odds are a simple mathematical transform of probabilities; for probabilities less than 0.1, odds and probabilities are approximately equal. The reason that we show odds in Table 13.1 is that we use the odds-likelihood form of Bayes rules in our example. We will define odds and the odds-likelihood form of Bayes rules shortly.

5.1.2. Conditional Probabilities

A conditional probability is the chance of one event occurring given the occurrence of another event. In diagnostic expert systems, we use conditional probabilities to describe the probability of seeing a certain finding (one event) when a disease is present (another event). If the conditional probability is high, it means that the particular finding is often associated with the disease. The mathematical notation for the conditional probability of a finding, given a disease, is P(Finding|Disease). For example, P(drooling of saliva is present|FMD is present) is the probability of observing drooling of saliva in a cow with foot and mouth disease. Table 13.2 lists the conditional probabilities for the findings and diseases in mini-BOSSS.

If you are familiar with the concepts of sensitivity and specificity (discussed in detail in Chapter 20), you may recognize that the conditional probability P(drooling of saliva is present|FMD is present) is the sensitivity of the symptom drooling of saliva for the disease FMD. Similarly, P(drooling of saliva is absent|FMD is absent) is the specificity of drooling of saliva for FMD. Many readers will be quite familiar and comfortable with the concepts of sensitivity and specificity of laboratory tests, but perhaps not with the idea of sensitivity and specificity of other findings. In fact, symptoms, travel history, results of physical examination, and results of laboratory tests are all nothing more than observations that we make about an individual that may help us discriminate between individuals with disease and those without disease. It is possible to measure a sensitivity and specificity (or, alternatively, a likelihood ratio) for any of these diagnostic observations. Consider the diseases in mini-BOSSS. Cattle affected with FMD develop very severe mouth vesicles and ulcers. This mouth pain results in excessive salivation and therefore causes drooling of saliva. Cattle affected with MCD do not develop mouth lesions; therefore, they are unlikely to be observed drooling saliva. The conditional probabilities in Table 13.2 reflect these observations: P(drooling of saliva|FMD) is equal to 0.95 and P(drooling of saliva|MCD) is equal to 0.001.
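Using the two conditional probabilities just given, a likelihood ratio for drooling of saliva can be computed. This is a sketch for the two-disease toy example only: P(drooling|MCD) is used as a stand-in for the probability of drooling when FMD is absent, which holds only because mini-BOSSS contains exactly two diseases.

```python
def likelihood_ratio(p_finding_given_disease, p_finding_given_no_disease):
    """Likelihood ratio: how many times more likely the finding is
    in individuals with the disease than in those without it."""
    return p_finding_given_disease / p_finding_given_no_disease

# P(drooling | FMD) = 0.95; P(drooling | MCD) = 0.001 stands in for
# P(drooling | FMD absent) in this two-disease toy example.
lr_drooling = likelihood_ratio(0.95, 0.001)
print(lr_drooling)  # ~950: drooling strongly favors FMD over MCD
```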

If our knowledge base included laboratory tests for FMD and MCD, these conditional probabilities would be the sensitivities and specificities of the laboratory tests for FMD and MCD.

Researchers working in the field of diagnostic expert systems sometimes refer to conditional probabilities of findings for given diseases as textbook knowledge because these probabilities are often available in textbooks of medicine (human or veterinary). For example, a chapter in a textbook of veterinary medicine on FMD will contain many statements about the frequency with which different findings occur in animals with FMD. These frequencies are basic facts, highly relevant to diagnosis.

5.2. Probabilistic Inference Engines

A probabilistic inference engine is an algorithm that computes a differential diagnosis for a sick individual. In particular, it computes the posterior probability of each disease in its knowledge base, given the findings for a sick individual. In our example, the algorithm will compute the posterior probability P(FMD is present|drooling of saliva is present, more than one animal is affected) and the posterior probability P(MCD is present|drooling of saliva is present, more than one animal is affected).

A probabilistic inference engine uses Bayes rules to compute the posterior probability from the prior probability (or prior odds) and the sensitivities and specificities of the observed findings. Once it has computed the posterior probability of every disease in its knowledge base, the inference engine outputs a differential diagnosis, that is, a list of all diseases in the knowledge base sorted from most probable to least probable.

Bayes rules are sometimes referred to as Bayesian inversion because they invert the conditional probability that a given finding will be observed in an individual with disease (textbook knowledge) into the probability that an individual has the disease, given that we observe the finding in that individual (diagnostic knowledge). That is, Bayesian inversion turns P(drooling of saliva is present|FMD is present) into P(FMD is present|drooling of saliva is present), which is exactly what a veterinarian needs to know. A veterinarian needs to know the probability that the cow has FMD or MCD, given the findings.
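The inversion itself is a one-line application of Bayes' theorem. In the sketch below, the prior of 0.01 is an assumed illustrative value, not a figure from Table 13.1; only the two conditional probabilities come from the text above.

```python
def bayes_invert(prior, p_f_given_d, p_f_given_not_d):
    """Invert P(finding | disease) into P(disease | finding)."""
    # Total probability of the finding across diseased and non-diseased.
    p_f = p_f_given_d * prior + p_f_given_not_d * (1.0 - prior)
    return p_f_given_d * prior / p_f

# Assumed prior P(FMD) = 0.01 (illustrative); P(drooling | FMD) = 0.95;
# P(drooling | FMD absent) = 0.001 (using the MCD value as a stand-in).
print(bayes_invert(0.01, 0.95, 0.001))  # ~0.91
```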

A complete discussion of the various algorithms that a probabilistic inference engine can use to compute posterior probabilities using Bayes rules would fill a book, such as the excellent textbook by Richard Neapolitan (2003). For teaching purposes, we will use a simple form of Bayes rules called the odds-likelihood form. This simplified form rests on the assumption that findings are independent, given the disease, which is why researchers call this formulation (and similar formulations) naive Bayes. This simple form of Bayes rules works surprisingly well in many diagnostic expert systems, including the systems developed by Homer Warner (congenital heart advisor) and Homer Warner Jr. (Iliad), as well as BOSSS. We will use the odds-likelihood form of Bayes rules to illustrate how BOSSS computes a differential diagnosis for two diseases, given two findings about a sick cow.
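The odds-likelihood computation can be sketched in a few lines, under the naive independence assumption just described. The prior odds and likelihood ratios below are invented for illustration; they are not the mini-BOSSS table values.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Odds-likelihood (naive) Bayes: posterior odds equal the prior odds
    multiplied by the likelihood ratio of each observed finding, assuming
    the findings are independent given the disease."""
    result = prior_odds
    for lr in likelihood_ratios:
        result *= lr
    return result

def odds_to_probability(odds):
    return odds / (1.0 + odds)

# Invented numbers: (prior odds, [LR for drooling, LR for multiple animals]).
cases = {"FMD": (0.001, [950.0, 10.0]), "MCD": (0.0001, [0.5, 2.0])}
posterior = {d: odds_to_probability(posterior_odds(po, lrs))
             for d, (po, lrs) in cases.items()}

# The differential diagnosis: diseases sorted from most to least probable.
differential = sorted(posterior, key=posterior.get, reverse=True)
print(differential)
```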

5.2.1. Definition of Odds

For clarity and convenient reference, we here provide the definition of odds:

Odds = P / (1 - P)  (Eq. 1)

Odds are simply a rescaling of probability from a range of 0 to 1 to a range of 0 to infinity (which you can prove to yourself by substituting probabilities of zero and one into Eq. 1). For probabilities less than 0.1, probabilities and odds are roughly equal. A probability of 0.1, for example, equals an odds of 0.1/(1 - 0.1) = 0.11.
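The definition and its inverse are trivial to express in code; a quick check confirms the behavior described above:

```python
def probability_to_odds(p):
    """Eq. 1: rescale a probability in [0, 1) to odds in [0, infinity)."""
    return p / (1.0 - p)

def odds_to_probability(odds):
    """Inverse of Eq. 1: map odds back to a probability."""
    return odds / (1.0 + odds)

print(probability_to_odds(0.1))   # ~0.11, close to the probability itself
print(probability_to_odds(0.5))   # 1.0 ("even odds")
print(odds_to_probability(1.0))   # 0.5
```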
