Bayesian Modeling and Inference

Let H be some hypothesis of interest (e.g., the disease state of a patient) and let E denote the available evidence (e.g., a patient's symptoms). We often are interested in knowing the probability of H in light of E, that is, P(H|E).

In order to derive P(H|E), we need a model that relates the evidence to the hypothesis. It often is easier to model how the hypothesis might lead to the evidence, namely P(E|H), than to model directly how the evidence implicates the hypothesis, namely P(H|E). When the hypotheses being modeled causally influence the evidence, it is natural to model in the causal direction from hypotheses to evidence. For example, we could model the probability of cough in an individual with influenza (i.e., P(cough = present | influenza = present)), and alternatively, the probability of cough given that there is no influenza (i.e., P(cough = present | influenza = absent)).

We cannot derive P(H|E) just from P(E|H); we also need to know the probabilities P(H) and P(E), and then combine them as follows:

    P(H|E) = P(E|H) · P(H) / P(E)

By the law of total probability, P(E) can be written as the sum of P(E|H') · P(H') over all hypotheses H'. Replacing P(E) with this expression, we obtain the following equation, which is called Bayes' rule:

    P(H|E) = [P(E|H) · P(H)] / [Σ_H' P(E|H') · P(H')]

where the sum is taken over all the hypotheses H' being modeled (e.g., "influenza" and "no influenza"). The terms in Bayes' rule are referred to as follows: P(H) is the prior probability of H, P(E|H) is the likelihood of E given H, and P(H|E) is the posterior probability of H given E.
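
To make the computation concrete, the following sketch applies Bayes' rule to the influenza/cough example above. The prior and likelihood values are hypothetical, chosen only for illustration; they are not taken from the text or from clinical data.

# Minimal sketch of Bayes' rule for the influenza/cough example.
# All probability values below are hypothetical illustration values.

priors = {
    "influenza present": 0.05,   # P(H): prior probability of each hypothesis
    "influenza absent":  0.95,
}

likelihoods = {
    "influenza present": 0.90,   # P(E | H): probability of cough = present
    "influenza absent":  0.10,   # under each hypothesis
}

# Denominator of Bayes' rule: P(E) = sum over H' of P(E | H') * P(H')
p_evidence = sum(likelihoods[h] * priors[h] for h in priors)

# Posterior P(H | E) for each hypothesis
posteriors = {h: likelihoods[h] * priors[h] / p_evidence for h in priors}

for h, p in posteriors.items():
    print(f"P({h} | cough = present) = {p:.3f}")

# With these illustrative numbers, P(influenza present | cough = present) is
# about 0.32: the evidence raises the small prior of 0.05 considerably, but
# "influenza absent" remains the more probable hypothesis.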

1 This chapter is based on a paper by Cooper [2004].
