Measuring Sensitivity

When evaluating a case detection algorithm, sensitivity is the proportion of true cases that the algorithm correctly classifies as cases. When evaluating an outbreak detection algorithm, sensitivity is the proportion of outbreaks the algorithm detected. Since this measure ignores the time of detection, we refer to it as the overall sensitivity. Because time matters in outbreak detection, overall sensitivity says little about performance, except for outbreaks that might go entirely undetected by current methods because they are too small or too diffuse to detect.
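As a concrete illustration, the following minimal sketch computes overall sensitivity from a list of per-outbreak detection flags. The data are invented for the example; each flag records whether the algorithm signaled at any point during that outbreak.

```python
# Hypothetical per-outbreak results: True if the algorithm ever signaled
# during the outbreak, regardless of when (invented data).
detected = [True, False, True, True, False, True, True, True]

# Overall sensitivity: proportion of outbreaks detected at any time.
overall_sensitivity = sum(detected) / len(detected)
print(f"Overall sensitivity: {overall_sensitivity:.2f}")
```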

A more informative measure is sensitivity as a function of timeliness. For example, we can measure the proportion of outbreaks that an algorithm detects within three days of the start of the outbreak, while holding the false-positive rate fixed. More generally, we define the sensitivity function S as follows: S(x) = proportion of outbreaks that were detected within x days of the reference date.
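A minimal sketch of the sensitivity function, assuming we have the detection delay (days from the reference date) for each outbreak in an evaluation set; the delays below are invented, and an infinite delay marks an outbreak that was never detected:

```python
import math

# Detection delays in days from the reference date for simulated outbreaks;
# math.inf marks an outbreak the algorithm never detected (invented data).
delays = [1, 2, 2, 4, 6, math.inf, 3, 5, math.inf, 2]

def sensitivity(x, delays):
    """S(x): proportion of outbreaks detected within x days of the reference date."""
    return sum(1 for d in delays if d <= x) / len(delays)

print(sensitivity(3, delays))  # proportion detected within three days
```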

An evaluator can plot this sensitivity function against timeliness to produce a sensitivity-timeliness curve (Wallstrom et al., 2004). Figure 20.4 is an example of a sensitivity-timeliness plot for two algorithms. Algorithm 1 has a higher probability of detecting the outbreak through day five, but beyond day five Algorithm 2 has a higher probability of having detected it. If very early detection is critical for this particular type of outbreak, then Algorithm 1 is preferable to Algorithm 2.
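The sketch below plots sensitivity-timeliness curves for two algorithms. The delay data are synthetic, chosen only to mimic the qualitative crossover described above (Algorithm 1 ahead early, Algorithm 2 ahead later); they are not taken from Figure 20.4.

```python
import matplotlib.pyplot as plt

# Synthetic detection delays for two algorithms (invented for illustration);
# float("inf") marks outbreaks that were never detected.
delays_algo1 = [1, 1, 2, 2, 3, 4, 8, 9, float("inf"), float("inf")]
delays_algo2 = [3, 3, 4, 4, 5, 5, 6, 6, 7, float("inf")]

def S(x, delays):
    """Proportion of outbreaks detected within x days of the reference date."""
    return sum(1 for d in delays if d <= x) / len(delays)

days = range(0, 11)
plt.step(days, [S(x, delays_algo1) for x in days], where="post", label="Algorithm 1")
plt.step(days, [S(x, delays_algo2) for x in days], where="post", label="Algorithm 2")
plt.xlabel("Days from reference date")
plt.ylabel("Proportion of outbreaks detected, S(x)")
plt.title("Sensitivity-timeliness curves (synthetic data)")
plt.legend()
plt.show()
```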
