Comparing the Univariate Algorithms

Let us take stock of the methods described up to this point. How do they rank against each other? We performed a test in which 2000 synthetic data sets were created. Each data set is a copy of the original synthetic data of Figure 14.5, except that each also contains a simulated outbreak injected into it. Half of the 2000 tests involved a "spike pattern" in which one day had an increase of between 10% and 100% over the true count. The other 1000 tests involved a "ramp pattern" similar to Figure 14.2, in which five days in a row had increased counts, with the increment for each outbreak day larger than its predecessor. Noise was simulated in these ramps, and the ramp size was chosen at random between 10% and 100% of the true count. There is thus a variety of outbreak patterns: some are so easy to detect that any self-respecting algorithm should identify them; others are trickier, and some might be virtually impossible to detect. The results are shown in Table 14.1. For each algorithm, we measure the following four characteristics (a sketch of the evaluation procedure follows the list):

• What fraction of spike outbreaks are detected if alarm thresholds are set at a level that produces one false alarm every two weeks?

• What fraction of spike outbreaks are detected if alarm thresholds are set at a level that produces one false alarm every 10 weeks?

• How many days pass until a ramp outbreak is detected if alarm thresholds are set at a level that produces one false alarm every two weeks?

• How many days pass until a ramp outbreak is detected if alarm thresholds are set at a level that produces one false alarm every 10 weeks?
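To make the setup concrete, here is a minimal sketch of how such an experiment can be run. It is not the chapter's actual code: the Poisson baseline stands in for the synthetic data of Figure 14.5, the detector is the simplest z-score control chart, and all function names and the noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the synthetic daily counts of Figure 14.5 (assumption:
# a Poisson series; the chapter's actual generator is not shown).
baseline = rng.poisson(lam=40, size=365).astype(float)

def inject_spike(counts, day, size):
    """One-day spike: raise the count on `day` by `size` (10%-100%) of the true count."""
    out = counts.copy()
    out[day] += size * counts[day]
    return out

def inject_ramp(counts, start, size):
    """Five-day ramp: each day's increment exceeds its predecessor, with
    multiplicative noise, peaking near `size` times the true count."""
    out = counts.copy()
    for i in range(5):
        frac = size * (i + 1) / 5.0        # growing increment
        noise = rng.normal(1.0, 0.1)       # simulated ramp noise
        out[start + i] += frac * noise * counts[start + i]
    return out

def run_trials(threshold, n_trials=1000):
    """Fraction of spikes detected, and mean days until a ramp is detected."""
    mean, std = baseline.mean(), baseline.std()
    spike_hits, ramp_delays = 0, []
    for _ in range(n_trials):
        day = int(rng.integers(30, len(baseline) - 5))
        size = rng.uniform(0.10, 1.00)     # 10% to 100% of the true count
        # Spike trial: detected only if the alarm fires on the spike day.
        spiked = inject_spike(baseline, day, size)
        if (spiked[day] - mean) / std > threshold:
            spike_hits += 1
        # Ramp trial: count days until the first alarm during the ramp.
        ramped = inject_ramp(baseline, day, size)
        z = (ramped[day:day + 5] - mean) / std
        fired = np.nonzero(z > threshold)[0]
        if fired.size:
            ramp_delays.append(int(fired[0]) + 1)
    mean_delay = float(np.mean(ramp_delays)) if ramp_delays else float("inf")
    return spike_hits / n_trials, mean_delay
```

Calling run_trials with thresholds calibrated to the two false-alarm rates would fill in one row of a table like Table 14.1.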

[Table 14.1 Experiments with Univariate Methods: average performance during 1000 simulated outbreaks. Columns: fraction of spike outbreaks detected and number of days until ramp outbreak detected, each at alarm thresholds producing 1 false positive every two weeks and 1 false positive every 10 weeks. Rows: the univariate methods, beginning with the standard control chart.]
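The two threshold columns in Table 14.1 presuppose that each algorithm's alarm threshold has been calibrated against outbreak-free data to hit the target false-alarm rate. One way to do that, continuing the illustrative assumptions above (z-score control chart, synthetic Poisson baseline; the chapter does not specify its calibration procedure), is to sweep candidate thresholds:

```python
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.poisson(lam=40, size=365).astype(float)

def calibrate_threshold(series, target_rate):
    """Smallest z-score threshold whose empirical false-alarm rate on
    outbreak-free data does not exceed `target_rate` alarms per day."""
    z = (series - series.mean()) / series.std()
    for t in np.arange(0.0, 6.0, 0.05):    # sweep candidate thresholds
        if np.mean(z > t) <= target_rate:
            return float(t)
    return float("inf")

# One false alarm every two weeks vs. one every ten weeks.
t_2wk = calibrate_threshold(baseline, 1 / 14)
t_10wk = calibrate_threshold(baseline, 1 / 70)
```

In practice the threshold would be set separately for each method, on whatever alarm statistic that method produces; the z-score chart here is only the simplest case.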
