An evaluator can use a computer simulator to generate data with which to evaluate an outbreak detection algorithm. The most readily available simulator may be the algorithm itself, run in reverse to generate simulated data. This type of evaluation tests whether the algorithm performs well on data that match its modeling assumptions exactly. WSARE (Wong et al., 2003) and BARD (Chapter 19) are two examples of algorithms that were evaluated by running their own models in a reverse direction.

At the other end of the spectrum, an evaluator could use an agent-based simulator: a program, such as the popular SimCity game, comprising many individual entities (or agents) that represent people (or animals) and that exhibit the behavioral characteristics of sick individuals when infected. The evaluator introduces an infectious disease into the simulator and runs it forward in time, producing a growing number of sick individuals who may be located in different neighborhoods and who may buy thermometers or visit physicians when they fall ill. By summing the number of individuals who visit physicians or purchase thermometers on each day of the simulation, the evaluator creates a time series with which to test the detection algorithm.
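The agent-based approach described above can be sketched in a few lines. The model below is purely illustrative: the homogeneous-mixing assumption, the transmission and physician-visit probabilities, and the population size are all invented for this sketch, not drawn from any validated epidemiological source.

```python
import random

def simulate_outbreak(n_agents=10_000, days=60, seed=0,
                      p_transmission=0.0004, p_visit=0.3,
                      initial_infected=5):
    """Toy agent-based outbreak: returns a daily physician-visit time series.

    All parameter values are assumptions chosen for illustration.
    """
    rng = random.Random(seed)
    infected = [False] * n_agents
    for i in range(initial_infected):
        infected[i] = True

    daily_visits = []
    for _ in range(days):
        n_inf = sum(infected)
        # Homogeneous-mixing infection step: each susceptible agent's
        # daily risk grows with the number of currently infected agents.
        p_infect_today = 1 - (1 - p_transmission) ** n_inf
        for i in range(n_agents):
            if not infected[i] and rng.random() < p_infect_today:
                infected[i] = True
        # Each infected agent independently decides to visit a physician.
        visits = sum(1 for i in range(n_agents)
                     if infected[i] and rng.random() < p_visit)
        daily_visits.append(visits)
    return daily_visits

series = simulate_outbreak()
print(f"peak daily visits: {max(series)}")
```

Summing visits per simulated day is exactly the aggregation step described in the text: the detection algorithm never sees individual agents, only the resulting daily time series.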
The key problem with fully synthetic data is validity: the simulator embodies many assumptions about what people (or animals) do, what their test results show when they are sick, and when these behaviors or test results appear in data. At present, the body of evidence regarding the behavior of sick individuals (used to estimate the parameters that simulators require) is very small.
The advantages of using fully synthetic data are its availability and the control that the evaluator has over the properties of the simulation. The evaluator can change the infectivity of the organism, the size of the outbreak, the geographic distribution of cases, and many other characteristics. Because of the problem with validity, however, evaluators generally restrict fully synthetic data to early evaluations, which test whether the algorithm is working as intended and whether its performance can be improved. They also use fully synthetic data when comparing an algorithm against alternatives (which may include an earlier version of the same algorithm).
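As a concrete illustration of this kind of comparison, the sketch below generates a fully synthetic time series (a noisy baseline with an injected linear outbreak, both invented for the example) and records when two simple detectors, a fixed threshold and a CUSUM, first raise an alarm. All parameters and thresholds are assumptions chosen for the sketch, not recommended values.

```python
import random

def synthetic_series(days=100, baseline_mean=20.0, outbreak_start=60,
                     outbreak_slope=2, seed=1):
    """Daily counts: noisy baseline plus a linearly growing outbreak.

    Uses a normal approximation to Poisson-like background noise;
    every parameter here is an invented, illustrative value.
    """
    rng = random.Random(seed)
    series = []
    for t in range(days):
        count = max(0, round(rng.gauss(baseline_mean, baseline_mean ** 0.5)))
        if t >= outbreak_start:
            count += outbreak_slope * (t - outbreak_start)
        series.append(count)
    return series

def threshold_detector(series, threshold):
    """Alarm on the first day the count exceeds a fixed threshold."""
    for t, v in enumerate(series):
        if v > threshold:
            return t
    return None

def cusum_detector(series, expected_mean, k=2.0, h=10.0):
    """Alarm when the cumulative excess over (expected_mean + k) exceeds h."""
    s = 0.0
    for t, v in enumerate(series):
        s = max(0.0, s + (v - expected_mean - k))
        if s > h:
            return t
    return None

series = synthetic_series()
for name, alarm_day in [("threshold", threshold_detector(series, threshold=35)),
                        ("cusum", cusum_detector(series, expected_mean=20.0))]:
    delay = None if alarm_day is None else alarm_day - 60
    print(f"{name}: alarm on day {alarm_day} (delay {delay})")
```

Because the evaluator controls the outbreak's start day and shape, detection delay can be read off directly, which is precisely the kind of controlled comparison that real data rarely permits.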