A laboratory evaluation of an algorithm may address one or more of the following questions:
• With what sensitivity and with what error rate can a case detection algorithm detect individuals with different syndromes or diseases?
• With what sensitivity and with what error rate can an outbreak detection algorithm detect outbreaks of different sizes and types?
• What is the smallest size outbreak that can be detected?
• When are outbreaks of different sizes detected?
• How accurately can an outbreak detection algorithm identify other characteristics of an outbreak, such as geographic location of cases, source, route of transmission, infectivity, and other individuals potentially exposed?
• Is a new or modified algorithm an improvement over existing algorithms?
• How can the performance of an algorithm be improved?
Although an algorithm may be capable of detecting many diseases or types of outbreaks, evaluators typically address these questions in reference to one disease or one type of outbreak, or for a limited set of diseases or outbreaks.
The general experimental approach to answering these questions involves running the algorithm on surveillance data that contain known cases or outbreaks and determining whether the algorithm detected each case or outbreak, on what date, and whether there were any false alarms. Although the general approaches for evaluating algorithms for case detection, outbreak detection, and outbreak characterization are similar, there are sufficient differences to warrant discussing each in its own section.
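The measurement loop described above can be sketched in a short script. This is a minimal illustration, not a standard implementation: the function name, the day-indexed representation of alarms, and the interval representation of outbreaks are all assumptions made for this example. It computes three of the quantities the questions above ask about: sensitivity (the fraction of known outbreaks for which the algorithm raised at least one alarm during the outbreak), the false-alarm rate (alarms on days with no outbreak, per non-outbreak day), and timeliness (the delay, in days, from outbreak onset to first alarm).

```python
def evaluate_outbreak_detection(alarm_days, outbreaks, n_days):
    """Score an outbreak detection algorithm against labeled surveillance data.

    alarm_days : days (0-indexed) on which the algorithm raised an alarm.
    outbreaks  : list of (start_day, end_day) intervals of known outbreaks.
    n_days     : total number of days in the evaluation period.

    Returns (sensitivity, false_alarm_rate, detection_delays).
    """
    # All days that fall within any known outbreak interval.
    outbreak_days = set()
    for start, end in outbreaks:
        outbreak_days.update(range(start, end + 1))

    detected = 0
    delays = []  # onset-to-first-alarm delay for each detected outbreak
    for start, end in outbreaks:
        hits = [d for d in alarm_days if start <= d <= end]
        if hits:
            detected += 1
            delays.append(min(hits) - start)

    # Alarms on days with no outbreak are false alarms.
    false_alarms = [d for d in alarm_days if d not in outbreak_days]
    non_outbreak_days = n_days - len(outbreak_days)

    sensitivity = detected / len(outbreaks)
    false_alarm_rate = len(false_alarms) / non_outbreak_days
    return sensitivity, false_alarm_rate, delays


# Example: 30-day period, two known outbreaks, three alarms (one false).
sens, far, delays = evaluate_outbreak_detection(
    alarm_days=[6, 15, 21], outbreaks=[(5, 9), (20, 24)], n_days=30
)
```

In this example both outbreaks are detected one day after onset (sensitivity 1.0, delays of 1 day each), and the alarm on day 15 is the single false alarm. Sweeping the algorithm's alarm threshold and re-running this loop yields the sensitivity versus false-alarm-rate trade-off curve that comparisons between algorithms typically rest on.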