Outbreak Detection

We use the term outbreak detection to refer to biosurveillance methods that detect the existence of an outbreak. A clinician may detect an outbreak by diagnosing a highly communicable disease such as measles or a rare disease such as anthrax. (A biosurveillance organization treats a single case of such a disease as evidence of an outbreak until proven otherwise.) An astute clinician may notice a cluster of cases, as happened in the 2003 hepatitis A outbreak, or a biosurveillance organization may detect an outbreak from analysis of surveillance data. Because biosurveillance organizations are automating the collection and analysis of surveillance data, a computer may also be the first to detect an outbreak.

3.1. Outbreak Detection from an Individual Case of Highly Contagious or Unusual Disease

The 2004 SARS outbreak and the foot-and-mouth disease (FMD) outbreak in the United Kingdom, described in Chapter 2, illustrate a common means by which outbreaks are detected: a clinician, veterinarian, or pathologist encounters an individual with a rare disease. Outbreaks of measles, botulism, and tuberculosis often come to attention in this manner.

We note that clinicians, and especially veterinarians working in agribusiness, do not always work up a case to the highest level of diagnostic precision because of cost-benefit considerations. For example, medical practice guidelines suggest that a clinician treating a woman with an uncomplicated urinary tract infection may treat this relatively imprecise diagnosis without obtaining a urine culture (to establish a more precise bacteriological diagnosis) because the probability of curing the condition with a broad-spectrum antibiotic is high.

3.2. Outbreak Detection by an Astute Observer

The outbreaks of Lyme disease, hepatitis A, AIDS, cryptosporidiosis, SARS (2003), and Legionnaires' disease were detected by an astute observer who noticed a cluster of illness and reported its existence to a health department. Outbreaks caused by contamination of food are often discovered when affected individuals who have dined together phone each other upon waking up sick the next day, and one of them calls the health department.

3.3. Outbreak Detection by Biosurveillance Personnel

There are many examples of outbreaks that biosurveillance organizations detect through analysis of surveillance data. Some notifiable diseases, especially the enteric organisms that cause diarrhea, occur sporadically, and a single case report therefore does not constitute prima facie evidence of an outbreak. The New York State Department of Health detected an outbreak at a county fair after receiving reports of 10 children hospitalized with Escherichia coli O157:H7 in counties near Albany, New York (CDC, 1999e). The Volusia County Health Department detected an outbreak when it received three reports of children with Shigella sonnei sharing a common exposure to a water fountain at a beachside park (CDC, 2000d).

Hospital infection control units conduct similar surveillance of organisms of epidemiologic significance in the healthcare setting, such as antibiotic-resistant organisms, Clostridium difficile, and Legionella pneumophila. In addition, surveillance of hospital-acquired infections is followed by trend analysis to assess clustering of specific infection types (e.g., central-line-associated bloodstream infections) and specific pathogens (e.g., Klebsiella pneumoniae).

3.4. Outbreak Detection by Computers

Increasingly, biosurveillance organizations use computers to analyze data to identify clusters of cases. These data may be cases reported by clinicians, veterinarians, or laboratories; aggregate data about the health of the population, such as sales of thermometers or diarrhea remedies; or a clinical data repository set up by a hospital for surveillance of nosocomial infections and levels of antibiotic-resistant organisms.

It is useful to note that, in the literature describing these approaches, the diagnostic precision of the data analyzed by the detection algorithms varies widely: from notifiable diseases at the high end of diagnostic precision to "numbers of individuals absent from work" or "unit sales of diarrhea remedies" at the other end of the spectrum.

3.4.1. Automatic Cluster Detection from Notifiable Disease Data

Epidemiologists have long used the Serfling method to identify outbreaks of influenza retrospectively from pneumonia and influenza morbidity and mortality data (Serfling, 1963). But the use of computers to detect clusters in notifiable disease data is uncommon, perhaps because the necessary infrastructure is still being put into place in many jurisdictions. In current practice, epidemiologists use computers primarily to display and manipulate these data. Perhaps as a result, the literature on automatic detection of clusters from notifiable disease data is relatively sparse at present (Hutwagner et al., 1997; Stern and Lightfoot, 1999; Hashimoto et al., 2000). A noteworthy exception is the use of clustering algorithms to analyze molecular fingerprints of enteric isolates (discussed above and in Chapter 8).
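The core of Serfling's approach is a cyclic regression: fit a baseline with linear and seasonal terms to historical weekly counts, then flag weeks that exceed the baseline by more than a multiple of the residual standard deviation. The sketch below is a simplified illustration on simulated data; in actual practice the model is fit only to non-epidemic weeks, and the exact terms and thresholds vary by application.

```python
import numpy as np

def serfling_threshold(weeks, counts, z=1.96):
    """Fit a Serfling-style cyclic regression baseline to weekly counts.

    Model (a common simplification of Serfling's 1963 formulation):
        y = b0 + b1*t + b2*sin(2*pi*t/52) + b3*cos(2*pi*t/52)
    Returns the fitted baseline and an epidemic threshold of
    baseline + z * (residual standard deviation).
    """
    t = np.asarray(weeks, dtype=float)
    y = np.asarray(counts, dtype=float)
    X = np.column_stack([
        np.ones_like(t),                 # intercept
        t,                               # secular trend
        np.sin(2 * np.pi * t / 52.0),    # annual cycle
        np.cos(2 * np.pi * t / 52.0),
    ])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    baseline = X @ beta
    resid_sd = np.std(y - baseline, ddof=X.shape[1])
    return baseline, baseline + z * resid_sd

# Three simulated years of weekly pneumonia-and-influenza counts with a
# winter seasonal cycle, plus an injected outbreak in the final winter.
rng = np.random.default_rng(0)
weeks = np.arange(156)
counts = 50 + 10 * np.cos(2 * np.pi * weeks / 52.0) + rng.normal(0, 2, 156)
counts[150:154] += 25  # injected outbreak, weeks 150-153

baseline, threshold = serfling_threshold(weeks, counts)
flagged = np.where(counts > threshold)[0]  # weeks exceeding the threshold
```

The flagged weeks fall within the injected outbreak; retrospective studies then compare such flags against known epidemic periods.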

3.4.2. Automatic Cluster Detection from "Syndromic" Data

In contrast, there is a growing literature on the use of algorithms to detect clusters of cases or outbreaks with less diagnostically precise data, such as billing diagnoses.

Quenel and colleagues were the first to study detection of outbreaks from such data. They studied the sensitivity, timeliness, and specificity for detection of influenza outbreaks from 11 types of data (emergency home visits, sick leave reported to national health service, sick leave reported to general practitioners [GPs], sick leave reported by companies, sentinel GP visits, sentinel GP visits due to ILI, sentinel pediatrician visits, hospital fatality, influenza-related drug consumption, sentinel GP overall activity, and sentinel pediatrician overall activity).
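The evaluation measures used in studies such as Quenel's can be made concrete. The sketch below computes simple per-day versions of sensitivity, specificity, and timeliness for a hypothetical daily alarm series compared against known outbreak days; real evaluations often define sensitivity per outbreak rather than per day, and all numbers here are invented for illustration.

```python
def evaluate_alarms(alarm_days, outbreak_days, n_days):
    """Score a daily alarm series against known outbreak days.

    sensitivity: fraction of outbreak days on which an alarm fired
    specificity: fraction of non-outbreak days with no alarm
    timeliness:  days from outbreak onset to the first alarm that fell
                 within the outbreak (None if the outbreak was missed)
    """
    alarms, outbreak = set(alarm_days), set(outbreak_days)
    true_positives = alarms & outbreak
    sensitivity = len(true_positives) / len(outbreak)
    non_outbreak_days = n_days - len(outbreak)
    false_positives = len(alarms - outbreak)
    specificity = (non_outbreak_days - false_positives) / non_outbreak_days
    detected = sorted(true_positives)
    timeliness = detected[0] - min(outbreak) if detected else None
    return sensitivity, specificity, timeliness

# Hypothetical 30-day record: one false alarm on day 3, an outbreak
# spanning days 11-15, alarms on days 12-14.
sens, spec, delay = evaluate_alarms(
    alarm_days=[3, 12, 13, 14],
    outbreak_days=[11, 12, 13, 14, 15],
    n_days=30)
```

With these inputs the detector alarms on 3 of 5 outbreak days (sensitivity 0.6), raises 1 false alarm in 25 non-outbreak days (specificity 0.96), and first detects the outbreak one day after onset.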

Detecting outbreaks through analysis of cases of illness at an early stage is a relatively new approach for governmental public health and has been termed syndromic surveillance. Note that some investigators restrict the term syndromic surveillance to methods for automatically detecting clusters of illness from case data, whereas other investigators use the term also for monitoring of data aggregated from populations, such as total daily sales of thermometers, which are not case data (Buehler, 2004). Kelly Henning (2004) tabulated the various terms that have been used to refer to biosurveillance systems that provide early warning of disease outbreaks. Of these terms, we prefer "early warning systems" as it is the most descriptive of their function.

The rationale for early warning surveillance is as follows: Although the diagnostic precision of case detection is low, a highly unusual number of individuals with early symptoms consistent with a disease (e.g., 100 individuals from a single zip code presenting in 24 hours with fever and cough) may provide an early warning of an outbreak. The diagnostic precision can then be improved quickly by testing affected individuals to achieve a more precise diagnosis.
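The "highly unusual number" in this rationale is typically operationalized as a statistical threshold on daily counts. The sketch below flags a zip code when its daily count of fever-and-cough visits exceeds an upper limit derived from a Poisson model of its historical daily mean; the zip codes, means, and counts are hypothetical, and real systems use more sophisticated baselines that adjust for day of week and season.

```python
import math

def poisson_upper_limit(mean, alpha=0.001):
    """Smallest count c such that P(X >= c) <= alpha for X ~ Poisson(mean).

    Accumulates the Poisson probability mass term by term until the
    cumulative probability reaches 1 - alpha.
    """
    c, cdf, p = 0, 0.0, math.exp(-mean)   # p = P(X = 0)
    while cdf + p < 1 - alpha:
        cdf += p
        c += 1
        p *= mean / c                      # recurrence: P(c) = P(c-1)*mean/c
    return c + 1

# Hypothetical historical daily means and today's counts of
# fever-and-cough visits per zip code (illustrative numbers only).
history_mean = {"15213": 6.0, "15217": 4.0, "15232": 5.0}
today = {"15213": 7, "15217": 21, "15232": 4}

alerts = [z for z, n in today.items()
          if n >= poisson_upper_limit(history_mean[z])]
```

Here only the zip code with 21 visits against a historical mean of 4 per day is flagged; the flagged cluster would then be investigated and affected individuals tested to sharpen the diagnosis.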

Sentinel ILI clinicians and drop-in surveillance are simple forms of early warning surveillance. In the past five years, there has been a marked trend toward automating these surveillance activities to reduce their cost and possibly improve their performance. Although organizations still conduct drop-in surveillance during special events, its appropriate role is limited to special events in cities that have not created an equivalent automated capability, or to settings in which surveillance requires additional data to improve diagnostic precision. Even in those settings, the current trend is to install an automated system in advance of the event and to supplement it with manual data collection from hospitals that cannot participate in the automated process, or to augment the automatically collected data with additional manually collected data to improve diagnostic precision.

3.5. How and How Well Are Outbreaks Detected?

Two studies have analyzed how existing biosurveillance systems have detected outbreaks. Dato et al. (2001, 2004) reviewed 43 well-known outbreaks, finding that 53% of the outbreaks were detected by health department staff through review of case reports from clinicians and laboratories, and 28% were detected by an astute clinician or person with knowledge of an outbreak in a school or work setting. An additional eight outbreaks (19%) were detected by laboratory networks using advanced testing and fingerprinting of specimens (three), by public sexually transmitted disease clinics (two), and by the military, another government, and a university (one each).

Ashford et al. (2003) reviewed 1,099 outbreak investigations conducted in the United States and abroad by the CDC's Epidemic Intelligence Service from 1988 to 1999. Of the 1,099 outbreaks, 399 (36%) were first recognized by healthcare providers or infection control practitioners. Health departments were the first to recognize 31% of the outbreaks. Other entities that recognized outbreaks were surveillance systems (5%), ministries of health (2.7%), nongovernmental organizations (2%), the WHO (1.5%), and the Indian Health Service (1.1%). Forty-nine (4.5%) of outbreaks were reported by other sources such as private clinics, laboratories, or private citizens.

The study records were inadequate to establish the recognizing entity for the remaining 17% of outbreaks. The time delay from first case to recognition of the existence of an outbreak ranged from zero to 26 days. This study is also interesting because it analyzed 44 outbreaks caused by biological agents with high potential for use by bioterrorists.

Evidence indicates that some outbreaks are never detected, suggesting that there is room for improvement in current methods of outbreak detection. For example, the study by Dato et al. found multiple reports of outbreaks that involved contamination of nationally distributed products. However, the health departments of only one or two states detected these outbreaks, suggesting that outbreaks occurring in other states went undetected. The multistate outbreaks that were detected by only a few states involved commercially processed deli meat (CDC, 2000c), burritos (CDC, 1999c), orange juice (CDC, 1999b), parsley (CDC, 1999d), and dip (CDC, 2000e).

3.6. Diagnostic Precision and Outbreak Detection

The ability of a human or a computer to notice an anomalous number of cases above the background number of cases depends on the diagnostic precision of the surveillance data. For example, if a biosurveillance organization collects only counts of "sick" cattle in a feedlot (low diagnostic precision) and there are typically 500 sick cattle on the feedlot, an outbreak of FMD affecting 10 cattle will not stand out against the background level of sick cattle. If, however, the case data are diagnostically precise (e.g., the cases are confirmed diagnoses of FMD), one such animal in a data stream will stand out against the background level of zero. Figure 3.6 illustrates this concept for SARS surveillance, showing that if the available data support more diagnostically precise case detection (i.e., SARS-like syndrome rather than respiratory syndrome), then subsequent analysis of the case data is expected to detect smaller clusters of disease against the background level of individuals presenting with respiratory illness.

FIGURE 3.6 Diagnostic precision and minimum size of outbreak that can be detected. In this hypothetical example, the multiple boxes represent many cases in a population being detected automatically by computers from data available electronically. If the data available electronically support more diagnostically precise case detection, the size of a cluster that can be noticed above background levels will be smaller.
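This relationship between background level and minimum detectable cluster size can be made quantitative with a back-of-the-envelope calculation: under a normal approximation to a Poisson background, a one-day count is anomalous when it exceeds the mean by about z standard deviations, so the minimum detectable excess grows with the square root of the background rate. The background rates below (200 respiratory-syndrome cases per day versus 2 SARS-like-syndrome cases per day) are illustrative assumptions, not figures from the text.

```python
import math

def min_detectable_excess(background_mean, z=3.09):
    """Excess cases needed for a one-day count to clear a
    mean + z*sqrt(mean) alert threshold (normal approximation to a
    Poisson background; z = 3.09 corresponds to a one-sided
    false-alarm probability of about 0.001)."""
    return math.ceil(z * math.sqrt(background_mean))

# Low diagnostic precision: any respiratory complaint (assumed 200/day).
broad = min_detectable_excess(200.0)   # excess cases needed: 44
# High diagnostic precision: SARS-like syndrome only (assumed 2/day).
narrow = min_detectable_excess(2.0)    # excess cases needed: 5
```

Under these assumptions, an outbreak must add roughly 44 cases in a day to stand out against the broad respiratory-syndrome background, but only about 5 cases to stand out against the diagnostically precise SARS-like-syndrome background.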

3.7. Timeliness of Outbreak Detection

We close this section on methods for outbreak detection with a comment on the importance of timely detection. A biosurveillance system must detect an outbreak as quickly as possible to enable treatment of those already sick and to prevent further illness. The required timeliness varies by biological agent and route of transmission. Because early detection is usually expensive, the exact relationship between morbidity and mortality and the time of detection for each type of outbreak is important. An outbreak of anthrax due to an aerosol release, for example, must be detected within days of release or, ideally, at the moment of release, because many people will sicken and die within days of the release. Therefore, significant resources should be expended to accomplish detection as close to day zero as possible. In contrast, detection of some diseases, even those as virulent as smallpox, as late as weeks from the onset of symptoms in the first case is still within the window of opportunity to considerably reduce morbidity and mortality (Meltzer et al., 2001).
