Water Monitoring Tests

Municipal water suppliers must test and meet requirements for a number of parameters, including water pressure, chlorine residual, chemical pollutants, radiological contaminants, and biological organisms. Although this chapter is intended to focus solely on biological monitoring, note that some testing methods are complementary. Water utilities test for conditions that predispose the water supply to contamination, as well as for physical or chemical properties that can be viewed as proxies for contaminants.

Specifically, water utilities must maintain a water pressure of at least 20 pounds per square inch (psi) in the water mains and a chlorine residual of at least 0.2 mg/L to reduce opportunities for biological contamination (L. Lindsay, personal communication). The utilities also monitor pH, turbidity, odor, pesticides, and certain other chemicals in water to be released for consumption.
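As a rough illustration of how such operational thresholds might be checked automatically, the following Python sketch flags readings that fall below the minimums cited above. The function name, the alert format, and the idea of a per-reading check are assumptions for illustration; they do not describe any particular utility's telemetry or SCADA system.

# A minimal sketch, assuming a hypothetical per-reading check.
# Regulatory minimums cited in the text, applied here as system-wide constants.
MIN_PRESSURE_PSI = 20.0      # minimum water-main pressure
MIN_CHLORINE_MG_L = 0.2      # minimum free chlorine residual

def flag_reading(pressure_psi: float, chlorine_mg_l: float) -> list[str]:
    """Return alert messages for a single monitoring-point reading."""
    alerts = []
    if pressure_psi < MIN_PRESSURE_PSI:
        alerts.append(f"Low pressure: {pressure_psi:.1f} psi (minimum {MIN_PRESSURE_PSI} psi)")
    if chlorine_mg_l < MIN_CHLORINE_MG_L:
        alerts.append(f"Low chlorine residual: {chlorine_mg_l:.2f} mg/L (minimum {MIN_CHLORINE_MG_L} mg/L)")
    return alerts

# Example: adequate pressure but a depleted chlorine residual.
print(flag_reading(pressure_psi=45.0, chlorine_mg_l=0.1))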

Many biological agents produce toxins, such as botulinum and cholera toxins, whereas other toxins are naturally occurring, such as ricin toxin, which originates from castor beans. Implementation of surveillance for bioagents occurs primarily at the local and state levels, and is part of any comprehensive water protection plan. Security measures can range from physical barriers (deterrents for unauthorized access such as walls and fences) to cyber-safety protections (firewalls and restriction of computer access to monitoring facilities), to actual biosensor surveillance technologies. Ideally, each level of security is implemented at several checkpoints, covering multiple sites along the distribution chain, including main plants, remote sites, distribution systems, wastewater collection systems, pumping stations, treatment facilities, and water storage depots. Although the entire spectrum of security measures is needed to protect our water supplies, experts cite more comprehensive, real-time monitoring technologies as the most important concern regarding the safety of our drinking water (U.S. Government Accountability Office, 2004).

Public health authorities, both state and local, have the means to detect and identify water-associated outbreaks, and these authorities are part of a network capable of notifying the public and implementing prevention and control measures. The efficacy of these measures depends on how early a pathogen can be detected. In December 2003, the EPA issued a Response Protocol Toolbox (RPTB) (Figure 9.2) that offered specific guidance for response to a compromise of the drinking water supply (EPA, 2003a). The toolbox includes algorithms and time points designed to maximize containment and minimize harm to citizens.

Detection of a naturally occurring water-borne pathogen can occur at many points, including the source; much of the testing in municipal water systems occurs at the customer's tap. Routine testing procedures can detect many organisms but usually not parasites or viruses (S. States, personal communication). Current detection methods have improved substantially since 2001; most molecular assays take a few hours to perform, which should, in theory, substantially reduce the time lag in detection. However, problems remain, especially in cases in which detection still consists of a human looking through a microscope at a slide. The availability of these rapid tests is a concern, as they are often cost-prohibitive, meaning that samples must be transferred to outside laboratories for analysis, causing delays.

FIGURE 9.2 EPA Response Protocol Toolbox. (From http://www.epa.gov/ safewater/watersecurity/pubs/guide_response_module5.doc.)

Utilities that store water in reservoirs retain the water, often for days, before releasing it into the distribution pipeline, thus allowing sufficient time to detect and neutralize a threat. Water volumes greater than 100 million gallons can be held in this manner. Any relevant detection methods performed on that utility's water supply should, if positive, allow the utility to cancel or delay the release of contaminated water to the general public.
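The release-decision logic implied by this holding strategy can be sketched roughly as follows: water is released only if every pending assay has returned a negative result by the scheduled release time. The data structures and field names are hypothetical; a real utility would tie such a decision to its own laboratory and operations systems.

# A hedged sketch of the hold-until-clear decision described above.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TestResult:
    assay: str
    completed: bool
    positive: bool        # meaningful only when completed is True

def may_release(results: list[TestResult], scheduled_release: datetime,
                now: datetime) -> bool:
    """Release only when all assays are complete and negative by release time."""
    if now < scheduled_release:
        return False                      # still inside the holding period
    if any(not r.completed for r in results):
        return False                      # a pending assay delays the release
    return not any(r.positive for r in results)

# Example: a confirmatory culture still pending at release time blocks the release.
now = datetime(2024, 1, 3, 8, 0)
results = [TestResult("coliform/MUG", completed=True, positive=False),
           TestResult("confirmatory culture", completed=False, positive=False)]
print(may_release(results, scheduled_release=now - timedelta(hours=1), now=now))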

There are two problems with this scenario, however. The first is that many water supplies are derived from combined sources, and the waters blended into the distribution pipeline may have undergone different forms of treatment and monitoring. Distributed water that combines reservoir water with water drawn from a direct intake upstream of the reservoir negates the rationale for testing the reservoir source alone, because the intake stream bypasses the reservoir's holding period. In addition, a combined source substantially complicates any attempt to ascertain where contaminants were introduced. Utilities that supply drinking water in the United States commonly use combined source water. The second problem is that the water supply has numerous access points downstream of the reservoir at which intentional contamination could be introduced, such as back-siphoning through a local service line. If monitoring is meant to serve as an early warning system for terrorist activities, then the safeguarding of these access points must be addressed.

Continued development of the communication infrastructure among water utility companies, regulatory agencies, and public health organizations arises from the desire to protect the public and to contain outbreaks. For example, it would be desirable for municipal water suppliers to provide water sampling data to the EPA database on a frequent basis; this has not yet been accomplished. A centralized surveillance system that automatically polls water suppliers' databases and analyzes the incoming data would be integral to enabling early warning of potential outbreaks, but it would not, by itself, allow real-time scrutiny; efficient population of those databases is also a prerequisite. These gaps in the process could severely hamper any attempt to sequester contaminated supplies before they reach the public. Successful surveillance ultimately depends on the testing performed by the water suppliers; improving early detection and contamination containment calls for more frequent sampling and for on-site rapid testing techniques that produce immediate results.
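To make the idea of a centralized system concrete, the following sketch shows one way a central service might periodically poll supplier databases and surface positive detections for review. The fetch_new_samples callable, the record fields, and the polling interval are all assumptions; no actual EPA or utility interface is implied.

# A minimal sketch of central polling over hypothetical supplier feeds.
import time
from typing import Callable, Iterable

Sample = dict  # e.g., {"analyte": "...", "detected": bool}

def poll_suppliers(sources: dict[str, Callable[[], Iterable[Sample]]],
                   interval_s: float, cycles: int) -> None:
    """Poll each supplier feed and print an alert for every positive detection."""
    for _ in range(cycles):
        for name, fetch_new_samples in sources.items():
            for sample in fetch_new_samples():
                if sample.get("detected"):
                    print(f"ALERT: {name} reported detection of {sample['analyte']}")
        time.sleep(interval_s)

# Example with a stubbed supplier feed standing in for a real database query.
poll_suppliers({"Supplier A": lambda: [{"analyte": "E. coli", "detected": True}]},
               interval_s=0.1, cycles=1)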

Authorities often base organism-specific tests on the prevalent threats to the local service area, which reflect inherent differences between rural supplies and larger metropolitan water sources. Often, the local supplier's tests identify a natural contamination problem, which is corrected before any illness is reported in a network. Any problem, but particularly one related to bioterrorism, must be reported quickly if the networks are to function properly. Rapid turnaround in reporting to the network allows identification of larger issues, such as contamination of multiple supplies. Otherwise, the local or state health department would generally issue the initial alert based on patient presentation. The latter type of public health alert is the more likely scenario for intentional biological contamination introduced downstream of critical surveillance points, or for agents not routinely tested. A few governmental public health agencies now perform syndromic surveillance, monitoring trends in signs, symptoms, and other indicators of increased patient presentation. These indicators may range from a spike in sales of a particular over-the-counter medicine to an increased incidence of emergency department visits. The number of deployed systems is slowly increasing.
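A minimal sketch of the syndromic-surveillance idea follows: a daily count (for example, gastrointestinal-related emergency department visits or sales of an over-the-counter antidiarrheal) is flagged when it exceeds a recent baseline by a fixed number of standard deviations. The window length and threshold are illustrative choices, not a published detection algorithm.

# A hedged sketch of simple baseline-deviation detection on daily counts.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, window: int = 14,
                 z_threshold: float = 3.0) -> bool:
    """Flag today's count if it sits far above the recent baseline."""
    baseline = history[-window:]
    if len(baseline) < 2:
        return False                     # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today > mu                # flat baseline: any increase stands out
    return (today - mu) / sigma > z_threshold

# Example: a sudden spike against a stable two-week baseline.
print(is_anomalous([12, 10, 11, 13, 12, 9, 11, 12, 10, 13, 11, 12, 10, 11], today=31))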

Information sharing and central water-quality databases, as well as regular compliance reviews, provide a feedback loop by which the effectiveness of biosurveillance can be assessed. Although these databases do not operate in real time, the networks have been vastly improved since the 2001 anthrax mailings, so that interdepartmental contacts, via phone and e-mail, can now help streamline notification in the event of an actual crisis. The network is thus more likely to be effective for early detection across a wide range of possible incidents. Although each incident presents unique circumstances, the organizational framework is set up to provide rapid turnaround on emergency samples (e.g., clinical, environmental). Once the need for select agent identification has been determined, preliminary analytical results can be obtained within a few hours. Because testing for select agents cannot be performed at every locale, inherent delays associated with incident identification, such as transporting samples to an appropriate testing laboratory, remain a problem and can lengthen the overall response time.

In general, larger water utilities perform testing with some type of high-throughput, automated testing equipment, plus a supporting system for confirmation. At this level, they make very limited use of bioassays that employ rapid, modern genetic or immunological methods. These types of assays are, for the most part, utilized by the second tier of the surveillance infrastructure, such as state public health (clinical and environmental) laboratories or local departments of environmental protection (DEPs). The suppliers covering heavily populated areas are the most likely to employ state-of-the-art water surveillance systems that provide a more effective early-warning capability against bioterrorism threats (see "Molecular Methods''). Federal response teams, such as the U.S. Army Technical Escort Unit, use mobile state-of-the-art detection techniques, but this process is reactive rather than proactive, given the inherent delays in traveling to the incident site. These mobile units are not practical for routine monitoring of our water systems, owing to the high cost of their deployment. Although the current system functions well under normal circumstances, the available surveillance methods are not amenable to heightened security, other than through increases in the number of point sources monitored or in the number of specific organisms tested.

To date, even advanced testing methods call for subsequent verification by standard cell culture studies, still considered the gold standard for biological identification. Confirmatory cell culture is always performed but, because the process can take days, such tests are less applicable for early detection. Indeed, laboratories still cannot culture some water-borne viruses at all. Although an initial culture may raise an alarm, triggering shutdown or rerouting of water delivery, the response may well be too late to prevent significant harm, given the limitations imposed by sample collection, delivery, and processing on rapid notification and incident response.

The water utilities have responded to this limitation by searching for more rapid, single-step tests. Consider the example of E. coli: because E. coli is the most abundant organism among the fecal coliforms but is not generally found in the environment, its presence is the accepted indicator of water supply contamination. The traditional gold standard for detecting this microbe begins with a filtration step to determine coliform density, followed by a standard culture on selective media. By this method, coliforms are easy to identify, but the two steps consume precious time and resources.

Recognizing this, nearly all utilities now employ a direct test. A 100-ml water sample is incubated for 24 hours with a fluorogenic substrate, 4-methylumbelliferyl-β-D-glucuronide (MUG), and the technician need only look for a simple color change. A yellow color signifies the presence of coliform bacteria; if the sample fluoresces under black light, E. coli is present. Although faster and cheaper, the direct test still falls short of the ideal for early detection.
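The readout logic of this single-step test can be expressed compactly, as in the sketch below, which simply encodes the interpretation described above (yellow color indicates total coliforms; fluorescence under black light indicates E. coli). It models only the decision rule, not the underlying chemistry.

# A small sketch encoding the color/fluorescence interpretation described in the text.
def interpret_direct_test(yellow: bool, fluoresces_under_uv: bool) -> str:
    if fluoresces_under_uv:
        return "E. coli present (total coliforms positive)"
    if yellow:
        return "Total coliforms present; E. coli not indicated"
    return "No coliforms detected"

# Example: a sample that turns yellow and fluoresces under black light.
print(interpret_direct_test(yellow=True, fluoresces_under_uv=True))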

The EPA recognized this limitation when it issued its toolbox. The EPA specifies that a water utility should recognize a "possible'' threat within 1 hour of receiving a warning of some sort, and be able to confirm a threat to the water supply within 2 to 8 hours of identifying a "possible'' threat. Because laboratory verification can take longer, the EPA document states that a "preponderance of evidence'' is sufficient to establish confirmation of a threat requiring containment measures. The EPA advises that a utility need not establish positive identification of a pathogen to execute containment measures.
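The time points cited from the toolbox can be summarized in a small sketch that checks whether a utility's response met them. The stage names and the check itself are simplified for illustration; the RPTB modules describe the actual threat-evaluation process.

# A hedged sketch of the cited time points: "possible" within 1 hour of a warning,
# and a confirmation decision within roughly 2 to 8 hours after that.
from datetime import datetime, timedelta

POSSIBLE_DEADLINE = timedelta(hours=1)   # warning -> "possible" determination
DECISION_DEADLINE = timedelta(hours=8)   # "possible" -> confirmation decision

def check_timeliness(warning_time: datetime, possible_time: datetime,
                     decision_time: datetime) -> dict[str, bool]:
    return {
        "possible_within_1h": possible_time - warning_time <= POSSIBLE_DEADLINE,
        "decision_within_8h": decision_time - possible_time <= DECISION_DEADLINE,
    }

# Example: a "possible" call at 45 minutes and a decision 6 hours later.
t0 = datetime(2024, 6, 1, 9, 0)
print(check_timeliness(t0, t0 + timedelta(minutes=45), t0 + timedelta(hours=6)))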

To be effective, surveillance strategies must employ continuous monitoring for the agents being screened; U.S. water utilities do not routinely employ this type of surveillance. For contaminants not routinely monitored, or for private wells, the only available surveillance strategy is strictly reactive, based on the reporting of infected individuals. This obviously involves substantial delays, depending on the laboratory testing network and the efficiency of its reporting system. Outsourcing of initial testing to commercial laboratories can also impede timely reporting if the contract laboratory has slower turnaround times. Ideally, water monitoring systems would use testing facilities adjacent to water supply or treatment sites, essentially eliminating delays associated with sample transport.
