Legal And Ethical Challenges

We discuss laws and regulations that promote data exchange in Part II of this book. In this section, we discuss the HIPAA Privacy Rule. Although the Privacy Rule was crafted to allow exchanges of data for purposes of public health surveillance (it explicitly exempts such exchanges from its scope), the rule often comes up during negotiations with holders of protected health information (PHI), such as insurers, pharmacists, and healthcare systems. We also discuss the field of ethics, with principles related to privacy, confidentiality, and the tradeoff between public and individual interests that underlie many discussions of newer types of data.

3.1. HIPAA Privacy Rule

The HIPAA Privacy Rule, issued by the Secretary of the Department of Health and Human Services (DHHS) and effective in 2003, specifies regulations for protecting the privacy of PHI. Health plans, healthcare clearinghouses, and healthcare providers ("covered entities") that electronically transmit PHI in connection with certain transactions must follow these rules (CDC, 2003).

The Privacy Rule permits covered entities to disclose PHI, without individual authorization, to a public health authority that is legally authorized to collect or receive the information for the purpose of preventing or controlling disease, injury, or disability.

It also states, "De-identified data (e.g., aggregate statistical data or data stripped of individual identifiers) require no individual privacy protections and are not covered by the Privacy Rule." The Privacy Rule specifically indicates,

De-identification can be conducted through statistical de-identification, in which a properly qualified statistician, using accepted analytic techniques, concludes that the risk is substantially limited that the information might be used, alone or in combination with other reasonably available information, to identify the subject of the information; or through the safe-harbor method, in which a covered entity or its business associate de-identifies information by removing 18 identifiers and the covered entity does not have actual knowledge that the remaining information can be used, alone or in combination with other data, to identify the subject (45 CFR 164.514[b]) (Table 34.1).
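As a concrete illustration of the safe-harbor method, the sketch below shows how one might strip or transform fields from a patient record in software. It is a minimal sketch only, not a compliance tool: the field names (name, zip, birth_date, age, and so on) are hypothetical, and only a representative subset of the 18 identifiers listed in Table 34.1 is handled.

```python
# Minimal sketch of safe-harbor de-identification (hypothetical schema);
# a real implementation must cover all 18 identifiers in Table 34.1.
from datetime import date

# Three-digit zip prefixes (from Table 34.1) whose geographic units contain
# 20,000 or fewer people; these must be recorded as "000".
RESTRICTED_ZIP3 = {
    "036", "059", "063", "102", "203", "556", "592", "790", "821",
    "823", "830", "831", "878", "879", "884", "890", "893",
}

# Hypothetical field names for direct identifiers that are simply dropped.
DROPPED_FIELDS = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "medical_record_number", "health_plan_id", "account_number",
    "license_number", "vehicle_id", "device_id", "url", "ip_address",
}


def safe_harbor_deidentify(record: dict) -> dict:
    """Return a copy of a record with safe-harbor identifiers removed."""
    out = {k: v for k, v in record.items() if k not in DROPPED_FIELDS}

    # Zip codes: keep only the first three digits; restricted prefixes
    # become "000".
    if "zip" in out:
        zip3 = str(out["zip"])[:3]
        out["zip"] = "000" if zip3 in RESTRICTED_ZIP3 else zip3

    # Dates directly related to the individual: retain only the year.
    if isinstance(out.get("birth_date"), date):
        out["birth_year"] = out.pop("birth_date").year

    # Ages over 89 are aggregated into "90 or older", and year elements
    # indicative of such an age are removed as well.
    if isinstance(out.get("age"), int) and out["age"] > 89:
        out["age"] = "90+"
        out.pop("birth_year", None)

    return out
```

For example, a record with zip code 03690 and age 93 would come back with zip "000" and age "90+".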

When only the direct identifiers are removed and dates and certain geographic information are retained, the result is called a limited data set (Table 34.2). HIPAA also requires that the covered entity execute a data use agreement (DUA) with the organization or individual to whom it releases the limited data set. The DUA must indicate who is permitted to use or receive the limited data set and must provide that the recipient will (1) not use or disclose the information other than as permitted by the agreement or as otherwise required by law; (2) use appropriate safeguards to prevent uses or disclosures of the information that are inconsistent with the DUA; (3) report to the covered entity any use or disclosure of the information in violation of the agreement of which it becomes aware; (4) ensure that any agents to whom it provides the limited data set agree to the same restrictions and conditions that apply to the recipient with respect to such information; and (5) not attempt to re-identify the information or contact the individuals.
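By contrast with full de-identification, a limited data set removes only the direct identifiers listed in Table 34.2 while retaining dates and most geographic detail; the DUA obligations above are contractual and cannot be enforced in code. The following sketch, which again assumes hypothetical field names and builds on the example above, shows the difference.

```python
# Minimal sketch of a HIPAA limited data set extraction (hypothetical schema).
# Direct identifiers to remove, per Table 34.2.
LIMITED_SET_DROPPED = {
    "name", "ssn", "street_address", "email", "phone", "fax",
    "license_number", "vehicle_id", "photo", "medical_record_number",
    "health_plan_id", "account_number", "device_id", "biometric_id",
}


def to_limited_data_set(record: dict) -> dict:
    """Strip direct identifiers but keep dates and geographic fields."""
    # Unlike safe_harbor_deidentify above, fields such as "birth_date",
    # "admission_date", and the five-digit "zip" are retained here.
    return {k: v for k, v in record.items() if k not in LIMITED_SET_DROPPED}
```

Whoever receives the output of such a filter must still operate under the terms of the DUA described above.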

3.2. Ethics

Morality precedes the law. The reason HIPAA makes violations of confidentiality illegal, for instance, is that it is wrong to violate confidentiality, not the other way around. It would be wrong to violate confidentiality even if there were no HIPAA, no state statutes, and no litigators. Ethics, a branch of philosophy, is the study of morality.

Table 34.1 Eighteen Identifiers to Be Removed for De-Identified Data

To de-identify using the safe-harbor method, the following identifiers of the individual, or of relatives, employers, or household members of the individual, are removed:

1. Names.

2. All geographic subdivisions smaller than a state, including street address, city, county, precinct, zip code*, and their equivalent geocodes, except for the initial three digits of a zip code if, according to the current publicly available data from the Bureau of the Census, the following apply:

A. The geographic unit formed by combining all zip codes with the same three initial digits contains more than 20,000 people; and

B. The initial three digits of a zip code for all such geographic units containing 20,000 or fewer people is changed to "000."

Note: Currently, 036, 059, 063, 102, 203, 556, 592, 790, 821, 823, 830, 831, 878, 879, 884, 890, and 893 are all recorded as "000."

3. All elements of dates (except year) for dates directly related to an individual, including birth date, admission date, discharge date, date of death; and all ages over 89 and all elements of dates (including year) indicative of such age, except that such ages and elements may be aggregated into a single category of age 90 or older.

4. Telephone numbers.

5. Fax numbers.

6. Electronic mail addresses.

7. Social security numbers.

8. Medical record numbers.

9. Health plan beneficiary numbers.

10. Account numbers.

11. Certificate/license numbers.

12. Vehicle identifiers and serial numbers, including license plate numbers.

13. Device identifiers and serial numbers.

14. Web Universal Resource Locators (URLs).

15. Internet Protocol (IP) address numbers.

16. Biometric identifiers, including finger and voice prints.

17. Full-face photographic images and any comparable images.

18. Any other unique identifying number, characteristic, or code, except that a covered entity may, under certain circumstances, assign a code or other means of record identification that allows de-identified information to be re-identified.

*The first three digits of a zip code are excluded from the protected health information (PHI) list if the geographic unit formed by combining all zip codes with the same first three digits contains more than 20,000 persons.

Table 34.2 HIPAA-Allowable Limited Data Set

A limited data set must have all direct identifiers removed, including the following:

• Name and social security number

• Street address, e-mail address, telephone and fax numbers

• Certificate/license numbers

• Vehicle identifiers and serial numbers

• Full-face photos and any other comparable images

• Medical record numbers, health plan beneficiary numbers, and other account numbers

• Device identifiers and serial numbers

• Biometric identifiers, including finger and voiceprints

• Web URLs and IP address numbers

A limited data set could include the following (potentially identifying) information:

• Admission, discharge, and service dates

• Dates of birth and, if applicable, death

• Five-digit zip code or any other geographic subdivision—such as state, county, city, or precinct—and its equivalent geocodes (except street address)

*URL indicates Web Universal Resource Locators; IP, Internet Protocol.


It is by doing ethics, therefore, that we might discover that "violating confidentiality" is not always wrong. There might be reasons, at least on occasion, to place greater emphasis on other values. Public health, life and death, and national security might be examples of such values. This is not to say that public health or national security (terms that are rather vague if left at that) should always trump confidentiality; it is only to say that there will be circumstances in which morality permits the elevation of public health and other values over confidentiality. Applied ethics is the discipline that sorts these things out.

Biosurveillance systems pose many difficult and interesting ethical issues. Fortunately, we have some precedents for examining these issues, at least to the extent that they arise when computers are used in ordinary health care (Goodman, 1998), emergency response (Goodman, 2003), and data mining (Goodman, 2006). Here, we briefly survey, but do not resolve, a few of the ethical issues that arise in the use of biosurveillance systems. Any attempt at a comprehensive organizational integration for biosurveillance must attend at the outset to ethical issues raised by such systems.

3.2.1. Privacy and Confidentiality

It has long been known that privacy (generally, the right to be free of various intrusions) and confidentiality (generally, the right to be free of inappropriate acquisition of information about an individual) must be balanced against other rights or expectations. It would be perverse for a patient seeking medical care to suggest, for example, that her physician should not have access to her medical record because that would violate confidentiality. As a practical matter, personal health information, whether stored on stone tablets or in electronic databases, poses the following challenge: How can appropriate access be made easy, and inappropriate access made difficult or impossible?

To be effective, a biosurveillance system must acquire and analyze vast amounts of personal information. This information might be trivial, or it might be deeply personal. In other settings, one might be able to opt out of such a system. One can direct that personal information not be disclosed or shared in contexts ranging from banking and financial services to health care and education, and the law permits, or in some cases even requires, that such an opt-out opportunity be available. In the case of biosurveillance, however, opt-outs will tend to degrade or weaken the system; if there are too many of them, the system is more likely to fail.

There are several ways a community might balance demands for privacy against the benefits reckoned to accrue from biosurveillance. They include various levels of anonymization of the data collected; strict controls on access to the data at any level of anonymization; and, most generally, buy-in from the community to be protected by the biosurveillance system in the first place. To the extent that citizens in a free society view a biosurveillance system as an extension of familiar, and trusted, public health surveillance, that system will be more acceptable. If, however, they see a biosurveillance system as intrusive and its operators or applications as untrustworthy, the system is more likely to be disdained.

3.2.2. Appropriate Uses and Users

The importance of assessing and determining appropriate use and appropriate users of clinical information systems has been apparent for decades (Miller et al., 1985). Determining appropriateness is not always a straightforward matter. Any such determination raises several questions: (1) who gets to make the determination, (2) what values the determination should embody, and (3) what is to be done in cases of inappropriate use or inappropriate users.

One can answer the question of who is responsible for vetting appropriateness by appealing to traditions of trust. Ideally, citizens in a democracy would, at least by tacit agreement, determine who counts as an appropriate user of a biosurveillance system. Public health authorities (and not, say, law enforcement agencies or manufacturers of over-the-counter drugs) might then come to be regarded as one class of appropriate users. This arrangement largely governs current public health and epidemiologic data collection. A distinction must be drawn between public health and law enforcement, or between health services and corporate pharmacies, to make clear that not all users have the same privileges. (Much of the debate over the U.S. Patriot Act concerns the extent to which law enforcement entities should have access to information that had previously been either private or available only to entities other than the police.)

The values that underlie determinations of appropriate use and user must be explicit and public. Because we value the lives of the many over the rights of a few, we countenance, for instance, quarantines, vaccination programs, and the like. In the context of biosurveillance, we similarly should be able to make explicit that we value the lives to be saved from disease outbreaks or bioterrorists at least as highly as we value being able to buy an inhaler at the local pharmacy without being eavesdropped on. We encourage caution, however: it will almost always be possible to say that we are putting life over privacy, over the right to move about freely, over liberty. Only when those purported to be saving the lives are trusted, and only when the uses to which they put biosurveillance systems are reasonable and controlled, will the incorporation of values be credible.

By "control" we mean a system or mechanism for forbidding or punishing an inappropriate use or an inappropriate user. This will require a mechanism for oversight. "Inappropriate" is a vague term, and its scope and limits should perhaps be negotiated in advance as part of any DUA and be subjected to ongoing review. Procedures for evaluating the propriety of a particular decision in a specific case must be—and be seen to be—loyal to guiding values and impartial, as well as flexible enough to accommodate judgment calls in cases in which reasonable people might disagree.

What should be clear about determining appropriate users and enforcing appropriate uses is that these processes cannot be undertaken in isolation from the community we earlier insisted had to trust those operating or managing a biosurveillance system. The public health model has served us well, with everything from vital statistics to HIV surveillance to diet education programs. That biosurveillance systems enjoy such precedents is a happy development for a community that seeks simultaneously to prepare for the worst while enjoying the best of what open societies have to offer.

3.2.3. Risk Communication

An ethically balanced and trusted biosurveillance system is only the beginning. Such trust is, for better or worse, open to revision if not revocation. It is not enough to balance privacy and safety, and it is not enough to ensure appropriate uses and users. There is the subsequent and awesome task of making sure the system does what it is intended to do. Put differently, it will not do to develop an ethically optimized system that generates false alarms. In addition, it will be a disappointment (at the least) if our biosurveillance system fails to inform the kind of early warning system it is hoped or designed to support. Negotiating policies and procedures for disclosure of a public health risk is arguably one of the greatest challenges in crafting DUAs.

Indeed, the challenges of risk communication are among the most interesting and difficult for any form of public safety surveillance or tracking. This is especially true if we are to rely on or collaborate with the news media to warn biosurveillance stakeholders of risks (Friedman and Dunwoody, 1999). One might profitably compare the risk communication challenges faced by tropical storm forecasters to those faced by biosurveillance system operators. Indeed, many physicians and nurses find that they must "titrate" their duty to warn of risks with any number of probabilistic disclaimers.

If one sounds the alarm early and the risk does not materialize, one erodes credibility for future alarms. If one sounds the alarm too late and harm is done, the system has failed at its prime purpose. Yet it is exquisitely difficult to get it just right. Data providers are likely to have duties of loyalty to stakeholders with different needs and expectations. A water department, for instance, serves a community in different ways than does a pharmacy. Yet the collection and mining of data from these and other entities will require information disclosure protocols that take these disparate duties into account and harmonize them, or make them congruent, for the sake of public health and safety.

A number of approaches are available to address this challenge, and all of them should be evaluated. They include negotiating the creation of entities analogous to the data and safety monitoring boards (DSMBs) that review and monitor clinical trials. These boards have the duty to interpret data and help decide whether investigators should tell human subjects about new risks or dangers that might have emerged after the experiment began; in extreme cases, DSMBs have the responsibility to decide whether to stop a study altogether. (Stopping a trial early means the data collected so far will be useless, or less useful than they would have been had the study been completed. However, completing the study might come at the expense of exposing subjects to unacceptable risks.) Another potentially useful entity would resemble a clinical or hospital ethics committee: a group of health professionals and lay or community members who provide nonbinding advice to clinicians and others. A biosurveillance ethics committee should be able to take probabilistic data and weigh their disclosure against the hazards posed by the Scylla of too-early warning and the Charybdis of tardiness. We might even hypothesize that a combination DSMB/ethics committee, serving as a risk communication special weapons and tactics (SWAT) team, would be useful.

To be sure, the creation and operation of any such entity will also be of use in the negotiation of DUAs and other legal/contractual documents governing data sharing. The clinical ethics committee model generally includes the capacity to offer education, to write and revise policy, and to provide a consultation service. One should therefore consider requiring that the process of organizational integration include an ethics component to carry out these three functions.

What is called for throughout, if ethics is to be included in the development and use of biosurveillance systems, is that this public service be informed by a public process, that core values be identified and incorporated into policies that guide and govern operations, and that the trust earned by public servants in democracies be protected and fostered. There will be a great deal to learn along the way, but this is often the case with new enterprises with very high stakes.
