Most of us, whether we like it or not, already have some rough idea of the likely size and shape of our lives—how long we will live, in what state of health, and what we will eventually die of. We live in an age in which life expectancy patterns for populations and subgroups of populations are known and predictable. We know the frequencies of the principal causes of death and the ages at which they are most likely to prove fatal. We know accident rates, risk ratios, and predictive factors for high-risk behaviors. Most important, we know that the ages and causes of death of our own parents are among the best predictors of our own mortality. Of course, we cannot for the most part tell if we as individuals will actually contract a specific disease, or become the victim of an accident, or succumb to some other cause of death. Furthermore, we often deceive ourselves about the ways in which our own health maintenance habits (or lack thereof) influence our expectable lifespan. Nevertheless, in general, in an extremely rough, often inchoate, not fully recognized way, we have a sense of what to expect about our own deaths and the periods of morbidity that may precede them: what is likely to happen to us, more or less, and about when, at what age, and for what reasons it will occur. If our ancestors all lived into their 90s and died of "old age," that is, of conditions that occur primarily at very advanced ages, we have a pretty good chance of doing so too; if they died of heart attacks in their 50s or cancer in their 60s, our anxieties mount when we reach these ages.
Many factors are likely to contribute to a change in this picture: improved actuarial computation techniques, better recordkeeping of mortality and cause-of-death statistics, better inter- and intra-population comparison data, better prediction of the emergence of new pathogens like viruses and flu strains, and so on. Most of them are likely to make the picture sharper and clearer. But the biggest factor in changing this picture will be the increased possibilities of genetic analysis, coupled with clinical and epidemiological data that make possible both population-wide and individual prognostication. As it becomes increasingly informative to trace an individual's genetic legacy and thus to identify inherited disorders and diseases, physiological characteristics, and disease susceptibilities, it will become increasingly possible to make more and more accurate predictions about how long a person is likely to live, in what condition and with what degree of function, and when and how that individual is likely to die. Of course, as the possibility of prediction increases so will the possibility of treatment, but people will still eventually die—in more predictable, foreseeable ways.
This is the matter I want to explore. While I think human awareness of eventual illness and death will involve gradual change, barely perceptible to most individuals, I also think we must recognize that this is a process of change already well underway. This change will constitute a substantial transition from the present, and a huge departure from the past. What lies at the center of this change, I want to show, is the increasingly informative possibility of genetic prognostication (something of which we are already partly aware) about the size—the length, health characteristics, and cause of demise—of an individual's life.
I'm not the only one who wants to explore this possibility; Hollywood does too. Precisely the issue I want to explore has been raised alarmingly—though, I think, trivialized—in the science-fiction film GATTACA, released in 1997. I take the box-office success of this film as a symptom that the general public may be sensitive to this issue too, at least at a superficial level, and I even imagine that this sensitivity may itself be a sign of the changes I want to describe.