As we try to improve our lives with a national health care plan we must not forget the “law of unintended consequences” to which Robert Merton alerted us in 1936. Two examples illustrate the danger. Few people foresaw that federal support for poor mothers with dependent children would contribute to the breakup of black families, but it did. Nor did people foresee that rent control would contribute to the trashing of cities, but it did.
With those failures in mind, what can we say about the possible unintended consequences of well-intended health care legislation? I suggest that one of the most important consequences will be this: we will be forced to reexamine the word “discrimination” and the actions that result from it. Lest I be misunderstood, let me say that the attack on race discrimination made in the 60’s was by no means a mistake. Far from it. But the condemnation of discrimination has extended far beyond that early application. A national health care system can succeed only if discrimination—appropriate, defensible discrimination—is made an integral part of it.
The need for discrimination is most apparent in the application of medicine to the far end of life. Medicine has not “conquered death,” nor does it seem either desirable or likely that it ever will. But modern medicine has certainly postponed our deaths and increased their costs. Even worse, delaying mortality with fancy medicine too often increases the suffering of the dying. A jungle of high-tech apparatus can keep life going long after the possibility of enjoying it has been lost.
The cost in money can be monumental. A few months of heroic medicine can cost more than all the medical care received during the preceding years. If being given the most expensive terminal care were made the “right” of all—and if most people availed themselves of this right—a national health care system would soon be bankrupt.
Those who reject high-tech medicine, allowing life to end naturally, are often accused of “playing God.” But how can the critics be so sure that God delights in seeing a human being suffer the indignity of having tubes stuffed up his nose while a heart stimulator and a breathing apparatus take over his most vital functions? Syringes drip medicines and nutrients into the veins; king-size diapers take care of the incontinent midsection; speech is often impossible, the “beneficiary” indicating his wants by blinking his eyes, assuming he can hear what is asked of him.
Americans rightly fear yielding power to the state, but in some matters the trend runs the other way. The law used to prevent anything that looked like suicide, even if it was no more than passively withholding medicine that would prolong a painful life. In recent years the Hemlock Society has made notable progress in getting the public to see the justice of not forcing terminally ill people to endure avoidable suffering. More and more elderly people are now signing “living wills,” which limit the medical care to be given at the end of life. Once people decide that refusing expensive and unwanted artificial medical procedures is not playing God, ever more men and women will prefer to minimize their suffering rather than maximize the length of life. Seeking their own interest, individuals in a national health care system can advance the interests of the general body politic by refusing the most heroic medicine.
But the quasi-suicide made possible by a living will is easier to accept than one person “turning off the switch” for someone else. Many people call this murder. When someone goes into a coma, the vegetative condition may continue for years. In the absence of a living will the financial cost of keeping a “human vegetable” alive may run into the millions of dollars. The emotional cost may be even greater: imagine the suffering of the parents of 15-year-old Suzanne Payette, who became comatose following a tonsillectomy in 1956. After 20 years of home care by her mother, the daughter died without regaining consciousness. Those who support turning off the switch on unconscious incurables must answer the charge that they are trying to create a “Brave New World” à la Adolf Hitler. They are warned about the “slippery slope” of ethics. The dangers are real; the criticisms must be met.
The expensive prolongation of a kind of life the individual does not want may also be contrary to the interests of the state. The key issue is this: the resources available for doing good are always limited. Money is limited, but arguments based on money often do not carry much conviction with people who are undisturbed by the increase in the national debt. Our indifference to the abstraction of money has corrupted our judgment. More importantly, the physical realities behind the abstraction—medical resources such as hospitals, physicians, and nurses—are also limited.
Faced with demands that exceed resources, what should we do? Obviously, we have to ration the resources. But how? A lottery would, by definition, be fair; but would such indiscriminate rationing satisfy us? I think not. What we want is discriminative rationing. But what criteria should we use for discrimination?
For the far end of life a rational path has already been blazed in England and Canada. Though the details vary, in both countries expensive high-risk operations such as heart and kidney transplants are denied to patients over the age of 80. Three considerations enter into this decision. First, the probability of surviving the operation is less for older people. Second, since their expectation of life has already been greatly reduced by age, the operation may essentially be wasted. Third, if the medical resources devoted to the aged were diverted to caring for the young, more total years of life would be saved. As an economist might put the matter: geriatric medicine comes at a high “opportunity cost.” That is, there are much greater opportunities for doing good by devoting the same resources to younger patients.
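The economist’s arithmetic can be made concrete with a simple expected-value sketch. The figures that follow are purely illustrative assumptions, not data from British or Canadian practice:

$$\text{expected life-years gained} = p_{\text{survive}} \times E[\text{remaining years}]$$

Suppose an 80-year-old facing a transplant has a 50 percent chance of surviving it and eight years of remaining life expectancy; the expected gain is $0.5 \times 8 = 4$ life-years. A 40-year-old with a 90 percent chance and forty remaining years stands to gain $0.9 \times 40 = 36$. On these assumed numbers the same operation, given to the younger patient, buys nine times the return in life-years; that is all the “opportunity cost” argument asserts.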
Turning our attention from the far end of life to the near end—to fetuses and newborn babies—we discover both similarities and differences. The medical specialty “neonatology” deals with problems of the newborn. As for the yet-to-be-born, it used to be that little could be done to correct abnormalities at this stage, but the competitive spirit among surgeons has produced surgery in utero, that is, operations on the unborn fetus. Thus has neonatology been augmented by antenatology, the two comprising perinatology (peri: near, around). Operations performed before birth are almost unbelievable in their delicacy; and of course the more delicate the operation, the more colossal the expense. The cost of perinatal medicine is advancing much faster than the Gross National Product. Surgeons as a group are famous for their enormous egos; it is likely that the competitive spirit drives them to see who can operate successfully on the youngest embryo (at the greatest expense, of course). Perinatology might be classified as an Olympic event, were it not for the fact that microsurgery can never be a spectator sport for the thousands. One questions the use of public funds to support such an esoteric venture.
There is one important difference between medicine at the two extremes of life: there is no chance of consulting an embryo or a very young infant to find out what its preference might be. This does not bother some physicians. Dr. C. Everett Koop, before he became Surgeon General of the United States, proudly reported his role in 22 of the 37 surgeries performed on a single baby. The ultimate effect on the subsequent life of the patient has not been reported. Enthusiasts of neonatal intervention need to be reminded that a considerable proportion of the “successes” actually end with serious functional problems in later life. Reporters are seldom around when the delayed consequences become apparent.
In the absence of the “informed consent” of infantile patients we must try to determine the economic and emotional costs imposed on parents and the community by neonatal intervention. There are cases on record of operations and postoperative care costing more than a half-million dollars. In recent years, babies born to drug-addicted mothers have introduced a new drain on the economy. The Los Angeles County Hospital reports that intensive care of drug-exposed newborns can mount to $1,768 per day. On the other coast, one such baby ran up a bill of more than a quarter-million dollars during its 247-day stay in the Howard University Hospital. One cannot but wonder how many more lives could have been improved, and even saved, by channeling the same amount of medical resources to regular checkups and immunizations for children who had far better prospects of living a normal life.
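A back-of-the-envelope multiplication, combining the two figures just cited purely for illustration, shows how quickly such bills compound:

$$\$1{,}768 \text{ per day} \times 247 \text{ days} \approx \$437{,}000$$

That is, a stay of the Howard patient’s length, billed at the Los Angeles County rate, would have cost nearly twice the quarter-million dollars actually charged on the East Coast.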
Already the state of Oregon has bitten the bullet of discrimination. In administering federal Medicaid funds the state refuses to pay for artificial insemination or in vitro fertilization (“starting life in a test-tube”). Evidently, Oregonians have been looking at opportunity costs. Medical costs vary widely from area to area, but a fair estimate would show that successful artificial insemination is not cheap and that in vitro fertilization costs some ten times as much as a normal conception and birth.
It is easy to empathize with an infertile couple who feel they must have a child at any cost, but community decisions are best made on the basis of opportunity costs to the community. If the majority of the people believe that the birthrate needs to be nudged upward, a given investment can produce more babies if funds assigned to subsidize births among infertile couples are diverted to pay for births among couples of proven fertility.
A new variation on the infertility theme has been recently introduced: creating artificial fertility among post-menopausal women. At considerable expense it is possible to implant a fertilized egg (from another woman) in the uterus of a 60-year-old woman, where it surprisingly thrives and develops normally. And one empathizes with the would-be mother. But again, there is the question of community interest. We have good evidence that extra costs (of several sorts) are imposed on the community when 13-year-old girls become mothers. Though it may be ungracious to say so, are there not reasons for expecting that 60-year-old mothers, as a class, will impose extra costs of a different sort on the community? Certainly their late-born children are more likely to lose their mothers before they are old enough to vote.
A few years ago the economist Lester Thurow estimated that each new American baby requires an investment of some $240,000 to turn it into an average citizen-worker-consumer. (Grossly abnormal babies require a great deal more investment, and the end product is likely to be less competent to run life’s race.) Considering all these facts, the mythical Man from Mars would no doubt think it odd that earthlings should view the production of children as a purely private matter, the prerogative solely of the fertile couple. In frontier days, when isolated couples took care of all the needs of their developing children, parenthood as an unqualified right made sense. But today, with every decade that passes, the larger community assumes more and more of the expenses of childrearing. An ancient maxim states that “he who pays the piper calls the tune.” Will public policy on parenthood and public health care eventually be determined by this old saw?
Last, and most difficult to deal with under a would-be universal health care system, are the middle years of life. For the wealthy few who pay their own medical bills, there would seem to be no serious problems (though medical facilities are limited no matter who pays the bills). No person, no committee can yet draw up a detailed plan for a stable system of publicly financed health care. The final solution (if there is one) is unknowable.
The costs and benefits of publicly financed medical care during the middle years depend on many factors: the age of the recipient; the probable future earnings of the particular individual; the probable costs of future medical treatments; and the plausibility of further advances in medical science. Discrimination takes place along many logical axes, and the best weighting of the various factors will not be speedily agreed upon. Controversy will continue.
We would like to foresee all of the unintended bad consequences of social innovations, but this is impossible almost by definition: if we could accurately foresee them we would take evasive action. After thinking long and hard, we will just have to do the best we can with the available knowledge.
We may have more success in predicting the good consequences of a national health care system. Beyond the aggregate gain in public health, conflict over costs should help persuade the general public that we live in a world of real limits. Such a statement would be a mere truism were it not for a steady counterpressure exerted by entrepreneurs and advertisers in our highly commercial society. During the last two centuries the reality of limits has become a radical idea. We have been urged to “fly now, pay later!” Plastic money substitutes for paper money; spending is pushed harder than thrift.
Disputes over health care may push us over the threshold into a world in which limits become pervasive psychological realities once more. When shortages become obvious, individual discrimination—electing one alternative over another—is necessary if chaos is to be avoided. Discrimination by whole classes is both wasteful and cruel when the classes are races, as we learned a generation ago. But discrimination in the light of community need and individual merit is both efficient and just. Everyone likes to say “Yes!” but every explicit Yes implies No to a host of alternatives. A national health care system will be well justified if it reinstates discrimination as a proper function of the social order.