“Smuggler, embezzler, art forger, scientist.” Before the recent controversy over scientific fraud, that list might have been used on an SAT: “The first three deal in deception, the fourth deals in truth.” Today, however, science’s cultural image is not so unambiguously positive: scientists no longer seem immune from the moral lapses that can afflict people in other occupations. Fabrication, falsification, and plagiarism are by no means rampant within scientific laboratories, but sufficient ethical problems have surfaced—many in places assumed to have higher standards, such as prestigious journals—to raise alarm. And although academics in the humanities and social sciences might prefer to dismiss this as a dilemma only for science departments, the federal regulations and political suspicion resulting from shaken public trust affect all academic researchers, not just those in the physical and natural sciences.
Scientific “truth” matters for reasons other than morality (even though that should suffice). First, we pay for it. As U.S. Representative John Dingell recently admonished readers of the New England Journal of Medicine: “The foundation of public support for science . . . is trust . . . that scientists and research institutions are engaged in the dispassionate search for truth.” We also care because scientific data and evidence—and the secondary and tertiary conclusions based on them—influence so many personal and political decisions: individual choices about diet or health care, community choices about power generation or water treatment, government standards for auto emissions, judicial convictions that rely on scientific analysis of blood stains. Modern society routinely accepts a preponderance of information as “scientific knowledge” and does so more out of habit than rational evaluation of each new fact. Experience counts. Science’s record for accuracy has simply given little cause for concern—until recently.
An intelligent consumer of scientific information, like the intelligent consumer of other goods and services, would not, of course, necessarily assume that all such information is free of error or falsehood. Deception is not happiness, as Riccardo Nobili reminds us in The Gentle Art of Faking (1922): “to be well deceived means to be living in a fool’s paradise, a most costly dwelling that promotes no eternal joy.” Nevertheless, science’s reputation tends to deflate skepticism, to insure positive acceptance of all that scientists do or say.
Even after following the issue of scientific fraud for over a decade, I still find it incredible that a scientist could not only accept public money to conduct research, fail to do it, and then coolly fabricate data and nonexistent “experiments” for journal articles, but also allow other professionals to base treatment of patients on the conclusions in those articles. And yet in the case of Stephen E. Breuning, who in 1988 pled guilty to federal charges of having falsified research project reports to the National Institute of Mental Health (NIMH), that is exactly what happened. Breuning’s “data” indicated that psychotropic drugs were often overused—and that stimulant drugs were more effective—in treating hyperactive children. His conclusions were widely reported and accepted: from 1980 to 1983 (when he was first accused of misconduct), his published papers represented at least one-third of all scientific articles on the topic. From 1981 to 1985, his work had a “meaningful” impact in his field, as measured by citations from other researchers.
Although there is no evidence that any patient was harmed by these misrepresentations, the terrible potential became obvious to all. The case represented an important milestone in the political controversy in the United States because it shattered the argument that fraud only involved unimportant or uninfluential work, that deliberately falsified data would never penetrate the acceptable mainstream, that fakery was so inconsequential it could be safely ignored, and that the system was “self-cleansing.”
Fraud and deception among society’s heroes draw attention to contradictions and inconsistencies in its value systems. Because American culture applauds entrepreneurship, independence, and ambition, for example, scientists have been encouraged to develop independent imaginations and innovative research, to engage in intense competition, to strive for success. Ironically, Americans also want their white-coated heroes to be humble and generous in success, to share credit where credit is due, not to steal credit falsely. The discovery that a scientist has calmly and rationally cheated, lied, and deceived his colleagues and the public contradicts the common image of how scientists should act. It also creates doubt about the reliability of scientific advice—a disturbing uncertainty in a world where that advice is so pervasive.
Perhaps understandably, many scientists reject such candid analysis. They blame not the perpetrators of fraud but society for having insufficient faith in scientists, implying that better science education would improve understanding. Or they blame the analysts, characterizing sociologists, historians, and political scientists who study the topic as “anti-science,” as ignorant of science (on the theory that only kings or queens should write about the monarchy), or as hurting science’s public reputation by publicizing “a few bad apples.”
Scientists are, by and large, intelligent men and women—so why such hostility? Perhaps because the existence of unethical conduct among peers with similar backgrounds contradicts scientists’ self-images. The physicist and the chemist see themselves defending objectivity in an irrational world, struggling to refute pseudoscience in a world swimming in superstition. Society reinforces those images by assigning scientists the role of seekers, determiners, and guardians of truth.
At first, such attacks on criticism and commentary were confined to the perceived “negativism” of analysts or science journalists (William Broad and Nicholas Wade, who wrote the first popular book on scientific fraud, drew extraordinary fire). But when congressional committees that oversee management of federally funded research began to raise questions (as is their responsibility), a few prominent scientists challenged Congress’s authority to investigate research fraud, even when it occurred in federally funded projects. That resistance to scrutiny, as well as scientists’ failure to coordinate among themselves any plan for improving research integrity, left the field open for congressional action.
In a political setting, attention concentrates appropriately not on who trusts but on who is trusted. “Accountability” signifies the responsibility of a government official (or, in the case of government-subsidized research, the person or institution receiving a grant or contract) to prove himself worthy of the public’s trust, to account for financial expenditures or equipment use, and to follow relevant government policies. The concept of political accountability is, in fact, crucial to understanding why “scientific fraud” has become more important in the United States as a political, not a moral, issue. The post-World War II organization of research initially attempted to shift managerial power over science away from Washington, to allow individual researchers and institutions autonomy in conducting research. In return, scientists promised unimpeachable accountability.
Vannevar Bush’s Science—The Endless Frontier (1945), which outlined the postwar plan, named “publicly and privately supported colleges and universities and the endowed research institutes” as the best home for basic research, providing an environment “most conducive to the creation of new scientific knowledge and least under pressure for immediate, tangible results” and thereby insulating science from the presumed taint of commercial gain. Despite the attraction of reliable, ample research support, some scientists were uneasy about the strings potentially attached to government funding. In 1945, Frank Jewett (then president of the National Academy of Sciences and formerly president of Bell Laboratories) warned that “every direct or indirect subvention by Government is not only coupled inevitably with bureaucratic types of control, but likewise with political control and with the urge to create pressure groups seeking to advance special interests.” The rhetoric of the Bush report attempted to reassure Jewett and his supporters that university administrators and federal grant agencies could buffer bureaucratic controls, insuring the “scientific worker . . . a substantial degree of personal and intellectual freedom.” To further assure sensitivity to science’s special needs, scientists would oversee management of research laboratories and scientists would head the government science agencies and advise on policymaking. This scheme was akin to establishing a department to fund public housing but requiring that it be headed by a housing contractor and that departmental policy be controlled by a board composed exclusively of other contractors. Scientists, of course, claimed to be different, to be free of conflict of interest or self-interest when they contracted for knowledge-production. They promised intellectual integrity, comprehensive expert review at all stages of research (the “peer review” system for proposals and publications), and political accountability.
In retrospect, this plan seems inherently unstable; certainly it had few parallels in contemporary American government. Yet throughout the 1960’s and 70’s, the research system was healthy and productive, the science agencies monitored performance thoroughly and conscientiously, and no one questioned the overall trustworthiness of grant recipients. Political controversy surrounded the use of human subjects and animals in experiments, the safety of genetic manipulation, and the disposal of hazardous byproducts of research, but fabrication, falsification, and plagiarism were considered to be matters of ethics not policy, moral rather than political issues.
In the 1980’s, criticism of university management practices, especially indirect cost-accounting practices and the hiring of public relations firms to acquire “pork barrel” grants without peer review, helped to create a less favorable political climate for academic science. Soon, congressional committees charged with overseeing R&D began to investigate why the management and evaluation systems created to monitor research and research communication had failed to detect or prevent several outrageous examples of misconduct.
The first significant legislative attention to scientific fraud took place in 1981, in hearings of the House Committee on Science and Technology’s Subcommittee on Oversight and Investigations, chaired by Albert Gore, Jr. (then U.S. Representative from Tennessee). Gore opened the hearing with a statement that encapsulated Congress’s interest: “At the base of our investment in research lies the trust of the American people and the integrity of the scientific enterprise.” The senior officials at the National Institutes of Health and the other prominent scientists who testified seemed unresponsive to such reminders. Their position was that no “real” scientist could ever commit fraud (and hence those who committed fraud were never real scientists) and that, because so few cases of fraud had been found, no elaborate management or training programs were needed to prevent it.
When congressmen heard testimony that NIH was not only funding scientists who had been accused of wrongdoing but that its policy also did not prohibit funding after admission of guilt, they expressed outrage. These were not, however, the usual ritual statements of Capitol Hill: they typified instead a growing, genuine sense of political betrayal, a violation of a trust relationship of the utmost seriousness. As delays in the rule-making process within NIH continued through the 1980’s, and few universities moved forward with developing procedures to investigate allegations or promote integrity, members of Congress continually warned that, if scientists and universities did not soon demonstrate a commitment to change, then tough and unpleasant federal regulations would follow.
In addition to the Breuning case, several other extensive, well-publicized episodes of scientific fraud helped to shape the content and direction of political attention. The activities of John Darsee, for example, which had been discussed in the Gore hearings, became front-page news in 1981. This case drew special attention because of the research topic and source of funding (Darsee was participating in an important, multi-institutional cardiology study funded by the NIH) and the prestige of Darsee’s mentors and affiliations (training at Emory University, post-doctoral appointment at Brigham and Women’s Hospital in Boston, and assistant professorship at Harvard Medical School). Sterling credentials did not provide a shield against corruption. Coworkers at one point reportedly watched in disbelief as Darsee falsified data from a heart monitor attached to a dog, attempting to make it appear that the information had been collected over several days, not hours.
The Darsee case also drew attention to the growing problem of “irresponsible coauthorship.” Some of his colleagues had willingly accepted credit as “coauthors” on articles they had neither written nor read and then blithely disavowed responsibility when the articles were challenged. Many coauthors discovered that Darsee had done them no favor—their resumes were longer but their names were attached to tainted manuscripts. For some of Breuning’s coauthors, professional problems dragged on for months. Over 50 articles remained in dispute while Breuning refused to discuss the matter publicly and the journals refused to retract them unless all coauthors cooperated. In a case against cardiac radiologist Robert Slutsky, a University of California-San Diego investigation determined that of 137 manuscripts he had written between 1978 and 1985, 77 were valid, 48 “questionable,” and 12 “fraudulent.” Slutsky, too, had “favored” junior colleagues with coauthorship (in part to disguise his improbable rate of production), but was less cooperative in assisting retraction.
The attention that these cases brought to the issue of coauthor responsibility did result in some positive change. Certainly, there is increased awareness among young coauthors and some reassessment of publication policies within laboratories. The greatest change has occurred with the journals, many of which now require that submissions be accompanied by a letter attesting that all coauthors have read and approved the manuscript.
Not all issues raised by scientific fraud have been settled so easily. Retractions, for example, help to preserve a journal’s reputation for accuracy, but the concept of “retracting” errors also relates to maintenance of a field’s intellectual integrity. Scholars see forgeries as “corrupting” the body of knowledge defining a field’s substance; they can create misunderstandings, lend support to false theories, or route causative explanations down the wrong path. Unfortunately, fake scientific knowledge is not like fake art—we cannot simply tear an offending page from a journal as we remove forged paintings from museum walls. “Retraction” is actually just a notice (usually published in a later issue of the same journal) that the previous article should be disregarded. Handling retractions efficiently and fairly continues to be a significant problem for scientific publishing. With electronic data bases, retraction notices can be tagged or linked to the original article. But what about subsequent references to the work? Or cross references? And what about data in limbo—questioned but not yet proven false?
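To make the linking problem concrete, here is a deliberately simplified sketch, in Python, of how an electronic index might attach a retraction notice to an original record and flag the records that cite it. Everything in it (the Record and Index classes, the “questioned” status, the retract method) is hypothetical, invented for illustration rather than drawn from any actual bibliographic database.

```python
# A hypothetical sketch, not any real database's schema: one way a
# retraction notice might be linked to the original article, with a
# warning flag propagated to records that cite the retracted work.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Record:
    doi: str
    title: str
    cites: List[str] = field(default_factory=list)  # DOIs this record cites
    status: str = "valid"             # "valid", "questioned", or "retracted"
    notice: Optional[str] = None      # DOI of the linked retraction notice


class Index:
    """A toy bibliographic index keyed by DOI."""

    def __init__(self):
        self.records = {}

    def add(self, record):
        self.records[record.doi] = record

    def retract(self, doi, notice_doi):
        # Tag the original article with its retraction notice.
        target = self.records[doi]
        target.status = "retracted"
        target.notice = notice_doi
        # Flag every record that cites the retracted work; what should
        # actually happen to those citations is left open, as in the text.
        for rec in self.records.values():
            if doi in rec.cites and rec.status == "valid":
                rec.status = "questioned"
```

Even in this toy form the point of the passage survives: tagging the original record is the easy part; deciding what a “questioned” flag should mean for every subsequent reference, cross reference, and piece of data in limbo is not.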
Even a sympathetic observer of the political negotiations surrounding scientific fraud might conclude that none of the parties to the debate—universities, government officials, accused scientists and their lawyers, whistleblowers and their lawyers, congressional oversight committees, and people claiming to speak for the scientific community—has much interest in resolving it swiftly or simply. At no time has everyone agreed on a common goal or acceptable outcome to the debate. Some of the hearings presided over by John Dingell resemble Monty Python sketches—silly scientists, overly serious whistleblowers, hardhearted legislators—and might even be funny were the stakes not so high: millions in grants, tenuous careers, and Nobel reputations. Some of this hype and hysteria appears to have receded in the past year, but it would be difficult to decide whether this is due to journalistic boredom, political hiatus, a temporary truce, or a genuine shift to reasonableness on all sides.
In response to especially nasty fights over NIH investigating procedures (and dissatisfaction with the length and style of investigations), Congress recently changed the NIH Office of Scientific Integrity into the Office of Research Integrity (ORI), an independent entity reporting to the Secretary of Health and Human Services (thereby moving control of ethics investigations from NIH to its parent agency). The same legislation requires any entity (university or private laboratory) conducting biomedical or behavioral research for NIH under grant, contract, or cooperative agreement to develop procedures for investigating allegations of misconduct, to cooperate with ORI investigations, and to protect whistleblowers who make allegations in good faith. Fewer fireworks have surrounded National Science Foundation (NSF) efforts to address this issue. Perhaps this is because most high-profile cases have been in the biomedical sciences. But NSF also moved more adroitly in establishing its own internal policies and office for investigating allegations, in developing instructions for its grantees, and in promulgating a definition of misconduct.
Exhaustive discussion, especially at the university level, has centered on how to define “misconduct.” NSF, for example, prohibits “fabrication, falsification, plagiarism, or other serious deviation from accepted practices in proposing, carrying out, or reporting results from activities funded by NSF [or] retaliation of any kind against a person who reported or provided information about suspected or alleged misconduct and who has not acted in bad faith.” The definition applicable to HHS-funded research is being revised but (with the exception of the “whistleblower” protection clause) contains language similar to the NSF definition.
To label any conduct as violating “accepted practice” invites problems, of course, because it leaves open the question of who determines “acceptability”—which field, institution, group, or individual? What about interdisciplinary work—whose standards should apply then? The standards of all fields? Or only those of the field in which the investigator was trained? The answer will dictate who is involved in the investigation, will influence the type of evidence or witnesses sought, and will affect the comprehensiveness of investigation and fairness of outcome.
Working definitions of ethical practice also tend to change with time. As H. M. Paull observed in Literary Ethics (1928), “It is commonplace in ethics that practices once deemed innocent come gradually to be regarded as crimes as civilization advances . . . the standard of morality changes with the ages.” The assumptions and inferences one may appropriately draw from statistical data have continually changed during this century as measurement techniques have grown more precise. The mutability of scientific standards creates a dilemma when one must establish a standard in law. Precise definition would avoid undesirable subjectivity in investigation and adjudication but could fail to be sufficiently tough when science changes rapidly.
Many of the problems related to scientific communication—who should be listed as a coauthor, how much attention should be given to negative results—arise because standards for such behavior have always been implicit, unwritten. Electronic communication in science will pose additional problems, forcing journals to delineate more crisply the boundaries between responsible and irresponsible authorship, between ownership and theft of ideas. Policies and standards for every part of scientific publishing, from its managerial structure to its economics, rooted in print-era attitudes and relationships, will have to be reexamined.
Concepts like “truth” and “trust”—as scholars in every field can attest—reflect perception (or the interpretation of perception) as much as reality. Your “truth” may be my lie—and vice versa. For the public image of science in the United States, the perception has become reality. Scientific fraud represents a potential public relations disaster, especially when abuse of government funding appears to nonscientists like a blatant violation of political trust.
The irony of this controversy is that so little was necessary to avoid it. Those who faked and falsified did not need to do it—they were capable of conducting honest research and were generally already establishing successful careers. The falsifications or fabrications gained them relatively little in the short run and eventually cost them—and the rest of science—a great deal. Scientists also had ample warning of the burgeoning political distress and time to implement codes of appropriate conduct for laboratories, associations, institutions; to change a climate that applauded “science at any cost” and rewarded ambition and accomplishment rather than generosity and honesty; and to discuss ethical issues with graduate students.
The editors of the Journal of the American Medical Association, when announcing in 1989 a new policy requiring coauthors to validate participation and responsibility, noted that the “small additional bother . . . is designed to protect all of us from the shadow that has fallen over the scientific and medical communities.” This tentative call for looking beyond one’s nose (or resume) to the interest of all researchers and all society has been echoed even more forcefully by the new editor of the New England Journal of Medicine, who warns the “fame-and-fortune viper” who “creates data where there are none” that only trouble and disgrace will follow such deception. Until it becomes “fashionable” to care more about integrity and honesty in science than money and public image, however, research communities in all fields will not be free from the shadow of mistrust.