all sides.

In response to especially nasty fights over NIH investigating procedures (and dissatisfaction with the length and style of investigations), Congress recently changed the NIH Office of Scientific Integrity into the Office of Research Integrity (ORI), an independent entity reporting to the Secretary of Health and Human Services (thereby moving control of ethics investigations from NIH to its parent agency). The same legislation requires any entity (university or private laboratory) conducting biomedical or behavioral research for NIH under grant, contract, or cooperative agreement to develop procedures for investigating allegations of misconduct, to cooperate with ORI investigations, and to protect whistleblowers who make allegations in good faith. Fewer fireworks have surrounded National Science Foundation (NSF) efforts to address this issue. Perhaps this is because most high-profile cases have been in the biomedical sciences. But NSF also moved more adroitly in establishing its own internal policies and office for investigating allegations, in developing instructions for its grantees, and in promulgating a definition of misconduct.

Exhaustive discussion, especially at the university level, has centered on how to define "misconduct." NSF, for example, prohibits "fabrication, falsification, plagiarism, or other serious deviation from accepted practices in proposing, carrying out, or reporting results from activities funded by NSF [or] retaliation of any kind against a person who reported or provided information about suspected or alleged misconduct and who has not acted in bad faith." The definition applicable to HHS-funded research is being revised but (with the exception of the "whistleblower" protection clause) contains language similar to the NSF definition.

To label any conduct as violating "accepted practice" invites problems, of course, because it leaves open the question of who determines "acceptability"—which
field, institution, group, or individual? What about interdisciplinary work—whose standards should apply then? The standards of all fields? Or only those of the field in which the investigator was trained? The answer will dictate who is involved in the investigation, will influence the type of evidence or witnesses sought, and will affect the comprehensiveness of investigation and fairness of outcome.

Working definitions of ethical practice also tend to change with time. As H.M. Paull in Literary Ethics (1928) observed, "It is commonplace in ethics that practices once deemed innocent came gradually to be regarded as crimes as civilization advances . . . the standard of morality changes with the ages." The assumptions and inferences one may appropriately draw from statistical data have continually changed during this century as measurement techniques have grown more precise. The mutability of scientific standards creates a dilemma when one must establish a standard in law. Precise definition would avoid undesirable subjectivity in investigation and adjudication but could fail to be sufficiently tough when science changes rapidly.

Many of the problems related to scientific communication—who should be listed as a coauthor, how much attention should be given to negative results—arise because standards for such behavior have always been implicit, unwritten. Electronic communication in science will pose additional problems, forcing journals to delineate more crisply the boundaries between responsible and irresponsible authorship, between ownership and theft of ideas.
Policies and standards for every part of scientific publishing, from its managerial structure to its economics, rooted in print-era attitudes and relationships, will have to be reexamined.

Concepts like "truth" and "trust"—as scholars in every field can attest—reflect perception (or the interpretation of perception) as much as reality. Your "truth" may be my lie—and vice versa. For the public image of science in the United States, the perception has become reality. Scientific fraud represents a potential public relations disaster, especially when abuse of government funding appears to nonscientists like a blatant violation of political trust.

The irony of this controversy is that so little was necessary to avoid it. Those who faked and falsified did not need to do it—they were capable of conducting honest research and were generally already establishing successful careers. The falsifications or fabrications gained them relatively little in the short run and eventually cost them—and the rest of science—a great deal.
Scientists also had ample warning of the burgeoning political distress and time to implement codes of appropriate conduct for laboratories, associations, institutions; to change a climate that applauded "science at any cost" and rewarded ambition and accomplishment rather than generosity and honesty; and to discuss ethical issues with graduate students.

The editors of the Journal of the American Medical Association, when announcing in 1989 a new policy requiring coauthors to validate participation and responsibility, noted that the "small additional bother . . . is designed to protect all of us from the shadow that has fallen over the scientific and medical communities." This tentative call for looking beyond one's nose (or resume) to the interest of all researchers and all society has been echoed even more forcefully by the new editor of the New England Journal of Medicine, who warns the "fame-and-fortune viper" who "creates data where there are none" that only trouble and disgrace will follow such deception. Until it becomes "fashionable" to care more about integrity and honesty in science than money and public image, however, research communities in all fields will not be free from the shadow of mistrust.

24/CHRONICLES