Webdiary - Independent, Ethical, Accountable and Transparent

The conundrum of scientific fraud

Steve Fuller is Professor of Sociology at the University of Warwick, and author of The Philosophy of Science and Technology Studies.

by Steve Fuller

Science, and the behavior of scientists, has never been perfect. Consider the Korean scientist Hwang Woo-suk, whose claim to have extracted stem cells from human embryos that he cloned turned out to be based on phony research. Hwang has just been fired by Seoul National University, and six of his co-workers have been suspended or had their pay cut.

Hwang and his colleagues are hardly alone. In response to the recurrence of well-publicised and highly damaging scandals in recent years, many universities and some entire national research funding agencies now convene “institutional review boards” to deal with breaches of what has come to be known as “research ethics.”

But are such boards necessary? If so, in what spirit should they conduct their work?

From artists to scientists, all intellectual workers are preoccupied with the giving and taking of credit. Hiring, promoting, and rewarding academic staff is increasingly based on “citation counts” – the number of times someone receives credit in peer-approved publications. Even if someone’s work is criticised, it must be credited properly.

Credit is often given simply because someone has produced a creditable work. But, increasingly, the fixation on credit reflects the work’s potential monetary value. And, however one defines the creditworthiness of intellectual work, one thing is clear: reported cases of fraud are on the rise.

Sometimes fraud consists in plagiarism: the culprit takes credit for someone else’s work. However, especially in the most competitive scientific fields, fraud often takes the form of forgery: the culprit fabricates data. Plagiarism is the sin of the classroom; forgery is the sin of the laboratory. But in neither case has a standard method emerged for ensuring the integrity of research – or of the scientists who carry it out.

A useful simplification in addressing the potential work of research ethics panels is to consider two models of review: “inquisitorial” and “accusatorial.” Whereas the inquisitorial model presumes that fraud is rampant, but often undetected, the accusatorial model adopts a less paranoid stance, presuming that researchers are innocent until proven otherwise.

The natural context for an inquisitorial system is a field where scientific integrity is regularly under threat because research is entangled with political or financial interests. Often, these entanglements are unavoidable, especially in the case of biomedical research. Here the inquisitors are part cost accountant, part thought police. They conduct spot-checks on labs to ensure that the various constituencies for scientific research are getting their money’s worth.

Inquisitors must be invested with the power to cut off funding to transgressors, perhaps even to excommunicate them by barring them from practicing or publishing in the future. Hwang, for example, has lost his research license and has been banned from assuming another public post for five years.

However, the credibility of such a system relies on the inquisitors’ ability to uphold standards of science that are genuinely independent from special interests both inside and outside the research community. Otherwise, inquisitorial ethics reviews could come to be regarded as nothing more than intellectual vigilantism.

Consider the Danish Research Council’s ominously named “Committee on Scientific Dishonesty,” which was convened in 2002 following complaints raised against the political scientist Bjørn Lomborg, whose book The Skeptical Environmentalist purported to demonstrate that ecologists systematically biased their interpretation of data to support their political objectives.

The Committee initially found Lomborg guilty of much the same error that he had alleged of the ecologists. However, the Committee refused to comment on whether his behavior was normal or deviant for the field in question. This enabled Lomborg to appeal the verdict, successfully, claiming that the Committee had victimised him for political reasons, given his recent appointment as director of a major national institute for environmental research.

In the end, the Committee on Scientific Dishonesty was itself reorganised. In retrospect, the Committee probably should have refused the case, given that Lomborg’s book was subject to intense public scrutiny, often by experts writing (favorably) in The Economist and (unfavorably) in Scientific American. His was a genuine case in which intellectual work was given a fair trial in the proverbial “court of public opinion” and required no further oversight.

Indeed, Lomborg’s case would seem to support the accusatorial model of ethics review, which assumes that scientists adequately regulate their own affairs through normal peer-review procedures. Here, scientific integrity is understood not as a duty to stakeholders—funders, companies, or, as with Hwang, politicians concerned about national prestige—but as a collective responsibility that is upheld by identifying and correcting errors before they cause substantial harm.

The accusatorial system is designed for those relatively rare cases when error slips through the peer-review net, resulting in some concrete damage to health or the environment, or causing the corruption of later research that assumes the validity of fraudulent work. To lodge an accusation, the accuser must establish that some harm has been committed, which is then shown to have been the fault of the accused.

The countervailing assumptions of the inquisitorial and accusatorial systems reflect the ambiguity of the concept of scientific fraud. Many so-called frauds are cases in which researchers claimed credit for work they had not really carried out, but that pointed in the right direction and was eventually completed by others. Strictly speaking, they are guilty more of confusing the potential and the actual than the true and the false.

Indeed, by today’s standards, Galileo and Mendel committed fraud, since they likely massaged their data to fit a neat mathematical formula. However, they remain scientific visionaries because others built so effectively on their “work.” Of course, in their cases, no money changed hands and no one’s life was jeopardized. Perhaps, from the point of view of research ethics, that makes all the difference.

Copyright: Project Syndicate, 2006.
www.project-syndicate.org



Drug trials

Will Howard: "But it made me wonder how much bad data is being worked with now. What if the data were from a drug trial?"

I don't know about the prevalence of fraudulent research in drug trials, but gaining access to studies that drug companies conduct and never publish has been a big problem. Without them it is impossible to evaluate claims for the efficacy and safety of drugs properly.

In recent years, registers of clinical trials have been set up, putting reputational pressure on companies that do not register their trials. In May the World Health Organization plans to launch a common portal giving public access to all of these registers and requiring a minimum standard of publication.

The initiative comes in the wake of several cases of companies withholding negative research findings that sparked public outrage. Merck of the United States withdrew Vioxx from the market in 2004 after the drug was linked to an increased risk of heart attack and stroke, and in 2003, GlaxoSmithKline of the United Kingdom warned that the antidepressant, Paxil, should not be prescribed to minors as it could increase the risk of suicide.

In response, the International Committee of Medical Journal Editors (ICMJE), representing the world’s leading medical journals, agreed not to publish the results of any clinical trial unless that trial had been registered in a public register before the enrolment of the first patient.

These initiatives are huge advances for public health and safety which show what is possible against even very powerful economic muscle.

Method, practice and perfidy

Malcolm B Duncan, a nitpick: in my view, Fuller's piece is more about ethical transgressions in scientific practice than a critique of scientific method per se.

Will Howard, I think the issue of data exclusion is something that is often not well handled in (non-medical) biological sciences. There does seem to be a tendency to exclude data that don't fit the general trend. Your advocacy of explicit indications of excluded data is commendable, but I suspect it may not be common practice. I prefer to include all data unless I can justify an exclusion on the basis of the failure of an experimental treatment, or the misidentification or misclassification of a subject's state, etc. I'd be extremely reluctant to exclude data on the basis that they are inconsistent with a probability distribution. But then I work in a field that often involves experimental studies conducted under field conditions, where we generally have only a vague idea of the processes at work, and where data often display great variability.

I've only read snippets of Lomborg's book, but the little I saw left me with the impression that there was indeed an ethics issue involved. My recollection is that Lomborg consistently neglected to include bleedingly obvious explanatory factors in his analyses. For instance, how much credence should we give someone who, in criticising attempts to link breast cancer rates with synthetic hormones (The Sceptical Environmentalist, p 18), 'forgets' to mention that improvements in medical interventions over the period 1940-1996 might have had some effect on mortality rates? I'm not impressed by this sort of invidious misrepresentation: in my view Lomborg has a case to answer.

For a far more credible read, I'd strongly recommend A Critique for Ecology by R.H. Peters (1991, Cambridge UP).

What Conundrum?

This is not a conundrum. The solution is simple.

Withdraw all funding from sociology and kindred departments worldwide (political science, Middle East and international studies etc) and make the sociologists and Middle East experts get real jobs.

And give the money to the science and technology departments.

But it wasn't

When I finally decide that I want a sociologist to comment on scientific method, would some obliging Webdiarist be kind enough to shoot me?

Not fair to Webdiarists.

Malcolm, it's very unfair of you to ask a fellow Webdiarist to shoot you. Could I suggest you send your request to Donald Rumsfeld? He is a man who has a lot of experience when it comes to shooting civilians and I'm sure he'll happily oblige.

You will probably be given the choice of being shot by conventional bullets or, if you really want to go out with a bang, a Depleted Uranium shell. In the latter instance, perhaps your radioactive body will be better preserved than most.

At that special moment when the graves open up you may step forward whole, law book in hand, ready again to defend the weak and innocent (or even the strong and guilty).

Fiona: Hi Daniel, nice idea to approach Rummy, but I don’t think he's had any experience in the field. The Veep, however, is well-known for his hands-on approach…

The passing of the bar

Not bloody likely, Daniel Smythe: (a) the bastard can't shoot straight and (b) I never really wanted to die of the old hearty.

Scientific fraud

Tough issue. A lot depends on the due diligence provided by peer review, and a kind of honor system among scientists. It's scary how easy it is to fake data if you know what it should look like. In most cases referees, editors, and even co-authors have to take it on faith that the data they're looking at is genuine.

Particularly in fields like mine, where we're making observations in nature, it's often impossible to replicate a result. In contrast, experimental science depends on laboratory replication.

The author notes that many scientific misdeeds are sins of omission, where data points, or whole experiments, are excluded from published analysis. There are often good statistical, procedural, or analytical reasons to exclude observations. I'm a strong advocate of indicating these exclusions, and publishing even excluded or rejected data. The reason is not ethical as such, it's scientific. I've seen a number of cases (including in my own work) where data initially rejected as anomalous or outliers turn out later to be important, and valid, observations.
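
The flag-don't-delete approach advocated above can be sketched in a few lines of Python. This is a minimal illustration, not any lab's actual workflow: the `Observation` record and function names are hypothetical, and the modified z-score cutoff of 3.5 is a common rule of thumb rather than a field standard.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Observation:
    value: float
    excluded: bool = False
    reason: str = ""

def flag_outliers(obs, thresh=3.5):
    """Mark suspect points instead of deleting them, so the full
    record survives publication and the exclusion rule is explicit."""
    values = [o.value for o in obs]
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    for o in obs:
        # Modified z-score: median/MAD-based, so it stays robust
        # against the very outliers it is hunting.
        if mad and 0.6745 * abs(o.value - med) / mad > thresh:
            o.excluded = True
            o.reason = f"modified z-score > {thresh}"
    return obs

# Downstream analyses simply filter on the flag; nothing is lost,
# and an "outlier" can later be reinstated if it proves valid.
data = flag_outliers([Observation(v) for v in [1.0, 1.5, 1.8, 2.0, 2.2, 50.0]])
kept = [o.value for o in data if not o.excluded]
```

The point of the design is exactly the one made above: the exclusion criterion is recorded alongside the data, so a reader (or the original researcher, years later) can revisit rejected points.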

An ethically benign, but potentially just as harmful, error occurs when flawed data formatting lets erroneous data into the literature. And sometimes these data persist for a long time. A student of mine was working with a data set from my thesis, which had been published in a respected peer-reviewed journal and archived at a US-based data repository since its publication. He found an error, fortunately minor, which I had to correct and send in to the data bank.

But it made me wonder how much bad data is being worked with now. What if the data were from a drug trial? I've read some of Lomborg's work, and I agree with the author of this piece about the oversight function of critical commentary working properly in this case. Lomborg's writings may be wrong, but I don't see a scientific ethics issue in the controversy surrounding his book.

A Couldabeen

This could have been an interesting piece. But I'm left with the feeling that Fuller is not sufficiently on top of his subject matter to keep himself out of trouble. I'm surprised too that no mention was made of the temptations and pressures that grants systems place on scientists.


"Indeed, by today’s standards, Galileo and Mendel committed fraud, since they likely massaged their data to fit a neat mathematical formula."

This is a misrepresentation of the Mendel case, and incredibly unfair to Galileo, who pretty much single-handedly invented the logic and fundamental methods of quantitative experimental science.

I'm assuming that Fuller is referring here to R. A. Fisher's re-analysis and historical reconstruction of Mendel's pea breeding experiments. Fisher found that, statistically speaking, some of Mendel's results were improbably good, but he rejected any suggestion that Mendel fabricated the data (Fisher 1936, pp 132-33). Instead, it seems that Mendel, or perhaps an assistant, rejected some data that he believed to have been the result of mistakes in experimental procedures. Hardly the stuff of scientific fraud.
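
Fisher's "too good to be true" argument is easy to reproduce in outline. Below is a minimal sketch in Python, using Mendel's published F2 seed-shape counts (5474 round to 1850 wrinkled against an expected 3:1 ratio); the function names are mine, and this is an illustration of the statistical idea, not a reconstruction of Fisher's full analysis.

```python
import math

def chi_square_3_to_1(dominant, recessive):
    """Chi-square goodness-of-fit of observed counts against a 3:1 ratio."""
    n = dominant + recessive
    expected = (0.75 * n, 0.25 * n)
    return sum((o - e) ** 2 / e
               for o, e in zip((dominant, recessive), expected))

def prob_fit_this_good(chi2):
    """P(X <= chi2) for 1 degree of freedom, i.e. how often chance alone
    gives a fit at least this close to expectation. The CDF of a
    chi-square variable with 1 df is erf(sqrt(x/2))."""
    return math.erf(math.sqrt(chi2 / 2.0))

# Mendel's F2 seed-shape counts: 5474 round, 1850 wrinkled
chi2 = chi_square_3_to_1(5474, 1850)
p = prob_fit_this_good(chi2)
```

For a single experiment like this one the statistic is unremarkable (nowhere near the 3.84 significance threshold, and a left-tail probability of roughly 0.4). Fisher's point was cumulative: multiplied across Mendel's whole series of experiments, the probability of fits uniformly this close to expectation becomes vanishingly small, which is what "improbably good" means here.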

© 2005-2011, Webdiary Pty Ltd
Disclaimer: This site is home to many debates, and the views expressed on this site are not necessarily those of the site editors.
Contributors submit comments on their own responsibility: if you believe that a comment is incorrect or offensive in any way,
please submit a comment to that effect and we will make corrections or deletions as necessary.
