The file drawer bust

Daniele Fanelli, researcher at the University of Edinburgh in Scotland, probes the problem of quantity versus quality in scientific research. Sugandh Juneja had a few questions:  

By Sugandh Juneja
Published: Tuesday 15 June 2010

On the analysis in his recent study

I drew a large, random sample of papers from all disciplines that declared to have tested a hypothesis and whose main author was working in the US. I verified whether these papers were more or less likely to support the hypothesis they tested. More positive results were found in states where academics publish more papers per capita, independent of discipline and research expenditure. This finding supports long-standing concerns that a "publish or perish" culture is forcing scientists to make their results look good.

On relevance of scientific research in light of the findings

Scientific research is as relevant as ever. It inevitably has flaws, which come in the form of bias or sometimes even dishonesty, since it is a human enterprise. I think bias is partly inevitable, because it is a by-product of the very forces—ambition, pride, passion—that drive scientists in their work. But this does not mean that everything we say in science is false.

On what would keep scientific credibility intact

We must, first of all, accept that science cannot be perfect and keep this in mind when we deal with it. Of course, there are external causes of bias that point to real problems in the system. My study, in particular, suggests a problem with policies that are increasingly reinforcing a scientific culture of quantity instead of quality.

On not reporting negative results

We do not know for sure how much of the bias observed in my study stems from studies with unpublished negative outcomes rather than those manipulated to look positive. Positive results sometimes occur by chance; if only these are published, we could literally have entire research fields built on non-existent phenomena.

On the study’s relevance to developing countries

The US was an ideal model, but I think the problem might be present in all countries where academic competition and publication pressures are high. China and South Korea are often cited as examples. I do not have data on other countries, and conducting the same kind of analysis elsewhere might be difficult for various technical reasons.

On the “positive” result of the study

I think we should never be sure of any result that has been observed and published for the first time. A number of psychological, sociological and statistical factors, all beyond the scientists' conscious control, make first findings less likely to be true. It is only through several independent replications that a finding becomes a scientific fact.
The study was published in PLoS One on April 21, 2010
