Pressures to Publish Increase Scientists' Bias

Last week PLoS One published an interesting study (H/T Climate Shifts) by behavioral ecologist Daniele Fanelli concerning the relationship between professional pressure to publish and bias in the scientific literature:
Daniele Fanelli. PLoS ONE. 21 April 2010.
The objectivity and integrity of contemporary science faces many threats. A cause of particular concern is the growing competition for research funding and academic positions, which, combined with an increasing use of bibliometric parameters to evaluate careers (e.g. number of publications and the impact factor of the journals they appeared in), pressures scientists into continuously producing “publishable” results.
Competition is encouraged in scientifically advanced countries because it increases the efficiency and productivity of researchers. The flip side of the coin, however, is that it might conflict with their objectivity and integrity, because the success of a scientific paper partly depends on its outcome. In many fields of research, papers are more likely to be published, to be cited by colleagues, and to be accepted by high-profile journals if they report results that are “positive” – term which in this paper will indicate all results that support the experimental hypothesis against an alternative or a “null” hypothesis of no effect, using or not using tests of statistical significance….

Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world)  and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappointed. This bias is likely to be reinforced by a positive feedback from the scientific community. Since papers reporting positive results attract more interest and are cited more often, journal editors and peer reviewers might tend to favour them, which will further increase the desirability of a positive outcome to researchers, particularly if their careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in.

Fanelli analyzed 1,316 scientific papers published by authors working in the United States between 2000 and 2007. The initial result of this analysis was not surprising: far more published articles reported support for the tested hypothesis than reported support for the null. This bias toward positive results makes sense; except in cases where an established hypothesis is being challenged, a positive result will generate more interest than a negative one ever will. (E.g., the researcher who establishes a relationship between political affiliation and heart attack rates will make a much bigger splash than the one who announces the two are unrelated.)
Having established the general bias in favor of positive results, Fanelli breaks the results down by the location of the papers' corresponding authors. This is where things get intriguing: the percentage of published articles that support the tested hypothesis varies considerably from state to state, from roughly 25% to 100%.
[Figure: “Positive” results by per-capita R&D expenditure in academia.]
As Fanelli states:

 [Studies were] more likely to support a tested hypothesis if their corresponding authors were working in states that produced more academic papers per capita.

 This does not mean these positive results were fabricated; as the author notes, it is more likely that hypotheses were changed to match the data after observations had been recorded or experiments conducted. This, combined with negative results that were simply unpublished, should account for most of the positive bias in ‘competitive’ states.
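The association Fanelli describes is the kind of relationship a logistic regression captures: the probability that a paper reports a positive result modeled as a function of the per-capita academic output of its corresponding author's state. The sketch below is not the paper's actual code, and its data, column names, and values are purely hypothetical; it only illustrates how such a check could be run in Python.

```python
# Minimal sketch (hypothetical data, not Fanelli's dataset): logistic
# regression of a paper's outcome on the per-capita academic output of
# its corresponding author's state.
import pandas as pd
import statsmodels.formula.api as smf

# Each row stands for one paper: 'positive' is 1 if the paper supports the
# tested hypothesis, 0 otherwise; 'papers_per_capita' is the productivity
# of the corresponding author's state (illustrative numbers only).
papers = pd.DataFrame({
    "positive":          [1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "papers_per_capita": [3.2, 2.9, 2.8, 1.1, 0.9, 2.1, 3.0, 1.4, 2.5, 1.8],
})

# Fit the model and inspect the coefficient on papers_per_capita: a positive,
# significant coefficient would mirror the association reported in the paper.
model = smf.logit("positive ~ papers_per_capita", data=papers).fit()
print(model.summary())
```

In the paper itself the outcome is also adjusted for factors such as discipline and methodology, so the single-predictor model above is only the bare skeleton of that analysis.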
Most interestingly, Fanelli reports that this pattern varied little by discipline. The two exceptions were toxicology and neurology, which ‘had a significantly stronger association between productivity and positive results’ than did other fields, such as ecology or space science. The author admits, however, that the sample sizes for these disciplines are too small to rule out chance. A more thorough analysis of bias, discipline, and productivity is left for future work.
FURTHER READING

Daniele Fanelli. PLoS ONE. 29 May 2009.

Another article by Fanelli on a similar topic. I trust that it will be no less interesting to readers of The Stage.

