The objectivity and integrity of contemporary science face many threats. A cause of particular concern is the growing competition for research funding and academic positions, which, combined with the increasing use of bibliometric parameters to evaluate careers (e.g. number of publications and the impact factor of the journals they appeared in), pressures scientists into continuously producing “publishable” results.
Competition is encouraged in scientifically advanced countries because it increases the efficiency and productivity of researchers. The flip side of the coin, however, is that it might conflict with their objectivity and integrity, because the success of a scientific paper partly depends on its outcome. In many fields of research, papers are more likely to be published, to be cited by colleagues, and to be accepted by high-profile journals if they report results that are “positive” – a term which in this paper will indicate all results that support the experimental hypothesis against an alternative or a “null” hypothesis of no effect, whether or not tests of statistical significance are used.
Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e. they tend to select information that supports their hypotheses about the world), and they are far from indifferent to the outcome of their own research: positive results make them happy, while negative ones leave them disappointed. This bias is likely to be reinforced by positive feedback from the scientific community. Since papers reporting positive results attract more interest and are cited more often, journal editors and peer reviewers might tend to favour them, which further increases the desirability of a positive outcome for researchers, particularly if their careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in.
[Studies were] more likely to support a tested hypothesis if their corresponding authors were working in states that produced more academic papers per capita.