(This post is based on an article in Significance magazine)
“Most published research results are wrong.” It is a well-known claim, first made in 2005 by John Ioannidis of Stanford University; and whether or not the “most” is correct, the situation in recent years seems only to have got worse.
Last year Bayer Healthcare reported that its scientists could not reproduce some 75% of published findings in cardiovascular disease, cancer and women’s health – causing two-thirds of their new drug development projects to be delayed or cancelled. This year scientists from the pharmaceutical company Amgen repeated experiments from 53 “landmark” papers, but managed to confirm the findings of only six of them.
Ioannidis is on the advisory board of a California company that is attempting to address the problem. Science Exchange is offering a “Reproducibility Initiative”, whose aim is to validate published papers. Scientists who want their findings confirmed apply to it; the initiative chooses a suitable lab to repeat the study and determine whether the results match. Labs are matched blindly to ensure independence.

The original lab pays the reproducing lab for its work, which should cost less than 20% of the original study; the end result is a “validation certificate” from the second lab, and the option for the validation results to be published in PLoS One, linked to the journal of original publication. The original lab gains an enhanced reputation, and with it the ability to attract future funding; it would also gain easier and more lucrative licensing arrangements for commercial development of the results. A pharmaceutical company such as Bayer, for example, could develop a new drug with much more confidence in the science behind it.

One drawback is that if the repeat study fails to reproduce the results, the original authors are still free to publish, apparently without mentioning the failed repeat. But Christopher Haskell, of Bayer, warns that the project will work best when people understand that science is rarely straightforward. Even if a validation study does not replicate the original findings, he says, that does not necessarily mean the original paper is wrong. “It may be right, but just hard to reproduce.”
Source: Nature, doi:10.1038/nature.
1. Begley, C. G. and Ellis, L. M. (2012) Drug development: Raise standards for preclinical cancer research. Nature, 483, 531–533.