Reproducibility Project Dismisses Landmark Cancer Studies Without Investigation

February 3, 2017

Contributed Commentary by Harri Jarvelainen

Getting scientific experiments right takes dedication. It has been years since my time at the lab bench, but I will never forget the persistence and trial and error needed to perfect a new technique. Things were especially precarious during my post-doc (at NYU School of Medicine and the Max-Planck Institute for Infection Biology), when I worked with mouse models of intestinal infection. Experimenting with two living organisms, mice and bacteria, introduced a level of variation and complexity that made setting up new models, optimizing the conditions, and mastering the techniques feel like a never-ending task. Most scientists, especially those in biology, have probably experienced the same.

The Reproducibility Project first made headlines a few years ago when it reported that only one-third to one-half of the psychology results it examined could be successfully replicated. I recall reading a little about it, but since the science the Project was attacking was (hmm, a bit soft and fluffy?) psychology, I quickly laughed it off. Since that initial report, the Reproducibility Project has been well funded, and it has now turned its attention to my own field, medicine. The first results are just out, and this time I am much less amused.

This high-profile project published the results for its first five papers a couple of weeks ago, and the replication results are murky: only two of the five papers could be rigorously replicated. The exaggerated headlines have followed ("5 Big Cancer Studies Might Be Complete Bogus").

The scientists whose results (and, consequently, reputations) were on the line are understandably not happy with the findings. One such scientist is Erkki Ruoslahti, a cancer biologist at the Sanford Burnham Prebys Medical Discovery Institute in La Jolla, California, whose results on a tumor-penetrating iRGD peptide could not be replicated by the Project. Ruoslahti told Nature: "Have three generations of postdocs in my lab fooled themselves, and all these other people done the same? I have a hard time believing that." I have to say he has a point: I did a quick PubMed search on iRGD, and since the initial publication in 2010 there have been approximately 50 papers exploring the peptide, published by at least five different, apparently independent, academic groups around the world.
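For readers who want to repeat that kind of quick literature count themselves, here is a minimal sketch using Biopython's Entrez module. It is not the exact query I ran; the search term, field restriction, and date range are assumptions for illustration.

```python
# Count PubMed records mentioning iRGD since 2010 via NCBI E-utilities.
# Sketch only: the query string and date window are illustrative assumptions.
from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

handle = Entrez.esearch(
    db="pubmed",
    term='"iRGD"[Title/Abstract]',
    mindate="2010", maxdate="2017", datetype="pdat",
    retmax=0,  # we only need the total count, not the record IDs
)
record = Entrez.read(handle)
handle.close()
print("PubMed records mentioning iRGD since 2010:", record["Count"])
```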

Who do you think got the experiment right? The academic groups dedicated to their own research, or the Reproducibility Project, which mass-reproduces diverse types of biological research by outsourcing it to contract groups in exchange for payment?

We should not underestimate the harm the influential and headline-making Reproducibility Project is doing to the academic groups it targets, not to mention the years of effort and research now facing invalidation. There is even a danger that the development of new cancer therapeutics will be held back if funding for the necessary preclinical and clinical research is weakened.

When I struggled to get my own experiments right during my post-doc, not being able to reproduce my own results was not a reason to panic. Reproducing research, especially biological research, is difficult for a number of intangible reasons. Sometimes the cause is a "trick" or nuance of a particular experimental technique; sometimes it is simply normal biological variation. The Reproducibility Project seems oddly oblivious to this fact: it reports no efforts to troubleshoot or repeat its own replication studies. This is puzzling because the cancer models used, such as the transplantation model, are especially difficult to master and prone to error.

Today, more than 15 years after my post-doc, I still deal with the reproducibility issue as part of my everyday work, even though most of my research now follows standards known as Good Laboratory Practice (GLP). The GLPs are probably the most rigorous quality systems in the life sciences: they harness various management controls and other methods to ensure the uniformity, consistency, reliability, reproducibility, quality, and integrity of safety studies in animals. The GLP regulations are codified into law (in the Code of Federal Regulations), and failure to follow them can even land you in jail. Yet even with these most rigorous controls in place, in my experience a study conducted at one CRO (Contract Research Organization) laboratory, following the exact same protocol and using the same animal strain, can hardly ever reproduce the results obtained at another laboratory (this is why I always strongly advise my clients not to change laboratories in the middle of their program). Fortunately, as long as I follow the rules, I will not be penalized if somebody fails to replicate my experiment; professionals in the GLP field recognize that this is just part of (complicated) life.

Harri Jarvelainen, DVM, Ph.D., Prof, is a preclinical consultant with 20 years of experience in toxicology and pharmaceutical discovery and development. In his current Beijing, China-based consulting role, he helps global biotech companies complete their early development packages at Chinese CROs and file their regulatory submissions (e.g., INDs) with agencies such as the US FDA. He can be reached at harri@toxconsulting.com.