This guest post was written by Mark Wanner from The Jackson Laboratory. He previously wrote a guest post for us in 2013 responding to an article in the New York Times. This article is adapted from his earlier post on The Jackson Laboratory blog, Genetics and your health, here. It focuses on a recent Nature commentary by Steve Perrin, which has been misunderstood by many in the animal rights community. Mark also discusses ways of improving the accuracy of mouse models.
In February 2013, I wrote a post about the use of mice in preclinical research. It was largely in response to a New York Times article about a scientific paper that impugned data obtained from mice used in trauma and sepsis research. The NYT article in turn implied that research using mouse models for human disease was largely useless or, at best, misleading.
My counterpoint at the time was that research using inbred mouse strains (or in this case a single inbred mouse strain), while valuable for understanding basic biology, can be very difficult to translate to human medicine for a variety of reasons. Such research also does nothing to address human genetic variation and the accompanying variability of responses to any one therapy or drug.
So can mice be good experimental models for human disease? Yes, they certainly can, but it’s imperative that broad changes be made to preclinical research, in both biomedical and pharmaceutical settings. That’s something that scientists at The Jackson Laboratory have long advocated, and now it’s the point of a comment piece in Nature published in late March titled “Preclinical research: Make mouse studies work” that has generated significant coverage and discussion.
Noise in the data
In the commentary, Steve Perrin, chief scientific officer at the ALS Therapy Development Institute, describes how findings in mice have failed to translate to more effective ALS therapies. Unlike the NYT article, however, Perrin doesn’t imply that mice are necessarily a poor disease model system. He instead asserts that much preclinical research uses mice quite poorly, with specific examples from the ALS field.
Perrin has ample reason to broadcast his concerns. He’s working with a patient population that is inexorably dying. As he says, “patients with progressive terminal illnesses may have just one shot at an unproven but promising treatment.” Sadly, in recent years, trials of about a dozen treatments that showed survival benefits in a mouse model have yielded only one that “succeeded” in human patients. And even that one, a drug called riluzole, had minimal benefits.
With the stakes so high, you would think that any experimental therapy that reaches the clinical trial stage would have robust animal data backing it up. That is often not the case, however. As Erika Check Hayden points out in a follow-up piece in Nature News, a particular ALS mouse model that carries a mutation in a protein called TDP43 has a disease phenotype that is quite different from that of humans: “TDP43 mice usually died of bowel obstructions, whereas humans with the disease tend to succumb to muscle wasting, which often results in the inability to breathe.”
TDP43 is but one example of what Perrin calls “noise,” preclinical data that may look good but provides no insights into clinical realities because the research was not sufficiently careful or rigorous. Care and rigor don’t come easily, however, especially for the behind-the-scenes work of developing and characterizing the mouse models needed before good research can even begin. Perrin acknowledges in conclusion: “This is unglamorous work that will never directly lead to a breakthrough or therapy, and is hard to mesh with the aims of a typical grant proposal or graduate student training programme. However, without these investments, more patients and funds will be squandered on clinical trials that are uninformative and disappointing.” Or, as Derek Lowe states more bluntly in a commentary on his “In the Pipeline” blog, which covers the pharma industry: “Crappy animal data is far worse than no animal data at all. . . . If you don’t pay very close attention, and have people who know what to pay attention to, you could be wasting time, money, and animals to generate data that will go on to waste still more of all three.”
For decades, The Jackson Laboratory (JAX) has worked to improve the efficacy of its mouse models for preclinical research. It has long recognized the limitations inherent in working with only one or two strains of inbred mice—imagine testing a drug in only one or two people!—and has spearheaded the development of mouse populations (Collaborative Cross and Diversity Outbred) that provide effective models of human genetic variation. It works to fully characterize both the genotypes and phenotypes of the mouse strains it distributes and to share the data with the research community. It has been at the forefront of developing mice that express human disease genes and/or recreate the human immune response.
“JAX has provided leadership from the beginning, even before disease foundations and funding agencies realized this was a problem,” says Associate Professor Greg Cox, Ph.D., who studies neuromuscular degeneration, including forms of ALS, at JAX. “It is nice to finally hear the message coming from someone other than the ‘fanatical’ mouse biologists. It is up to us to make sure that poorly designed mouse genetics experiments stop, both for the sake of good biology and for future decisions regarding clinical applications of the research.”
So how do you design experiments well? Perrin lists four ways to fight “noise.” The first three are basic ways to manage research animal populations correctly—exclude irrelevant animals (i.e., deaths unrelated to the disease under study), balance for gender and split littermates between groups—but the fourth, track genes, may be the most vital. If you don’t know the animals’ precise genotypes, and as much as possible about their normal and disease phenotypes, it’s just about impossible to generate relevant data. Differences in background strain genetics can yield highly misleading results, making correct strain characterization essential. Inheritance between generations also needs to be carefully tracked.
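To make the first three rules concrete, here is a minimal sketch of how group assignment might honor them in practice. This is purely illustrative and not from Perrin’s commentary; the record format (`id`, `sex`, `litter`, `exclude` fields) is a hypothetical one chosen for the example.

```python
import random
from collections import defaultdict

def assign_groups(animals, seed=0):
    """Assign animals to treatment or control arms, following three of
    the rules described above: exclude irrelevant animals, balance the
    sexes, and split littermates between groups.

    `animals` is a list of dicts with hypothetical keys: 'id',
    'sex' ('M'/'F'), 'litter', and 'exclude' (True for animals lost to
    causes unrelated to the disease under study).
    """
    rng = random.Random(seed)

    # Rule 1: exclude irrelevant animals up front, before randomization.
    eligible = [a for a in animals if not a["exclude"]]

    # Group the rest by (litter, sex) so both factors can be balanced.
    strata = defaultdict(list)
    for a in eligible:
        strata[(a["litter"], a["sex"])].append(a)

    groups = {"treatment": [], "control": []}
    for members in strata.values():
        rng.shuffle(members)
        # Rules 2 and 3: within each litter-and-sex stratum, deal animals
        # alternately to whichever arm is currently smaller, so littermates
        # of the same sex end up spread across both arms.
        for animal in members:
            arm = ("treatment"
                   if len(groups["treatment"]) <= len(groups["control"])
                   else "control")
            groups[arm].append(animal["id"])
    return groups
```

The stratified assignment means a whole litter can never land in one arm, which is exactly the confound Perrin warns against: littermates share both genetics and environment, so clustering them in one group can masquerade as a treatment effect.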
Another way to significantly improve the power of preclinical research is to use mouse panels that reflect human genetic diversity rather than one or two inbred strains. As long ago as 2009, JAX Professor Ken Paigen and collaborators at the University of North Carolina at Chapel Hill effectively implemented a new approach to testing drugs for potential toxicity. Paigen and colleagues tested acetaminophen, the commonly used analgesic, on 40 different mouse strains chosen specifically for their genetics. The research revealed several gene variations associated with toxic reactions, which the researchers then matched with those in human patients experiencing adverse reactions to the drug. Such screening, which could also provide essential information regarding the effects of genetic variation on efficacy and general side effects, is not part of the current standard drug testing process.
Perrin calls for a community effort to generate the mouse models needed to undertake effective preclinical research. JAX has already served as a vital hub to several such efforts, collecting, curating and distributing mouse strains useful for research into many diseases. These mouse repositories provide researchers access to quality control, standardization and mouse genetics expertise unattainable without a central resource of this nature.
Last July I wrote about the pervasiveness of positive bias in preclinical research findings and the associated problems. Now Perrin’s commentary indicates that such positive bias is based on generally poor data. More thought and care are not only important for preclinical research, they’re absolutely necessary. Using mice in a way that provides valuable, translatable preclinical data takes far more up-front time and money, investments that can be difficult to justify in competitive pharma and academic settings. But the costs of not doing good research—and generating “crappy” animal data—are immeasurable on both financial and human scales.