January 7th 2020
Juan Carlos Marvizon, Ph.D.
David Geffen School of Medicine at UCLA, VA Greater Los Angeles
The buzz is everywhere when animal research is mentioned: experiments in animals are outdated because computer models and other modern techniques are replacing them. For example, you may have heard statements like these:
“Researchers have developed a wide range of sophisticated computer models that simulate human biology and the progression of developing diseases. Studies show that these models can accurately predict the ways that new drugs will react in the human body and replace the use of animals in exploratory research and many standard drug tests,” says PETA.
“Scientists at private companies, universities and government agencies are developing new cell and tissue tests, computer models and other sophisticated methods to replace existing animal tests. These alternatives are not only humane; they also tend to be more cost-effective, rapid and reliable than traditional animal tests,” says the Humane Society of the United States.
“But research shows computer simulations of the heart have the potential to improve drug development for patients and reduce the need for animal testing,” says Scientific American.
“Computer models could replace animal testing,” reads a headline in Global Biotech Insights.
“During the past several years, OCSPP and ORD have worked together to make significant progress to reduce, replace and refine animal testing requirements. Beginning in 2012, the Endocrine Disruptor Screening Program began a multi-year transition to validate and more efficiently use computational toxicology methods and high-throughput approaches that allow the EPA to more quickly and cost-effectively screen for potential endocrine effects. In 2017 and 2018, ORD and OCSPP worked with other federal partners to compile a large body of legacy toxicity studies that was used to develop computer-based models to predict acute toxicity without the use of animals,” reads the memo by Andrew R. Wheeler, Administrator of the Environmental Protection Agency, in which he announced a reduction in animal testing of potentially toxic chemicals.
Data mining in PubMed
There is an easy way to check whether these claims about computer models replacing animal research are true. Since the final products of scientific research are scientific articles, or “papers”, we can compare the number of papers generated with computer models and with animals to gauge the actual productivity of the two approaches. PubMed is a freely accessible repository of the biomedical papers published anywhere in the world. It is run by the United States government, specifically by the US National Library of Medicine, part of the National Institutes of Health (NIH). In PubMed you can do keyword searches to find articles on any topic, so I used it for data mining to compare the number of papers using animal research and computer models. In the “Search results” page there is a nifty graphic on the top left, with bars representing the number of papers per year containing the keyword. Below it is a “Download CSV” link that lets you get those numbers in a spreadsheet. I imported the numbers into a graphing program (Prism 8, by GraphPad) to create the graphs that I am going to show you.
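For readers who want to reproduce this, the downloaded timeline data can be processed with a few lines of Python. This is a minimal sketch that assumes a simple Year,Count layout for the CSV; PubMed's actual export may include extra header rows, which the parser simply skips.

```python
import csv
import io

def load_counts(csv_text):
    """Parse a PubMed timeline CSV (assumed two columns: year, count)
    into a {year: count} dict, skipping any non-numeric header rows."""
    counts = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2 and row[0].strip().isdigit():
            counts[int(row[0])] = int(row[1])
    return counts

def restrict(counts, first=1975, last=2017):
    """Keep only the years with reasonably complete PubMed coverage."""
    return {y: n for y, n in counts.items() if first <= y <= last}

# Hypothetical excerpt of a downloaded timeline CSV:
sample = "Year,Count\n2016,110000\n2017,115000\n2018,90000\n"
data = restrict(load_counts(sample))
```

The `restrict` step implements the same cutoff used for the figures below: years before 1975 and after 2017 are dropped because their records are incomplete.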
There are several ways to enter a keyword in a search. You can search for the keyword anywhere in the article (“All Fields”). However, this is not useful for my goal, because an article that merely mentions “computer model” did not necessarily use one as its main method. My preferred way to restrict a search is to look for the keyword only in the title or the abstract of the paper (“Title/Abstract”). Still, this is not optimal, because different authors may use different words for the same concept; for example, “computer model” and “computer simulation” are synonyms. To deal with the problem of synonyms, PubMed uses Medical Subject Headings (MeSH), a sort of thesaurus that links synonymous terms, so that entering one of them retrieves all the related terms. This is called an “extended search”. PubMed can perform MeSH searches by MeSH Major Topic, MeSH Subheading or MeSH Terms; the National Library of Medicine documents these record types. Descriptors (also called Main Headings or Major Topics) are terms that describe the subject of each article. Qualifiers, or Subheadings, are used together with descriptors to provide more specificity. Entry Terms are “synonyms or closely related terms that are cross-referenced to descriptors”. Therefore, I performed my searches using MeSH Terms, which avoids having to find the exact wording of a MeSH Major Topic. When you enter a keyword as a MeSH Term, for example ‘mice’, PubMed searches that word and all its synonyms: in this case ‘mouse’, ‘Mus’ and ‘Mus musculus’.
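The same field tags that work in the PubMed search box ([MeSH Terms], [Title/Abstract], and so on) can also be used programmatically through NCBI's E-utilities service, whose `esearch` endpoint can return just the record count for a query. The sketch below only builds the query URL as an illustration of the syntax; the search term shown is one example of the kind of query used in this article, not the exact search I ran.

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_url(term, year):
    """Build an E-utilities URL that returns only the number of PubMed
    records matching `term` published in `year`."""
    params = {
        "db": "pubmed",
        "term": term,          # same field tags as the PubMed search box
        "rettype": "count",    # return the count only, not the record IDs
        "retmode": "json",
        "datetype": "pdat",    # filter by publication date
        "mindate": year,
        "maxdate": year,
    }
    return EUTILS + "?" + urlencode(params)

# Example: computer-model papers that did not also use animals, in 2015
url = count_url('"computer simulation"[MeSH Terms] NOT animals[MeSH Terms]', 2015)
```

Fetching this URL once per year would reproduce the timeline data without clicking through the web interface.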
Figures show the number of papers in the period 1975-2017, because 1975 seems to be the year when PubMed starts capturing most of the papers written in the world; records appear incomplete before that date. It also seems to take up to two years for PubMed to complete its collection of citations, since the number of papers in every search drops substantially during the last two years. Hence, I excluded data from 2018 and 2019.
Papers using animals, mice, rats and non-mammals
A good place to start is the number of papers using all animals, mice, rats, mice or rats, and non-mammals. Figure 1 shows that the number of papers using any kind of animal has been increasing linearly since 1975 and now amounts to more than 100,000 papers per year. A large fraction of these papers use rats or mice, and their number increases linearly, in parallel with the number of papers using all animals. However, the number of studies using rats has remained constant since 1990, while the number of papers using mice has been increasing exponentially. The blue line in the graph is an exponential curve, which provides an excellent fit to the mouse data. Therefore, scientists have been dropping rats in favor of mice, likely because of the increasing availability of transgenic mice, which enable sophisticated experiments. The number of papers using non-mammals (mostly birds, fish, insects and worms) has also been increasing exponentially, and recently surpassed the number of studies using rats.
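Fitting an exponential curve of this kind is straightforward: take logarithms of the yearly counts and fit a straight line. Below is a minimal sketch with synthetic data; Prism uses proper nonlinear regression, but ordinary least squares on log counts illustrates the idea.

```python
import math

def fit_exponential(years, counts):
    """Fit counts ~ a * exp(b * (year - years[0])) by ordinary
    least squares on log(counts); returns (a, b)."""
    xs = [y - years[0] for y in years]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope and intercept of the straight-line fit in log space
    b_num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b_den = sum((x - mx) ** 2 for x in xs)
    b = b_num / b_den
    a = math.exp(my - b * mx)
    return a, b

# Synthetic data growing ~5% per year, purely as an illustration:
years = list(range(1990, 2000))
counts = [1000 * math.exp(0.05 * (y - 1990)) for y in years]
a, b = fit_exponential(years, counts)
```

A growth rate `b` of about 0.05 corresponds to the counts doubling roughly every 14 years; the quality of the fit is what distinguishes the exponential mouse curve from the linear trend for all animals.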
Papers on humans and clinical trials
A search with the MeSH Term ‘animal’ without excluding humans yields a very high number of papers. This is because there are a large number of papers on humans, which are shown in Figure 2 together with the results for non-human animals and for mice or rats. Clearly, there are many more papers on humans than on animals. They increase exponentially, while the studies on animals increase linearly, so the difference grows with time. While in 1975 the number of papers on humans was roughly double the number of papers on animals, today there are six studies on humans for every study on animals.
However, this does not mean that animal research is being replaced by research on humans. Strictly speaking, research on humans is conducted in clinical trials, so let us see what happens when we do a PubMed search on clinical trials. The Y-axis scale of Figure 3 shows that there are many fewer papers reporting clinical trials than papers on humans: today there is one clinical trial for every 50 papers on humans. This is because most papers on humans are medical case reports, epidemiological studies and other medical observations. These could be considered research, but certainly not the kind of research on physiological and biochemical mechanisms that could replace animal research.
The number of clinical trials has increased over time but does not follow a clear trend, either linear or exponential. There was a steep drop around 1990 followed by a rapid increase until 2003. Since then, the number of clinical trials has remained constant. The result of this search is consistent with ClinicalTrials.gov, which lists 34,128 studies. Since a clinical trial runs for several years, this could produce the number of annual papers shown in Figure 3.
Papers using computer models
Now we have enough background information to compare the number of papers on computer models with different types of animal and human research. Figure 4 shows the evolution in the number of papers using computer models over time. Barely any papers were published before 1985. After that, the number of studies increased slowly until 2001 and rapidly from 2001 to 2008, when it stopped growing.
At that point, the number of studies using computer models was similar to the number of clinical trials and about one fortieth the number of animal studies. Overall, their number fits an exponential curve reasonably well, but this is largely due to the initial growth. However, many of these studies use computer models in combination with animal experiments. As shown in Figure 4, excluding the papers that also used animals reduced the number of papers by almost two thirds. Moreover, the stagnation in the number of computer model studies after 2008 becomes more apparent when we exclude the studies that also use animals. If computer models were replacing animal studies, we would see an increase in the papers that use computer models exclusively. Instead, we see that a large number of papers use both computer models and animals, either because the models are used to analyze results obtained with animals or because animal experiments are used to validate the model.
The MeSH Term ‘computer simulation’ has five different subcategories: Augmented Reality, Molecular Docking Simulation, Molecular Dynamics Simulation, Patient-Specific Modeling and Virtual Reality. Searches with ‘augmented reality’ and ‘virtual reality’ as MeSH Terms yielded just a few hits. According to Wikipedia, molecular dynamics “is a computer simulation method for analyzing the physical movements of atoms and molecules” and is used in biomedical research to study the 3-dimensional structures of proteins and other biomolecules. Molecular docking is used to study the interaction of small molecules with their ‘docking pockets’ or ‘binding sites’ in proteins like enzymes or neurotransmitter receptors. This is a great tool to design new drugs that interact with these proteins. Patient-specific modeling is used to plan surgeries and to model organ function. Clearly, none of these techniques can be used to replace animal research; rather, they complement it. As we can see in Figure 4, molecular dynamics and molecular docking comprise a good fraction of the recent papers using computer models. Patient-specific modeling generates a very small number of papers.
Figure 5 compares the number of papers generated in 2015 with computer models, with or without animals, and the papers derived from clinical trials or from different animal species. Most of the papers that year used mice or rats. Computer models produced many fewer papers, a number similar to that of papers on clinical trials. When we consider papers using exclusively computer models, their number was much smaller, comparable with those using single animal species like dogs, cats and primates. Interestingly, papers using non-human primates are similar in number to those using zebrafish, the fruit fly Drosophila or the worm C. elegans, showing the relative importance of studies in non-mammals and invertebrates. If we add up the papers using these species, they vastly outnumber the papers using computer models exclusively. Figure 1 shows that the number of papers using any kind of animal in 2015 was 120,000.
Implications for the 3Rs: reduce, replace and refine
Figure 1 shows that, overall, the use of animals in research has been increasing since 1975 and will likely continue to grow in the future. A major part of this increase is due to the exponential growth in the use of mice and non-mammal species. I will examine the question of whether mammals are being replaced by non-mammals or by in vitro methods in a future article. However, it is clear that computer models are not replacing animals in research. The number of studies using computer models is relatively small and has not increased much in the last 10 years. When we count only studies that use computer models without animals, their number is much smaller and has not increased at all since 2008. Moreover, at present many of the papers using computer models deal with molecular dynamics and molecular docking, methods that complement but do not replace animal experiments. These types of papers have been increasing, and some of them may include the use of animals.
Of course, the number of papers using animals does not reflect the actual number of animals used in research. Papers using monkeys use just a few animals; papers on mice and rats typically use hundreds; papers using fruit flies use tens of thousands. However, the number of papers does tell us the relative contribution of each species to the scientific endeavor. Also, given that the number of animals per paper for a given species is not likely to change much over time, an increase in the number of papers for that species is likely to reflect an increase in the number of animals used.
Therefore, the use of animals in research is not being reduced overall but continues to increase linearly. Regarding replacement, it is likely that charismatic species like dogs, cats and monkeys are being replaced by mice and non-mammals (an issue that I will examine in a future article). However, animal research is clearly not being replaced by computer models.
Why computer models will not replace animal research
But what about the future? Surely the enormous growth of computer power and artificial intelligence means that sooner or later biomedical research will move from animals to computer models, right?
Well, no. There are some fundamental issues that determine that, for the foreseeable future, we will need the actual bodies of animals or humans to extract information from them. Even though future computers will help enormously to accelerate biomedical research, they will not be able to tell us what happens inside our bodies or the bodies of animals. We will have to tell them.
The reason for this lies in the nature of life itself. Living beings were created by evolution, which is a contingent process. The word ‘contingent’ means that there is an element of randomness in a process that makes it impossible to predict its outcome. In the words of evolutionary biologist Stephen Jay Gould, if we went back in time and ran evolution again, we would end up with a completely different set of living beings. All the enzymes, intracellular signaling pathways, ion channels, neurotransmitter receptors, hormone receptors, membrane transporters, etc., responsible for the functioning of our bodies were created by contingent processes. Not entirely random, but still impossible to predict. For example, imagine that you were to design a new car. You would be constrained by physics if you wanted the car to work, but the car could still have infinitely many different looks. It might have four wheels, or three, or six. It could ride high like an SUV or low like a sports car. An external observer could not predict how it would look or how it would work. Likewise, if you told a computer ‘find out how neurons in the spinal cord process pain’, the computer would not be able to tell you. Somebody has to look at those neurons and find out. You have to feed that information to a computer before it can do anything with it.
The amount of information in our bodies, in each of our cells, is staggering, and we have barely begun to scratch the surface. The human genome contains 20,000 to 25,000 genes, and we still don’t know what most of them do. A computer, no matter how powerful, is not going to tell us. And knowing what each of those genes does is only a small part of the story: we also need to know how the proteins encoded by those genes interact with each other to generate metabolism. The only way to do that is to take the body of an animal and look inside. A computer cannot guess what goes on inside the body, just as it cannot guess the content of a book it has not read.
The advancement of computer technology during the information revolution has been so amazing that we have become convinced that there is nothing an advanced computer can’t do. That is why it is so easy for animal rights organizations to convince the public that we can eliminate animal research and replace it with computer models. Even organizations that supposedly defend animal research have fed this misconception by promoting the idea that animal research will eventually be Replaced (one of the three Rs) by computer models, in vitro research or clinical trials. That is simply not true. As I have shown here, as scientific productivity increases, so does the use of animals. Their use has not decreased; we are just replacing some species (dogs, cats, rabbits, primates) with others, like mice and zebrafish. And, as Figure 4 shows, research using computer models is relatively small and is not growing fast enough to ever catch up with animal research.
In conclusion, computer models are not replacing and likely will never replace animal research. Computers can do amazing things, but they cannot guess information that they do not have. There are limits to what is possible, and this is one of them.