The Limits of Computer Simulations

Following on from the last post about the limits of fMRI technology, we will now look further at the limits of another so-called “alternative” – computer simulations.

Animal rights groups also argue (Warning: AR website) that advanced computer simulations can replace the use of animals in our research.  This position, again, reflects a poor understanding of what goes into a computer simulation and of the limitations of its results.

Simply put, computer simulations produce the results of mathematical models (a set of equations) that investigators postulate capture the basic laws governing a physical system.  We can successfully simulate how air flows across the profile of an airplane because physicists have developed good mathematical models of how matter behaves at these scales (the field of classical mechanics).  Scientists develop such physical ‘laws’ by first observing patterns in experimental data (note the emphasis on experimental) and then trying to envision a simple set of mathematical equations that could capture these patterns.  The postulated laws are then tested by predicting how systems would behave under different conditions, and experiments are conducted to test their validity.  When predictions fail, scientists are sent back to the drawing board.  It is this interplay between mathematical models and experimental work that allows scientists to refine their models, both in physics and in the life sciences.
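
To make this recipe concrete, consider a toy example (the model and its parameters are invented purely for illustration): a postulated ‘law’ for a falling object with linear air drag, integrated numerically in Python, with the exact solution standing in for the experimental data we would use in real work.

```python
import numpy as np

# Postulated 'law' (all parameters invented): a falling object with linear
# air drag obeys  dv/dt = g - (k/m) * v.
g, m, k = 9.81, 1.0, 0.5        # gravity (m/s^2), mass (kg), drag coefficient
dt, T = 0.001, 5.0              # time step and duration (s)

t = np.arange(0.0, T, dt)
v = np.zeros_like(t)
for i in range(1, len(t)):      # forward-Euler integration of the model
    v[i] = v[i - 1] + dt * (g - (k / m) * v[i - 1])

# The exact solution stands in for 'experimental data' here; in real work
# this comparison is made against actual measurements.
v_exact = (m * g / k) * (1.0 - np.exp(-(k / m) * t))
print(f"max discrepancy: {np.max(np.abs(v - v_exact)):.2e} m/s")
```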

The Blue Gene Supercomputer was used to approximate brain function

Neuroscientists are following in the footsteps of physicists in trying to come up with mathematical models of brain function.  An example is the successful development of a mathematical theory for the generation of action potentials by neurons, the so-called Hodgkin-Huxley equations.  These equations have been successfully tested in a multitude of new experimental paradigms, and we now consider them a well-established law.  This work, done largely in the squid giant axon, led Hodgkin and Huxley to share the Nobel Prize in Physiology or Medicine in 1963.
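
For readers curious about what such a model looks like in practice, here is a minimal sketch of integrating the Hodgkin-Huxley equations in Python, using the standard textbook squid-axon parameters and a simple forward-Euler scheme (a toy illustration, not production neuroscience code):

```python
import numpy as np

# Classic Hodgkin-Huxley squid-axon parameters (standard textbook values)
C_m = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

# Voltage-dependent gating rate functions
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                       # time step and duration, ms
t = np.arange(0.0, T, dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # resting initial conditions
I_ext = 10.0                             # injected current, uA/cm^2

Vs = []
for _ in t:
    # Ionic currents from the current-balance equation
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K * n**4 * (V - E_K)
    I_L  = g_L * (V - E_L)
    # Forward-Euler update of voltage and gating variables
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    Vs.append(V)

print(f"peak membrane potential: {max(Vs):.1f} mV")
```

With this level of injected current the simulated membrane potential produces the familiar train of action potentials, spiking to roughly +40 mV – exactly the behavior Hodgkin and Huxley measured experimentally in the squid giant axon.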

As important as this development was, it provides only a tiny amount of information about the workings of the brain.  The brain is composed of around 100 billion neurons, each with approximately 100,000 connections.  To simulate how a brain behaves it is not enough to understand how axons propagate action potentials; we also need to understand how neurons are connected to each other, measure the ‘strength’ of such connections, and figure out how it is that each neuron (which is rather ‘dumb’ by itself) can cooperate with thousands of others to perform the computations we take for granted every day, such as reaching out for a cup of coffee, recognizing faces, and so on.  Even if we had full knowledge of the workings of individual neurons, we would still not know how a brain really works.  To argue the opposite would be to argue that just by knowing how a transistor works we would have full knowledge of how a computer operates.
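
A quick back-of-the-envelope calculation with the rough figures above shows why connectivity is the crux of the problem:

```python
# Order-of-magnitude arithmetic using the rough figures quoted above
# (both are approximate, widely cited numbers, not precise measurements).
neurons = 100e9            # ~100 billion neurons
connections = 100e3        # ~100,000 connections per neuron
synapses = neurons * connections
print(f"synapses: {synapses:.0e}")                    # ~1e+16

# Storing just one byte per connection 'strength' is already ~10 petabytes,
# before we even know what the right numbers to store are.
print(f"storage at 1 byte/synapse: {synapses / 1e15:.0f} PB")
```

And storage is the easy part; every one of those ~10^16 connection strengths would also have to be measured experimentally before a faithful simulation could even be initialized.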

Science aims at explaining complex phenomena by describing them with a simple set of mathematical equations or laws.  Neuroscientists are building up their knowledge from the bottom up, by first developing models of how individual neurons work and how they communicate.  From a modest beginning of trying to understand how cells generate action potentials, theoretical neuroscience has advanced tremendously in the last few decades and grown into a field in its own right.  We have reached the point where models of how neuronal populations code information in certain areas of the brain are being applied to the development of neural prostheses that will allow paralyzed patients and amputees to control artificial limbs.  This work, developed in electrophysiological studies with monkeys, is now being successfully translated to humans.
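
As a simple illustration of the kind of population-coding model involved (all numbers here are invented, and this is only the classic population-vector scheme in the spirit of Georgopoulos’s work, not the decoder used in any particular prosthesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of cosine-tuned neurons: each cell fires most
# strongly for movements toward its own preferred direction.
n_cells = 200
preferred = rng.uniform(0.0, 2.0 * np.pi, n_cells)

def firing_rates(theta, baseline=10.0, gain=8.0):
    """Mean firing rates for an intended movement direction theta (radians)."""
    return baseline + gain * np.cos(theta - preferred)

true_dir = np.deg2rad(60.0)
rates = rng.poisson(firing_rates(true_dir))   # noisy spike counts

# Population-vector decoder: each cell 'votes' for its preferred direction,
# weighted by how far its rate sits above the population mean (a crude
# stand-in for its baseline rate).
weights = rates - rates.mean()
x = np.sum(weights * np.cos(preferred))
y = np.sum(weights * np.sin(preferred))
decoded = np.arctan2(y, x) % (2.0 * np.pi)

print(f"true: {np.rad2deg(true_dir):.1f} deg  decoded: {np.rad2deg(decoded):.1f} deg")
```

Real prosthetic decoders are considerably more sophisticated, but the principle – reading out an intended movement from the combined activity of many broadly tuned neurons – is the same.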

However, we are still many, many years away from being able to develop models and simulations that capture the workings of large neuronal circuits, let alone the entire brain.  As we work towards this goal, the interaction between models and experiments is critical.  We cannot verify the correctness of a model without comparing its predictions to actual data.  As a consequence, both computer simulations and animal work will be required to advance our knowledge of brain function in the years to come.

Regards

Dario Ringach

2 thoughts on “The Limits of Computer Simulations”

  1. Yet another good example is the work of Charles Peskin at the Courant Institute. Here, pure mathematical research into the solution of fluid dynamics equations with moving boundaries has been applied to a very concrete problem: understanding the flow of blood through the human heart.

    Some neat animations can be found here: http://tinyurl.com/mgjesa

  2. That reminds me of a bioinformatics conference I attended in 2004 where Prof. Denis Noble of Oxford University http://noble.physiol.ox.ac.uk/People/DNoble/ was a speaker. Prof. Noble is a leading cardiac physiologist and computational biologist who was there to discuss the virtual heart model that he and his colleagues have developed and are using to assess how drugs affect the action of the heart. At the end of the talk he was asked whether the model could replace animal tests for cardiotoxicity and replied that it couldn’t, though it or similar models would be useful for eliminating many harmful candidate drugs before they get to animal testing and for helping to design those tests.

    A recent review, “Computational Models of the Heart and Their Use in Assessing the Actions of Drugs” by Prof. Noble, makes it clear how important studies in animals, including guinea pigs, rabbits, sheep, and dogs, have been and continue to be in providing the biological data required to build and subsequently assess models of cardiac function.

    http://www.jstage.jst.go.jp/article/jphs/107/2/107_107/_article
