
Nobel Prize 2016 – how yeast and mouse studies uncovered autophagy

Congratulations to Professor Yoshinori Ohsumi of the Tokyo Institute of Technology on being awarded the 2016 Nobel Prize in Physiology or Medicine “for his discoveries of mechanisms for autophagy”!


Yoshinori Ohsumi. Image: Tokyo Institute of Technology

The process of autophagy is hardly familiar to most people, but it is absolutely crucial to all complex life on our planet, including ourselves. The name autophagy comes from the Greek words for “self” and “eating”, and describes the ordered process through which cells break down and recycle unnecessary or damaged structures or proteins, allowing the cell to reach an equilibrium between the synthesis and degradation of proteins.

The discovery of autophagy

The process itself was identified through studies in tissues of mice and rats back in the 1950s and 1960s by scientists including Christian de Duve, who was subsequently awarded the Nobel Prize in Physiology or Medicine in 1974 for this and other work. They first discovered that mammalian cells contain a compartment, which they termed the lysosome, where proteins are broken down, and then that proteins and other molecules destined for degradation were first isolated from the rest of the cell by the formation of a membrane sac around the protein in question (later called the autophagosome). The process through which the autophagosome fused with the lysosome to deliver its protein cargo for degradation was given the name autophagy by Christian de Duve.



Progress in understanding how autophagy worked was slow, as at the time none of the genes or proteins involved in regulating the process had been identified. With the research methods then available it was difficult to measure autophagy as it happened in mammalian cells, and hence difficult to determine how altering different components affected the overall process, a key step towards understanding their role. It may have seemed an unpromising field to join, but Yoshinori Ohsumi had a different career philosophy to most researchers, which he described in an interview given in 2012:

I am not very competitive, so I always look for a new subject to study, even if it is not so popular. If you start from some sort of basic, new observation, you will have plenty to work on.

From cells to genes

What was needed was a simple experimental system in which to study the process, and the baker’s yeast Saccharomyces cerevisiae – a simple single-celled organism separated from us by hundreds of millions of years of evolution, but sharing many of our key biological processes – was one candidate. Yoshinori Ohsumi had worked with yeast, and in particular had identified many proteins in a subcellular component of the yeast cell known as the vacuole, which was important as there was evidence that the vacuole performed the same role in yeast cells as the lysosome in mammalian cells. Still, as the Nobel Prize website highlights, there were hurdles to overcome as he began his study of autophagy in yeast at the end of the 1980s:

But Ohsumi faced a major challenge; yeast cells are small and their inner structures are not easily distinguished under the microscope and thus he was uncertain whether autophagy even existed in this organism. Ohsumi reasoned that if he could disrupt the degradation process in the vacuole while the process of autophagy was active, then autophagosomes should accumulate within the vacuole and become visible under the microscope. He therefore cultured mutated yeast lacking vacuolar degradation enzymes and simultaneously stimulated autophagy by starving the cells. The results were striking! Within hours, the vacuoles were filled with small vesicles that had not been degraded (Figure 2). The vesicles were autophagosomes and Ohsumi’s experiment proved that autophagy exists in yeast cells. But even more importantly, he now had a method to identify and characterize key genes involved in this process.

With an experimental system available, Yoshinori Ohsumi and his team studied the process of autophagy in thousands of mutant strains of yeast, and identified 15 individual genes (most of them of previously unknown function) that are essential for the process in yeast, the order in which the key events in autophagy take place, and the roles of the individual genes in them. This was the work for which he was awarded the Nobel Prize.

From yeast genes to us!

But it is not the end of the story! Identifying the genes essential for autophagy in yeast, and their roles in the process, was a major breakthrough, but what about humans and other mammals?

It turns out that in humans and other mammals there are counterparts to almost all the yeast autophagy genes, though the situation is made a lot more complicated by the fact that mammals have more than one copy of each of the genes…starting with yeast was a wise move! Professor Noboru Mizushima of the University of Tokyo made an important advance when, working with Yoshinori Ohsumi, he developed a transgenic mouse in which a protein called LC3 that is found in the autophagosome membrane is fused to Green Fluorescent Protein (GFP – see Nobel Prize for Chemistry 2008), which allowed him and his colleagues to observe and monitor the process of autophagy in vivo in mice for the first time.

Laboratory mice are the most common species used in research

This LC3-GFP transgenic mouse proved to be a very powerful research tool for studying mammalian autophagy, allowing not only the role of individual genes in the process to be determined, but also the role of autophagy itself in processes as diverse as early embryonic development, tumor suppression, nerve cell survival and function, and protection against infection.

This research is still at a relatively early stage, but techniques such as the LC3-GFP system in mice – and others used in organisms such as fruit flies – are showing us how defects in autophagy contribute to many diseases, including neurodegenerative disorders such as Parkinson’s disease and metabolic disorders such as type 2 diabetes. While the development of specific therapies to correct these defects in autophagy is still some way off, it is already clear that understanding autophagy has the potential to improve the treatment of a wide range of illnesses.

What the work of Yoshinori Ohsumi demonstrates once again is the crucial contribution that basic biological research in model organisms – even those that at first glance appear to share little with us – makes to the advancement of medicine.

Speaking of Research



USDA publishes 2015 Animal Research Statistics

Congratulations to the USDA/APHIS for getting ahead of the curve for a second time and making the US the first country to publish its 2015 animal research statistics. Overall, the number of animals (covered by the Animal Welfare Act) used in research fell 8% from 834,453 (2014) to 767,622 (2015).

These statistics do not include all animals as most mice, rats, and fish are not covered by the Animal Welfare Act – though they are still covered by other regulations that protect animal welfare. We also have not included the 136,525 animals which were kept in research facilities in 2015 but were not involved in any research studies.

[Chart: USDA 2015 animal research statistics by species]

The statistics show that 53% of research is on guinea pigs, hamsters and rabbits, while 11% is on dogs or cats and 8% is on non-human primates. In the UK, where mice, rats, fish and birds are counted in the annual statistics, over 97% of research is on rodents, birds and fish. Across the EU, which measures animal use slightly differently, 93% of research is on species not counted under the Animal Welfare Act (AWA). If similar proportions were applied to the US, the total number of vertebrates used in research in the US would be between 11 and 25 million; however, there are no statistics to confirm this.
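As a rough back-of-the-envelope check, here is how that 11–25 million range falls out of the figures above (a sketch in Python; the 3% and 7% shares are approximations inferred from the UK and EU proportions quoted above, not measured US values):

    # Extrapolating total US vertebrate use from the AWA-covered count,
    # assuming the US species mix resembles the UK or EU mix quoted above.
    awa_covered_2015 = 767_622   # USDA-reported AWA-covered animals in 2015

    uk_awa_share = 0.03  # UK: >97% of research uses rodents, birds and fish
    eu_awa_share = 0.07  # EU: 93% of use is species not counted under the AWA

    low = awa_covered_2015 / eu_awa_share    # ~11.0 million
    high = awa_covered_2015 / uk_awa_share   # ~25.6 million
    print(f"Estimated total vertebrates: {low/1e6:.1f} to {high/1e6:.1f} million")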

[Chart: changes in animal use between 2014 and 2015]

If we look at the changes between the 2014 and 2015 statistics, we can see a drop in the numbers of hamsters, rabbits, cats and animals in the “all other animals” category. Notably, there was a 7.3% rise in the number of non-human primates used, although this comes the year after a 9.9% fall in their numbers.

[Chart: trend in AWA-covered animal use, 1985–2015]

There has been a downward trend in the number of AWA-covered animals used in the last three decades, with a 64% drop in numbers between 1985 and 2015. It is also likely that, similar to the UK, a move towards using more genetically altered mice and fish has reduced the numbers of other AWA-covered species of animals used. In the UK this change in the species of animals studied has contributed to an overall increase in the numbers of animals used in research in the past 15 years.

Rises and falls in the number of animals used reflect many factors, including the level of biomedical activity in a country, trending areas of research, changes to legislation at home and abroad, outsourcing of research to and from other countries, and new technologies (which may either replace animal studies or create reasons for new animal experiments).

It is important to note that the numbers of animals cannot be tallied across years to get an accurate measure of the total number of animals used. This is because animals in longitudinal studies are counted each year. Thus, if the same 10 animals are in a research facility for 10 years, they would appear in the statistics for each year – adding these numbers together would incorrectly create the illusion of 100 animals being used.
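A toy illustration of that double counting, using the numbers from the example above (a sketch in Python; the identifiers are hypothetical):

    # The same 10 animals are present in a facility for 10 consecutive years.
    animals = {f"animal_{i}" for i in range(10)}
    years = 10

    yearly_counts = [len(animals)] * years  # what each annual report shows
    naive_total = sum(yearly_counts)        # 100 -- misleading tally across years
    unique_total = len(animals)             # 10  -- the actual number of animals
    print(naive_total, unique_total)        # prints: 100 10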

Speaking of Research welcomes the open publication of these animal research statistics as offering the public a clear idea of what animal research goes on in their country.

We mightn’t like it, but there are ethical reasons to use animals in medical research

Trichur Vidyasagar, University of Melbourne

The media regularly report impressive medical advances. However, in most cases there is a reluctance by scientists, the universities or research institutions they work for, and the media to mention the animals used in that research, let alone non-human primates. Such omission misleads the public and works against the long-term sustainability of a very important means of advancing knowledge about health and disease.

Consider the recent report by Ali Rezai and colleagues, in the journal Nature, of a patient with quadriplegia who was able to use his hands by just thinking about the action. The signals in the brain recorded by implanted electrodes were analysed and fed into the muscles of the arm to activate the hand directly.

When journalists report on such bionic devices, rarely is there mention of the decades of research using macaques that eventually made these early brain-machine interfaces a reality for human patients. The public is shielded from this fact, thereby lending false credence to claims by animal rights groups that medical breakthroughs come from human trials with animal experiments playing no part.

Development of such brain-machine interfaces requires detailed understanding of how the primate brain processes information and many experiments on macaques using different interfaces and computing algorithms. Human ethics committees will not let you try this on a patient until such animal research is done.

Image: Understanding Animal Research



These devices are still not perfect and our understanding of brain function at a neuronal level needs more sophistication. In some cases, the macaque neural circuitry one discovers may not quite match the human’s, but usually it is as close as we can get to the human scenario, needing further fine-tuning in direct human trials. However, to eliminate all animal research and try everything out on humans without much inkling of their effects is dangerous and therefore highly unethical.

The technique Dr Rezai’s team used on human patients draws heavily upon work done on monkeys by many groups. This can be seen by looking at the paper and the references it cites.

Another case in point is the technique of deep brain stimulation using implanted electrodes, which is becoming an effective means of treating symptoms in many Parkinson’s patients. This is now possible largely due to the decades of work on macaques to understand in detail the complex circuitry involved in motor control. Macaques continue to be used to refine deep brain stimulation in humans.

Ethical choices

The number of monkeys used for such long-term neuroscience experiments is relatively small, with just two used in the study above. Many more are used for understanding disease processes and developing treatment methods or vaccines in the case of infectious diseases such as malaria, Ebola, HIV/AIDS, tuberculosis and Zika.

Approximately 60,000 monkeys are used for experiments for all purposes each year in the United States, Europe and Australia.

However, if one looks at what is at stake without these experiments on non-human primates, one must acknowledge a stark reality. In many cases, the situation is similar to that which once existed with polio. Nearly 100,000 monkeys were used in the 1950s to develop the polio vaccine. Before that, millions of people worldwide, mostly children, were infected with polio every year. Around 10% died and many were left crippled.

Now, thanks to the vaccine, polio is almost eradicated.

Similarly, about 200 million people contract malaria every year, of whom 600,000 (75% being children) die, despite all efforts to control the mosquitoes that transmit the disease. Development of a vaccine is our best chance, but again primates are necessary for this, as other species are not similarly susceptible to the parasitic infection.

Circumstances are similar with other devastating ailments such as Ebola, HIV and Zika. The ethical choice is often between using a few hundred monkeys or condemning thousands or more humans to suffer or die from each one of these diseases year after year.


Reports of medical breakthroughs conveniently leave out animals used in the process.
Novartis AG/Flickr, CC BY

In the popular press and in protests against primate research, there is sometimes no distinction made between great apes (chimpanzees, bonobos and gorillas) and monkeys such as macaques, leading to misplaced emotional reactions. To my knowledge, invasive experiments on great apes are not done anywhere, because of the recognition of their cognitive proximity to humans.

While the ape and human lineages separated six million years ago, there is an additional 20 to 35 million years of evolutionary distance from monkeys, which clearly lack the sophisticated cognitive capacities of the apes.

With urgent medical issues of today such as HIV, Ebola, malaria, Zika, diabetes and neurological conditions such as stroke and Parkinson’s disease, monkeys are adequate to study the basic physiology and pathology and to develop treatment methods. There is nothing extra to be gained from studying apes.

Alternatives have limitations

Opponents of animal research often cite the impressive developments of computer modelling, in-vitro techniques and non-invasive experiments in humans as alternatives to animal experiments. These have indeed given us great insights, and they are frequently used by the very same scientists who also use animals.

However, there are still critical areas where animal experimentation will be required for a long time to come.

Modelling can be done only on data already obtained and therefore can only build upon the hypotheses such data supported. The modelling also needs validation by going back to the lab to know whether the model’s predictions are correct.

Real science cannot work in a virtual world. It is the synergy between computation and real experiments that advances computational research.

In-vitro studies on isolated cells from a cell line cultured in the lab or directly taken from an animal are useful alternatives. This approach is widely used in medical research. However, these cells are not the same as the complex system provided by the whole animal. Unless one delves into the physiology and pathology of various body functions and tries to understand how they relate to each other and to the environment, any insights gained from studying single cells in in-vitro systems will be limited.

Though many studies can be done non-invasively on humans and we have indeed gained much knowledge on various questions, invasive experiments on animals are necessary. In many human experiments we can study the input to the system and the output, but we are fairly limited in understanding what goes on in between. For example, interactions between diet, the microbiome, the digestive system and disease are so complex that important relationships that have to be understood to advance therapy can only be worked out in animal models.

Of course, animals are not perfect models for the human body. They can never be. Species evolve and change.

However, many parts of our bodies have remained the same over millions of years of evolution. In fact, much of our basic knowledge about how impulses are transmitted along a nerve fibre has come from studying the squid, but our understanding also gets gradually modified by more recent experiments in mammals.

Higher cognitive functions and the complex operations of the motor system have to be studied in mammals. For a small number of these studies, nothing less than a non-human primate is adequate.

The choice of species for every experiment is usually carefully considered by investigators, funding bodies and ethics committees, from both ethical and scientific viewpoints. That is why the use of non-human primates is usually a small percentage of all animals used for research. In the state of Victoria, this constitutes only 0.02%.

Medical history can vouch for the fact that the benefits from undertaking animal experiments are worth the effort in the long run and that such experimentation is sometimes the only ethical choice. Taken overall, the principle of least harm should and does prevail. There may come a day when non-invasive experiments in humans may be able to tell us almost everything that animal experiments do today, but that is probably still a long way off.

Priorities in animal use

The ethical pressure put on research seems to be in stark contrast to that on the food industry. It is hypocritical for a society to contemplate seriously restricting the use of the relatively small number of animals for research that could save lives when far more animals are allowed to be slaughtered just to satisfy the palate. This is despite meat being a health and environmental concern.

To put this in perspective, for every animal used in research (mostly mice, fish and rats), approximately 2,000 animals are used for food, with actual numbers varying between countries and the organisations that collect the data.

The ratio becomes even more dramatic when you consider the use of non-human primates alone. In Victoria, for every monkey used in research, more than one million animals are used for meat production. However, the monitoring of the welfare of farm animals is not in any way comparable to that which experimental animals receive.

Reduced use of livestock can greatly reduce mankind’s ecological footprint and also improve our health. This is an ethical, health and environmental imperative. Animal experiments, including some on non-human primates, are also an ethical and medical imperative.

Trichur Vidyasagar, Professor, Department of Optometry and Vision Sciences and Melbourne Neuroscience Institute, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Herding Hemingway’s Cats: Book review

What can cats with six toes, flies with wimpy testes, fish with hips, and mice with socks tell us about how our genes work? Turns out, they – together with a cast of characters ranging from bacteria to our own species – can tell us quite a lot.

In Herding Hemingway’s Cats: Understanding how our genes work, Dr Kat Arney takes the reader on a journey through the past and present of the science of genetics, exploring the key discoveries and concepts that are beginning to explain the complex processes through which the hereditary information in our genes constructs us “in all our wobbly, unique and mysterious glory”.

Can this cat be herded? Image: Marc Averette


It’s a somewhat daunting challenge for a book that weighs in at just over 250 pages, but Dr Arney succeeds with a book that is accessible and entertaining without ever taking its subject for granted. This is in no small part due to the structure of the book, which unfolds in a series of interviews with pioneering scientists – some of whom have Nobel prizes, others who surely will – whose work has uncovered the many different ways in which our genes end up making the stuff we need when and where we need it (mostly). Amid the details of their discoveries about phenomena such as junk DNA, gene splicing, imprinting, and RNA interference there are many fascinating glimpses into their personalities, motivations, and occasional rivalries.


For all that Herding Hemingway’s Cats provides an insight into the tremendous progress that science has made in understanding how genes are controlled, anyone looking for a triumphalist hagiography should look elsewhere.

In the 13 years since the publication of the draft human genome, science has learned a lot about the protein-coding regions of our genes – the 1.5% of our DNA whose sequence is translated into the amino acids that make up the proteins in our cells – but our understanding of the function of the non-coding regions of our genes, and of the areas between genes, is still in its infancy. This is important because while many inherited diseases are due to errors in the protein-coding regions, most of the differences we see between individual human beings, and between our species and others, are due to differences found in this other 98.5% of our genome.

Dr Arney doesn’t shy away from these gaps in our knowledge and deficiencies in our understanding; she positively revels in them, so if you think we know nearly all there is to know about how our genes work then prepare to be surprised. With the help of her interviewees, she throws buckets of cold water over some popular (and for some, profitable) ideas about how the environment can influence the activity of genes, deftly skewers a few much-quoted – but unwise – statements by leading geneticists, and shows how even many standard scientific textbooks are surprisingly inaccurate when it comes to explaining the ways in which genes are organized and regulated within cells.

The interviewees – who are not always in agreement with each other – are allowed to tell much of the story, and that’s OK, as it allows the author to show the often messy and imperfect reality of cutting-edge science. She approaches her interviews with a lot of humour and an open mind, but also a determination to get to the heart of the matter. Occasionally the author does allow her impatience with some current trends in genetic research to show. For example, when discussing the work of scientists who trawl through the human genome looking for associations between small genetic variations called single nucleotide polymorphisms (a.k.a. SNPs, pronounced “snips”) and particular traits or diseases (in this case those linked to mental health problems), she writes:

But while this might yield a few more interesting links, I’m increasingly feeling that there are limited further gains to be made… To be fair to the snip-hunters, their discoveries do sometimes provide a useful chisel for researchers to start prising open the biological processes that underpin a disease. Not many people want to do that, though, because it’s hard. It involves doing tricky experiments, often using animal models, and taking years to unpick what’s going on. Much easier to apply for a million-pound grant and go fishing for yet more snips instead (I’ll get off my soap-box for now).

She needn’t apologize; her soap-box moment is most apt. This book is at heart a collection of stories about scientists who spotted something odd in an experiment and then, rather than shrugging their shoulders and moving on, did the tricky experiments, often using animal models, and put in the years to unpick what’s going on. In most cases they are still unpicking it, but through their failures and successes they have already transformed the way we understand how our genes work.

So who is this book for? It’s perfect for undergraduate biology students who are just starting to learn about genetics, and for those of us who studied genetics in the past and wish to catch up with the current state of the art, but really it’s for anyone who is curious about how the information in our genes becomes us.

Herding Hemingway’s Cats is a fascinating, funny, and at times provocative celebration of basic science, and an excellent debut by a new author whose enthusiasm for her subject we are sure will entertain and inform readers around the world.

Paul Browne

Herding Hemingway’s Cats: Understanding how our genes work by Dr Kat Arney is published by Bloomsbury Sigma, and is available in book stores nationwide, and online on Amazon as an audio book, hardback and e-book.

Exciting cells and controlling heartbeats – could optogenetics create drug-free treatments?

A laser-controlled brain or a heart that beats in time to a disco light display sound like some of the more vivid imaginings of science fiction writers. But scientists are gathering together tricks that may allow us to do just that – and they could be used to create drug-free therapies.

This is the growing field of optogenetics, where proteins that change their shapes in response to light pulses can be used to control the electrical activity of cells inside living animals.

The tools have been gathered from far and wide. There are the Channelrhodopsins – sensory receptors from algae – which respond to blue light, exciting cells by letting positive charges into the cell. The Halorhodopsins, isolated from extremophile bacteria – bacteria living in extreme conditions, in this case salt pools – let negative charge into cells in response to yellow light, shutting the excited cell down. A similar trick to de-excite cells is used by the Archaerhodopsins, isolated from another extremophile, which pump positive charge out of the cell in response to yellow light.

By taking parts from human neurotransmitter receptors and these bacterial light-sensitive domains, we can also create more complicated machines in the lab, such as Hylighter, which depresses activity in neurons on exposure to one colour until it is switched off by exposure to a second colour of light.

Using blue and yellow to manipulate. Lights by Shutterstock


In theory, this means that by combining pulses of blue and yellow light, neurons and muscles can be switched on and off to order, over incredibly short timescales (thousandths of seconds). Ultimately, this could lead to therapies whereby excitable cells can be “helped along” without the use of drugs and all the dangers that come with long-term use of drug-based cures.

Dancing flies and light-guided fish

Scientists have started exploiting this technology to increase our understanding of the circuits that underpin behaviour, with sometimes spectacular results: flies that dance on cue, for example, or fish that can be steered by light as they swim.

Two studies recently brought the possibilities of light-based electrical stimulation as a human therapy to the fore. Researchers at the University of Bonn looked to see if they could control heartbeats by applying light stimulation to animals whose heart cells were made to express Channelrhodopsin. A combination of Channelrhodopsin and Halorhodopsin allowed another team of researchers to “take over” the heart’s pacemaker cells in zebrafish, overriding their natural rhythm until the lights were turned off.

Where was I? Mouse by Shutterstock



In Nobel Prize winner Susumu Tonegawa’s lab, researchers found that memories that could not be recalled in mice with Alzheimer’s could be retrieved by exposing cells in the brain’s memory-forming centres to optogenetic stimulation. Cells that expressed Channelrhodopsin were made more excitable by exposing them to bursts of light, providing a “power boost” that helped these neurons maintain active connections, in turn helping to retrieve the memory of a past event.

This startling result suggested that Alzheimer’s patients could be forming new memories all the time, and may only need a helping hand to maintain the weak connections they form. While this would not stop the changes that make Alzheimer’s patients forget, it might extend the time during which they could retain their memories.

Down to the practicalities

Tonegawa’s study looked at how mice retrieved memories of a sound they had heard at the same time as receiving a short electric shock – something that mice with Alzheimer’s don’t normally remember. After boosting neurons in the brain region that builds these memories by stimulating their firing with Channelrhodopsin, the neurons in this region were helped to form the proper connections to maintain this memory. Tonegawa’s work, then, concentrated on systems that scientists know very well – the fight-or-flight reflexes we develop when something unpleasant happens.

We don’t yet understand the detailed circuitry of the brain that is probably of more interest to Alzheimer’s sufferers and their families: the mechanisms that control the subtle tasks our brain performs for us every day, such as our recall of loved ones’ faces or the location of our car keys. Optogenetics will only ever be as useful as our knowledge of where these fleeting memories are stored.

Nor are these interventions the stuff of emergency medicine. To help an injured heart or a forgetting brain, for example, we would need to know if the patient’s cells were healthy enough to still function, or whether they were too damaged to be properly integrated within their network, which would make exciting them useless.

In this case, we can consider, as some labs have done, taking cells (such as the patient’s own stem cells) and turning them into heart muscle cells or neurons in the lab. If these “replacement” cells can then be made to express Channelrhodopsin, for example, they could be injected into damaged tissue in the patient to supply the (light-controlled) function of the original damaged tissue.

This, however, brings up all the associated difficulties that tissue replacement therapies, such as stem cell therapies, create: how to integrate cells into existing tissues, how to stop them integrating where they are not needed, and in the brain, how to get them to integrate into the right networks.

If, on the other hand, excitable cells are found to still be healthy enough to support electrical communication, and only require optogenetics to turn up the volume of their signals, we still have to get our genetically encoded construct into the right cells. We also need to find a way to shine light on them (perhaps we would have to wear a fibre-optic pacemaker) and fine-tune our stimulation to each individual patient.

For chronic diseases, this may all be worthwhile, but the investment of time and expertise for the procedure will be considerable and is unlikely to change much as the technology advances. It’s clear we have a long way to go, but we may yet have our brains tripping the light fantastic.

Laura Swan, Cell biologist, University of Liverpool

This article was originally published on The Conversation. Read the original article.

How zebrafish help advance cancer research

Do sharks get cancer?

Despite the widely touted myth that sharks do not develop cancer, fish of all species do occasionally develop spontaneous tumours. This is of course also true for the most common of laboratory fish, the zebrafish. In this article, I will give you a brief overview of how the unique properties of the zebrafish have been exploited by scientists to generate very useful models to study the molecular basis of various cancers.

The use of zebrafish in cancer biology goes right back to when scientists first started using them in the lab, at which point it was noticed that they spontaneously develop various kinds of tumours. However, using these naturally occurring malignancies to study cancer development is rather impractical – not only would you need a lot of fish due to the rarity of these cancers, but there would also be a lot of heterogeneity as to what kinds of tumours develop. This is clearly not ideal if you want to study the molecular basis and treatment options of one particular cancer.

From disease to model

Subsequently, carcinogenic chemicals were used to speed up the onset of cancer development. However, similar to using naturally occurring tumours, this strategy is not terribly useful for studying one particular kind of cancer, as the resulting tumours can still be very diverse (although some substances tend to always cause the same type). This approach is mostly used to identify cancer-causing chemicals during human and environmental safety testing.

To study one specific cancer type in detail, scientists started to create zebrafish carrying particular loss-of-function mutations (i.e. genes that lose activity due to a change) or overexpressing certain cancer-causing oncogenes (i.e. genes that cause cancer when they are overly active). Usually, this leads to the early development of only one – or at most a few – types of cancer. The first of these more specific models were acute lymphoblastic leukaemia (ALL) models, but nowadays there are models for cancers of various tissues, ranging from the brain to the pancreas.

Most of these mutant models were originally created using mutagenizing drugs followed by screening for a phenotype, but recently the research community has shifted to more targeted techniques. These make use of novel genome editing tools, such as the CRISPR-Cas9 system, to switch off certain genes. The overexpression of specific genes, on the other hand, was usually achieved using proteins called transposases to integrate novel genetic information, but very recently the CRISPR-Cas9 system has also been tweaked to do the same.

Why study cancer in fish?

So why would anyone bother to go through all this effort in fish, when we could just use the more closely related mice or rats? Apart from the lower expense and the ease of generating large numbers of fish, the main reason fish are used is that visualizing particular cells is much easier than in other organisms. This is mainly due to two factors: the existence of various transgenic fish lines in which a particular cell type is labelled, and the existence of transparent adult fish (the Casper fish, shown below).


Transparent fish like the Casper fish shown here allow researchers to track cells inside the body of adult fish much more easily than ever before

The ease of labelling specific cell types has been exploited elegantly to study the clonal expansion of cancer cells that drives tumour growth in vivo as it happens, as well as for the study of cancer metastases. Now that adult transparent zebrafish have enabled even easier in vivo imaging, this approach has been used successfully to visualize the process by which metastases arise and cancer cells spread throughout the body.

Understanding the origins of melanoma

A recent paper from Charles Kaufman of the Harvard Stem Cell Institute and colleagues nicely illustrates just how powerful these advantages can be. In this paper, published in Science magazine, the researchers used a zebrafish melanoma model that they had developed a few years earlier, which expresses gene variants associated with the cancer in humans, and combined it with a newly developed transgenic zebrafish line in which cells expressing a gene known as Crestin, which is involved in early neural crest development, are labelled in green. The Crestin gene is normally not expressed in adult fish, but is switched on again in melanomas. This is why the combination is interesting: emerging melanoma cells re-express the normally silent gene and are labelled fluorescently.

This method allowed the researchers to track melanoma development from the very first tumour cell to the macroscopically visible tumour made up of millions of cells. The very early changes that have to occur for cancer to develop can now be studied in much greater detail than before, as these early tumorigenic cells are extremely hard (or completely impossible) to distinguish from normal cells if they are not labelled. In this specific case the researchers identified the activation of several gene pathways usually involved in neural crest development in the embryo as key events in the initiation of melanoma, and believe that their findings could lead to a new genetic test for suspicious moles in patients. Their work suggests a model of cancer development in which normal tissue becomes primed for cancer when oncogenes are activated and tumour suppressor genes are silenced or lost, but where cancer develops only when a cell in the tissue reverts to a more primitive, embryonic state and starts dividing.

This paper increased our understanding of the underlying biology of the very early stages of tumour development, and a detailed understanding of these early steps might be very important when developing preventative or therapeutic drugs.

Image: Kaufman, C.K., et al, 2016. A zebrafish melanoma model reveals emergence of neural crest identity during melanoma initiation. Science, 351(6272), p.aad2197. DOI: 10.1126/science.aad2197


In summary, the field of zebrafish cancer biology has made great advances in the last decade and will continue to do so with the increasing popularity of genome editing techniques. The easy visualization of particular cell types leads to distinct advantages of using zebrafish, particularly for the study of metastases and the very early stages of cancer development.

Jan Botthof

Kaufman, C.K., Mosimann, C., Fan, Z.P., Yang, S., Thomas, A.J., Ablain, J., Tan, J.L., Fogley, R.D., van Rooijen, E., Hagedorn, E.J. and Ciarlo, C., 2016. A zebrafish melanoma model reveals emergence of neural crest identity during melanoma initiation. Science, 351(6272), p.aad2197. DOI: 10.1126/science.aad2197