In defense of “The scientific basis for the support of biomedical science”

During our panel discussion, Dr. Greek criticized a classic study by Comroe and Dripps that appeared in the pages of Science, entitled “The scientific basis for the support of biomedical science”, which set out to analyze the time sequence of discoveries that had led to major medical advances.

Comroe and Dripps analyzed the top ten clinical advances in cardiovascular and pulmonary medicine over the 30 years preceding their study, which was published in 1976.  Their goal was to identify the key scientific discoveries that had led to these advances.  With the help of consultants and physicians, they read and carefully examined 4,000 individual articles, identifying 2,500 of them as essential for the development of the body of knowledge that led to these breakthroughs.

The main result of the study was that 41% of all articles considered essential for later clinical advances were not clinically oriented at the time the work was done, and that 62% of the key articles were in fact the result of basic research exploring fundamental questions of biology.  These figures can be considered lower bounds (underestimating the value of basic research), since any given study was categorized as “clinically oriented” even if it was done entirely on animals with a basic question in mind and merely mentioned in passing an interest in, or relation to, a particular disease.

Another interesting outcome of this study was a rather detailed chronological list of the key elements involved in the development of electrocardiography: from the early manifestations of electricity known in ancient times, to Galvani and the discovery of bio-electricity, Volta, Purkinje, the first ECG recordings in frogs and humans, and the development of ECG devices (see their Table 3).  Such a clear sequence of causal events leading to major breakthroughs is what the opposition usually demands as proof of the contribution of animals to medical advances.

The methodology of the study was criticized by Richard Smith eleven years after the publication of the original study.  Here are his central complaints:

“Comroe and Dripps [asked] 40 physicians to list the advances that they thought most important.  They do not, however, say in their paper whether they asked 40 and fewer replied or whether they asked more and only 40 replied.  Nor do they say how they selected these 40, and nor do they say in their methods why they chose only physicians, although they are defensive in their discussion about having done so.

From the replies Comroe and Dripps produced a list of the top cardiovascular and pulmonary advances and sent them to ’40 to 50 specialists in each field’ and asked them to vote on the list.  ‘Their votes selected the top 10 advances’.  Again this is very imprecise for a paper published in Science.  Exactly how many specialists were contacted?  How many responded?  How were they asked to vote?  How were the votes put together?”

Richard Smith concluded that the study was therefore “unscientific”.

There is, in fact, some validity to the criticism that the original Comroe and Dripps paper lacks methodological detail.  However, I would submit that these are minor flaws, and that it should be possible to fill in the missing pieces in a reasonable way to allow a replication of the study.  Calling the results of the study “unscientific” is not warranted.

In any case, in an effort to take a second look at these issues, Grant et al. in 2003 decided to address a similar question to that of Comroe and Dripps but using different methods (this was the study cited by Dr. Greek in our panel discussion).  First, Grant and colleagues opted to look at the leading advances in neonatal intensive care.  The first part of their study was performed in a similar way to that of Comroe and Dripps: coming up with a list of clinical advances in one specific area (neonatal intensive care) using a Delphi survey.  The top three advances identified by the vote of experts in this area were mechanical ventilation, replacement surfactant and antenatal steroids (their Fig 2.1).

Second, instead of reading and reviewing articles from the literature to identify the key elements of knowledge that contributed to these advances, this team opted for an automated bibliographic analysis of the literature based on a genealogy tree of articles.  Basically, the method works as follows.  First, search for articles within the last 5 years that deal with one of the clinical advances of interest (such as lung surfactants, using a keyword search).  Keep the top 5% most cited papers; presumably this set is of some importance, and it represents the first generation of papers.  Next, generate the next set of papers by collecting all the papers cited by the first set.  Rank this new set according to the number of times each has been cited and, again, keep the top 5% (this set represents the second generation of papers).  And so on.
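To make the procedure concrete, here is a minimal sketch in Python of the generational selection just described.  The tiny citation graph and the numbers are invented purely for illustration; this is my reading of the method, not the authors’ actual code.

```python
# A self-contained sketch of the citation-genealogy method described above.
# The small citation graph at the bottom is invented purely for illustration.

def top_cited_fraction(papers, counts, fraction=0.05):
    """Keep the most highly cited `fraction` of a set of papers (at least one)."""
    ranked = sorted(papers, key=lambda p: counts.get(p, 0), reverse=True)
    keep = max(1, int(len(ranked) * fraction))
    return ranked[:keep]

def build_generations(seed_papers, references, counts, n_generations=3, fraction=0.05):
    """seed_papers: recent papers returned by the keyword search (e.g. 'lung surfactant').
    references: dict mapping each paper to the papers it cites.
    counts: dict mapping each paper to its citation count."""
    current = top_cited_fraction(seed_papers, counts, fraction)   # generation 1
    generations = [current]
    for _ in range(n_generations - 1):
        # Next generation: everything cited by the current generation,
        # again trimmed to the most highly cited fraction.
        cited = {ref for paper in current for ref in references.get(paper, [])}
        current = top_cited_fraction(sorted(cited), counts, fraction)
        generations.append(current)
    return generations

# Toy example: three recent surfactant papers citing older work.
references = {"A": ["X", "Y"], "B": ["Y", "Z"], "C": ["Z"]}
counts = {"A": 120, "B": 40, "C": 15, "X": 300, "Y": 80, "Z": 500}
print(build_generations(["A", "B", "C"], references, counts, n_generations=2, fraction=0.34))
# -> [['A'], ['X']]
```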

The method seems automatic and bias-free.  But is it generating meaningful results?  Where does the 5% threshold come from?  Normally, in science, only a handful of studies in any one period of time provides the key elements necessary to drive a breakthrough.  In Table 3 of Comroe and Dripps, for example, they identified only 21 key studies between 1900 and 1967, or about 1 key discovery every 3.2 years.  Thus, I would suggest that 5% is too high a threshold, one that only adds a tremendous amount of noise to the literature under study at each generation.  Further, the authors never consider citation rates, so a paper that received 50 citations in 5 years might rank higher than a paper that received 49 citations in 2 months.
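To make the point about citation rates concrete, here is a toy comparison; the numbers simply mirror the hypothetical example above and are not taken from the actual dataset.

```python
# Toy illustration: ranking by raw citation count vs. by citation rate.
papers = [
    {"title": "Older paper",  "citations": 50, "months_out": 60},
    {"title": "Recent paper", "citations": 49, "months_out": 2},
]

by_count = sorted(papers, key=lambda p: p["citations"], reverse=True)
by_rate  = sorted(papers, key=lambda p: p["citations"] / p["months_out"], reverse=True)

print([p["title"] for p in by_count])  # ['Older paper', 'Recent paper']
print([p["title"] for p in by_rate])   # ['Recent paper', 'Older paper']  (24.5 vs ~0.8 citations/month)
```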

Even if we assume the analysis yields a reasonable collection of papers, the authors then split the papers in each generation into five categories or levels: level 1 (clinical observation), level 2 (clinical mix), level 3 (clinical investigation), level 4 (basic research) and finally level 5 (unknown).  Yet this classification was made solely according to the journal in which the research was published, which seems a rather crude method.
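To see just how coarse a journal-based rule is, consider this schematic version.  The example journals are the ones Grant et al. reportedly used to illustrate each level (as discussed in the first comment below); the mapping itself is my simplification, not the study’s actual classification.

```python
# Schematic of a journal-based "research level" classification. Any journal
# not in the lookup table ends up in the catch-all "unknown" level.
JOURNAL_LEVELS = {
    "BMJ":                             1,  # clinical observation
    "New England Journal of Medicine": 2,  # clinical mix
    "Immunology":                      3,  # clinical investigation
    "Nature":                          4,  # basic research
}

def research_level(journal_name):
    """Assign a research level from the journal name alone; 5 = unknown."""
    return JOURNAL_LEVELS.get(journal_name, 5)

print(research_level("Nature"))                   # 4 (basic research)
print(research_level("Some specialist journal"))  # 5 (unknown), regardless of content
```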

As a matter of fact, when I raised these issues in a recent email exchange with Dr. Jonathan Grant, he wrote back saying that:

“The issue I have with my analysis is […] the metric (research levels) for classifying basic and other research. Although I know of no other way of doing this bibliometrically I have come to believe it is too crude.”

In addition to this issue, one should note that by splitting the papers into more categories (not just basic and clinical, as Comroe and Dripps did), the absolute percentage one would expect for each category is automatically reduced.  In fact, the highest percentage of studies in any one generation falls into the “unknown” category (~40% of them).  Comparing absolute percentage levels from this study with those of Comroe and Dripps is therefore not possible.

Grant and colleagues were careful enough to express reservations about their results when they wrote:

“In reaching this conclusion we are acutely aware of the significant limitations to the revised methodology and, therefore, we caution against the over-interpretation of our results.”

A caveat Dr. Greek should consider mentioning when referring to this study.

In closing, I’d like to offer the readers a challenge.  Consider the two medical advances in neonatal intensive care Grant et al. identified with the help of experts: ventilation and surfactants.  It seems to me that anyone with knowledge of the field’s medical history will immediately recognize the role animals played in their development.  Don’t take my word for it.  Instead, read the story as told by one investigator who was directly involved in these discoveries.  It is a wonderful tale that will take you from the basic physics of capillarity and surface tension (yes, basic science again), to the elucidation of the composition of lung surfactants in animals, to the treatments that each year save the lives of thousands of premature babies who, 50 years ago, would have died.

The inescapable conclusion is that the lives of countless premature babies are saved today thanks to basic research with animals.

It is as simple as that.

Regards

Dario Ringach

3 thoughts on “In defense of “The scientific basis for the support of biomedical science””

  1. The issue of the categories into which Grant et al. divided publications is interesting. They allocated each paper to
    clinical observation, clinical mix, clinical investigation, basic research or, finally, unknown according to the identity of the journal in which the paper was published.

    Interestingly, they give examples of the journals. For “basic research” the example is Nature, which is indeed a journal that specialises in basic research, and for “clinical observation” they give the example of the BMJ, which is indeed a journal that specialises in clinical research.

    The picture starts to get murkier with “clinical mix” and “clinical investigation”. The example given for “clinical mix” is the New England Journal of Medicine, and that for “clinical investigation” is Immunology. While the NEJM does concentrate on clinical research, a look at Immunology will quickly reveal that most of the papers published there are in fact reporting basic research: http://www3.interscience.wiley.com/journal/118493028/home

    Since it is probably safe to assume that a good proportion, probably the majority, of the papers in the “unknown” category are reporting basic research (clinical journals are usually quite obviously clinical), it is clear that many, perhaps most, of the papers reporting basic research that Grant et al. found were assigned to categories other than “basic research”. Dr. Jonathan Grant is certainly correct in acknowledging that his method of allocating papers to categories was too crude. I’m fairly confident that a more accurate classification of papers would bring the total much closer to the 62% that Comroe and Dripps proposed.

    Overall this shows the difficulty of trying to use metrics-based analysis of publications to determine the contribution of different areas of scientific endeavour to medical progress. Subjective though their method may have been, Comroe and Dripps’ work almost certainly produced a more accurate assessment of the true contribution of basic science to medical advances.
