Yves here. KLG proposes some significant changes to biomedical research funding, such as replacing the system of annual funding with longer-term commitments, putting more emphasis on rigor and productivity than on publication counts, and allocating credit more broadly for what is actually teamwork.
By KLG, who has held research and academic positions in three US medical schools since 1995 and is currently Professor of Biochemistry and Associate Dean. He has performed and directed research on protein structure, function, and evolution; cell adhesion and motility; the mechanism of viral fusion proteins; and assembly of the vertebrate heart. He has served on national review panels of both public and private funding agencies, and his research and that of his students has been funded by the American Heart Association, American Cancer Society, and National Institutes of Health.
In the previous post of this series, Science on Trial, Part 1, we discussed the perceived necessity of a new ethics of scientific research in the “sped-up and scaled up world of science during a global pandemic.” In my view the answer is, “No, we do not need new ethics.” But we do need to change how research is supported if we are going to take advantage of the advances that have inarguably improved the speed, efficiency, reach, and goal-directedness of Biomedical Science, when properly practiced. And by “we,” as always, I mean all of us as members of a polity and society. How we have gotten to this point was considered in our previous discussion of the Bayh-Dole Act of 1980 and the relative contributions of publicly supported research and private, i.e., corporate, activities in the advancement of Biomedical Science over the past 30+ years. It is clear that the results of publicly supported research form the essential foundation upon which all biomedical science rests.
The current problems of Biomedical Science in our sped-up and scaled up world were defined as Honesty, Fairness, Objectivity, Reliability, Skepticism, Accountability, and Openness. I have based much of what follows on my personal experience in biomedical research in a career that began as a laboratory dishwasher and proceeded along a winding path to academic, grant-writing “Principal Investigator” [1]. As PI, I was responsible for a lab full of undergraduate and graduate students plus two senior research professionals who kept the operation going. Although I worked at the lab bench whenever possible, my primary responsibility was to read and think, while staring out the window, in a constant effort to maintain our momentum. Which, of course, depended on our funding: No grants, no lab, no research, no people, and no job (for me). This was demanding and at times all-consuming, but I was somewhat successful during that period of my professional life.
My experience remains the impetus of this post, but these issues have been addressed in a compendium recently published by two biomedical scientists, Ferric C. Fang, MD, of the University of Washington School of Medicine, and Arturo Casadevall, MD-PhD, of the Johns Hopkins University Schools of Public Health and Medicine: Thinking about Science: Good Science, Bad Science, and How to Make it Better (F&C), American Society for Microbiology/John Wiley & Sons, 2024; $56 direct from ASM. At 524 pages, including endnotes and index, this remarkable work is a substantial handbook in 32 concise chapters (listed at the link). My few quibbles are inconsequential, mostly related to how clinicians and basic scientists differ in their approaches to “training.” I plan to read Thinking About Science with our preclinical medical students, who tend to accept what is published without question. This is something they must be “untrained” to do.
Let us begin with Honesty, which F&C cover well. I can count on the fingers of one hand, with four left over, the dishonest scientists I have knowingly encountered since the mid-1970s (yes, perhaps I have led a charmed life). And even then, that one scientist’s “circumlocutions” had more to do with exaggeration of his particular contributions than with the data and results themselves. There was never any question about those, which were presented in all their raw glory straight from the lab notebook in regular lab meetings throughout our time in the same lab. I was fortunate that my first job in science was in a laboratory that considered the work of any member the work of every member, where “no-holds-barred,” often heated, discussions were expected on every aspect of every project, all the time. No manuscript ever escaped my first research group without a word-by-word examination, and that has remained true for my own group.
Still, the occasional genuine cheat will surface. Why do scientists cheat? Well, some people apparently cannot help themselves, but they are, or were, very rare [2]. From F&C (p. 200), most of the time scientists on the edge “are motivated by the fear of loss rather than by a desire for gain…Scientists who have committed misconduct typically do so out of fear of losing research funding, employment, or prestige…(creating)…a ‘hypermotivation’ [3] to cheat, which may overcome the desire of scientists to behave in an ethical manner.” Indeed, as Eric Poehlman, who was convicted of using fabricated data in grant applications, explains himself (F&C, p. 201):
I placed myself…in an academic position (in) which the amount of grants that you held basically determined one’s self-worth. Everything flowed from that. With that grant I could pay people’s salaries, which I was always very, very concerned about. I take full responsibility for the type of position that I had that was so grant-dependent. But it created a maladaptive behavior pattern…which I should have but was not able to stand up to. I saw my job and my laboratory as expendable if I were not able to produce.
I would describe this as the most unfortunate adaptation to an extremely maladaptive environment. Still, no excuses can be accepted, and Poehlman rightfully lost his career and went to prison, which is a rare outcome. Another scientist guilty of similar fraud has, to my knowledge, escaped the latter penalty, although a few have been punished with prison recently.
Regarding this maladaptive environment, which has only gotten more capricious over the course of my professional life, let us consider the “economics” of Biomedical Science, i.e., research that is funded by the National Institutes of Health, the National Science Foundation just over the border from “biomedical,” and other sources such as the American Heart Association and the American Cancer Society (F&C, Chapter 19). How Fair is it? Not very. First, there is priority – who gets credit for what in a discipline that is essentially a team effort. F&C cover this from several angles, including a number of Nobel Prizes (Chapter 21) that left out one or more “discoverers.” I am intimately familiar with one of those, and they make a good, if marginal, point. But funding depends on the claim of being first, with “productivity” measured in “primary or corresponding authorship” – usually the first or last author listed on a paper. All authors are responsible for the content of a scientific paper, something that has been forgotten lately, but the corresponding author takes the lead in publication. In my experience, priority is very often not a simple matter. It can, however, be measured (i.e., counted) very easily by administrators, which also contributes to maladaptive behaviors among all parties involved.
Scientific publishing has also been corrupted by the inappropriate use of “Impact Factors” attached to journals. We have discussed this before through an article by Stuart Macdonald. Suffice it to say that publishing is now more of a game than it ever has been, especially in Biomedical Science. How to fix (some of) this according to F&C, Chapter 19 (with my notes):
- Replace the current system of annual funding allocations with stable long-term support indexed to productivity (a very good idea that raises the question: “What is productivity and how do you measure it?”)
- Replace the winner-takes-all economic model with a fairer allocation of credit (in my experience scientists generally don’t read much outside of their discipline, but they could take lessons about their fate as members of a “middle class” from Hacker and Pierson’s Winner-Take-All Politics)
- Reward scientific rigor rather than publication output (which is especially important in the current era of neo-predatory, pay-to-publish, open-access journals)
- Recognize team science and collaborative contributions (yes, science is a team sport)
- Abolish prizes to individual scientists for specific discoveries (of marginal utility in the greater scheme of Biomedical Science and other “Nobel” and “non-Nobel” disciplines)
This all would be an outstanding start, and it leads directly to the Immodest Proposal of my title. In my view we must “fix” Biomedical Science in the immediate future; if we do not, its prospects are dim.
My immodest proposal is that we double the extramural research budgets of the National Institutes of Health and the National Science Foundation and thereby return each agency to its original goal of supporting scientific research as a public good.
This is not as outrageous as it sounds, and we can afford it. All that is needed is the will to do so while reordering our priorities. But this also requires that I explain myself.
In 1965 NIH funded about 50% of the R01 research grant applications it received (F&C, Figure 31.1). The “R01 grant” is the primary mechanism for funding investigator-initiated research applications to NIH. As the scientists who taught me explained, “Back in those days we met to decide which grants not to fund.” This is an exaggeration, but only slightly so. Success rates declined for the next 30 years to about 30% in 1995. After the doubling of the NIH research budget during the Clinton administration, success rates temporarily improved to about 33%. They have declined steadily since, with a slight upward blip due to the American Recovery and Reinvestment Act (ARRA) of 2009 in the aftermath of the Great Financial Crisis. Current overall success rates hover around 20%. Thus, four out of five NIH research project grant (RPG) applications are rejected. In some institutes, e.g., the National Cancer Institute, success rates are often in the single digits. Recent accounting shows that as many as nine out of ten applications to study cancer are rejected.
Funding for RPGs in 2022 was $25.4B, roughly 20% less than in 1995 after adjusting for inflation, and much less as a fraction of the economy. We have not kept up, in either funding levels or success rates. And while the opportunity costs of decreases in public research funding are unknowable, they are certainly high. Perhaps COVID-19 illustrates why. The original SARS epidemic was caused by SARS-CoV-1. It lasted less than a year, from November 2002 through June 2003. About 800 of 8400 SARS patients died. A therapeutic monoclonal antibody against SARS (albeit tested in hamsters, which are a good experimental model) was identified in 2006 (pdf), but serious research on SARS largely ended there. This should not have happened: after the short, mostly Asian SARS outbreak subsided, so did funding for research on a disease that had disappeared so quickly. It is quite reasonable to believe that the past four years would have been different had SARS remained the subject of continued research into coronaviruses that cause human disease [4]. That would have required generous support, spread among different research groups looking at coronaviruses from every imaginable perspective.
On the other hand, it could be argued that a 20% success rate is sufficient, that reviewers can reliably identify those applications in the highest quintile. I have served on and chaired review panels for the past 15 years. My experience, which is shared by my colleagues, indicates there is no way for a review panel to distinguish among applications in the “top third” of the pool, at best. These applications are all alike. They address a significant problem [5], are complete and well written and supported by a strong foundation, and have been submitted by a PI who is qualified and well positioned to be successful. They differ only in the subjective opinions of members of the panel or the Program Directors responsible for final approval. Thus, which applications are funded is essentially random. This is the inverse of the earlier period, when review panels met to decide which grants not to fund. Unless and until we return to the days when one-third of the grants in a pool are funded after initial review and another third are funded upon revision, our opportunity costs will continue to rise, even though we cannot know what they are and what we have missed [6].
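To make this concrete, the claim that funding within the “top third” amounts to a lottery can be illustrated with a toy simulation. This is a minimal sketch under assumed numbers – the pool size, payline, and reviewer noise below are my inventions for illustration, not data from F&C or NIH:

```python
# Toy model: the top third of a grant pool is genuinely strong and essentially
# equal in merit; each panel sees merit only through reviewer noise and funds
# the top 20%. How often does any given strong application get funded?
# All parameters are illustrative assumptions.
import random

random.seed(2024)

N_APPS = 300        # applications in the pool (assumption)
PAYLINE = 0.20      # roughly the current overall RPG success rate
TOP_THIRD = N_APPS // 3
NOISE = 0.3         # reviewer-to-reviewer disagreement (assumption)

# Strong applications share the same true merit; the rest are clearly weaker.
true_merit = [1.0] * TOP_THIRD + [0.0] * (N_APPS - TOP_THIRD)

def funded_set():
    """One independent panel funds the top PAYLINE fraction by noisy score."""
    scored = sorted(range(N_APPS),
                    key=lambda i: true_merit[i] + random.gauss(0, NOISE),
                    reverse=True)
    return set(scored[:int(N_APPS * PAYLINE)])

TRIALS = 200
counts = [0] * TOP_THIRD
for _ in range(TRIALS):
    winners = funded_set()
    for i in range(TOP_THIRD):
        if i in winners:
            counts[i] += 1

rates = [c / TRIALS for c in counts]
print(f"Chance a strong application is funded: "
      f"min {min(rates):.0%}, mean {sum(rates)/len(rates):.0%}, max {max(rates):.0%}")
```

In this toy world every strong application converges on roughly the same ~60% chance of crossing the payline (the payline divided by the size of the competitive band); its fate is determined by panel noise, not by anything the applicant did differently.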
However, we do know what we are missing in the development of a strong community of biomedical scientists who are willing to work together to increase our scientific knowledge. What was “publish or perish” 30-40 years ago has now become “publish and still perish.” And no wonder. Biomedical research has become a zero-sum game in which one group’s success means another group’s failure. Thus, Objectivity, Reliability, Accountability, and especially Openness have unfortunately gone by the wayside, with dire consequences.
As a concrete example of this, I know a biomedical scientist who, as an Assistant Professor, was an excellent teacher and research advisor and turned down prospective students because his lab was full. He supported his laboratory from various sources, including NIH and other agencies, and graduated three PhD students who did very well (each was awarded a competitive national predoctoral fellowship) and one MD-PhD student who has become a leading academic physician in his specialty. This early-career scientist was also recognized as an important contributor in his specific scientific community. He was the Principal Investigator on a substantial NIH award roughly equivalent to an R01. However, a local Program Director above him was granted the institutional credit for the award. The inexorable result: No NIH grant to his credit, no tenure, no lab, no job. This outcome of a real-life season of “Survivor” [7] is not uncommon. It must stop. It is something we as a polity and a society cannot afford. It is too expensive, but as with much in our late neoliberal political economy, such costs remain an unaccounted negative externality.
This lack of the public funding necessary to support biomedical research is addressed as “the primary problem of inadequate funding” in F&C (Chapter 31). A proposed solution is for review panels to “approve” those RPG applications worthy of support and then choose the “winners” in a lottery. In effect, this is already standard procedure, even if it is not recognized as such. Formalizing it would not be an improvement, nor would it be fair to those who are simply unlucky when the clock does not allow for more than a few chances in the lottery.
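For what it is worth, the F&C mechanism would look something like the sketch below; the application IDs and the number of awards are hypothetical placeholders, and the only substantive inputs are the panel’s yes/no judgment of fundability and the size of the budget:

```python
# A sketch of "approve, then draw": the panel certifies which applications
# are fundable, and a lottery picks winners up to what the budget allows.
# Application IDs and the award count are hypothetical.
import random

approved = [f"RPG-{i:03d}" for i in range(1, 41)]  # 40 applications judged fundable
awards_available = 12                              # what the budget covers (assumption)

winners = random.sample(approved, k=awards_available)
print(sorted(winners))
```

The explicit lottery would at least make the arbitrariness visible, but as argued above, the current system already behaves this way implicitly, and it does nothing for the applicant whose clock runs out after two or three unlucky draws.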
The pressures put on Biomedical Science by this regime have a broad reach, as noted throughout F&C, while deleteriously affecting each of the Seven Principles from Part 1: Honesty, Fairness, Objectivity, Reliability, Skepticism, Accountability, and Openness. The damage has extended to the philosophy of science, too. Neither F&C nor I mean the discipline “Philosophy of Science” here. Rather, our target is the crapification of Biomedical Science due to a thoroughgoing neoliberalization (Philip Mirowski) that has severely diminished its promise and effectiveness among those committed to the life of a research scientist.
This begins when graduate students join “training programs” with primary emphasis on speed, visibility, and impact that will ensure continued funding above all. A former colleague’s simple directive to his students was “Your task is to publish research in a high-impact journal.” Truly impactful research would require a deep knowledge of the foundation of his research group’s particular specialty, but that would undoubtedly be “old and out of date” and therefore irrelevant. In my experience very little impactful research is completed without a deep understanding of the history and development of the discipline. His guidance should have been “think well while pursuing a path that is likely to answer important questions, and if our ideas are correct a ‘high-impact journal’ will welcome our work.”
In a promising development, the School of Public Health at Johns Hopkins has developed a program to “train (sic) individuals…(to be generalists)…with broader knowledge of science and philosophy…(by)…including courses in epistemology, communication, ethics, and causality” (F&C, Box 31.1). To which I can only respond with an “amen” in the hope that this initiative is widely adopted. The one essential answer to this prayer is increased funding of basic Biomedical Research that allows a “thousand flowers to bloom,” as during the development of vaccines against smallpox in the late 18th century and cholera in the late 19th, work that was understood to reach back much further in Asian and European history.
Another serious maladaptive consequence of the lack of adequate public support for Biomedical Science is the political and administrative shortcut represented by “an increasing emphasis on targeted research funding” (F&C, p. 392). This has been particularly evident in my experience at NSF, where programs have asked for “multidisciplinary, theoretical” approaches to specific problems. At an international meeting about 15 years ago I briefly discussed an idea with a rotating NSF Program Director who was also a well-known cell biologist at a strong research university. With a smile, he told me that unless I proposed “Cana Science,” I would be wasting my time. Cana Science? That would be the equivalent of turning water into wine as Jesus of Nazareth did at the wedding in Cana, his first public miracle described only in the non-synoptic Gospel of John. His fair point was that unless I proposed something theoretically “out of this world,” I shouldn’t bother [8]. I bothered anyway, with no success at that time. But I can be patient.
Will more public support mean better science? That depends on the meaning of “better.” If biomedical scientists are once again allowed to pursue interesting questions over a long career, some projects will inevitably “fail.” But these projects are also likely to identify, through serendipity, unexpected pathways to a deeper understanding of biology and medicine. This should be the goal of Biomedical Science. A renewed commitment to science as a public good will also rebalance the relationship between Biomedical Science and Biomedicine (NIH SBIR/STTR, and the Healthcare System, including PhRMA and AHIP and perhaps the AMA). Biomedicine has recently given us shortages of old but effective anticancer drugs and a profusion of expensive new and probably dangerous drugs that are used as work-arounds for our unnaturally unhealthy diet. We scientists must do better in a society that demands more from us.
Can we afford to double the budgets of NIH and NSF and maintain the momentum developed? Yes, just as we did in the 1950s, 1960s, and 1970s, provided we recover the will to do so and are serious about reordering our priorities. The unstated corollary to my immodest proposal is repeal of the Bayh-Dole Act of 1980 and the extensive use of march-in rights to recover the public good from Biomedicine as it is currently practiced.
The true performance of the current scientific community has been revealed by COVID-19. I plan to return to this soon in an examination of power relationships between and among research institutions and scientists, and technology, politics, and society. There are solutions to this mess!
Notes
I thank LS for suggesting the subtitle to this post, but he is completely innocent of the deficiencies in my argument.
[1] As the late Professor Joel Horowitz taught me in Introductory Sociology as a college freshman, we should seldom generalize from our personal experience. True. But that also depends on how general and extensive one’s experience has been. I believe mine has been both, in the practice of biomedical research, but that judgment remains up to the reader.
[2] Serial fraudster John Darsee was finally found out while making a name for himself in the laboratory of an eminent cardiologist at Harvard Medical School, where he published five major papers in his first 15 months. This should have been the first clue that something was amiss. Instead, his “productivity” led to the offer of a faculty position at Harvard Medical School. In the end, his license to practice medicine in New York was revoked.
[3] S. Rick and G. Loewenstein, Journal of Marketing Research 45: 645-648 (2008). Available after registration.
[4] MERS appeared in 2012, and through 2021 about 2500 cases were reported, of which nearly 900 were fatal, a death rate approaching 35%. MERS-CoV is not very transmissible, or the toll would have been much higher. Human coronavirus virulence, pathobiology, and transmissibility remain important subjects as we enter the fifth year of COVID-19.
[5] Significance cannot be determined ex ante in the judgment of review panels or program directors; covered previously here.
[6] The other third of applications are “not competitive” for various reasons and are likely to remain in that category. That leaves 67% of applications worthy of funding. With normal attrition among those requiring revision, the overall success rate would approach ~50%, which is the sweet spot of the mid-1960s golden age that really was. It has never been easy to maintain a funded research group, but it was previously possible. Now, this has become nearly impossible for many if not most would-be biomedical scientists, especially those who view scientific research as a vocation serving the public good while lacking a dominant “grantsmanship gene.”
[7] “Survival of the fittest” is not from Charles Darwin. It was first used by Herbert Spencer. His Social Darwinism has become the popular neoliberal understanding of meritocracy as an evolutionary process. Darwin did not object strenuously to the expression, but his largely undefined concept of “fitness” certainly did not extend to the political economy of Victorian England and the United States after the Civil War and Reconstruction or the high neoliberalism of the 21st-century Professional Managerial Class (PMC).
[8] I lacked the wit at that moment to reply that I was being asked to follow the example of Philippus Aureolus Theophrastus Bombastus von Hohenheim – Paracelsus – the 16th-century alchemist.