By KLG, who has held research and academic positions in three US medical schools since 1995 and is currently Professor of Biochemistry and Associate Dean. He has performed and directed research on protein structure, function, and evolution; cell adhesion and motility; the mechanism of viral fusion proteins; and assembly of the vertebrate heart. He has served on national review panels of both public and private funding agencies, and his research and that of his students has been funded by the American Heart Association, American Cancer Society, and National Institutes of Health.

I admit it.  The undoubtedly targeted ad that appeared in the right margin of my Firefox homepage the other day said, “It’s weird to be the same age as old people.”  True.  This reminded me that I have been doing science for a long time and cannot imagine having done anything else in what has become my professional life.  From the moment I walked into the libraries (both main and science) in my undergraduate institution, which is the “flagship” state university in my home state, I was enthralled.  Most of my friends could not wait to get out and “get on with their lives,” even though our alma mater was then and has maintained its deserved reputation as Party Central.  And in retrospect they probably had the better idea.  But it was also a serious university, and I never left academic life.  I have seen a lot.

From the beginning, the Current Journals table in the Science Library was a revelation: As many as 50 new issues of journals from all over the world appeared every day.  From Geography to Quantum Mechanics and everything in between (1).  Naturally, I concentrated on the biology journals, from American Naturalist to Evolution to Journal of Biological Chemistry.  I got my first student job, as a dishwasher in a teaching laboratory, at the beginning of my second year, and that led to a long apprenticeship followed by a series of positions up the so-called chain.  I no longer have a laboratory of my own, full of students and leavened with the essential research technicians, and while I miss that, other activities have become just as rewarding and probably more useful.  Besides, part of me believes I may one day return to the lab.  As both Gandhi and Erasmus may have said: “Work as if you will die tomorrow.  Study as if you will live forever.”  Or at least until you get back in the lab.

What follows is both a description and personal lament at the state of my profession (2).  While it is indeed true that I have always had stars in my eyes when it came to academic life, I was early and often reminded that scientists are simply people, some more straightforward than others, some more interested in their “careers” than the quality of their work.  But it is also true that at the beginning of my life in science, the idea of the disinterested scientist whose goal was to understand the natural world was very real (pre-Bayh-Dole Act of 1980).  Not that this has disappeared, but institutional imperatives from the Dean’s or Director’s office to the Office of the Director of the National Science Foundation have made such research much more difficult.

Which brings me back to my beginnings when the world of scientific literature was truly something special.  Stuart Macdonald has recently published a review in the journal Social Science Information with the title “The gaming of citation and authorship in academic journals: a warning from medicine.” (3). This paper is regrettably behind a rather stout paywall that was surmounted by my institutional library, but I will do my best to describe it here.  The primary argument is that “peer review no longer maintains standards in academic publishing, but rather covers up the gaming of citation and authorship that undermines these standards.”

It is a fair statement that this is a direct result of the development of the Journal Impact Factor (JIF), which was introduced by Eugene Garfield.  Basically, JIF has become a proxy for the importance of a journal in its field and, by extension, the importance of the work published in the journal (4).  At one level this is, of course, perfectly natural.  Nature has a high impact factor and is where Watson and Crick published their one-page paper in 1953 on the DNA double helix that resulted in a Nobel Prize in 1962.  But the structure of DNA came before JIF.  And it is beyond ridiculous that this paper remains behind a paywall seventy years later.  I digress, but this is another telling issue about scientific research and its publication that is on my agenda for this series.

As a “scientometric” (Ugh!) tool, JIF has its uses.  For example, the data compiled as part of Garfield’s initial work allow one to easily track when a concept or term first appeared in the literature.  But JIF itself is easily gamed, and while the nominal formula is simple, the underlying citation data and the decisions about what counts as a “citable item” remain a money-making proprietary secret.  From Wikipedia, but this is an accurate statement of facts based on my reading and long experience:

“Impact factors began to be calculated yearly starting from 1975 (when I began my first full-time research position) for journals listed in the Journal Citation Reports (JCR). ISI was acquired by Thomson Scientific & Healthcare in 1992, and became known as Thomson ISI. In 2018, Thomson-Reuters spun off and sold ISI to Onex Corporation and Baring Private Equity Asia.  They founded a new corporation, Clarivate, which is now the publisher of the JCR.”
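The nominal two-year JIF calculation is publicly described, even if the data behind it are proprietary.  A minimal sketch, with made-up numbers for a hypothetical journal:

```python
def two_year_jif(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Nominal two-year Journal Impact Factor: citations received this year
    to items published in the previous two years, divided by the number of
    'citable items' published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 2,400 citations in 2022 to articles published in
# 2020-2021, which together contained 600 "citable items."
print(two_year_jif(2400, 600))  # 4.0
```

The gaming opportunities live in both numerator (coerced citations) and denominator (what Clarivate agrees to count as “citable”).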

High impact factors mean more money for publishers, especially online open-access journals, and one of their primary business tools is publishing the most cited articles rather than the best articles, meaning those that advance a particular field, even if not today or tomorrow.

So, what does this mean for the practice of science?  Somewhat unintuitively, what is conventional is what gets cited.  The incentive, therefore, is for authors to stick to the known or the popular.  From the beginning of the JIF era, it became clear that scientists should not go beyond the acceptable if they wanted to thrive: “The latest research and bright ideas are to be avoided because they link to little else and this makes articles difficult to cite.  Demand is for run-of-the-mill, water-is-wet articles, old standards that everyone has been citing for years and which serve as evidence that an article is embedded in the literature.”  This practice has also led to the proliferation of the LPU – least publishable unit – papers that are often strung together to produce a publication list, and little else.  How do journals and editors game citation?  I was taken aback by what appears in the (actually quite interesting) literature on JIF and related topics (in the form of other papers cited by Macdonald, not all of which I have read so far):

We … [used] … to make our acceptance criterion those articles that we felt would make a contribution to the international literature.  Now our basis for rejection is often ‘I don’t think this paper is going to be cited.’ (editor of medical journal as quoted in Chew et al., 2007, p.146)

We have noticed that you cite Leukemia [just once in 42 references]. Consequently, we kindly ask you to add references of articles published in Leukemia to your present article. (editor of Leukemia to author as quoted in Smith, 1997)

Given the state of biomedical publishing and its connection to Evidence-Based Medicine, I have no idea why these two passages surprised me.  But they did.  A related practice when writing a grant proposal is to salt the bibliography with likely members of the review panel, for which rosters are sometimes available.  Some reviewers are likely to be influenced by this accepted form of “grantsmanship” (I still hate this word, but I have also done it).

And this brings us to the explicit treatment of biomedical practice and publishing in Stuart Macdonald’s review:

Medicine provides a particularly vivid example of the failure of peer review to cope with the reality of academic publishing (see Jefferson et al., 2007; Cochrane Database of Systematic Reviews 2: MR000016; see here for analysis of a recent Cochrane production on a subject of the day).  In medicine, peer review serves less to guarantee academic standards than to make even the most egregious publishing practices look respectable (see Fanelli, 2009, no paywall; Bosch et al., 2012, no paywall).  ‘The journal editor says: what’s wrong with publishing an industry-funded editorial or review article as long as it gets appropriate peer review?’ (Elliott, 2004, p.21, another paywall)

It is also important to remember that editors of some leading journals are aware of this and have been for a long time:

We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong. [Editor of the Lancet: Richard Horton, Genetically modified food: consternation, confusion, and crack-up, 2000, p.248, paywall (see endnote 5)]

Horton again: “(M)edical journals have devolved into information laundering operations for the pharmaceutical industry.”  Yes, we know this now, thanks to The Illusion of Evidence-Based Medicine, reviewed here previously.  This is also covered extensively using different but complementary sources by Stuart Macdonald, who is particularly attuned to the history of sketchy practices in scholarly publishing.

And this brings us to the problem of peer review itself.  Where did it come from and what is wrong with it? 

Peer review of academic publications is said to have begun with the first scientific journal in the West: Philosophical Transactions of the Royal Society (1665).  This is an exaggeration, but for the past 200+ years peer review has been the rock upon which scientific research has rested as an essential foundation for understanding the natural world. 

But “(W)hat is the role of peer review when the frequency of citation has become the primary means of measuring the quality…with little regard for any other assessment of quality?”  I have been a peer reviewer for 30 years.  What once was a duty has become a chore, unrewarded and underappreciated.  Still, when asked to review, I do, especially in reviewing research applications for the funding agency that has supported the work of my laboratory and graduate students since I was a postdoctoral fellow (6). 

Since the early 1990s editors have sometimes had to beg for reviewers, and this has now reached a crisis across the scientific literature.  This is described in a very good “Career Feature” by Amber Dance in the 16 February 2023 issue of Nature: Peer Review Needs a Radical Rethink.

The usual critiques are presented.  Peer review is a terrible time sink.  Balazs Aczel and colleagues used a dataset of 87,000 scholarly journals to show that in 2020 alone, peer reviewers spent the equivalent of 15,000 years on reviews, mostly working for free:

Background: The amount and value of researchers’ peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.

Results: We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD. (emphasis added)
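For perspective, the quoted dollar estimates can be totaled, treating each “over” or “close to” figure as an approximate lower bound:

```python
# Approximate dollar estimates for 2020 reviewer time, as quoted from
# Aczel et al. above (each figure is "over" or "close to," so the sum
# is a rough lower bound, not a precise total).
estimates_usd = {"US": 1.5e9, "China": 0.6e9, "UK": 0.4e9}
total = sum(estimates_usd.values())
print(f"~${total / 1e9:.1f} billion")  # ~$2.5 billion
```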

As noted by the author, “Many scientists are increasingly frustrated with journals – Nature among them – that benefit from unpaid work of reviewing while charging high fees to publish in them or read their content…(a Springer Nature spokesperson says)…we’re always looking to find new and better ways of recognizing peer reviewers for their valuable and essential work…(and pointed out that)… in a 2017 survey of 1,200 Nature reviewers…87% said they considered reviewing their academic duty, 77% viewed it as safeguarding the quality of published research, and 71% expected no reward or recognition for reviewing.”  I suppose it is something to add Nature to that line in your CV where you list peer reviewing as “Professional Service.”  My CV has that section, and playing the game, my publication list includes citation numbers and JIF information, when I get around to updates. 

Still, there can be few business models more lucrative than getting 15,000 years of work (in 2020) valued at roughly $2.5 billion in the three largest scientific communities – US, China, UK – for free.  Which is why many, including yours truly, have started limiting most peer reviewing to the non-profit journals of professional societies, which still exist, and to public and governmental funding agencies.

Peer review can be very slow, too.  This has led to the proliferation of preprint servers, which publish unreviewed manuscripts.  Preprints have been a thing in physics for years, but they are relatively new to biology and biomedical sciences.  They do get the results out quickly, but a preprint is just that, preliminary.  And preliminary does not count as one of the professional contributions necessary but not nearly sufficient to ensure funding and career advancement for academic scientists.

Dance begins her article with the complaint of an editor who sent 150 invitations for review of an article from April 2022 to November 2022 with no takers.  The journal is Frontiers in Health Services, one of the 196 titles published online by Frontiers Media, a for-profit open-access publisher established in 2007.  Frontiers journals have created a niche in the world of academic publishing.  Their journal dashboards are attractive and easy to use.  And, perhaps most useful for the modern scientific author, they provide real-time links to the number of views and citations, social buzz (mentions in blogs and social media), and demographics (location of readers).  The more important, if not unprecedented, practice associated with Frontiers journals is that editors and reviewers are acknowledged on the title page of each article.  This solves two problems with peer review as currently practiced: (1) Editors and reviewers get credit, if not payment, for their work, and (2) Reviewers are held publicly responsible for the quality of the research, to the extent the manuscript contains all the information necessary to review it fairly and completely.

But there is also this, which requires another disclosure: I reviewed one manuscript for a Frontiers biology journal in 2022.  Which brings us to the question implied in the study of Aczel and colleagues mentioned previously: 87,000 scholarly journals?  Really.  How many scientific journals are there?  Scopus currently lists 41,462 indexed titles going back to 1788.  Whatever the total, it is very large. 

Can we possibly need this many journals?  Obviously, the answer is “no.”  As noted in many studies, most scientific articles are rarely cited.  This assertion, which is based on research using the Science Citation Index that was developed by Eugene Garfield, has been disputed.  It may be true, but few working scientists of my acquaintance are in the un-cited category, even if all of us have a few papers that received little attention, probably deservedly so.  Sometimes the result of even the most ingenious experiment is somewhat underwhelming.  Such is the nature of research when the answer is unknown, as it should be in every original experiment.

This brings us back to the nature of the scientific literature and what it all means.  I have covered this before regarding COVID-19 and will avoid repeating myself too much, but if the public is to believe what scientists and scholars from all disciplines write and say, academic and scientific publishing must regain its footing.  Essentially all scientific journals are online these days, so that is not the problem. 

But a breaking point has been passed.  Too much is published too fast and those who are not disinterested readers are able to pick and choose a piece of the literature that suits their purpose of the moment.  The business of scientific publication has taken over the practice of scientific research.  Not all of the so-called scientific literature has been effectively peer-reviewed, and that includes many of the 336,686 “COVID” entries that have appeared in PubMed in about 40 months.  After 40 years, “AIDS” returns 300,212 entries.  Not to denigrate the significance of COVID-19, but this says much more about the business of scientific publishing than the practice of biomedical science.

Moreover, some “hyperprolific” authors “publish” more than one scientific paper a week.  This is not possible (OK, not legitimate), either physically or mentally, and can only mean that authorship is disconnected from the research reported.  As John Ioannidis of Stanford states here:

There are two main reasons we have authorship: credit and responsibility. I think both are in danger.

In terms of credit, if you have a system that is very vague, idiosyncratic, and nonstandardized, it’s like a country with 500 different types of coins and no exchange rate. And in terms of responsibility, it also raises some issues about reproducibility and quality. With papers that have extremely large numbers of contributors, is there anyone who can really take responsibility for all that work? Do they really know what has happened?

For the five consecutive years 2018 through 2022, John Ioannidis has 89, 81, 82, 74, and 46 publications indexed in PubMed.  That would be a total of 372, or about 74 per year (I have not audited this total manually).  In the first eight weeks of 2023, Dr. Ioannidis had 17 publications indexed in PubMed as of 27 February.  This will be a truly banner year for him if that trend holds.
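The arithmetic, using the per-year counts as reported above, bears out the “more than one paper a week” pace:

```python
# Publications indexed in PubMed per year, as reported in the text above.
pubs = {2018: 89, 2019: 81, 2020: 82, 2021: 74, 2022: 46}

total = sum(pubs.values())
print(total)                        # 372
print(round(total / len(pubs)))     # 74 papers per year, on average
print(total / (len(pubs) * 52))     # ~1.4 papers per week, every week
```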

By way of comparison, Francis Crick published fewer than 55 papers (some on the list are duplicates) in a 40+ year career beginning in 1952, when he was 36 years old.  And I found a reprint of the Nature paper that resulted in the Nobel Prize in the American Journal of Psychiatry (2003), published as part of the 50th anniversary of the paper that started modern molecular biology (including mRNA vaccines)!  For those of you who have not read it, enjoy!  It requires very little specialized knowledge to appreciate the beauty of the work and the elegance of the result (8).

More on the reproducibility crisis in the biomedical sciences later.  But for now?  Be careful of what you read in the scientific literature and what is reported about the same.  It pains me no end to say that, but this too will pass.

Note added in proof:  I do get emails, this time in the form of another “Work” Career Feature from Nature entitled “Hyperauthorship and What It Means for ‘Big Team’ Science” (2 March 2023, pp. 175-177).  Peter Higgs posited the eponymous boson in 1964.  Alone.  Eventual experimental confirmation required 2,932 authors.  A subsequent accurate measurement of the mass of the Higgs boson required 5,154 “coauthors.”  A 9-page paper on the effect of SARS-CoV-2 vaccination on post-surgical outcomes included 15,025 “coauthors” in a consortium.  Maybe so.  But: “The more authors you’re working with, the more complicated things get (and) that requires some pretty new thinking, from both researchers and journals, and the people who evaluate science.” (emphasis added)  Yes, indeed.  A hyperauthored paper reminds me of the philosopher Timothy Morton’s hyperobjects – things of “such vast temporal and spatial dimensions that they defeat traditional ideas about what a thing is in the first place.”  I read about half of that book.  Traditional ideas do not lead to an understanding of anthropogenic global warming, for example.  True enough, and many traditional ideas need to disappear.  But my spidey sense is activated when I see a biomedical research paper with more than 10 authors from two or more institutions.

This entry was posted in Dubious statistics, Guest Post, Media watch, Science and the scientific method on by Yves Smith.