Yves here. With science (particularly medicine) corrupted by commercial interests, yet elite authorities and mouthpieces insisting that the masses defer to “the science,” debates and decisions over safety are moving more and more into the political realm. That normally would not be a bad thing. Most favor risk avoidance with respect to large-scale experiments on the public, consistent with the dictates of the precautionary principle. However, many views are now influenced by finely tuned PR campaigns… again on behalf of moneyed interests. So even with more informed layperson input on novel and potentially dangerous technologies, who can mind the minders if the minders are very clever at cherry-picking and spinning relevant information?

Please note that the post includes three infographics that are useful but not critical to the post. Sometimes I can see in the code how to resize them, but these had no obvious clues. If any helpful readers can advise, please e-mail me at yves-at-nakedcapitalism-dot-com with “Resize” in the subject line. Or you can view them at the original location.

By Michael Schulson, a contributing editor for Undark whose work has been published by Aeon, NPR, Pacific Standard, Scientific American, Slate, and Wired, among other publications, and Peter Andrey Smith, a senior contributor at Undark, whose stories have been featured in Science, STAT, The New York Times, and WNYC Radiolab. Originally published at Undark

The project was so secret, most members of Congress didn’t even know it existed.

In 1942, when an elite team of physicists set out to produce an atomic bomb, military leaders took elaborate steps to conceal their activities from the American public and lawmakers.

There were good reasons, of course, to keep a wartime weapons development project under wraps. (Unsuccessfully: Soviet spies learned about the bomb before most members of Congress.) But the result was striking: In the world’s flagship democracy, a society-redefining project took place, for about three years, without the knowledge or consent of the public or their elected representatives.

After the war, one official described the Manhattan Project as “a separate state” with “a peculiar sovereignty, one that could bring about the end, peacefully or violently, of all other sovereignties.”

Today’s cousins to the Manhattan Project — scientific research with the potential, however small, to cause a global catastrophe — seem to be proceeding more openly. But, in many cases, the public still has little opportunity to consent to the march of scientific progress.

Which specific experiments are safe, and which are not? What are acceptable levels of risk? And is there science that simply should never be done? Such decisions are arguably among the most politically consequential of our time. But they are often made behind closed doors, by small groups of scientists, executives, or bureaucrats.

In some cases, critics say, the simple decision to do the research at all — no matter how low-risk a given experiment may be — advances the field toward riskier horizons.

In the text and graphics that follow, we attempt to illuminate some of the key people who are currently entrusted with making these weighty decisions in three fields: pathogen research, artificial intelligence, and solar geoengineering. Identifying such decision makers is necessarily a subjective exercise. Many names are surely missing; others will change with the incoming administration of Donald Trump. And in every field, decisions are rarely made in isolation by any one person or even a small group of people, but as a distributed process involving varying layers of input from formal and informal advisers, committees, working groups, appointees, and executives.

The extent of oversight also varies across disciplines, both domestically and across the globe, with pathogen research being much more regulated than the more emergent fields of AI and geoengineering. For AI and pathogen research, our focus is limited to the United States — reflecting both a need to limit the scope of our reporting, and the degree to which American science currently leads the world in both fields, even as it faces stiff competition on AI from China.

With those caveats in mind, we offer a sampling — illustrative but by no means comprehensive — of people who are part of the decision-making chain in each category as of late 2024. Taken as a whole, they appear to be a deeply unrepresentative group — one disproportionately White, male, and drawn from the professional class. In some cases, they occupy the top tiers of business or government. In others, they are members of lesser-known organizational structures — and in still others, the identities of key players remain entirely unknown.

Pathogen Research

Most research with dangerous bacteria and viruses poses little risk to the public. But some experiments, often called gain-of-function work, involve engineering pathogens in ways that may make them better at infecting and harming human beings.

The scientists who do this work say their goal is to learn how to prevent and fight future pandemics. But, for a portion of such experiments, an accidental lab leak could have global repercussions.

Today, many experts are convinced that Covid-19 jumped from an animal to a person — and most evidence collected to date points squarely in that direction. Still, some scientists and U.S. government analysts believe that the Covid-19 pandemic may have originated at a Chinese laboratory that received U.S. funding.

Whatever the reality, the possibility of a lab leak has heightened public awareness of risky pathogen research.

One of the secretive committees that makes decisions about potential gain-of-function research is housed within the National Institutes of Health. The other is part of the Administration for Strategic Preparedness and Response within HHS. Spokespeople for both offices declined to share details about the committees’ memberships, or even to specify which senior officials coordinate and oversee the committees’ activities.

“I think some of this is for good reason, like preserving the scientific integrity and protecting science from political interference,” said one former federal official who worked outside of HHS, in response to a question about why details about oversight are often difficult to pin down. (The official spoke on condition of anonymity because the views expressed may not reflect those of their current employer.) “I think some of this is also driven by an inability of HHS to understand how to navigate increasing public scrutiny of this kind of work,” the official added, describing the lack of transparency around the special HHS review panel as “totally crazy.”

Artificial Intelligence

If pathogen research is mostly funded and overseen by government agencies, AI is the opposite — a massive societal shift that has, in recent years, been led by the private sector.

The consequences of the technology are already far-reaching: Automated processes have denied people housing and health care coverage, sometimes in error. Facial recognition algorithms have falsely tagged women and people of color as shoplifters. AI systems have also been used to generate nonconsensual sexual imagery.

Other risks are hard to predict. For years, some experts have warned that a hyperintelligent AI could pose profound risks to society — harming human beings, supercharging warfare, or even leading to human extinction. Last year, a group of roughly 300 AI luminaries issued a one-sentence warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Many other experts, especially in academia, characterize those kinds of warnings principally as a marketing stunt, intended to deflect concern from the technology’s more immediate consequences. “The very same people who are making and profiting by AI are the ones who are trying to sell us on an existential threat,” said Ryan Calo, a co-founder of the University of Washington’s Center for an Informed Public.

“It’s cheaper to guard against existential threat that is future speculative,” he said, “than it is to actually solve the problems that AI is creating today.”

Despite calls for regulatory scrutiny, no federal agency comparable to the U.S. Food and Drug Administration conducts pre-market approval for AI systems, requiring developers to prove the safety and efficacy of their product prior to use.

Federal regulatory agencies have made limited moves to oversee specific applications of the technology, such as when the Federal Trade Commission banned Rite Aid from using face-recognition software for five years. At the state level, California’s governor recently vetoed a controversial bill that may have curbed the tech’s development.

Solar Geoengineering

In theory, injecting particles into the atmosphere could reflect sunlight, cooling the planet and reversing some of the worst effects of climate change. So could altering clouds over the ocean so that they reflect more light.

In practice, critics say, solar geoengineering could also bring harms, both directly (for example, by changing rainfall patterns) and indirectly (by sapping resources from more fundamental climate solutions like reducing greenhouse gas emissions). And once interventions are underway, they may be difficult or dangerous to stop.

Right now, the science on geoengineering largely consists of computer models and a handful of small-scale tests. But in 2022, worried about where the field was trending, hundreds of scientists and activists called for a moratorium on most research. Some experts suggest that even small, harmless real-world tests are paving the way for future, riskier interventions.

Within the U.S., no single government agency exercises clear-cut control over the decision of whether to test or use that technology, although certain outdoor experiments could plausibly trigger regulators’ attention — for example, if they affect endangered species. Globally, experts say, it remains unclear how existing international treaties or agencies could limit solar geoengineering, which could allow a single country or company to unilaterally alter the global climate.

“It’s a very small group of people” making decisions about solar geoengineering, said Shuchi Talati, founder of the Alliance for Just Deliberation on Solar Geoengineering. “It’s a very elite space.”
