Longtermism, an idea that has been attracting attention lately, says that while we should help people alive today, we should also care about those who might live in the future. We should try to maximize the number and happiness of these possible people.

And how exactly should we do that? William MacAskill, a philosopher and leader of longtermism, wrestles with this question in his new bestseller What We Owe the Future. MacAskill’s book is packed with can-do optimism and insights into a wide range of topics, from the history of slavery to the possibility of extraterrestrial life.

MacAskill excels at conveying the vastness of our potential future, during which our descendants might colonize other star systems. Our actions now, he argues, might determine whether trillions of those descendants live well or poorly, or whether they live at all. “We need to act wisely,” he writes in a recent New York Times essay.

Critics have charged that longtermism is too white, male and coldblooded, but I admire MacAskill’s passion for helping others. He has helped raise millions of dollars for charitable causes, such as fighting diseases in poor regions, while giving most of his income away, according to a profile in the New Yorker. I urge young people in search of a mission to check out What We Owe the Future. But I have objections to MacAskill’s pitch for longtermism, mainly that he worries too much about artificial intelligence and too little about capitalism and militarism.

In What We Owe the Future, MacAskill dwells on how harmful ideologies, such as totalitarianism, can become entrenched, or “locked in.” This problem concerns me too. Right now, humanity seems pretty locked into capitalism, which has been adopted even by communist China. Yes, capitalism has helped increase humanity’s net wealth over the past few centuries, but it has severe side effects. Capitalism is a Darwinian system, with winners and losers, and it has bequeathed us climate change as well as inequality.

Can we do better than capitalism? Are fairer economic systems possible? MacAskill never addresses these questions; “capitalism” doesn’t appear in his index as a stand-alone item. Is MacAskill reluctant to criticize capitalism because he hangs out with, and raises money from, the free-market libertarians of big tech?

MacAskill views scientific innovation as essential for solving our current problems, such as climate change and pandemics, and for creating a better future. He fears that innovation is stagnating at a time when we can ill afford it. He compares humanity to a climber scaling a cliff with no safety net; if we stop climbing, we’ll get tired and fall off the cliff. So we need to keep climbing—that is, innovating.

I’d like to see continued innovation in clean energy, but I don’t see innovation per se as beneficial, especially not in the context of capitalism. Medical innovation, for example, has boosted the profits of American health care providers without producing proportional improvements in health (although the rapidity with which biotech firms produced vaccines for COVID-19 was impressive). Per-capita health care costs are much higher in the U.S. than in any other country, while Americans’ health lags.

Innovation in artificial intelligence has helped rich, powerful humans to become richer and more powerful. But MacAskill seems to worry less about human-controlled AI than about autonomous, intelligent machines. He fears they will rise up and enslave or exterminate us, as in countless sci-fi flicks; we must take steps to ensure that an AI “takeover” doesn’t happen. MacAskill has apparently fallen for recent hype about artificial intelligence. There are no signs that machines will become self-motivated any time soon, if ever.

Equally implausible is another scenario mentioned by MacAskill: that human psyches can be digitally reproduced and “uploaded” into computers. Uploading would require cracking the neural code, the algorithms that transform brain activity into perceptions, thoughts and memories. But the neural code—which could also benefit AI research—is one of those problems that look less tractable over time. Researchers show no signs of converging on a plausible explanation of how brains make minds.

MacAskill acknowledges the threat of war between “great powers,” especially those possessing nuclear arms or bioweapons, but otherwise he doesn’t give war the attention it deserves. War poses the greatest threat to our near-term and long-term future. War not only kills and maims people; it also drives them from their homes, creating huge refugee populations. And preparations for war consume over $2 trillion a year (more than a third attributable to the U.S.). That money could help us tackle poverty, pandemics, climate change, social injustice and other problems, which war often exacerbates.

War is perpetuated by the ideology of militarism, which is as deeply entrenched as capitalism. Militarism assumes that war is a permanent feature of the human condition, and hence that nations must maintain armies to protect themselves from each other. Militarism is an apex problem, one that makes other problems worse. Militarism corrupts science. The U.S. military is a major funder of research on artificial intelligence, quantum computing, neural interfaces and other fields—not to mention nuclear weapons. U.S. innovation in weaponry triggers destabilizing arms races with other nations.

MacAskill says would-be altruists, when prioritizing problems, should consider two criteria: Is the problem neglected, and is it tractable? Militarism, to my mind, satisfies both criteria. MacAskill himself notes that the risks of war “have largely fallen out of the mainstream conversation among those fighting for a better world.” Many people, including activists and others I’ve polled over the years, see peace between nations as a utopian pipe dream.

If we can overcome our fatalism, I believe, the problem of militarism will turn out to be tractable. Virtually everyone except warmongers and arms dealers would welcome the end of war. War between nations is a top-down problem; Russia’s Vladimir Putin and Ukraine’s Volodymyr Zelensky could agree to end the war in Ukraine today.

The question is, how can antagonistic nations demilitarize safely, without raising the risk of preemptive attacks? How can we minimize the economic disruption, including the loss of jobs, resulting from demilitarization? How will nations and other groups resolve conflicts nonviolently? Do nations, individually or collectively, need some minimal force to protect themselves against attacks by rogue nations or violent, apocalyptic groups?

I would love to see MacAskill and other smart, scholarly activists hack the problem of war, infecting politicians and other leaders with their zeal. When I peer into the future, I envision a world in which war between any two nations has become inconceivable, just as war is today between Germany and France. Resources once devoted to death and destruction are used to improve human well-being. We should begin trying to create this world now. We owe it to the future.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.