China’s tech industry recently gave the U.S. tech industry — and, along with it, the stock market — a rude shock when it unveiled DeepSeek, an artificial intelligence model that performs on par with America’s best but that may have been developed at a fraction of the cost, despite trade restrictions on A.I. chips.
Since then there have been a lot of frantic attempts to figure out how they did it and whether it was all above board. Those are not the most important questions, and the excessive focus on them is an example of precisely how we got caught off guard in the first place.
The real lesson of DeepSeek is that America’s approach to A.I. safety and regulation — the concerns espoused by both the Biden and Trump administrations, as well as by many A.I. companies — was largely nonsense. It was never going to be possible to contain the spread of this powerful emergent technology, and certainly not just by placing trade restrictions on components like graphics chips. That was a self-serving fiction, foisted on out-of-touch leaders by an industry that wanted the government to kneecap its competitors.
Instead of a futile effort to keep this genie bottled up, the government and the industry should be preparing our society for the sweeping changes that are soon to come.
The misguided focus on containment is a belated echo of the nuclear age, when the United States and others limited the spread of atomic bombs by restricting access to enriched uranium, by keeping an eye on what certain scientists were doing and by sending inspectors into labs and military bases. Those measures, backed up by the occasional show of force, had a clear effect. The world hasn’t blown up — yet.
One crucial difference, however, is that nuclear weapons could have been developed only by a few specialized scientists at the leading edge of their fields. The core idea that powers the artificial intelligence revolution, on the other hand, has been around since the 1940s. What opened the floodgates was the arrival first of vast data sets (via the internet and other digital technologies) and then of powerful graphics processors (like the ones from Nvidia), which can train A.I. models on those data troves.
Another difference: Each nuclear weapon has to be constructed out of steel and fissile material. Some A.I. models, on the other hand, can fit on a USB stick, and can be endlessly replicated and built upon just by plugging that stick into new laptops.
Initially developing a new model, like ChatGPT, is a very costly process, but it’s the output, known as the model weights, that is so valuable, and so replicable. Companies like OpenAI, which has loudly proclaimed that A.I. poses an existential threat to humanity, kept these model weights to themselves, lest others piggyback on all that expensive development work to produce something even more powerful.
And if those protection-minded companies made a lot of money because of the U.S. government’s defensive measures? Well, that’s just the price of keeping humanity safe, right?
Those companies had an ally in President Joe Biden — especially, said his deputy chief of staff, Bruce Reed, after he watched “Mission: Impossible — Dead Reckoning Part One,” a story of A.I. gone rogue. His administration had already restricted the sale of those crucial chips to China; Biden then signed an executive order establishing safety and security mandates for A.I.
The Trump administration is operating under the same faulty logic. Just one day into his new term, the president and OpenAI’s chief executive, Sam Altman (fresh off his $1 million pledge to Trump’s inaugural fund), announced a vast computing infrastructure venture. Called Stargate, it is billed as a multi-hundred-billion-dollar bid to retain America’s advantage in the fast-growing industry.
DeepSeek chose the very next day as the moment to publish a paper letting the world in on its great coup. At least they’re having fun, I guess.

The company says it spent only a small fraction of what OpenAI and others spent, because it was able to optimize its software and train its model more efficiently. Advances like that have allowed many other technologies to become cheaper and more widely available.

Still, not everyone believes that account, especially given questions about China’s respect for intellectual property rights and trade restrictions. Could the company have amassed a forbidden stash of Nvidia chips? Maybe. Could the cost of developing the model have been higher than was disclosed? Some estimates suggest so. OpenAI says that DeepSeek may have stolen some of its work. I’m gutted for the company that built a commercial product by hoovering up a big chunk of the internet and then claiming it was “fair use.” (The New York Times has sued OpenAI and Microsoft over whether the use of news content in their A.I. systems is a fair use.)
But whatever DeepSeek did, it and others can keep doing it. Already, many A.I. companies are building on DeepSeek’s model. Individuals are downloading it or querying it for only a tiny fraction of what OpenAI charges.
Within the industry, there’s a popular trope that the real turning point will be the development of A.G.I., or Artificial General Intelligence, when A.I. reaches human-level intelligence and potentially becomes autonomous. The implication, then, is that what’s happening now is just a kind of warm-up, which no one needs to worry too much about. That’s a convenient falsehood. We have reached the other A.G.I. turning point: Artificial Good-Enough Intelligence — A.I. that is fast, cheap, scalable and useful for a wide range of purposes — and we need to engage with what’s happening now.
Many observers have described this as a Sputnik moment. That’s incorrect: America can’t re-establish its dominance over the most advanced A.I., because the technology, the data and the expertise that created it are already distributed all around the world. The best way this country can position itself for the new age is to prepare for its impact.
If the inevitable proliferation of A.I. endangers our cybersecurity, for example, the answer is not just to regulate exports but to harden our networked infrastructure — which would also protect it against the ever-present threat of hacking, whether by random agents or hostile governments. And instead of fantasizing about how some future rogue A.I. could attack us, it’s time to start thinking clearly about how corporations and governments could use the A.I. that’s available right now to entrench their dominance, erode our rights and worsen inequality. As the technology continues to expand, who will be left behind? What rights will be threatened? Which institutions will need to be rebuilt, and how? And what can we do so that this powerful technology, with so much potential for good, can benefit the public?
It is time, too, to admit that the interests of a few large, multinational companies aren’t good proxies for the interests of the people facing such a monumental transformation.
Whatever else DeepSeek may have done to get us here, perhaps forcing that realization is something we can be grateful for.