Ensuring that A.I. serves humanity was always a job too important to be left to corporations, no matter their internal structures. That’s the job of governments, at least in theory. And so the second major A.I. event of the last few weeks was less riveting, but perhaps more consequential: On Oct. 30, the Biden administration released a major executive order “On the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.”
It’s a sprawling, thoughtful framework that defies a simple summary (though if you’d like to dig in, the A.I. writer Zvi Mowshowitz’s analysis and his roundup of reactions are both excellent). Broadly speaking, though, I’d describe this as an effort not to regulate A.I. but to lay out the infrastructure, definitions and concerns that will eventually be used to regulate A.I.
It makes clear that the government will have a particular interest in regulating and testing A.I. models that cross certain thresholds of complexity and computing power. This has been a central demand of A.I. safety researchers, who fear the potential civilizational consequences of superintelligent systems, and I’m glad to see the Biden administration listening to them, rather than trying to regulate A.I. solely on the basis of how a given model is used. Elsewhere, the order signals that the government will eventually demand that all A.I. content be identifiable as A.I. content through some sort of digital watermarking, which I think is wise. It includes important sections on hardening cybersecurity for powerful A.I. models and tracking the materials that could be used to build various kinds of biological weapons, which is one of the most frightening ways A.I. could be used for destructive ends.
For now, the order mostly calls for reports and analyses and consultations. But all of that is necessary to eventually build a working regulatory structure. Even so, this quite cautious early initiative drew outrage from many in the Silicon Valley venture-capital class, who accused the government of, among other things, attempting to “ban math,” a reference to the enhanced scrutiny of more complex systems. Two weeks later, Britain announced that it would not regulate A.I. at all in the short term, preferring instead to maintain a “pro-innovation approach.” The European Union’s proposed regulations may stall on concerns from France, Germany and Italy, all of which worry that the scrutiny of more powerful systems will simply mean those systems are developed elsewhere.
Let’s say that U.S. regulators saw something, at some point, that persuaded them they needed to crack down, hard, on the biggest models. That’s always going to be a judgment call: If you’re regulating something that can do terrible harm before it does terrible harm, you are probably regulating it when the terrible harm remains theoretical. Maybe the harm you fear will never happen. The people who stand to lose money from your regulations will be very active in making that case.