The Department of Homeland Security has seen the opportunities and risks of artificial intelligence firsthand. It found a trafficking victim years later using an A.I. tool that conjured an image of the child a decade older. But it has also been tricked into opening investigations by deepfake images created with A.I.
Now, the department is becoming the first federal agency to embrace the technology with a plan to incorporate generative A.I. models across a wide range of divisions. In partnership with OpenAI, Anthropic and Meta, it will launch pilot programs using chatbots and other tools to help combat drug and human trafficking crimes, train immigration officials and prepare emergency management across the nation.
The rush to roll out the still unproven technology is part of a larger scramble to keep up with the changes brought about by generative A.I., which can create hyperrealistic images and videos and imitate human speech.
“One cannot ignore it,” Alejandro Mayorkas, secretary of the Department of Homeland Security, said in an interview. “And if one isn’t forward-leaning in recognizing and being prepared to address its potential for good and its potential for harm, it will be too late and that’s why we’re moving quickly.”
The plan to incorporate generative A.I. throughout the agency is the latest demonstration of how new technology like OpenAI’s ChatGPT is forcing even the most staid industries to re-evaluate the way they conduct their work. Still, government agencies like the D.H.S. are likely to face some of the toughest scrutiny over the way they use the technology, which has set off rancorous debate because it has proved at times to be unreliable and discriminatory.