“We have to be bold and responsible at the same time,” he said.

“The reason to be bold is that in so many different realms A.I. has the potential to help people with everyday tasks, and to tackle some of humanity’s greatest challenges — like health care, for instance — and make new scientific discoveries and innovations and productivity gains that will lead to wider economic prosperity.”

It will do so, he added, “by giving people everywhere access to the sum of the world’s knowledge — in their own language, in their preferred mode of communication, via text, speech, images or code,” delivered by smartphone, through television, radio or e-book. A lot more people will be able to get the best assistance and the best answers to improve their lives.

But we must also be responsible, Manyika added, citing several concerns. First, these tools need to be fully aligned with humanity’s goals. Second, in the wrong hands, these tools could do enormous harm, whether we are talking about disinformation, perfectly faked things or hacking. (Bad guys are always early adopters.)

Finally, “the engineering is ahead of the science to some degree,” Manyika explained. That is, even the people building these so-called large language models that underlie products like ChatGPT and Bard don’t fully understand how they work or the full extent of their capabilities. We can engineer extraordinarily capable A.I. systems, he added, that can be shown a few examples of arithmetic, a rare language or explanations of jokes, and that can then, from just those fragments, start to do many more things astonishingly well. In other words, we don’t fully understand yet how much more good stuff or bad stuff these systems can do.

So, we need some regulation, but it needs to be done carefully and iteratively. One size will not fit all.