For most people, ChatGPT is more of a toy than a tool. You can ask it silly questions, but it’s not robust enough to write high-quality work memos or school essays. Yet the people behind artificial intelligence programs believe these systems will someday become a regular part of our lives, helping us in day-to-day routines.
How? Some type of A.I. will usher in that reality not because it is perfect or displays human-level intelligence, but because it performs a task better than people can now. We don’t yet know what that task, or that piece of A.I., will be. Perhaps it will be something that seems small but nonetheless takes up time, like writing an email response or organizing a schedule. Or it could be bigger, such as driving a car. Either way, the improvement will be enough to get the public to adopt it widely.
Phone cameras are a useful analogy. They typically take lower quality photographs than stand-alone cameras. But most people have embraced them because they are so convenient, packaged in devices most of us carry everywhere. That sort of usefulness is a much lower bar for A.I. to meet than creating the kind of all-knowing, all-doing A.I. depicted in science fiction.
Then, widespread adoption could help A.I. rapidly improve further. The technology is built on data. And the more people use A.I., the more data developers can collect to adapt their programs. Today’s newsletter will look at how that future could start.
Better, not perfect
There is a pithy way to describe how technology catches on: It has to be better than what came before, not perfect.
One example is A.I. that can code. People who don’t know how to code already use bots to produce full-fledged games, as my colleagues Francesca Paris and Larry Buchanan explained. And some professional programmers use A.I. to supplement their work.
The current technology is imperfect. It can make mistakes, and it struggles with more complicated tasks or programs. But the same is true for human coders. “Humans are not perfect at many of the tasks they perform,” said Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology.
By this standard, coding bots do not have to be flawless to replace existing work. They merely have to save time. A human coder could then use that extra time to improve on the A.I.’s work, brainstorm other ideas for programs or do something else entirely.
Unintended consequences
A.I. outcomes won’t always be good. With phone cameras, people sacrifice photo quality for convenience. The trade-offs could be more consequential with artificial intelligence.
Consider an A.I. that can write well. At first, the quality might fall short of writing you can do yourself. Still, like a coding A.I., it could give you time that you could use to sharpen the draft, focus on research or complete a different task.
But this bot also might not care about some qualities that humans value. Perhaps it will spin out falsehoods that some writers won’t catch before publishing online. Or bad actors could use A.I. to create and distribute well-written disinformation more efficiently.
In other words, what an A.I. does can fail to align with its creators’ or users’ goals. “It’s a very general technology that’s going to be used for so many things,” Katja Grace, an A.I. safety researcher, said. “So it’s much harder to anticipate all the ways that you might be training it to do something that could be harmful.”
Here’s a real-world example: Ted Rall, a political cartoonist, recently asked ChatGPT to describe his relationship with Scott Stantis, another cartoonist. It falsely claimed that Stantis had accused Rall of plagiarism in 2002 and that the two had carried on a public feud. None of that happened.
But current A.I. technologies frequently produce these kinds of tall tales — what experts call hallucinations — when asked about real people or events. Experts aren’t sure why. One potential explanation is that these systems are primarily programmed to put out convincing, conversational writing, not to distinguish fact from fiction. As similar A.I. replaces human tasks or current technologies (such as search engines), the falsehoods could mislead many more people.
Exponential growth
A.I. is developing incredibly quickly. The computing power behind the technology has grown exponentially for decades, and experts expect it to continue doing so. As impressive as GPT-4 is, we could plausibly see A.I. programs that are many times as powerful within a matter of years.
This technology is developing so quickly that lawmakers and other regulators have been unable to keep up. More than 1,000 tech leaders and researchers recently called for a pause on A.I. development to establish safety standards. So far, there is no sign those calls have been heeded. Tech companies like Google and Microsoft have instead resisted internal dissent against releasing their A.I. programs and have pushed them out to the public as quickly as possible.
THE LATEST NEWS
Other Big Stories
- SpaceX’s Starship, a rocket set to carry humans to the moon in the coming years, exploded minutes into its test launch. Still, engineers say the test provided useful data.
Opinions
The rise in anorexia in young girls is a sign they’re taking their pain out on themselves. We need to determine why, Pamela Paul argues.
Whether microplastics have a damaging effect on our well-being is uncertain, Mark O’Connell writes, making it the perfect scapegoat for any malady.
Diamond district: In this small slice of New York, old-world jewelers and TikTok stars work side by side.
Celestial spectacle: See the total solar eclipse over Western Australia.
Polly wants a chat: Scientists let parrots call their parrot friends over video.
Advice from Wirecutter: How to limit exposure to chemicals in cookware.
Lives Lived: Loren Cameron was a photographer and activist whose groundbreaking portraits of himself and other transgender people inspired a generation. He died at 63.
SPORTS NEWS FROM THE ATHLETIC
Taunting and crotch shots: James Harden, the 76ers guard, and Nic Claxton, the Nets center, were both ejected from Game 3 of their N.B.A. playoff series in Brooklyn. See the flagrant fouls.
The Oakland A’s: The baseball team said it had agreed to a land deal that could see the franchise move to Las Vegas by 2027.
Pop in the desert
Today is the start of the second weekend of Coachella, the California festival that has grown into one of the biggest annual events for pop music and celebrity sightings. Some highlights from the first weekend:
- Blackpink became the first K-pop act to headline the festival, delivering a 90-minute set that The Guardian described as a “high-octane stream of pop bangers.”
- The pop-punk band Blink-182 surprised fans with a reunion show, playing together for the first time in almost a decade.
- The most divisive performance came from Frank Ocean, who reimagined his songs in new styles. Variety called it “a near-disaster,” while The Los Angeles Times labeled it “an instant classic.”
This weekend’s shows begin today, at noon Pacific time, and continue through Sunday night. You can watch live on Coachella’s YouTube page.