I want to make a confession: I don’t understand a lot of the hype around artificial intelligence.
Like a lot of other people, I tried ChatGPT after it was released, and I was impressed. But I’ve been mostly disappointed since then. When I’ve asked it to analyze a data set, its answers have included errors. When I ask about historical events, the information isn’t much better than what’s on Wikipedia. When I ask about recent events, the bot tells me that it doesn’t have access to data after January 2022.
I don’t doubt that A.I. will eventually be a big deal. But much of the discussion today feels vague and impenetrable for nonexperts. To get a more tangible understanding, I asked my colleagues Cade Metz and Karen Weise, who cover A.I., to answer some questions. We’ve turned their answers into today’s newsletter.
David: Am I wrong to be unimpressed so far?
Cade and Karen: A lot of people have told us they share your experience. Our editor recently asked us to list impressive things people were doing with ChatGPT, and we really had to think about it.
One example does seem to be writing. We are writers by profession, but writing does not come easily to many people. Chatbots can help them get a first draft down. Cade knows a dentist who uses one to help write emails to his staff. Karen overheard some teachers in a coffee shop say they were using ChatGPT to draft college recommendation letters. A friend used it to plan meals for a weeklong vacation, asking it to propose menus and a grocery list that made a helpful starting point.
But the chatbots have an inherent problem with producing wrong information, what the industry calls “hallucinations.” A lawyer representing Michael Cohen, the onetime fixer for Donald Trump, recently submitted a brief to a federal court that mistakenly included fictitious court cases. As it turns out, a Google chatbot had invented the cases.
David: What’s an example of something meaningful that people may be able to do with A.I. soon?
Cade and Karen: Companies like OpenAI are transforming chatbots into what they call “A.I. agents.” Basically, this is a fancy term for technology that will go out onto the internet and take actions on your behalf, like searching for flights to New York or turning a spreadsheet into a chart from a few typed words.
So far the chatbots have focused primarily on words, but the newest technology will work from images, videos and sound. Imagine uploading a photo of a math problem that includes diagrams and charts, and then asking the system to solve it. Or generating a video from a short description.
David: Let’s talk about the dark side. The apocalyptic fears that A.I. will begin killing people feel sci-fi-ish, which causes me to dismiss them. What are real reasons for concern?
Cade and Karen: A.I. systems can be mysterious, even to the people who create them. They are designed around probabilities, so they are unpredictable. The worriers fret that because the systems learn from more data than any human could consume, they could wreak havoc once they are woven into stock markets, military operations and other vital systems.
But all the talk of these hypothetical risks can reduce the focus on more realistic problems. Already we are seeing A.I. produce more convincing misinformation for China and other nations, and write more seductive and successful phishing emails to scam people. A.I. has the potential to make people even more distrustful and polarized.
David: The lack of regulation over smartphones and social media has aggravated some big societal problems in the past 15 years. If some government regulators called you into their office and asked how to avoid being so far behind with A.I., what lessons would your reporting suggest?
Cade and Karen: Regulators need to educate themselves from a broad range of experts, not just big tech. This technology is extremely complicated, and the people building it often exaggerate both the positives and the negatives. Regulators need to understand, for instance, that the threat to humanity is overblown, but other threats are not.
Right now there is very little transparency around almost every aspect of A.I. systems, which makes them hard to keep in check. A prime example: These systems learn their skills from massive amounts of data, and the major companies have not disclosed the particulars. The companies might be using personal data without consent. Or the data might contain hate speech.
Related: Research from Stanford University suggests that A.I. tools have not increased cheating in high schools so far, The Times’s Natasha Singer explains.
Do you use A.I. in your everyday life? If so, tell us how — with an email to themorning@nytimes that has “AI use” in the subject line.
THE LATEST NEWS
Prodigy: Meet the 13-year-old from Oklahoma thought to be the first person to “beat” Tetris.
Social Qs: “My friend offered to dogsit, then backed out when her mother died. Now what?”
Tales of the underworld: ValTown, an account on X, spotlights gangs and drug kingpins of the 1980s and 1990s — and how crime and celebrity often intersect.
From London to Paris: A European rail renaissance is well underway.
Lives Lived: Maurice Hines was a high-wattage song-and-dance man who rose to stardom as a child tap-dancing with his brother, then performed on and off Broadway. Hines died at 80.
SPORTS
Golf: Rory McIlroy said the Saudi-financed LIV Tour exposed flaws in how the PGA Tour deals with sponsorships and player commitments.
Blowout: The Grambling State women’s basketball team beat the College of Biblical Studies by 141 points, a record margin.
Teenager: Britain’s newest star is a 16-year-old darts player. Few athletes in other sports have achieved comparable success so young, Victor Mather writes.
ARTS AND IDEAS
Star in the making: For those who don’t follow football, Travis Kelce’s leap into the culture last year — hosting “Saturday Night Live,” dating the world’s biggest pop singer — seemed to come from nowhere. But his success wasn’t a fluke: His business managers, the Eanes brothers, had been slowly constructing it for years. “We positioned Travis to be world famous,” André Eanes said.