One of the strangest episodes in the history of the tech industry ended as start-up events often do: with a party in San Francisco’s eclectic Mission District.

Late Tuesday, OpenAI said Sam Altman was returning as its chief executive, five days after the artificial intelligence start-up’s board of directors forced him out. At the company’s San Francisco office, giddy employees snacked on chicken tenders, drank boba tea and champagne, and celebrated Mr. Altman’s return deep into the night.

Mr. Altman’s reinstatement capped a corporate drama that mixed piles of money, a pressure campaign from allies, intense media attention and a steadfast belief among some in the A.I. community that they should proceed cautiously with what they are building.

Now OpenAI, which for two days appeared to be on the brink of collapse just a year after introducing the popular ChatGPT chatbot, will replace a heavily criticized board of directors with a more traditional group including former Treasury Secretary Lawrence Summers and a former executive from the software giant Salesforce.

More board members, who could be plucked from OpenAI’s biggest investor, Microsoft, and the A.I. research community, are expected to join soon. Mr. Altman was not named to the board on Tuesday night, and it was not clear if he ever will be.

On Wednesday, what appeared to be emerging from the mess was a company better suited to handle the billions of dollars thrown its way and the attention it has received since it released ChatGPT. But some already argue that it will not be as attuned to OpenAI’s original mission to create A.I. that is safe for the world.

The OpenAI debacle has illustrated how building A.I. systems is testing whether businesspeople who want to make money can work in sync with researchers who worry that what they are developing could eventually eliminate jobs or become a threat if technologies like autonomous weapons grow out of control.

The tech industry — perhaps even the world — will be watching to see if OpenAI is any closer to balancing those dueling aspirations than it was a week ago.

“We’ll look back on this period as a very brief, highly dramatic blip that gave us a public and dramatic reset,” said Aaron Levie, the chief executive of Box, an online data storage provider. “This needs to be a trustworthy organization that’s aligned with its board, and at the end of it all, OpenAI is a more valuable organization than it was a week ago.”

When Mr. Altman, 38, was fired just after noon on Friday, OpenAI was pitched into chaos. Its employees and Microsoft, which has invested $13 billion in the company, were blindsided.

The A.I. company has an unusual governance structure. It is controlled by the board of a nonprofit, and its investors have no formal way of influencing decisions. But no one anticipated that four members of the board — including OpenAI’s chief scientist, Ilya Sutskever, a co-founder — would suddenly remove Mr. Altman, claiming that he could no longer be trusted with the company’s mission to build artificial intelligence that “benefits all of humanity.”

The fallout was immediate. OpenAI’s president, Greg Brockman, who also helped found the company eight years ago, quit in protest.

The board had grown increasingly frustrated with Mr. Altman’s behavior over the last year and thought it needed to get him under control, according to two people familiar with the board’s thinking. One episode, in particular, illustrated how fraught the relationship between the board and Mr. Altman had become.

Both sides focused on an October research paper co-written by Helen Toner, an OpenAI board member who serves as a director of strategy at Georgetown University’s Center for Security and Emerging Technology.

Mr. Altman complained to Ms. Toner that the paper seemed to criticize OpenAI’s efforts to keep its technologies safe while praising a rival. “Any amount of criticism from a board member carries a lot of weight,” he wrote in an email to colleagues.

Ms. Toner defended the paper as academic research, but Mr. Altman and other OpenAI leaders, including Mr. Sutskever, later discussed whether she should be removed from the board, a person involved in the conversations said.

But Mr. Sutskever, who is worried that A.I. could one day destroy humanity, unexpectedly sided with Ms. Toner and two other board members: Adam D’Angelo, chief executive of the question-and-answer site Quora, and Tasha McCauley, an adjunct senior management scientist at the RAND Corporation.

During a video call on Friday, Mr. Sutskever read Mr. Altman a statement that said Mr. Altman was being fired because he was not “consistently candid in his communications with the board.”

Over the next five days, Mr. Altman and his allies pressed the board to bring him back and then resign. On Sunday, he and company executives negotiated at OpenAI’s offices. In the early afternoon, a delivery driver arrived outside on a motorbike carrying two bags with a dozen drinks from the Boba Guys chain. Then a second delivery driver appeared.

That night, the talks collapsed, and the board named Emmett Shear, a co-founder of Twitch, as interim chief executive.

But Microsoft offered a Plan B: to hire Mr. Altman to run a new A.I. research lab for Microsoft with Mr. Brockman. OpenAI’s executives orchestrated a letter from employees saying they would follow Mr. Altman to Microsoft if he was not reinstated. More than 700 of OpenAI’s 770 employees signed, including Mr. Sutskever, who said in a post on X that he “deeply regretted” his role in ousting Mr. Altman.

The pressure made the other board members dig in their heels, three people familiar with their thinking said. They were appalled that Mr. Altman and his allies were encouraging a mutiny, and wondered if it could be illegal because the employees had a contractual obligation to the company, not to its chief executive. And they thought that as a board they were acting with integrity and fulfilling their obligation to the nonprofit’s mission.

The board was still determined to force Mr. Altman to change his behavior, two people familiar with the board’s deliberations said. It also had concerns about some of his recent efforts to raise funds for personal interests, such as a drug development start-up, at the same time that he was raising money for OpenAI.

The talks from Saturday through Tuesday centered on how to create a board that everyone could trust. For the current members, that meant finding directors who would check Mr. Altman’s power and push for an independent investigation into his behavior.

While Microsoft supported Mr. Altman’s return to OpenAI, the company worked on backup plans, one person familiar with the matter said. Microsoft employees started to prepare offer letters and to line up immigration lawyers for OpenAI staff on work visas, the person said.

OpenAI’s three board members spent most of Tuesday on Google Meet video calls, discussing board options. They spoke with the chief executive of Microsoft, Satya Nadella, several times, one person familiar with the discussions said.

Mr. Altman’s allies offered a board slate of Mr. D’Angelo, Mr. Summers and Bret Taylor, a seasoned Silicon Valley executive. Mr. Taylor, who will be the new board’s chair, oversaw the $44 billion sale of Twitter to Elon Musk when he led Twitter’s board last year.

Mr. Taylor and Ms. McCauley did not respond to requests for comment. No one involved in discussions has explained how Mr. Summers became an option, and he did not respond to requests for comment on Wednesday.

But he has recently established himself as an authority on A.I. and economics. Mr. Summers has warned that ChatGPT will come for the “cognitive class,” changing how doctors make diagnoses, editors work on books and Wall Street traders invest. He has also served on the boards of other technology companies, including the financial services company Block, formerly known as Square.

The board considered Mr. Summers to be an independent thinker with enough management experience to hold his ground against Mr. Altman, said two of the people familiar with the negotiations.

By Tuesday evening, they had a deal. Thanksgiving helped. Despite all their disagreements, everyone agreed the chaos should not spill into Thursday, one person said.

But there is still plenty of work to be done. Over the next six months, the board will analyze and potentially change OpenAI’s unusual structure, one person familiar with the discussions said.

After the decision to bring back Mr. Altman, OpenAI workers filled employee Slack channels with heart emojis and images of a frog, known as “froge,” that has become an unofficial corporate mascot, three employees said.

Late Tuesday, employees gathered at the company’s office to drink boba tea — an inside reference to news coverage over the weekend. Mr. Brockman posted a selfie with dozens of smiling workers in the office around midnight.

The caption read: “we are so back.”

Erin Griffith and Yiwen Lu contributed reporting.