A former Facebook executive, an AI researcher, a tech entrepreneur and a computer scientist were the four OpenAI kingmakers who plunged the start-up into crisis last week when they fired its chief executive.
The abrupt decision by board members Adam D’Angelo, Helen Toner, Tasha McCauley and Ilya Sutskever to oust Sam Altman set off a dramatic chain of events and fuelled speculation about their motives and competency to manage what has become the world’s most high-profile AI start-up.
By late Tuesday in California, D’Angelo was the only survivor of the corporate crisis, as Altman was reinstated by the company and a new board was announced, led by former Salesforce chief executive Bret Taylor as chair.
OpenAI is unconventionally structured as a partnership between a research non-profit and for-profit subsidiary. The board oversees both, but its core mandate is to pursue artificial intelligence “that is safe and benefits all of humanity” rather than to look after the interests of investors.
How the four had come to hold the keys to the future direction of the leading AI company remains unclear. Neither investors nor staff could explain how the slimmed-down board, which is half the size it was in 2021, was appointed.
Among OpenAI employees, shock at Altman’s dramatic firing curdled into frustration, with the board offering no specific reason for its decision beyond saying he had not been “consistently candid”.
Elon Musk, the outspoken X owner and former OpenAI board member who helped launch the start-up in 2015, has called on one of the four to “say something” to explain the move, while Vinod Khosla, an early investor, said the board had “set back the promise of artificial intelligence”, in an opinion piece in The Information on Monday.
Some people who know the board members said they were intelligent, thoughtful and well-placed to fulfil their mandate to serve humanity. Others pointed to their relative lack of corporate experience, the poor handling of Friday’s announcement and the subsequent fallout.
One person who worked with D’Angelo at the Quora question-and-answer site he runs as chief executive said he was a poor communicator and that the board’s lack of communication was “not surprising”.
D’Angelo has expressed concerns in the past about the dangers of new technologies. When he joined the OpenAI board in 2018, he said work on AI “with safety in mind” was “both important and under-appreciated”.
Writing in 2017 while at Y Combinator, which invested in Quora, Altman said D’Angelo was “one of the few names that people consistently mention when discussing the smartest CEOs in Silicon Valley”, while Yishan Wong, the former chief of Reddit, said D’Angelo was “ridiculously rational”.
He remains at the company on the new board, with the other initial members being Altman, Taylor and former US Treasury secretary Larry Summers.
Jeffrey Ding, an AI researcher at George Washington University, said Toner, whom he has known since 2018, had been clear-eyed about the risks and opportunities of generative AI.
She had “really good judgment” and the “rare ability to speak to both sides of debates about AI and AI governance”, he said, adding that Toner was “very clear-minded” and open to “new ideas, revising her opinions and being receptive to feedback”.
Toner and Ding co-authored a paper in June arguing that avoiding AI regulation on the grounds that tighter rules would “let China pull ahead” was “not a good argument”, as she summarised on X.
Toner in May cautioned against over-relying on AI chatbots, saying there was “still a lot we don’t know” about them, and said in October that the US government should “take action to protect citizens from AI’s harms and risks, while also promoting innovation and capturing the technology’s benefits”.
Toner clashed with Altman, according to reports, over an academic paper she co-authored that compared the approaches to safety taken by OpenAI and rival company Anthropic.
Less is known about the low-profile McCauley, who, like Toner, is a supporter of effective altruism — an intellectual movement that has warned of the risks that AI could pose to humanity.
Toby Ord, who sits on the advisory board of the Centre for the Governance of AI research group alongside Toner and McCauley, said both were “highly intelligent, thoughtful and morally serious, with a deep knowledge about AI risk and governance”.
They were “exactly the kind of people one would want to have on the board of a non-profit tasked with the mission of supervising a for-profit subsidiary that is trying to develop artificial general intelligence”, he added.
McCauley was “one of the most thoughtful people I’ve ever worked with. Even during a crisis, she’s remarkably level-headed and calm”, said one person who has worked closely with her. “I find it very hard to picture her acting rashly or recklessly.”
Opinions were split on Sutskever, who is an OpenAI co-founder and co-author of the formative paper that launched the deep-learning era.
Critics rounded on the computer scientist for his role in the coup against Altman. Others pointed to Sutskever’s focus on AI safety as head of a team dedicated to controlling increasingly advanced AI. This had jarred with the culture of restless innovation embodied by Altman, according to people familiar with the matter.
“If you value intelligence above all other human qualities, you’re gonna have a bad time,” Sutskever wrote in an X post in October.
Musk wrote on X that Sutskever had “a good moral compass”, adding that he “would not take such drastic action unless he felt it was absolutely necessary”. Sutskever later realigned himself with Altman, saying he “deeply [regretted] my participation in the board’s actions”.
However, he is no longer on the board, while co-founder Greg Brockman, its chair until Friday, when he resigned over Altman’s removal, is back at the company. “Returning to OpenAI & getting back to coding tonight,” he wrote on X late on Tuesday.
Toner capped a tumultuous few days with her own epitaph as board member, writing on X: “And now, we all get some sleep.”