Why Was Sam Altman Fired from OpenAI? ‘Q-Star’ AI Breakthrough in Focus


Sam Altman looks set to return triumphantly to OpenAI, but questions still linger over why he was fired from the artificial-intelligence start-up in the first place. The focus is on whether a disagreement over the safety of its AI technology brought matters to a head.

OpenAI’s previous board members haven’t publicly disclosed the reason they chose to fire Altman last week, and no one else – including the company’s major backer Microsoft (ticker: MSFT) – has come out with a clear explanation either.

While Altman’s tentatively agreed return and the departure of the majority of the former board might smooth matters over, it leaves a question that still needs to be answered: Why was he sacked?

The initial explanation given by the previous board was only that Altman hadn’t been “consistently candid” in his communications. While early speculation linked that to reports he was exploring outside ventures in hardware and chips, the lack of concrete detail suggests the cause might have been deeper.

“We think this gives credence to the thesis that OpenAI’s board was motivated by a fundamental philosophical difference between themselves and Mr. Altman’s push for commercialization,” wrote Macquarie analyst Frederick Havemeyer in a research note. 

Ahead of Altman’s firing, staff researchers at OpenAI wrote a letter to the board warning that an internal project named Q*, or Q-Star, could represent a breakthrough in attempts to create an artificial general intelligence – an AI that can surpass humans in a range of fields – and flagging the potential danger of such technology, Reuters reported on Thursday, citing sources familiar with the matter.

OpenAI didn’t immediately respond to a request for comment on the report. 

It’s worth taking claims about big leaps in AI with a pinch of salt. Google last year fired an engineer who claimed an unreleased AI system had become sentient. While Google’s Bard is a contender in the chatbot wars, few would suggest that any of the company’s AI products show signs of sentience just yet.

However, it does seem that the rapid pace of AI development was likely at the heart of the previous board’s dispute with Altman. Former board member Helen Toner clashed with Altman over a research paper she co-authored that appeared to criticize OpenAI’s efforts to keep its AI technology safe, the New York Times has reported, citing emails.

If it is eventually confirmed that Altman’s prioritization of commercializing AI was the reason for his firing, that’s probably a positive for OpenAI’s investors, including Microsoft, who are also interested in seeing the technology monetized as quickly as possible. However, it will do little to reassure the so-called decels, who advocate a deceleration, or slower approach, to progress in AI.

OpenAI’s nonprofit structure will also remain an issue, as that entity is charged with striking a balance between commerce and safety.

Write to Adam Clark at [email protected]
