
The dismissal of OpenAI CEO Sam Altman may be linked to a potentially dangerous AI model developed by the organization, one that researchers warned could pose a threat to humanity

Before OpenAI CEO Sam Altman was abruptly fired, a group of staff researchers raised concerns about a new artificial intelligence algorithm that could pose a threat to humanity, according to two sources familiar with the matter.

Some members of OpenAI see the project called Q* (pronounced Q-Star) as a potential breakthrough in the quest for artificial general intelligence (AGI), defined as autonomous systems capable of outperforming humans in most economically valuable tasks.

The researchers’ letter to the board and the discovery of the AI algorithm were significant events preceding Altman’s removal. In an internal message, CTO Mira Murati acknowledged both the project and the letter.

The firing of Altman was attributed to a “breakdown in communication” rather than financial, business, safety, or security/privacy issues, according to an internal memo.

The new model, Q*, demonstrated proficiency in solving certain mathematical problems, though it currently performs at a grade-school level.

Concerns about hastened commercialization without fully understanding the consequences were among the factors leading to Altman’s dismissal, along with the board’s unease about safety issues related to the AI model.

While researchers warned of potential dangers, they did not specify the nature of the safety concerns. The letter also highlighted the work of an “AI scientist” team exploring the optimization of AI models for better reasoning and scientific work.

After Altman’s firing and Emmett Shear’s appointment as interim CEO, over 700 staff members threatened to resign in support of Altman, leading to his reinstatement on Tuesday.
