Was Sam Altman’s Sacking by OpenAI’s Board Over a ‘Q-Star’ Breakthrough Seen as a Threat to Humanity?


The mystery surrounding last Friday’s brief dismissal of OpenAI CEO Sam Altman, who has since been reinstated, might be explained by a Reuters report suggesting Altman’s removal was tied to a breakthrough in artificial general intelligence (AGI) that could threaten humanity.

In the days before Altman was sent off into exile, several staff researchers penned a letter to the board about a significant breakthrough – a model called Q* and pronounced Q-Star – that could allow AI to “surpass humans in most economically valuable tasks.”

Reuters sources said the AI milestone was one of the significant factors that led to the board’s abrupt firing of Altman last Friday. Another concern was that the company was commercializing the advanced AI model without understanding its socio-economic consequences.

The source said the model could solve mathematical problems only “on the level of grade-school students,” but “acing such tests made researchers very optimistic about Q*’s future success.”

Also, before Altman was sacked, he might have referenced Q* at the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco:

“Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” said Altman.

An internal conflict at OpenAI has surfaced and is rooted in an ideological battle between those pushing for rapid AI advancement and those advocating for a slower, more responsible approach to development.

Reuters spoke with an OpenAI spokesperson who confirmed the existence of the project Q* and the letter to the board before Altman’s firing.

So why is Q* a breakthrough?

Well, as tech blog 9to5Mac explains:

Currently, if you ask ChatGPT to solve a math problem, it will still use its predictive-text-on-steroids approach of compiling an answer by using a huge text database and deciding on a word-by-word basis how a human would answer. That means that it may or may not get the answer right, but either way it doesn’t have any mathematical skills.

OpenAI appears to have made a breakthrough in this area, successfully enabling an AI model to genuinely solve mathematical problems it hasn’t seen before. This development is said to be known as Q*. Sadly the team didn’t use a naming model smart enough to avoid something which looks like a pointer to a footnote, so I’m going to use the Q-Star version.

Q-Star’s current mathematical ability is said to be that of a grade-school student, but it’s expected that this ability will rapidly improve.
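The word-by-word prediction 9to5Mac describes can be illustrated with a toy bigram model – a deliberately crude sketch, not anything resembling GPT’s actual architecture, with made-up training text – where the “answer” to a math question is just the statistically likeliest continuation of the prompt:

```python
# Toy illustration (NOT OpenAI's system): predict each next word by
# counting, in a tiny training corpus, which word most often followed
# the previous two words. No arithmetic is ever performed.
from collections import Counter, defaultdict

training_text = (
    "two plus two equals four . "
    "two plus three equals five . "
    "two plus two equals four ."
).split()

# Count which third word follows each pair of words in the training data.
follows = defaultdict(Counter)
for w1, w2, w3 in zip(training_text, training_text[1:], training_text[2:]):
    follows[(w1, w2)][w3] += 1

def predict(prompt: str, steps: int = 2) -> str:
    """Greedily append the most frequent continuation, word by word."""
    words = prompt.split()
    for _ in range(steps):
        candidates = follows[tuple(words[-2:])].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

# Looks like it "did math", but it only echoed its training data:
print(predict("two plus two"))  # → two plus two equals four
```

The model gets “two plus two” right only because that exact phrase appeared in its training text; ask it something unseen and it has nothing to fall back on – which is why a model that genuinely solves novel math problems would be a qualitative step up.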

This technological development could be one of the first signs that AGI – a form of AI that can surpass humans – is approaching commercialization.

AGI has the potential to surpass humans in every field, including creativity, problem-solving, decision-making, and language understanding, raising concerns about massive job displacement. A recent Goldman Sachs report estimated that AI could affect the equivalent of 300 million full-time jobs in the Western world.

The Q* breakthrough and the rapid advancement of this technology may explain why the board abruptly fired Altman: his rush to develop the technology without studying how the model might threaten humanity.

Altman recently said, “I think this is like, definitely the biggest update for people yet. And maybe the biggest one we’ll have because from here on, like, now people accept that powerful AI is, is gonna happen, and there will be incremental updates… there was like the year the first iPhone came out, and then there was like every one since.”

Meanwhile, Elon Musk, who has long warned about AI threatening humanity, called the Reuters report “Extremely concerning.”

Adding this hilarious tweet:

Suppose AGI is here (or nearing). The next couple of years are going to be wild.

