
The King is Dead! Long Live the King!: Inside the Power Struggle at OpenAI

Written by Jeff Drake
5 · 19 · 24


[By now, the only legitimate excuse for knowing nothing at all about artificial intelligence (AI) is having lived completely off the grid for the past three years. If that might be you, or if you are unsure what terms like "AGI," "ASI," or "alignment" mean, then you should read my earlier posts, "The Rise of Super-intelligence" and "Navigating the AI Landscape," to better understand the implications of the post below.]

Do you remember the old Chinese curse: "May you live in interesting times"? Well, within the AI industry, and even more so at OpenAI, this week has been very "interesting" indeed, in a Chinese-curse kind of way! And this isn't just idle gossip or scuttlebutt, although there is some of that herein, because what has been happening may eventually affect all of us.

While the following post may read like a review of some corporate drama, it is more than that: OpenAI seems poised to be the first to achieve AGI.

The King is Dead!

Allow me to build the context for my claim. To do this, I have to take you back to late last year. You may or may not have seen the news at the time; perhaps you saw the headlines and didn't look further because it was news about the OpenAI corporation.

One Friday afternoon (November 17, 2023), apparently out of the blue, the board of directors fired the CEO of OpenAI, Sam Altman, one of the original founders of the company. Within the AI industry, this was huge news! Huge! And then, just a few days later, Altman was just as suddenly rehired! The industry was left with a big WTF post-it note stuck to its forehead! What caused the board to fire Altman? Why was he rehired? These questions would remain unanswered for some time, leading to all kinds of rumor-mongering.

Pieces of this story were eventually put together by people who pay far more attention to this kind of thing than I do. It appears that a key member of the AI alignment team, Ilya Sutskever, also a co-founder of OpenAI, saw something he didn't like, perhaps in the way the company was doing its AI development, or perhaps in Sam Altman himself. Ilya is the one who apparently approached the board and instigated the ouster of Altman. Traitor!

The backlash to Altman's removal was swift. Rather than side with Sutskever, nearly the entire company rallied behind Altman, and he was put back in power after a few days. You've heard the saying, "You come at the king, you'd better not miss"? Well, Sutskever missed, big time. As you might guess, things were rather strained between Altman and Sutskever after this.

I should point out that little to nothing has been said publicly by either Altman or Sutskever that points to a conflict or bad feelings. Their public messages to each other have been the picture of a great corporate love-fest. However, this hasn't been the case with other employees who have left the company recently.

[Photo: Ilya is in the center; Altman is to the right, next to a woman.]

Ilya Sutskever disappeared from public view immediately after Altman was reinstated. Oh, surprise! During the months that followed, Altman continued to push the line publicly that he and Ilya would continue to work together. Most people discounted this, since the trust between the two had been lost. It was just a matter of time before something else would have to change.

And that time came just a few days ago, the day after OpenAI's big announcement of GPT-4o, when Sutskever suddenly appeared on X (formerly Twitter) and posted a note that said:

“After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm. It was an honor and a privilege to have worked together, and I will miss everyone dearly. So long, and thanks for everything. I am excited for what comes next – a project that is very personally meaningful to me about which I will share details in due time.”

The people he mentions above are: @sama (Sam Altman), @gdb (Greg Brockman), @miramurati (Mira Murati), and @merettm (Jakub Pachocki). Note that Jakub is taking over Ilya’s former position as research leader.

In the interim between Altman’s brief ouster and Ilya’s post above, Altman gave everyone the impression that Ilya was still working at OpenAI, albeit behind the scenes. People now think that, in reality, Ilya was not continuing to work at OpenAI; rather, he and Altman were spending this time working out the legal and financial specifics of Ilya’s departure. This makes sense to me. It also makes sense that Ilya was probably working on something else during this time, the “project” he refers to in his post above. Accompanying the post was a photograph showing Altman in a big corporate group hug of sorts, with Ilya and several other members of the OpenAI team.

Altman then posted a reply dripping with gratitude, praise, and warmth towards Ilya. However, not everyone who left OpenAI recently departed with such an abundance of warm feelings.

Jan Leike, a machine-learning researcher and co-leader (with Ilya) of the Superalignment team at OpenAI, posted a tweet this week that simply said, “Yesterday was my last day as head of alignment, superalignment lead and executive.”

Perhaps the most interesting thing to note here is that this is yet another key individual at OpenAI focused on AI safety who is abandoning ship. But Leike isn’t all hearts and flowers about his leaving, although he did thank his team for all their hard work. Instead, he wrote:

“Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us.”

He continues:

“I joined because I thought OpenAI would be the best place in the world to do this research.

However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”

“Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

“Building smarter-than-human machines is an inherently dangerous endeavor.”

“But over the past years, safety culture and processes have taken a backseat to shiny products.”

Perhaps the best example of the “shiny products” Leike was referring to is the release of GPT-4o last week, the day before Ilya resigned. GPT-4o is arguably one of the greatest AI models ever made. It’s multi-modal, it has voice, and it has memory! It is very cool and very useful. But observers noticed that the launch had little to say about security or safety.

Leike goes on:

“We are long overdue in getting incredibly serious about the implications of AGI.

Only then can we ensure AGI benefits all of humanity.”

“OpenAI must become a safety-first AGI company.”

Jan Leike concludes:

“To all OpenAI employees, I want to say:

Learn to feel the AGI.

Act with gravitas appropriate for what you’re building.

I believe you can “ship” the cultural change that is needed.

I am counting on you.

The world is counting on you.”

Here’s where the speculation kicks into gear. Many now believe that Leike and Ilya both quit over the same concerns about safety and the direction OpenAI is taking in this regard.

For example, I read recently that there were two camps within the company regarding how to properly align an AGI and an eventual ASI. One camp, reflected more in what Leike had to say about alignment, argued that more and better planning was required, lots and lots of work, probably involving a slow-down in AGI development so that the model’s alignment kept pace with its growing capabilities. This seems to make perfect sense, doesn’t it? But wait, slow down development of the AGI? That was no doubt not very palatable to Altman and crew.

The alternative method of aligning the AGI and ASI was this: focus on a much smaller alignment problem, namely aligning just the next iterative model. Then, when that model arrives properly aligned, use it to align the next iteration, then do it again, and again, until you eventually arrive at an ASI that is perfectly aligned. Cool, right? Wrong. Not very cool. Critics, including myself, believe this is over-simplistic and doesn’t take into account that one of the problems with these AIs is that new capabilities “emerge” from them all the time. How would this method adjust to such changes? To me, assuming that each iterative version of ChatGPT will just be more of the same, and thus easily aligned, sounds like whistling past the graveyard.

Long Live the King!

So, OpenAI today has Sam Altman back in charge, back from the dead, stronger than ever, and hell-bent, as far as I can tell, on being the first to achieve AGI. Based on interviews I’ve seen, to Altman, being first to AGI is everything. One reason for this is the belief that the company that creates an AGI first will have a monumental leg-up on everyone else, the so-called “also-rans.” More than this, whoever gets AGI first will probably also be first to achieve ASI (artificial super-intelligence). This makes sense, given that building an ASI will probably run into unforeseen problems that only an AGI could help resolve.

But what does this all mean to those of us in “the madding crowd”?

There is now a question floating around the internet: “What did Ilya see?” What was it, people wonder, that caused Ilya Sutskever to try to overthrow Sam Altman? Was it simply a lack of focus, a lack of required resources? Or did he see something else in addition to these concerns, something only a few corporate eyes may be allowed to see: the next version of their AI, perhaps called ChatGPT 5? I think OpenAI may be closer to AGI than most people think. And the idea of having an AGI that is not aligned with the values and desires of the human race is concerning, to say the least!


