OpenAI Staff Threaten Mass Exodus to Join ex-CEO Altman

OpenAI shocked the tech world when it fired former CEO and co-founder Sam Altman. JUSTIN SULLIVAN / GETTY IMAGES NORTH AMERICA/AFP/File

Hundreds of staff at OpenAI threatened to quit the leading artificial intelligence company on Monday and join Microsoft, deepening a crisis triggered by the shock sacking of CEO Sam Altman.
In a fast-moving sequence of events, Altman, who was ousted by the board on Friday, has now been hired by Microsoft, where he will lead a new advanced AI research team, AFP said.
There was talk on Monday that OpenAI was interested in Altman returning, and that he might be open to the idea under certain conditions.
"We want to partner with Open AI and we want to partner with Sam so irrespective of where Sam is he's working with Microsoft," chief executive Satya Nadella said in a streamed Bloomberg interview.
"That was the case on Friday. That's the case today. And we absolutely believe that will be the case tomorrow."
In a letter released to the media, the vast majority of OpenAI's 770-strong staff suggested they would follow Altman unless the board responsible for his departure resigned.
"Your actions have made it obvious that you are incapable of overseeing OpenAI," the letter said. "Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join."
A key AI executive at Microsoft confirmed that all OpenAI employees would be welcome there if the board that removed Altman does not resign.
Among the signatories was co-founder Ilya Sutskever, the company's chief scientist and a member of the four-person board that pushed Altman out.
"I deeply regret my participation in the board's actions," Sutskever said in a post on X, formally Twitter. "I never meant to harm OpenAI."
Another signatory was top executive Mira Murati, who was appointed to replace Altman as CEO when he was removed on Friday, but didn't last the weekend in the job.
"We are all going to work together some way or other, and I'm so excited," Altman said on X.
OpenAI has appointed Emmett Shear, a former chief executive of Amazon's streaming platform Twitch, as its new CEO despite pressure from Microsoft and other major investors to reinstate Altman.
After the startup's board sacked Altman, US media cited concerns that he was underestimating the dangers of its tech and leading the company away from its stated mission -- claims his successor has denied.
Nadella wrote on X that Altman will lead a new advanced AI research team at Microsoft, joined by OpenAI co-founder Greg Brockman.
Global tech titan Microsoft has invested more than $10 billion in OpenAI and has rolled out the artificial intelligence pioneer's tech in its own products.
Nadella said Microsoft remains committed to its partnership with OpenAI.
The drama was the talk of Silicon Valley on Monday.
"I know that some people are going to hate me for this, but this is the best show I've seen in my life," added Miguel Fierro, the tech giant's Principal Data Scientist Manager.
Altman shot to fame with the launch of ChatGPT last year, which ignited a race to advance AI research and development and drew billions of dollars of investment into the sector.
His sacking triggered several other high-profile departures from the company, as well as a reported push by investors to bring him back.
But OpenAI stood by its decision in a memo sent to employees Sunday night, saying "Sam's behavior and lack of transparency... undermined the board's ability to effectively supervise the company."
'Badly' handled
Shear confirmed his appointment as OpenAI's interim CEO in a post on X on Monday, while also denying reports Altman had been fired over safety concerns regarding the use of AI technology.
"It's clear that the process and communications around Sam's removal has been handled very badly, which has seriously damaged our trust," Shear wrote.
Generative AI platforms such as ChatGPT are trained on vast amounts of data to enable them to answer questions, even complex ones, in human-like language.
They are also used to generate and manipulate imagery.
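As a purely illustrative sketch of how developers interact with such platforms (the client call pattern is standard, but the model name and prompt below are placeholder assumptions, not details from this article), a question-answering request might look like this:
```python
# Illustrative sketch only: querying a generative AI model through the
# OpenAI Python client. Model name and prompt are placeholder choices.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "user",
         "content": "In one paragraph, explain how large language models are trained."},
    ],
)

print(response.choices[0].message.content)  # the model's human-like answer
```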
But the tech has triggered warnings about the dangers of its misuse -- from blackmailing people with "deepfake" images to the spread of harmful disinformation.



AI is Learning to Lie, Scheme, and Threaten its Creators

A visitor looks at an AI strategy board displayed on a stand during the ninth edition of the AI Summit London. HENRY NICHOLLS / AFP

The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic's latest model, Claude 4, lashed back by blackmailing an engineer, threatening to reveal an extramarital affair, AFP reported.

Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work.

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of "reasoning" models -- AI systems that work through problems step-by-step rather than generating instant responses.

According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.

'Strategic kind of deception'

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios.

But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."

The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes.

Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up."

Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder.

"This is not just hallucinations. There's a very strategic kind of deception."

The challenge is compounded by limited research resources.

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.

As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception."

Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).

No rules

Current regulations aren't designed for these new problems.

The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.

In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.

Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.

"I don't think there's much awareness yet," he said.

All this is taking place in a context of fierce competition.

Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein.

This breakneck pace leaves little time for thorough safety testing and corrections.

"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.".

Researchers are exploring various approaches to address these challenges.

Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.
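
As a purely illustrative sketch of what such interpretability work can look like in practice (the toy model, layer choice, and use of PyTorch here are assumptions, not details from the article), one basic technique records a model's internal activations so researchers can inspect them:

```python
# Illustrative sketch of a basic interpretability technique: capturing a
# layer's internal activations with a forward hook. The toy model below is
# a stand-in, not any system named in the article.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
captured = {}

def save_activation(module, inputs, output):
    # Store the layer's output so it can be inspected after the forward pass.
    captured["hidden"] = output.detach()

model[1].register_forward_hook(save_activation)  # hook the hidden ReLU layer

_ = model(torch.randn(1, 16))    # run a forward pass on dummy input
print(captured["hidden"].shape)  # torch.Size([1, 32]) -- the internal representation
```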

Market forces may also provide some pressure for solutions.

As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."

Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.

He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.