ChatGPT Maker OpenAI Ousts CEO Sam Altman

OpenAI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP)

The board of the company behind ChatGPT on Friday fired OpenAI CEO Sam Altman - to many, the human face of generative AI - sending shock waves across the tech industry.

OpenAI's Chief Technology Officer Mira Murati will serve as interim CEO, the company said, adding that it will conduct a formal search for a permanent CEO.

"Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," OpenAI said in the blog without elaborating.

Greg Brockman, OpenAI's president and co-founder, who stepped down as board chairman as part of the management shuffle, announced late on Friday on the messaging platform X that he was quitting the company. "Based on today's news, i quit," he wrote.

The departures blindsided many employees, who learned of the abrupt management change from an internal message and the company's public-facing blog. It came as a surprise to Altman and Brockman as well, who learned of the board's decision within minutes of the announcement, Brockman said.

"We too are still trying to figure out exactly what happened," he posted on X, formerly Twitter, adding, "We will be fine. Greater things coming soon."

The now four-person board consists of OpenAI's chief scientist, Ilya Sutskever, and three independent directors who hold no equity in the company. The organization did not immediately answer a request for comment on Brockman's claims.

Backed by billions of dollars from Microsoft, which does not have a board seat in the non-profit governing the startup, OpenAI kicked off the generative AI craze last November by releasing ChatGPT. The chatbot became one of the world's fastest-growing software applications.

Trained on reams of data, generative AI can create human-like content, helping users spin up term papers, complete science homework and even write entire novels. After ChatGPT's launch, regulators scrambled to catch up: the European Union revised its AI Act and the US kicked off AI regulation efforts.

Altman, a serial entrepreneur and investor who previously ran the startup accelerator Y Combinator, was the face of OpenAI and its wildly popular generative AI technology as he toured the world this year.

Altman posted on X shortly after OpenAI published its blog: "i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later."

Altman did not respond to requests for comment.

Murati, who previously worked at Tesla, joined OpenAI in 2018 and later became chief technology officer. She oversaw product launches including that of ChatGPT.

At an emergency all-hands meeting on Friday afternoon after the announcement, Murati sought to calm employees and said OpenAI's partnership with Microsoft is stable and its backer's executives, including CEO Satya Nadella, continue to express confidence in the startup, a person familiar with the matter told Reuters.

The Information previously reported details of the meeting.

"Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers," a spokesperson for the software maker told Reuters on Friday.

In a statement published on Microsoft's website, Nadella said: "We have a long-term agreement with OpenAI... Together, we will continue to deliver the meaningful benefits of this technology to the world."

Earthquake

The shakeup is not the first at OpenAI, which launched in 2015. Tesla CEO Elon Musk was once its co-chair, and in 2020 other executives departed, going on to found the competitor Anthropic, which says it has a greater focus on AI safety.

Well-wishers and critics piled onto digital forums as news of the latest shuffle spread.

On X, former Google CEO Eric Schmidt called Altman "a hero of mine," adding, "He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work- it's going to be simply incredible."

"This is a shocker and Altman was a key ingredient in the recipe for success of OpenAI," said Daniel Ives, an analyst at Wedbush Securities. "That said, we believe Microsoft and Nadella will exert more control at OpenAI going forward with Altman gone."

The full impact of the OpenAI surprise will unfold over time, but its fundraising prospects were an immediate concern. Altman was considered a master fundraiser who negotiated billions of dollars in investment from Microsoft and led the company's tender offer transactions this year that lifted OpenAI's valuation from $29 billion to over $80 billion.

"In the short term it will impair OpenAI's ability to raise more capital. In the intermediate term it will be a non-issue," said Thomas Hayes, chairman at hedge fund Great Hill Capital.

Other analysts said Altman's departure, while disruptive, would not derail generative AI's popularity or the competitive advantage enjoyed by OpenAI and Microsoft.

"The innovation created by OpenAI is bigger than any one or two people, and there is no reason to think this would cause OpenAI to cede its leadership position," said D.A. Davidson analyst Gil Luria. "If nothing else, Microsoft's stake and significant interest in OpenAI's progress ensure the appropriate leadership changes are being implemented."

As late as Thursday evening, Altman showed no signs of concern at two public events. He joined colleagues on a panel on the sidelines of the Asia-Pacific Economic Cooperation (APEC) conference in San Francisco, describing his commitment to, and vision for, AI.

Later he spoke at a Burning Man-related event in Oakland, California, engaging in an hour-long conversation on art and AI. Altman seemed relaxed and gave no indication that anything was wrong, but left right after his talk ended at 7:30 p.m.

The event organizer said at the event that Altman had another meeting to attend.



Neuralink Plans ‘High-Volume’ Brain Implant Production by 2026, Musk Says

Elon Musk steps off Air Force One upon arrival at Morristown Municipal Airport in Morristown, New Jersey, US, March 22, 2025. (AFP)

Elon Musk's brain implant company Neuralink will start "high-volume production" of brain-computer interface devices and move to an entirely automated surgical procedure in 2026, Musk said in a post on the social media platform X on Wednesday.

Neuralink did not immediately respond to a Reuters request for comment.

The implant is designed to help people with conditions such as a spinal cord injury. The first patient has used it to play video games, browse the internet, post on social media, and move a cursor on a laptop.

The company began human trials of its brain implant in 2024 after addressing safety concerns raised by the US Food and Drug Administration, which had initially rejected its application in 2022.

Neuralink said in September that 12 people worldwide with severe paralysis had received its brain implants and were using them to control digital and physical tools through thought. It also secured $650 million in a June funding round.


Report: France Aims to Ban Under-15s from Social Media from September 2026

French President Emmanuel Macron holds a press conference during a European Union leaders' summit, in Brussels, Belgium December 19, 2025. (Reuters)

France plans to ban children under 15 from social media sites and to prohibit mobile phones in high schools from September 2026, local media reported on Wednesday, moves that underscore rising public angst over the impact of online harms on minors.

President Emmanuel Macron has often pointed to social media as one of the factors to blame for violence among young people and has signaled he wants France to follow Australia, whose world-first ban for under-16s on social media platforms including Facebook, Snapchat, TikTok and YouTube came into force in December.

Le Monde newspaper said Macron could announce the measures in his New Year's Eve national address, due to be broadcast at 1900 GMT. His government will submit draft legislation for legal checks in early January, Le Monde and France Info reported.

The Elysee and the prime minister's office did not immediately respond to a request for comment on the reports.

Mobile phones have been banned in French primary and middle schools since 2018, and the reported new changes would extend that ban to high schools. Pupils aged 11 to 15 attend middle schools in the French educational system.

France also passed a law in 2023 requiring social platforms to obtain parental consent for under-15s to create accounts, though technical challenges have impeded its enforcement.

Macron said in June he would push for regulation at the level of the European Union to ban access to social media for all under-15s after a fatal stabbing at a school in eastern France shocked the nation.

The European Parliament in November urged the EU to set minimum ages for children to access social media to combat a rise in mental health problems among adolescents from excessive exposure, although it is member states that impose age limits. Various other countries have also taken steps to regulate children's access to social media.

Macron heads into the New Year with his domestic legacy in tatters after his gamble on parliamentary elections in 2024 led to a hung parliament, triggering France's worst political crisis in decades and a succession of weak governments.

However, cracking down further on minors' access to social media could prove popular, according to opinion polls. A Harris Interactive survey in 2024 showed 73% of those canvassed supporting a ban on social media access for under-15s.


Poland Urges Brussels to Probe TikTok Over AI-Generated Content

The TikTok logo is pictured outside the company's US head office in Culver City, California, US, September 15, 2020. (Reuters)

Poland has asked the European Commission to investigate TikTok after the social media platform hosted AI-generated content including calls for Poland to withdraw from the EU, the Polish government said on Tuesday, adding that the content was almost certainly Russian disinformation.

"The disclosed content poses a threat to public order, information security, and the integrity of democratic processes in Poland and across the European Union," Deputy Digitalization Minister Dariusz Standerski said in a letter sent to the Commission.

"The nature of ‌the narratives, ‌the manner in which they ‌are distributed, ⁠and the ‌use of synthetic audiovisual materials indicate that the platform is failing to comply with the obligations imposed on it as a Very Large Online Platform (VLOP)," he added.

A Polish government spokesperson said on Tuesday the content was undoubtedly Russian disinformation as the recordings contained Russian syntax.

TikTok, representatives of the Commission and of the Russian embassy in Warsaw did not immediately respond to Reuters' requests for comment.

EU countries are taking measures to head off any foreign state attempts to influence elections and local politics after warning of Russian-sponsored espionage and sabotage. Russia has repeatedly denied interfering in foreign elections.

Last year, the Commission opened formal proceedings against social media firm TikTok, owned by China's ByteDance, over its suspected failure to limit election interference, notably in the Romanian presidential vote in November 2024.

Poland called on the Commission to initiate proceedings in connection with suspected breaches of the bloc's sweeping Digital Services Act, which regulates how the world's biggest social media companies operate in Europe.

Under the Act, large internet platforms like X, Facebook, TikTok and others must moderate and remove harmful content like hate speech, racism or xenophobia. If they do not, the Commission can impose fines of up to 6% of their worldwide annual turnover.