ChatGPT Maker OpenAI Ousts CEO Sam Altman

OpenAI CEO Sam Altman participates in a discussion entitled "Charting the Path Forward: The Future of Artificial Intelligence" during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Thursday, Nov. 16, 2023, in San Francisco. (AP)

The board of the company behind ChatGPT on Friday fired OpenAI CEO Sam Altman - to many, the human face of generative AI - sending shock waves across the tech industry.

OpenAI's Chief Technology Officer Mira Murati will serve as interim CEO, the company said, adding that it will conduct a formal search for a permanent CEO.

"Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," OpenAI said in the blog without elaborating.

Greg Brockman, OpenAI's president and co-founder, who stepped down as board chairman as part of the management shuffle, announced late on Friday on the messaging platform X that he was quitting the company. "Based on today's news, i quit," he wrote.

The departures blindsided many employees, who discovered the abrupt management change from an internal message and the company's public-facing blog. The move came as a surprise to Altman and Brockman as well; they learned of the board's decision within minutes of the announcement, Brockman said.

"We too are still trying to figure out exactly what happened," he posted on X, formerly Twitter, adding, "We will be fine. Greater things coming soon."

The now four-person board consists of three independent directors, who hold no equity in OpenAI, and the company's chief scientist, Ilya Sutskever. The organization did not immediately respond to a request for comment on Brockman's claims.

Backed by billions of dollars from Microsoft, which does not have a board seat in the non-profit governing the startup, OpenAI kicked off the generative AI craze last November by releasing ChatGPT. The chatbot became one of the world's fastest-growing software applications.

Trained on reams of data, generative AI can create human-like content, helping users spin up term papers, complete science homework and even write entire novels. After ChatGPT's launch, regulators scrambled to catch up: the European Union revised its AI Act and the US kicked off AI regulation efforts.

Altman, a serial entrepreneur and investor who previously ran the startup accelerator Y Combinator, was the face of OpenAI and its wildly popular generative AI technology as he toured the world this year.

Altman posted on X shortly after OpenAI published its blog: "i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later."

Altman did not respond to requests for comment.

Murati, who has worked for Tesla, joined OpenAI in 2018 and later became chief technology officer. She oversaw product launches including that of ChatGPT.

At an emergency all-hands meeting on Friday afternoon after the announcement, Murati sought to calm employees, saying OpenAI's partnership with Microsoft was stable and that the backer's executives, including CEO Satya Nadella, continued to express confidence in the startup, a person familiar with the matter told Reuters.

The Information previously reported details of the meeting.

"Microsoft remains committed to Mira and their team as we bring this next era of AI to our customers," a spokesperson for the software maker told Reuters on Friday.

In a statement published on Microsoft's website, Nadella said: "We have a long-term agreement with OpenAI... Together, we will continue to deliver the meaningful benefits of this technology to the world."

Earthquake

The shakeup is not the first at OpenAI, which launched in 2015. Tesla CEO Elon Musk was once its co-chair, and in 2020 other executives departed, going on to found competitor Anthropic, which says it places a greater focus on AI safety.

Well-wishers and critics piled onto digital forums as news of the latest shuffle spread.

On X, former Google CEO Eric Schmidt called Altman "a hero of mine," adding, "He built a company from nothing to $90 Billion in value, and changed our collective world forever. I can't wait to see what he does next. I, and billions of people, will benefit from his future work - it's going to be simply incredible."

"This is a shocker and Altman was a key ingredient in the recipe for success of OpenAI," said Daniel Ives, an analyst at Wedbush Securities. "That said, we believe Microsoft and Nadella will exert more control at OpenAI going forward with Altman gone."

The full impact of the OpenAI surprise will unfold over time, but its fundraising prospects were an immediate concern. Altman was considered a master fundraiser who negotiated billions of dollars in investment from Microsoft and led the company's tender offer transactions this year, which lifted OpenAI's valuation from $29 billion to over $80 billion.

"In the short term it will impair OpenAI's ability to raise more capital. In the intermediate term it will be a non-issue," said Thomas Hayes, chairman at hedge fund Great Hill Capital.

Other analysts said Altman's departure, while disruptive, would not derail generative AI's popularity or the competitive advantage of OpenAI and Microsoft.

"The innovation created by OpenAI is bigger than any one or two people, and there is no reason to think this would cause OpenAI to cede its leadership position," said D.A. Davidson analyst Gil Luria. "If nothing else, Microsoft's stake and significant interest in OpenAI's progress ensure the appropriate leadership changes are being implemented."

As late as Thursday evening, Altman showed no signs of concern at two public events. He joined colleagues on a panel on the sidelines of the Asia-Pacific Economic Cooperation (APEC) conference in San Francisco, describing his commitment to and vision for AI.

Later he spoke at a Burning Man-related event in Oakland, California, engaging in an hour-long conversation on the topic of art and AI. Altman seemed relaxed and gave no indication anything was wrong, but left right after his talk was over at 7:30 p.m.

The organizer said at the event that Altman had another meeting to attend.



Rise in 'Harmful Content' Since Meta Policy Rollbacks, Survey Shows

The logo of Meta is seen at the entrance of the company's temporary stand ahead of the World Economic Forum (WEF) in Davos, Switzerland January 18, 2025. (Reuters)

Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday.

The survey of around 7,000 active users of Instagram, Facebook and Threads comes after the Menlo Park-based company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of "mental illness" or "abnormality" based on their gender or sexual orientation.

"These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade," said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out.

"Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship".

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.

Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt "less protected from being exposed to or targeted by" such material on Meta's platforms.

Seventy-seven percent of respondents described feeling "less safe" expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the January changes had had minimal impact.

"Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas," the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

"Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution," Jenna Sherman, campaign director at UltraViolet, told AFP.

"But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

"Facebook and Instagram already had an equity problem. Now, it's out of control," Sherman added.

The groups implored Meta to hire an independent third party to "formally analyze changes in harmful content facilitated by the policy changes" made in January, and for the tech giant to swiftly reinstate the content moderation standards that were in place earlier.

The International Fact-Checking Network has previously warned of devastating consequences if Meta extends its fact-checking policy shift beyond US borders to the company's programs covering more than 100 countries.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.