OpenAI Abandons Plan to Become For-profit Company

'OpenAI is not a normal company and never will be,' OpenAI CEO Sam Altman wrote in an email to staff posted on the company's website. JOEL SAGET / AFP

OpenAI CEO Sam Altman announced Monday that the company behind ChatGPT will continue to be run as a nonprofit, abandoning a contested plan to convert into a for-profit organization.

The structural issue had become a significant point of contention for the artificial intelligence (AI) pioneer, with major investors pushing for the change to better secure their returns, AFP said.

AI safety advocates had expressed concerns about pursuing substantial profits from such powerful technology without the oversight of a nonprofit board of directors acting in society's interest rather than for shareholder profits.

"OpenAI is not a normal company and never will be," Altman wrote in an email to staff posted on the company's website.

"We made the decision for the nonprofit to stay in control after hearing from civic leaders and having discussions with the offices of the Attorneys General of California and Delaware," he added.

OpenAI was founded as a nonprofit in 2015 and later created a "capped" for-profit entity allowing limited profit-making to attract investors, with cloud computing giant Microsoft becoming the largest early backer.

This arrangement nearly collapsed in 2023 when the board unexpectedly fired Altman. Staff revolted, leading to Altman's reinstatement while those responsible for his dismissal departed.

Alarmed by the instability, investors demanded OpenAI transition to a more traditional for-profit structure within two years.

Under its initial reform plan revealed last year, OpenAI would have become an outright for-profit public benefit corporation (PBC), reassuring investors considering the tens of billions of dollars necessary to fulfill the company's ambitions.

Any status change, however, requires approval from state governments in California and Delaware, where the company is headquartered and registered, respectively.

The plan faced strong criticism from AI safety activists and co-founder Elon Musk, who sued the company he left in 2018, claiming the proposal violated its founding philosophy.

In the revised plan, OpenAI's money-making arm will now be fully open to generate profits but, crucially, will remain under the nonprofit board's supervision.

"We believe this sets us up to continue to make rapid, safe progress and to put great AI in the hands of everyone," Altman said.

SoftBank sign-off

OpenAI's major investors will likely have a say in this proposal, with Japanese investment giant SoftBank having made the conversion to a for-profit structure a condition of its massive $30 billion investment announced on March 31.

In an official document, SoftBank stated its total investment could be reduced to $20 billion if OpenAI does not restructure into a for-profit entity by year-end.

The substantial cash injections are needed to cover OpenAI's colossal computing requirements to build increasingly energy-intensive and complex AI models.

The company's original vision did not contemplate "the needs for hundreds of billions of dollars of compute to train models and serve users," Altman said.

SoftBank's contribution in March represented the majority of the $40 billion raised in a funding round that valued the ChatGPT maker at $300 billion, marking the largest capital-raising event ever for a startup.

The company, led by Altman, has become one of Silicon Valley's most successful startups, propelled to prominence in 2022 with the release of ChatGPT, its generative AI chatbot.



Italy Watchdog Orders Meta to Halt WhatsApp Terms Barring Rival AI Chatbots

The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. (Reuters)

Italy's antitrust authority (AGCM) on Wednesday ordered Meta Platforms to suspend contractual terms that could shut rival AI chatbots out of WhatsApp, as it investigates the US tech group for suspected abuse of a dominant position.

A spokesperson for Meta called the decision "fundamentally flawed," and said the emergence of AI chatbots "put a strain on our systems that they were not designed to support".

"We ‌will ⁠appeal," ​the ‌spokesperson added.

The move is the latest in a string by European regulators against Big Tech firms, as the EU seeks to balance support for the sector with efforts to curb its expanding influence.

Meta's conduct appeared capable of restricting "output, market access or technical development in the AI chatbot services market", potentially harming consumers, AGCM said.

In July, the Italian regulator opened the investigation into Meta over the suspected abuse of a dominant position related to WhatsApp. It widened the probe in November to cover updated terms for the messaging app's business platform.

"These contractual conditions completely exclude Meta AI's competitors in the AI chatbot services ⁠market from the WhatsApp platform," the watchdog said.

EU antitrust regulators launched a parallel investigation into Meta last month over the same allegations.

Europe's tough stance - a marked contrast to more lenient US regulation - has sparked industry pushback, particularly by US tech titans, and led to criticism from the administration of US President Donald Trump.

The Italian watchdog said it was coordinating with the European Commission to ensure Meta's conduct was addressed "in the most effective manner".


Amazon Says Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked over 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a post on LinkedIn, Amazon's Chief Security Officer Stephen Schmidt said last week that North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen nearly a one-third rise in applications by North Koreans in the past year, AFP reported.

The North Koreans typically use "laptop farms" -- computers in the United States operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.


KAUST Scientists Develop AI-Generated Data to Improve Environmental Disaster Tracking

King Abdullah University of Science and Technology (KAUST) logo

King Abdullah University of Science and Technology (KAUST) and SARsatX, a Saudi company specializing in Earth observation technologies, have developed computer-generated data to train deep learning models to predict oil spills.

According to KAUST, validating the use of synthetic data is crucial for monitoring environmental disasters, as early detection and rapid response can significantly reduce the risks of environmental damage.

Dr. Matthew McCabe, Dean of the Biological and Environmental Science and Engineering Division at KAUST, noted that one of the biggest challenges in environmental applications of artificial intelligence is the shortage of high-quality training data.

He explained that this challenge can be addressed by using deep learning to generate synthetic data from a very small sample of real data and then training predictive AI models on it.
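The two-step idea he describes -- synthesize a large training set from a handful of real examples, then train a predictive model on it -- can be sketched in miniature. The feature values, labels, and noise-based augmentation below are purely illustrative assumptions; KAUST and SARsatX work with deep generative models on satellite imagery, not this simple jittering.

```python
import random

random.seed(0)

# A hypothetical handful of real (features, label) pairs, e.g. summary
# statistics from radar imagery: label 1 = oil slick, 0 = clean water.
# The numbers are made up for illustration.
real_samples = [
    ([0.12, 0.30], 1),
    ([0.15, 0.28], 1),
    ([0.55, 0.70], 0),
    ([0.60, 0.65], 0),
]

def synthesize(samples, copies_per_sample=50, noise=0.02):
    """Create synthetic samples by adding small Gaussian noise to real ones.

    This stands in for the deep-learning-based generation described in the
    article: each scarce real example seeds many plausible variants.
    """
    synthetic = []
    for features, label in samples:
        for _ in range(copies_per_sample):
            jittered = [x + random.gauss(0.0, noise) for x in features]
            synthetic.append((jittered, label))
    return synthetic

# 4 real samples become a training set of 204 (4 real + 200 synthetic),
# which is then handed to whatever predictive model is being trained.
training_set = real_samples + synthesize(real_samples)
print(len(real_samples), len(training_set))  # -> 4 204
```

The point of the sketch is only the data-flow: a very small real sample is expanded into a training set large enough for model fitting, which is what makes early, data-scarce disaster monitoring feasible.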

This approach can significantly enhance efforts to protect the marine environment by enabling faster and more reliable monitoring of oil spills while reducing the logistical and environmental challenges associated with data collection.