A Former OpenAI Leader Says Safety Has 'Taken a Backseat to Shiny Products' at the AI Company

 OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods. - The AP.

A former OpenAI leader who resigned from the company earlier this week said that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday, The AP reported.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”

“OpenAI must become a safety-first AGI company,” wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was “super appreciative” of Leike's contributions to the company and “very sad to see him leave.”

Leike is "right we have a lot more to do; we are committed to doing it,” Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that's meaningful to him without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”



Italy Fines OpenAI over ChatGPT Privacy Rules Breach

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works - Reuters

Italy's data protection agency said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into use of personal data by the generative artificial intelligence application.

The fine comes after the authority found OpenAI processed users' personal data to "train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".

OpenAI said the decision was "disproportionate" and that the company will file an appeal against it.

The investigation, which started in 2023, also concluded that the US-based company did not have an adequate age verification system in place to prevent children under the age of 13 from being exposed to inappropriate AI-generated content, the authority said, Reuters reported.

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works, particularly with regard to the collection of data from users and non-users to train its algorithms.

Italy's authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime.

Last year it briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules.

The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train the algorithms.

"They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period," OpenAI said, adding the Garante's approach "undermines Italy's AI ambitions".

The regulator said the size of its 15-million-euro fine was calculated taking into account OpenAI's "cooperative stance", suggesting the fine could have been even bigger.

Under the EU's General Data Protection Regulation (GDPR) introduced in 2018, any company found to have broken rules faces fines of up to 20 million euros or 4% of its global turnover.