OpenAI CEO Says Possible to Get Regulation Wrong, but Should Not Fear It

FILE PHOTO: A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

The CEO of ChatGPT maker OpenAI said on Monday that it was possible to get regulation wrong but it is important and should not be feared, amid global concerns about rapid advances in artificial intelligence, or AI.

Many countries are planning AI regulation, and Britain is hosting a global AI safety summit in November, focusing on understanding the risks posed by the frontier technology and how national and international frameworks could be supported.
Sam Altman, CEO and the public face of the startup OpenAI, backed by Microsoft Corp, said during a visit to Taipei that although he was not that worried about government over-regulation, it could happen.
"I also worry about under-regulation. People in our industry bash regulation a lot. We've been calling for regulation, but only of the most powerful systems," he said.
"Models that are like 10,000 times the power of GPT-4, models that are like as smart as human civilization, whatever, those probably deserve some regulation," added Altman, speaking at an AI event hosted by the charitable foundation of Terry Gou, the founder of major Apple supplier Foxconn.
According to Reuters, Altman said that in the tech industry there is a "reflexive anti-regulation thing".
"Regulation has been not a pure good, but it's been good in a lot of ways. I don't want to have to make an opinion about every time I step on an airplane how safe it's going to be, but I trust that they're pretty safe and I think regulation has been a positive good there," he said.
"It is possible to get regulation wrong, but I don't think we sit around and fear it. In fact we think some version of it is important."



Italy Fines OpenAI over ChatGPT Privacy Rules Breach

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works - Reuters

Italy's data protection agency said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into use of personal data by the generative artificial intelligence application.

The fine comes after the authority found OpenAI processed users' personal data to "train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".

OpenAI said the decision was "disproportionate" and that the company will file an appeal against it.

The investigation, which started in 2023, also concluded that the US-based company did not have an adequate age verification system in place to prevent children under the age of 13 from being exposed to inappropriate AI-generated content, the authority said, according to Reuters.

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works, particularly as regards the collection of data from users and non-users to train its algorithms.

Italy's authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime.

Last year it briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules.

The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train the algorithms.

"They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period," OpenAI said, adding the Garante's approach "undermines Italy's AI ambitions".

The regulator said the size of its 15-million-euro fine was calculated taking into account OpenAI's "cooperative stance", suggesting the fine could have been even bigger.

Under the EU's General Data Protection Regulation (GDPR) introduced in 2018, any company found to have broken rules faces fines of up to 20 million euros or 4% of its global turnover.