Italy Fines OpenAI over ChatGPT Privacy Rules Breach

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works - Reuters

Italy's data protection agency said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into use of personal data by the generative artificial intelligence application.

The fine comes after the authority found OpenAI processed users' personal data to "train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".

OpenAI said the decision was "disproportionate" and that the company will file an appeal against it.

The investigation, which began in 2023, also concluded that the US-based company lacked an adequate age verification system to prevent children under 13 from being exposed to inappropriate AI-generated content, Reuters reported.

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works, particularly regarding the collection of data from users and non-users to train its algorithms.

Italy's authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime.

Last year it briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules.

The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train the algorithms.

"They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period," OpenAI said, adding the Garante's approach "undermines Italy's AI ambitions".

The regulator said the size of its 15-million-euro fine was calculated taking into account OpenAI's "cooperative stance", suggesting the fine could have been even bigger.

Under the EU's General Data Protection Regulation (GDPR) introduced in 2018, any company found to have broken rules faces fines of up to 20 million euros or 4% of its global turnover.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs in posts generated for X, such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.