EU Proposes New Copyright Rules for Generative AI

European Union flags fly outside the European Commission headquarters in Brussels, Belgium, March 1, 2023. REUTERS/Johanna Geron/File Photo

Companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems, according to an early EU agreement that could pave the way for the world's first comprehensive laws governing the technology.

The European Commission began drafting the AI Act nearly two years ago to regulate emerging artificial intelligence technology, which underwent a boom in investment and popularity following the release of OpenAI's AI-powered chatbot ChatGPT, Reuters said.

Members of the European Parliament agreed to push the draft through to the next stage, the trilogue, during which EU lawmakers and member states will thrash out the final details of the bill.

Under the proposals, AI tools will be classified according to their perceived risk level: from minimal through to limited, high, and unacceptable. Areas of concern could include biometric surveillance, the spread of misinformation, and discriminatory language.

While high-risk tools will not be banned, those using them will need to be highly transparent in their operations.

Companies deploying generative AI tools, such as ChatGPT or image generator Midjourney, will also have to disclose any copyrighted material used to develop their systems.

This provision was a late addition drawn up within the past two weeks, according to a source familiar with discussions. Some committee members initially proposed banning copyrighted material being used to train generative AI models altogether, the source said, but this was abandoned in favor of a transparency requirement.

"Against conservative wishes for more surveillance and leftist fantasies of over-regulation, parliament found a solid compromise that would regulate AI proportionately, protect citizens' rights, as well as foster innovation and boost the economy," said Svenja Hahn, a European Parliament deputy.

Macquarie analyst Fred Havemeyer said the EU's proposal was "tactful" rather than the "ban first, and ask questions later" approach proposed by some.

"The EU has been on the frontier of regulating AI technology," he told Reuters.

RACE TO MARKET

Microsoft-backed OpenAI provoked awe and anxiety around the world when it unveiled ChatGPT late last year. The chatbot became the fastest-growing consumer application in history, reaching 100 million monthly active users in a matter of weeks.

The ensuing race among tech companies to bring generative AI products to market concerned some onlookers, with Twitter owner Elon Musk backing a proposal to halt development of such systems for six months.

Shortly after Musk signed the letter, the Financial Times reported that he was planning to launch his own startup to rival OpenAI.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as using its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.