EU Tech Chief Calls for Voluntary AI Code of Conduct Within Months 

European Commission Executive Vice President Margrethe Vestager speaks to the press at the EU-US Trade and Technology Council (TTC) meeting in the Kulturens hus in Lulea, Sweden, on May 31, 2023. (AFP)

The United States and European Union should push the artificial intelligence (AI) industry to adopt a voluntary code of conduct within months to provide safeguards while new laws are developed, EU tech chief Margrethe Vestager said on Wednesday.

The European Union's AI Act, with rules on facial recognition and biometric surveillance, could be the world's first comprehensive legislation governing the technology, but is still going through the legislative process.

"In the best of cases it will take effect in two and a half to three years’ time. That is obviously way too late," Vestager told reporters before a meeting of the joint EU-U.S Trade and Technology Council in Sweden. "We need to act now."

EU industry chief Thierry Breton said last week that Alphabet and the European Commission aimed to develop an AI pact as concerns mount about the impact on society, particularly from generative AI such as ChatGPT, which creates content.

Leaders of the G7 nations called earlier this month for the development of technical standards to keep AI "trustworthy", urging international discussions on topics such as governance, copyrights, transparency and the threat of disinformation.

Vestager said there needed to be agreement on specifics, not just general statements, suggesting the European Union and the United States could help drive the process.

"If the two of us take the lead with close friends, I think we can push something that will make us all much more comfortable with the fact that generative AI is now in the world and is developing at amazing speeds," she said.

Vestager, a European Commission vice president, said a code of conduct could emerge quickly while governments and legislators from the EU to Canada to India establish rules.

"That is the kind of speed you need, to discuss in the coming weeks, a few months, and of course also involve industry ... in order for society to trust what is ongoing," she said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.