Germany, France and Italy Reach Agreement on Future AI Regulation

FILE PHOTO: European Union (EU) flags fly in front of the headquarters of the European Central Bank (ECB) in Frankfurt, Germany, July 8, 2020. REUTERS/Ralph Orlowski/File Photo


France, Germany and Italy have reached an agreement on how artificial intelligence should be regulated, according to a joint paper seen by Reuters, which is expected to accelerate negotiations at the European level.
The three governments back commitments that would be voluntary to adopt but binding on any AI provider in the European Union, large or small, that signs up to them.
The European Commission, the European Parliament and the EU Council are negotiating how the bloc should position itself.
In June, the European Parliament presented an "AI Act" designed to contain the risks of AI applications and avoid discriminatory effects, while harnessing the innovative power of AI.
During the discussions, the European Parliament proposed that the code of conduct should initially only be binding for major AI providers, which are primarily from the US.
The three EU governments have said this apparent competitive advantage for smaller European providers could backfire by reducing trust in them and costing them customers.
The rules of conduct and transparency should therefore be binding for everyone, they said.
Initially, no sanctions should be imposed, according to the paper.
If violations of the code of conduct are identified after a certain period of time, however, a system of sanctions could be set up. In future, a European authority would monitor compliance with the standards, the paper said.
Germany's Economy Ministry, which is in charge of the topic together with the Ministry of Digital Affairs, said laws and state control should not regulate AI itself, but rather its application.
Digital Affairs Minister Volker Wissing told Reuters he was very pleased an agreement had been reached with France and Italy to limit only the use of AI.
"We need to regulate the applications and not the technology if we want to play in the top AI league worldwide," Wissing said.
State Secretary for Economic Affairs Franziska Brantner told Reuters it was crucial to harness the opportunities and limit the risks.
"We have developed a proposal that can ensure a balance between both objectives in a technological and legal terrain that has not yet been defined," Brantner said.
As governments around the world seek to capture the economic benefits of AI, Britain in November hosted its first AI safety summit.
The German government is hosting a digital summit in Jena, in the state of Thuringia, on Monday and Tuesday that will bring together representatives from politics, business and science.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)


AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.