US, China Meet in Geneva to Discuss AI Risks 

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. (Reuters)

The US and China will meet in Geneva on Tuesday to discuss advanced artificial intelligence, US officials said, stressing that Washington's policies would not be up for negotiation even as the talks explore ways to mitigate risks from the emerging technology.

President Joe Biden's administration has sought to engage China on a range of issues to reduce miscommunication between the two rivals. US Secretary of State Antony Blinken and China's Foreign Minister Wang Yi broached the topic of AI in April in Beijing, where they agreed to hold their first formal bilateral talks on the subject.

The State Department has pressed China and Russia to match US declarations that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons.

"This is the first meeting of its kind. So, we expect to have a discussion of the full range of risks, but wouldn't prejudge any specifics at this point," a senior administration official told reporters ahead of the meeting when asked if the US would prioritize the nuclear weapons issue.

China's rapid deployment of AI capabilities across civilian, military and national security sectors often undermined the security of the US and its allies, the official said, adding the talks would allow Washington to directly communicate its concerns.

"To be very clear, talks with Beijing are not focused on promoting any form of technical collaboration or cooperating on frontier research in any matter. And our technology protection policies are not up for negotiation," the official added.

Reuters has reported that the Biden administration plans to put guardrails on US-developed proprietary AI models that power popular chatbots like ChatGPT to safeguard the technology from countries such as China and Russia.

A second US official briefing reporters said Washington and Beijing were competing to shape the rules on AI, but also hoped to explore whether some rules could be "embraced by all countries."

"We certainly don't see eye to eye ... on many AI topics and applications, but we believe that communication on critical AI risks can make the world safer," the second official said.

US National Security Council official Tarun Chhabra and Seth Center, the State Department's acting special envoy for critical and emerging technology, will lead the talks with officials from China's Foreign Ministry and state planner, the National Development and Reform Commission.

US Senate Majority Leader Chuck Schumer plans to issue recommendations in the coming weeks to address risks from AI, which he says will then be translated into piecemeal legislation.

He has cited competition with China and its divergent goals for AI, including surveillance and facial recognition applications, as reasons for Washington to take the lead in crafting laws around the rapidly advancing technology.

Chinese authorities have been emphasizing the need for the country to develop its own "controllable" AI technology.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.