US, China Meet in Geneva to Discuss AI Risks 

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China, July 6, 2023. (Reuters)

The US and China will meet in Geneva on Tuesday to discuss advanced artificial intelligence, US officials said, stressing that Washington's policies would not be up for negotiation even as the talks explore mitigating risks from the emerging technology.

President Joe Biden's administration has sought to engage China on a range of issues to reduce miscommunication between the two rivals. US Secretary of State Antony Blinken and China's Foreign Minister Wang Yi broached the topic of AI in April in Beijing, where they agreed to hold their first formal bilateral talks on the subject.

The State Department has pressed China and Russia to match US declarations that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons.

"This is the first meeting of its kind. So, we expect to have a discussion of the full range of risks, but wouldn't prejudge any specifics at this point," a senior administration official told reporters ahead of the meeting when asked if the US would prioritize the nuclear weapons issue.

China's rapid deployment of AI capabilities across civilian, military and national security sectors often undermined the security of the US and its allies, the official said, adding the talks would allow Washington to directly communicate its concerns.

"To be very clear, talks with Beijing are not focused on promoting any form of technical collaboration or cooperating on frontier research in any matter. And our technology protection policies are not up for negotiation," the official added.

Reuters has reported that the Biden administration plans to put guardrails on US-developed proprietary AI models that power popular chatbots like ChatGPT to safeguard the technology from countries such as China and Russia.

A second US official briefing reporters said Washington and Beijing were competing to shape the rules on AI, but also hoped to explore whether some rules could be "embraced by all countries."

"We certainly don't see eye to eye ... on many AI topics and applications, but we believe that communication on critical AI risks can make the world safer," the second official said.

US National Security Council official Tarun Chhabra and Seth Center, the State Department's acting special envoy for critical and emerging technology, will lead the talks with officials from China's Foreign Ministry and state planner, the National Development and Reform Commission.

US Senate Majority Leader Chuck Schumer plans to issue recommendations in the coming weeks to address risks from AI, which he says will then be translated into piecemeal legislation.

He has cited competition with China and its divergent goals for AI, including surveillance and facial recognition applications, as a reason Washington needs to take the lead in crafting laws around the rapidly advancing technology.

Chinese authorities have been emphasizing the need for the country to develop its own "controllable" AI technology.



OpenAI Appoints Former Top US Cyberwarrior Paul Nakasone to its Board of Directors

OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people's moods. (AP)

OpenAI has appointed a former top US cyberwarrior and intelligence official to its board of directors, saying he will help protect the ChatGPT maker from “increasingly sophisticated bad actors.”

Retired Army Gen. Paul Nakasone was the commander of US Cyber Command and the director of the National Security Agency before stepping down earlier this year.

He joins an OpenAI board of directors that is still picking up new members after upheaval at the San Francisco artificial intelligence company forced a reset of the board's leadership last year. The previous board had abruptly fired CEO Sam Altman and was then itself replaced as he returned to his CEO role days later.

OpenAI reinstated Altman to its board of directors in March and said it had “full confidence” in his leadership after the conclusion of an outside investigation into the company’s turmoil. OpenAI's board is technically that of a nonprofit, but it also governs the company's rapidly growing business.

Nakasone is also joining OpenAI's new safety and security committee — a group that's supposed to advise the full board on “critical safety and security decisions” for its projects and operations. The safety group replaced an earlier safety team that was disbanded after several of its leaders quit.

Nakasone was already leading the Army branch of US Cyber Command when then-President Donald Trump in 2018 picked him to be director of the NSA, one of the nation's top intelligence posts, and head of US Cyber Command. He maintained the dual roles when President Joe Biden took office in 2021. He retired in February.