US, China Meet in Geneva to Discuss AI Risks 

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. (Reuters)

The US and China will meet in Geneva on Tuesday to discuss advanced artificial intelligence, US officials said, stressing that Washington's policies would not be up for negotiation even as the talks explore mitigating risks from the emerging technology.

President Joe Biden's administration has sought to engage China on a range of issues to reduce miscommunication between the two rivals. US Secretary of State Antony Blinken and China's Foreign Minister Wang Yi broached the topic of AI in April in Beijing, where they agreed to hold their first formal bilateral talks on the subject.

The State Department has pressed China and Russia to match US declarations that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons.

"This is the first meeting of its kind. So, we expect to have a discussion of the full range of risks, but wouldn't prejudge any specifics at this point," a senior administration official told reporters ahead of the meeting when asked if the US would prioritize the nuclear weapons issue.

China's rapid deployment of AI capabilities across civilian, military and national security sectors often undermined the security of the US and its allies, the official said, adding the talks would allow Washington to directly communicate its concerns.

"To be very clear, talks with Beijing are not focused on promoting any form of technical collaboration or cooperating on frontier research in any matter. And our technology protection policies are not up for negotiation," the official added.

Reuters has reported that the Biden administration plans to put guardrails on US-developed proprietary AI models that power popular chatbots like ChatGPT to safeguard the technology from countries such as China and Russia.

A second US official briefing reporters said Washington and Beijing were competing to shape the rules on AI, but also hoped to explore whether some rules could be "embraced by all countries."

"We certainly don't see eye to eye ... on many AI topics and applications, but we believe that communication on critical AI risks can make the world safer," the second official said.

US National Security Council official Tarun Chhabra and Seth Center, the State Department's acting special envoy for critical and emerging technology, will lead the talks with officials from China's Foreign Ministry and state planner, the National Development and Reform Commission.

US Senate Majority Leader Chuck Schumer plans to issue recommendations in coming weeks to address risks from AI, which he says will then be translated into piecemeal legislation.

He has cited competition with China and its divergent goals for AI, including surveillance and facial recognition applications, as reasons for Washington to take the lead in crafting laws around the rapidly advancing technology.

Chinese authorities have been emphasizing the need for the country to develop its own "controllable" AI technology.



Anthropic Says Looking to Power European Tech with Hiring Push

As the AI race heats up, so does the race to find talent in the sector, which is currently dominated by US and Chinese companies. Fabrice COFFRINI / AFP/File

American AI giant Anthropic aims to boost the European tech ecosystem as it expands on the continent, product chief Mike Krieger told AFP Thursday at the Vivatech trade fair in Paris.

The OpenAI competitor wants to be "the engine behind some of the largest startups of tomorrow... (and) many of them can and should come from Europe", Krieger said.

Tech industry and political leaders have often lamented Europe's failure to capitalize on its research and education strength to build heavyweight local companies -- with many young founders instead leaving to set up shop across the Atlantic.

Krieger's praise for the region's "really strong talent pipeline" chimed with an air of continental tech optimism at Vivatech.

French AI startup Mistral on Wednesday announced a multibillion-dollar tie-up to bring high-powered computing resources from chip behemoth Nvidia to the region.

The semiconductor firm will "increase the amount of AI computing capacity in Europe by a factor of 10" within two years, Nvidia boss Jensen Huang told an audience at the southern Paris convention center.

Anthropic is building up its technical and research strength in Europe as part of 100 planned hires on the continent, where it has offices in Dublin and in London, outside the EU, Krieger said.

Beyond the startups he hopes to boost, many long-standing European companies "have a really strong appetite for transforming themselves with AI", he added, citing luxury giant LVMH, which had a large footprint at Vivatech.

'Safe by design'

Mistral -- founded only in 2023 and far smaller than American industry leaders like OpenAI and Anthropic -- is nevertheless "definitely in the conversation" in the industry, Krieger said.

The French firm recently followed in the footsteps of the US companies by releasing a so-called "reasoning" model able to take on more complex tasks.

"I talk to customers all the time that are maybe using (Anthropic's AI) Claude for some of the long-horizon agentic tasks, but then they've also fine-tuned Mistral for one of their data processing tasks, and I think they can co-exist in that way," Krieger said.

So-called "agentic" AI models -- including the most recent versions of Claude -- work as autonomous or semi-autonomous agents that are able to do work over longer horizons with less human supervision, including by interacting with tools like web browsers and email.

Capabilities displayed by the latest releases have raised fears among some researchers, such as University of Montreal professor and "AI godfather" Yoshua Bengio, that independently acting AI could soon pose a risk to humanity.

Bengio last week launched a non-profit, LawZero, to develop "safe-by-design" AI -- originally a key founding promise of OpenAI and Anthropic.

'Very specific genius'

"A huge part of why I joined Anthropic was because of how seriously they were taking that question" of AI safety, said Krieger, a Brazilian software engineer who co-founded Instagram, which he left in 2018.

Anthropic is still working on measures designed to restrict its AI models' potential to do harm, he added.

But it has yet to release details of the "level 4" AI safety protections planned for still more powerful models, after activating ASL-3 (AI Safety Level 3) to corral the capabilities of May's Claude Opus 4 release.

Developing ASL 4 is "an active part of the work of the company", Krieger said, without giving a potential release date.

With Claude Opus 4, "we've deployed the mitigations kind of proactively... safe doesn't have to mean slow, but it does mean having to be thoughtful and proactive ahead of time" to make sure safety protections don't impair performance, he added.

Looking to upcoming releases from Anthropic, Krieger said the company's models were on track to match chief executive Dario Amodei's prediction that Anthropic would offer customers access to a "country of geniuses in a data center" by 2026 or 2027 -- within limits.

Anthropic's latest AI models are "genius-level at some very specific things", he said.

"In the coming year... it will continue to spike in particular aspects of things, and still need a lot of human-in-the-loop coordination," he forecast.