US, China Meet in Geneva to Discuss AI Risks 

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. (Reuters)

The US and China will meet in Geneva on Tuesday to discuss advanced artificial intelligence, US officials said, stressing that Washington's policies would not be up for negotiation even as the talks explore mitigating risks from the emerging technology.

President Joe Biden's administration has sought to engage China on a range of issues to reduce miscommunication between the two rivals. US Secretary of State Antony Blinken and China's Foreign Minister Wang Yi broached the topic of AI in April in Beijing, where they agreed to hold their first formal bilateral talks on the subject.

The State Department has pressed China and Russia to match US declarations that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons.

"This is the first meeting of its kind. So, we expect to have a discussion of the full range of risks, but wouldn't prejudge any specifics at this point," a senior administration official told reporters ahead of the meeting when asked if the US would prioritize the nuclear weapons issue.

China's rapid deployment of AI capabilities across civilian, military and national security sectors often undermined the security of the US and its allies, the official said, adding the talks would allow Washington to directly communicate its concerns.

"To be very clear, talks with Beijing are not focused on promoting any form of technical collaboration or cooperating on frontier research in any manner. And our technology protection policies are not up for negotiation," the official added.

Reuters has reported that the Biden administration plans to put guardrails on US-developed proprietary AI models that power popular chatbots like ChatGPT to safeguard the technology from countries such as China and Russia.

A second US official briefing reporters said Washington and Beijing were competing to shape the rules on AI, but also hoped to explore whether some rules could be "embraced by all countries."

"We certainly don't see eye to eye ... on many AI topics and applications, but we believe that communication on critical AI risks can make the world safer," the second official said.

US National Security Council official Tarun Chhabra and Seth Center, the State Department's acting special envoy for critical and emerging technology, will lead the talks with officials from China's Foreign Ministry and state planner, the National Development and Reform Commission.

US Senate majority leader Chuck Schumer plans to issue recommendations in coming weeks to address risks from AI, which he says will then be translated into piecemeal legislation.

He has cited competition with China and its divergent goals for AI, including surveillance and facial recognition applications, as reasons for Washington's need to take the lead in crafting laws around the rapidly advancing technology.

Chinese authorities have been emphasizing the need for the country to develop its own "controllable" AI technology.



SVC Develops AI Intelligence Platform to Strengthen Private Capital Ecosystem

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights - SPA

Saudi Venture Capital Company (SVC) announced the launch of Aian, a proprietary intelligence platform developed in-house using Saudi national expertise. The platform is intended to enhance SVC's institutional role in developing the Kingdom's private capital ecosystem and to support its mandate as a market maker guided by data-driven growth principles.

According to a press release issued by SVC today, Aian is a custom-built, AI-powered market intelligence capability that transforms SVC's accumulated institutional expertise and detailed private market data into structured, actionable insights on market dynamics, sector evolution, and capital formation. The platform converts institutional memory into compounding intelligence, enabling decisions that integrate both current market signals and long-term historical trends, SPA reported.

Deputy CEO and Chief Investment Officer Nora Alsarhan stated that as Saudi Arabia’s private capital market expands, clarity, transparency, and data integrity become as critical as capital itself. She noted that Aian represents a new layer of national market infrastructure, strengthening institutional confidence, enabling evidence-based decision-making, and supporting sustainable growth.

By transforming data into actionable intelligence, she said, the platform reinforces the Kingdom’s position as a leading regional private capital hub under Vision 2030.

She added that market making extends beyond capital deployment to shaping the conditions under which capital flows efficiently, emphasizing that the next phase of market development will be driven by intelligence and analytical insight alongside investment.

Through Aian, SVC is building the knowledge backbone of Saudi Arabia’s private capital ecosystem, enabling clearer visibility, greater precision in decision-making, and capital formation guided by insight rather than assumption.

Chief Strategy Officer Athary Almubarak said that in private capital markets, access to reliable insight increasingly represents the primary constraint, particularly in emerging and fast-scaling markets where disclosures vary and institutional knowledge is fragmented.

She explained that for development-focused investment institutions, inconsistent data presents a structural challenge that directly impacts capital allocation efficiency and the ability to crowd in private investment at scale.

She noted that SVC was established to address such market frictions and that, as a government-backed investor with an explicit market-making mandate, its role extends beyond financing to building the enabling environment in which private capital can grow sustainably.

By integrating SVC’s proprietary portfolio data with selected external market sources, Aian enables continuous consolidation and validation of market activity, producing a dynamic representation of capital deployment over time rather than relying solely on static reporting.

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights, enabling SVC to identify priority market gaps, recalibrate capital allocation, design targeted ecosystem interventions, and anchor policy dialogue in evidence.

The release added that Aian also features predictive analytics capabilities that anticipate upcoming funding activity, including projected investment rounds and estimated ticket sizes. In addition, it incorporates institutional benchmarking tools that enable structured comparisons across peers, sectors, and interventions, supporting more precise, data-driven ecosystem development.


Job Threats, Rogue Bots: Five Hot Issues in AI

A Delhi police officer outside the venue of the 'India AI Impact Summit 2026'. Arun SANKAR / AFP

As artificial intelligence evolves at a blistering pace, world leaders and thousands of other delegates will discuss how to handle the technology at the AI Impact Summit, which opens Monday in New Delhi.

Here are five big issues on the agenda:

Job loss fears

Generative AI threatens to disrupt myriad industries, from software development and factory work to music and the movies.

India -- with its large customer service and tech support sectors -- could be vulnerable, and shares in the country's outsourcing firms have plunged in recent days, partly due to advances in AI assistant tools.

"Automation, intelligent systems, and data-driven processes are increasingly taking over routine and repetitive tasks, reshaping traditional job structures," the summit's "human capital" working group says.

"While these developments can drive efficiency and innovation, they also risk displacing segments of the workforce," widening socio-economic divides, it warns.

Bad robots

The Delhi summit is the fourth in a series of international AI meetings. The first in 2023 was called the AI Safety Summit, and preventing real-world harm is still a key goal.

In the United States, families of people who have taken their own lives have sued OpenAI, accusing ChatGPT of having contributed to the suicides. The company says it has made efforts to strengthen its safeguards.

Elon Musk's Grok AI tool also recently sparked global outrage and bans in several countries over its ability to create sexualized deepfakes depicting real people, including children, in skimpy clothing.

Other concerns range from copyright violations to scammers using AI tools to produce perfectly spelled phishing emails.

Energy demands

Tech giants are spending hundreds of billions of dollars on AI infrastructure, building data centers packed with cutting-edge microchips, and also, in some cases, nuclear plants to power them.

The International Energy Agency projects that electricity consumption from data centers will double by 2030, fueled by the AI boom.

In 2024, data centers accounted for an estimated 1.5 percent of global electricity consumption, it says.

Alongside concerns over planet-warming carbon emissions are worries about the water used to cool data center servers, which can lead to shortages on hot days.

Moves to regulate

In South Korea, a wide-ranging law regulating artificial intelligence took effect in January, requiring companies to tell users when products use generative AI.

Many countries are planning similar moves, despite a warning from US Vice President JD Vance last year against "excessive regulation" that could stifle innovation.

The European Union's Artificial Intelligence Act allows regulators to ban AI systems deemed to pose "unacceptable risks" to society.

That could include identifying people in real time in public spaces or evaluating criminal risk based on biometric data alone.

'Everyone dies'

More existential fears have also been expressed by AI insiders who believe the technology is marching towards so-called "Artificial General Intelligence", when machines' abilities match those of humans.

OpenAI and rival startup Anthropic have seen public resignations of staff members who have spoken out about the ethical implications of their technology.

Anthropic warned last week that its latest chatbot models could be nudged towards "knowingly supporting -- in small ways -- efforts toward chemical weapon development and other heinous crimes".

Researcher Eliezer Yudkowsky, author of the 2025 book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," has also compared AI to the development of nuclear weapons.


OpenAI Hires Creator of 'OpenClaw' AI Agent Tool

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI has hired the Austrian creator of OpenClaw, an artificial intelligence tool able to execute real-world tasks, the US startup's head Sam Altman said on Sunday.

AI agent tool OpenClaw has fascinated -- and spooked -- the tech world since researcher Peter Steinberger built it in November to help organize his digital life.

A Reddit-like pseudo social network for OpenClaw agents called Moltbook, where chatbots converse, has also grabbed headlines and provoked soul-searching over AI.

Elon Musk called Moltbook "the very early stages of the singularity" -- a term for the hypothetical moment when AI outstrips human intelligence -- although some people have questioned to what extent humans are manipulating the bots' posts.

Steinberger "is joining OpenAI to drive the next generation of personal agents," Altman wrote in an X post.

"He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people," AFP quoted him as saying.

"We expect this will quickly become core to our product offerings," Altman wrote, saying that OpenClaw would remain an open-source project within a foundation supported by OpenAI.

"The future is going to be extremely multi-agent and it's important to us to support open source as part of that."

Users of OpenClaw download the tool and connect it to generative AI models such as ChatGPT.

They then communicate with their AI agent through WhatsApp or Telegram, as they would with a friend or colleague.

Many users gush over the tool's futuristic abilities to send emails and buy things online, but others report an overall chaotic experience with added cybersecurity risks.

Only a small percentage of OpenAI's nearly one billion users pay for subscription services, putting pressure on the company to find new revenue sources.

It has begun testing advertisements and sponsored content in the massively popular ChatGPT, spawning privacy concerns as it looks for ways to start balancing its hundreds of billions in spending commitments.