Governments Race to Regulate AI Tools

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

Rapid advances in artificial intelligence (AI), such as Microsoft-backed OpenAI's ChatGPT, are complicating governments' efforts to agree on laws governing the use of the technology, Reuters reported.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Planning regulations
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material, its internet regulator said in September.
BRITAIN
* Planning regulations
Governments and companies need to address the risks of AI head on, Prime Minister Rishi Sunak said on Oct. 26 ahead of the first global AI Safety Summit at Bletchley Park on Nov. 1-2.
Sunak added Britain would set up the world's first AI safety institute to "understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks".
Britain's data watchdog said on Oct. 10 it had issued Snap Inc's Snapchat with a preliminary enforcement notice over a possible failure to properly assess the privacy risks of its generative AI chatbot to users, particularly children.
CHINA
* Implemented temporary regulations
China published proposed security requirements for firms offering services powered by generative AI on Oct. 12, including a blacklist of sources that cannot be used to train AI models.
The country issued a set of temporary measures in August, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
* Planning regulations
European lawmakers agreed on Oct. 24 on a critical part of new AI rules outlining the types of systems that will be designated "high risk", and inched closer to a broader agreement on the landmark AI Act, according to five people familiar with the matter. An agreement is expected in December, two co-rapporteurs said.
European Commission President Ursula von der Leyen on Sept. 13 called for a global panel to assess the risks and benefits of AI.
FRANCE
* Investigating possible breaches
France's privacy watchdog said in April it was investigating complaints about ChatGPT.
G7
* Seeking input on regulations
G7 leaders in May called for the development and adoption of technical standards to keep AI "trustworthy".
ITALY
* Investigating possible breaches
Italy's data protection authority plans to review AI platforms and hire experts in the field, a top official said in May. ChatGPT was temporarily banned in the country in March, but it was made available again in April.
JAPAN
* Investigating possible breaches
Japan expects to introduce by the end of 2023 regulations that are likely to be closer to the US approach than the stringent ones planned in the EU, an official close to deliberations said in July.
The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.
POLAND
* Investigating possible breaches
Poland's Personal Data Protection Office said on Sept. 21 it was investigating OpenAI over a complaint that ChatGPT breaks EU data protection laws.
SPAIN
* Investigating possible breaches
Spain's data protection agency in April launched a preliminary investigation into potential data breaches by ChatGPT.
UNITED NATIONS
* Planning regulations
UN Secretary-General António Guterres on Oct. 26 announced the creation of a 39-member advisory body, composed of tech company executives, government officials and academics, to address issues in the international governance of AI.
The UN Security Council held its first formal discussion on AI in July, addressing military and non-military applications of AI that "could have very serious consequences for global peace and security", Guterres said at the time.
US
* Seeking input on regulations
The White House is expected to unveil on Oct. 30 a long-awaited AI executive order, which would require "advanced AI models to undergo assessments before they can be used by federal workers", the Washington Post reported.
The US Congress in September held hearings on AI and an AI forum featuring Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk.
More than 60 senators took part in the talks, during which Musk called for a US "referee" for AI. Lawmakers said there was universal agreement about the need for government regulation of the technology.
On Sept. 12, the White House said Adobe, IBM, Nvidia and five other firms had signed President Joe Biden's voluntary commitments governing AI, which require steps such as watermarking AI-generated content.
A Washington, D.C., district judge ruled in August that a work of art created by AI without any human input cannot be copyrighted under US law.
The US Federal Trade Commission opened in July an investigation into OpenAI on claims that it has run afoul of consumer protection laws.



AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the Oxford Internet Institute at the University of Oxford worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases, and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step using either AI or their usual resources, such as an internet search, their own experience or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was given in less than 44.2%, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, meaning that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail and concluded that humans were often providing incomplete or wrong information, but that the LLMs were also sometimes generating misleading or incorrect responses.

For example, one participant reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another described the same symptoms but called the headache "terrible", and was told to lie down in a darkened room.

The team now plans similar studies in different countries and languages, and over longer periods, to test whether those factors affect AI's performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.


Meta Criticizes EU Antitrust Move Against WhatsApp Block on AI Rivals

(FILES) This illustration photograph taken on December 1, 2025, shows the logo of WhatsApp displayed on a smartphone's screen, in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)

Meta Platforms on Monday criticized EU regulators after they charged the US tech giant with breaching antitrust rules and threatened action over its block on AI rivals on its messaging service WhatsApp.

"The facts are that there is no reason for ⁠the EU to intervene in the WhatsApp Business API. There are many AI options and people can use them from app stores, operating systems, devices, websites, and ⁠industry partnerships," a Meta spokesperson said in an email.

"The Commission's logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots."


Chinese Robot Makers Ready for Lunar New Year Entertainment Spotlight

A folk performer breathes fire during a performance ahead of Lunar New Year celebrations in a village in Huai'an, in China's eastern Jiangsu Province on February 7, 2026. (AFP)

In China, humanoid robots are serving as Lunar New Year entertainment, with their manufacturers pitching their song-and-dance skills to the general public as well as potential customers, investors and government officials.

On Sunday, Shanghai-based robotics start-up Agibot live-streamed an almost hour-long variety show featuring its robots dancing, performing acrobatics and magic, lip-syncing ballads and performing in comedy sketches. Other Agibot humanoid robots waved from an audience section.

An estimated 1.4 million people watched on the Chinese streaming platform Douyin. Agibot, which called the promotional stunt "the world's first robot-powered gala," did not have an immediate estimate for total viewership.

The show ran a week ahead of China's annual Spring Festival gala to be aired by state television, an event that has become an important - if unlikely - venue for Chinese robot makers to show off their success.

A squad of 16 full-size humanoids from Unitree joined human dancers in performing at China Central Television's 2025 gala, drawing stunned accolades from millions of viewers.

Less than three weeks later, Unitree's founder was invited to a high-profile symposium chaired by Chinese President Xi Jinping. The Hangzhou-based robotics firm has since been preparing for a potential initial public offering.

This year's CCTV gala will include participation by four humanoid robot startups, Unitree, Galbot, Noetix and MagicLab, the companies and broadcaster have said.

Agibot's gala employed over 200 robots. It was streamed on social media platforms RedNote, Sina Weibo, TikTok and its Chinese version Douyin. Chinese-language television networks HTTV and iCiTi TV also broadcast the performance.

"When robots begin to understand Lunar New Year and begin to have a sense of humor, the human-computer interaction may come faster than we think," Ma Hongyun, a photographer and writer with 4.8 million followers on Weibo, said in a post.

Agibot, which says its humanoid robots are designed for a range of applications, including in education, entertainment and factories, plans to launch an initial public offering in Hong Kong, Reuters has reported.

State-run Securities Times said Agibot had opted out of the CCTV gala in order to focus spending on research and development. The company did not respond to a request for comment.

The company demonstrated two of its robots to Xi during a visit in April last year.

US billionaire Elon Musk, who has pivoted automaker Tesla toward a focus on artificial intelligence and the Optimus humanoid robot, has said the only competitive threat he faces in robotics is from Chinese firms.