As AI Rises, Lawmakers Try to Catch Up

Representation photo (AP)

From "intelligent" vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence has burrowed its way into every arena of modern life.

Its promoters reckon it is revolutionizing human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions.

Regulators in Europe and North America are worried.

The European Union is likely to pass legislation next year -- the AI Act -- aimed at reining in the age of the algorithm.

The United States recently published a blueprint for an AI Bill of Rights and Canada is also mulling legislation.

Looming large in the debates has been China's use of biometric data, facial recognition and other technology to build a powerful system of control.

Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating "totalitarian infrastructures".

"I see that as a huge threat, no matter the benefits," she told AFP.

But before regulators can act, they face the daunting task of defining what AI actually is.

- 'Mug's game' -
Suresh Venkatasubramanian of Brown University, who co-authored the AI Bill of Rights, said trying to define AI was "a mug's game".

Any technology that affects people's rights should be within the scope of the bill, he tweeted.

The 27-nation EU is taking the more tortuous route of attempting to define the sprawling field.

Its draft law lists the kinds of approaches defined as AI, and it includes pretty much any computer system that involves automation.

The problem stems from the changing use of the term AI.

For decades, it described attempts to create machines that simulated human thinking.

But funding largely dried up for this research -- known as symbolic AI -- in the early 2000s.

The rise of the Silicon Valley titans saw AI reborn as a catch-all label for their number-crunching programs and the algorithms they generated.

This automation allowed them to target users with advertising and content, helping them to make hundreds of billions of dollars.

"AI was a way for them to make more use of this surveillance data and to mystify what was happening," Meredith Whittaker, a former Google worker who co-founded New York University's AI Now Institute, told AFP.

So the EU and US have both concluded that any definition of AI needs to be as broad as possible.

- 'Too challenging' -
But from that point, the two Western powerhouses have largely gone their separate ways.

The EU's draft AI Act runs to more than 100 pages.

Among its most eye-catching proposals are the complete prohibition of certain "high-risk" technologies -- the kind of biometric surveillance tools used in China.

It also drastically limits the use of AI tools by migration officials, police and judges.

Hasselbalch said some technologies were "simply too challenging to fundamental rights".

The AI Bill of Rights, on the other hand, is a brief set of principles framed in aspirational language, with exhortations like "you should be protected from unsafe or ineffective systems".

The bill was issued by the White House and relies on existing law.

Experts reckon no dedicated AI legislation is likely in the United States until 2024 at the earliest because Congress is deadlocked.

- 'Flesh wound' -
Opinions differ on the merits of each approach.

"We desperately need regulation," Gary Marcus of New York University told AFP.

He points out that "large language models" -- the AI behind chatbots, translation tools, predictive text software and much else -- can be used to generate harmful disinformation.

Whittaker questioned the value of laws aimed at tackling AI rather than the "surveillance business models" that underpin it.

"If you're not addressing that at a fundamental level, I think you're putting a band-aid over a flesh wound," she said.

But other experts have broadly welcomed the US approach.

AI was a better target for regulators than the more abstract concept of privacy, said Sean McGregor, a researcher who chronicles tech failures for the AI Incident Database.

But he said there could be a risk of over-regulation.

"The authorities that exist can regulate AI," he told AFP, pointing to the likes of the US Federal Trade Commission and the housing regulator HUD.

Where experts do broadly agree is on the need to strip away the hype and mysticism that surround AI technology.

"It's not magical," McGregor said, likening AI to a highly sophisticated Excel spreadsheet.



AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the University of Oxford’s Internet Institute worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases, and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step using either AI or their usual resources, such as an internet search, their own experience, or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was given in less than 44.2%, no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, meaning that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail, and concluded that often humans were providing incomplete or wrong information, but the LLMs were also sometimes generating misleading or incorrect responses.

For example, one patient reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another participant described the same symptoms but a "terrible" headache, and was told to lie down in a darkened room.

The team now plans similar studies in different countries and languages, and over time, to test whether those factors affect AI's performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.


Meta Criticizes EU Antitrust Move Against WhatsApp Block on AI Rivals

(FILES) This illustration photograph taken on December 1, 2025, shows the logo of WhatsApp displayed on a smartphone's screen, in Frankfurt am Main, western Germany. (Photo by Kirill KUDRYAVTSEV / AFP)

Meta Platforms on Monday criticized EU regulators after they charged the US tech giant with breaching antitrust rules and threatened to force it to halt its block on AI rivals on its messaging service WhatsApp.

"The facts are that there is no reason for the EU to intervene in the WhatsApp Business API. There are many AI options and people can use them from app stores, operating systems, devices, websites, and industry partnerships," a Meta spokesperson said in an email.

"The Commission's logic incorrectly assumes the WhatsApp Business API is a key distribution channel for these chatbots."


Chinese Robot Makers Ready for Lunar New Year Entertainment Spotlight

A folk performer breathes fire during a performance ahead of Lunar New Year celebrations in a village in Huai'an, in China's eastern Jiangsu Province on February 7, 2026. (AFP)

In China, humanoid robots are serving as Lunar New Year entertainment, with their manufacturers pitching their song-and-dance skills to the general public as well as potential customers, investors and government officials.

On Sunday, Shanghai-based robotics start-up Agibot live-streamed an almost hour-long variety show featuring its robots dancing, performing acrobatics and magic, lip-syncing ballads and performing in comedy sketches. Other Agibot humanoid robots waved from an audience section.

An estimated 1.4 million people watched on the Chinese streaming platform Douyin. Agibot, which called the promotional stunt "the world's first robot-powered gala," did not have an immediate estimate for total viewership.

The show ran a week ahead of China's annual Spring Festival gala to be aired by state television, an event that has become an important - if unlikely - venue for Chinese robot makers to show off their success.

A squad of 16 full-size humanoids from Unitree joined human dancers in performing at China Central Television's 2025 gala, drawing stunned accolades from millions of viewers.

Less than three weeks later, Unitree's founder was invited to a high-profile symposium chaired by Chinese President Xi Jinping. The Hangzhou-based robotics firm has since been preparing for a potential initial public offering.

This year's CCTV gala will include participation by four humanoid robot startups, Unitree, Galbot, Noetix and MagicLab, the companies and broadcaster have said.

Agibot's gala employed over 200 robots. It was streamed on social media platforms RedNote, Sina Weibo, TikTok and its Chinese version Douyin. Chinese-language television networks HTTV and iCiTi TV also broadcast the performance.

"When robots begin to understand Lunar New Year and begin to have a sense of humor, the human-computer interaction may come faster than we think," Ma Hongyun, a photographer and writer with 4.8 million followers on Weibo, said in a post.

Agibot, which says its humanoid robots are designed for a range of applications, including in education, entertainment and factories, plans to launch an initial public offering in Hong Kong, Reuters has reported.

State-run Securities Times said Agibot had opted out of the CCTV gala in order to focus spending on research and development. The company did not respond to a request for comment.

The company demonstrated two of its robots to Xi during a visit in April last year.

US billionaire Elon Musk, who has pivoted automaker Tesla toward a focus on artificial intelligence and the Optimus humanoid robot, has said the only competitive threat he faces in robotics is from Chinese firms.