As AI Rises, Lawmakers Try to Catch Up

Representation photo (AP)

From "intelligent" vacuum cleaners and driverless cars to advanced techniques for diagnosing diseases, artificial intelligence has burrowed its way into every arena of modern life.

Its promoters reckon it is revolutionizing human experience, but critics stress that the technology risks putting machines in charge of life-changing decisions.

Regulators in Europe and North America are worried.

The European Union is likely to pass legislation next year -- the AI Act -- aimed at reining in the age of the algorithm.

The United States recently published a blueprint for an AI Bill of Rights and Canada is also mulling legislation.

Looming large in the debates has been China's use of biometric data, facial recognition and other technology to build a powerful system of control.

Gry Hasselbalch, a Danish academic who advises the EU on the controversial technology, argued that the West was also in danger of creating "totalitarian infrastructures".

"I see that as a huge threat, no matter the benefits," she told AFP.

But before regulators can act, they face the daunting task of defining what AI actually is.

- 'Mug's game' -
Suresh Venkatasubramanian of Brown University, who co-authored the AI Bill of Rights, said trying to define AI was "a mug's game".

Any technology that affects people's rights should be within the scope of the bill, he tweeted.

The 27-nation EU is taking the more tortuous route of attempting to define the sprawling field.

Its draft law lists the kinds of approaches that count as AI, and the list covers pretty much any computer system that involves automation.

The problem stems from the changing use of the term AI.

For decades, it described attempts to create machines that simulated human thinking.

But funding largely dried up for this research -- known as symbolic AI -- in the early 2000s.

The rise of the Silicon Valley titans saw AI reborn as a catch-all label for their number-crunching programs and the algorithms they generated.

This automation allowed them to target users with advertising and content, helping them to make hundreds of billions of dollars.

"AI was a way for them to make more use of this surveillance data and to mystify what was happening," Meredith Whittaker, a former Google worker who co-founded New York University's AI Now Institute, told AFP.

So the EU and US have both concluded that any definition of AI needs to be as broad as possible.

- 'Too challenging' -
But from that point, the two Western powerhouses have largely gone their separate ways.

The EU's draft AI Act runs to more than 100 pages.

Among its most eye-catching proposals is the complete prohibition of certain "high-risk" technologies -- the kind of biometric surveillance tools used in China.

It also drastically limits the use of AI tools by migration officials, police and judges.

Hasselbalch said some technologies were "simply too challenging to fundamental rights".

The AI Bill of Rights, on the other hand, is a brief set of principles framed in aspirational language, with exhortations like "you should be protected from unsafe or ineffective systems".

The bill was issued by the White House and relies on existing law.

Experts reckon no dedicated AI legislation is likely in the United States until 2024 at the earliest because Congress is deadlocked.

- 'Flesh wound' -
Opinions differ on the merits of each approach.

"We desperately need regulation," Gary Marcus of New York University told AFP.

He points out that "large language models" -- the AI behind chatbots, translation tools, predictive text software and much else -- can be used to generate harmful disinformation.

Whittaker questioned the value of laws aimed at tackling AI rather than the "surveillance business models" that underpin it.

"If you're not addressing that at a fundamental level, I think you're putting a band-aid over a flesh wound," she said.

But other experts have broadly welcomed the US approach.

AI was a better target for regulators than the more abstract concept of privacy, said Sean McGregor, a researcher who chronicles tech failures for the AI Incident Database.

But he said there could be a risk of over-regulation.

"The authorities that exist can regulate AI," he told AFP, pointing to the likes of the US Federal Trade Commission and the housing regulator HUD.

But experts broadly agree on the need to strip away the hype and mysticism that surround AI technology.

"It's not magical," McGregor said, likening AI to a highly sophisticated Excel spreadsheet.



Foundation Stone Laid for World’s Largest Government Data Center in Riyadh

Officials are seen at Thursday's ceremony. (SPA)

The foundation stone was laid in Riyadh Thursday for the Saudi Data and Artificial Intelligence Authority (SDAIA) “Hexagon” Data Center, the world’s largest government data center by megawatt capacity.

Classified as Tier IV, the highest data center rating awarded by the global Uptime Institute, the facility will have a total capacity of 480 megawatts and will be built on an area exceeding 30 million square feet in the Saudi capital.

Designed to the highest international standards, the center will provide maximum availability, security, and operational readiness for government data centers. It will meet the growing needs of government entities and support the increasing reliance on electronic services.

The project will contribute to strengthening the national economy and reinforce the Kingdom’s position as a key player in the future of the global digital economy.

A ceremony marking the occasion was attended by senior officials from various government entities, who were received at the venue by SDAIA President Dr. Abdullah bin Sharaf Alghamdi and SDAIA officials.

Director of the National Information Center at SDAIA Dr. Issam bin Abdullah Alwagait outlined the project’s details, technical and engineering specifications, and the operational architecture ensuring the highest levels of readiness and availability.

He also reviewed the international accreditations obtained for the center’s solutions and engineering design in line with recognized global standards.

In a press statement, SDAIA President Dr. Abdullah bin Sharaf Alghamdi said the landmark national project comes as part of the continued support of Prince Mohammed bin Salman bin Abdulaziz Al Saud, Crown Prince, Prime Minister and Chairman of SDAIA’s Board of Directors.

This support, he noted, enables the authority, as the Kingdom’s competent body for data, including big data, and artificial intelligence, and the national reference for their regulation, development and use, to help advance the Kingdom toward leadership among data- and AI-driven economies.

The Kingdom will continue to strengthen its presence in advanced technologies with the ongoing support of the Crown Prince, he stressed.

SDAIA will pursue pioneering projects that reflect its ambitious path toward building an integrated digital ecosystem, strengthening national enablers in data and artificial intelligence, and developing world-class technical infrastructure that boosts the competitiveness of the national economy and attracts investment. This aligns with Saudi Vision 2030’s objectives of building a sustainable knowledge-based economy and achieving global leadership in advanced technologies.


Neuralink Plans ‘High-Volume’ Brain Implant Production by 2026, Musk Says

Elon Musk steps off Air Force One upon arrival at Morristown Municipal Airport in Morristown, New Jersey, US, March 22, 2025. (AFP)

Elon Musk's brain implant company Neuralink will start "high-volume production" of brain-computer interface devices and move to an entirely automated surgical procedure in 2026, Musk said in a post on the social media platform X on Wednesday.

Neuralink did not immediately respond to a Reuters request for comment.

The implant is designed to help people with conditions such as spinal cord injuries. The first patient has used it to play video games, browse the internet, post on social media, and move a cursor on a laptop.

The company began human trials of its brain implant in 2024 after addressing safety concerns raised by the US Food and Drug Administration, which had initially rejected its application in 2022.

Neuralink said in September that 12 people worldwide with severe paralysis had received its brain implants and were using them to control digital and physical tools through thought. It also secured $650 million in a June funding round.


Report: France Aims to Ban Under-15s from Social Media from September 2026

French President Emmanuel Macron holds a press conference during a European Union leaders' summit, in Brussels, Belgium December 19, 2025. (Reuters)

France plans to ban children under 15 from social media sites and to prohibit mobile phones in high schools from September 2026, local media reported on Wednesday, moves that underscore rising public angst over the impact of online harms on minors.

President Emmanuel Macron has often pointed to social media as one of the factors to blame for violence among young people and has signaled he wants France to follow Australia, whose world-first ban for under-16s on social media platforms including Facebook, Snapchat, TikTok and YouTube came into force in December.

Le Monde newspaper said Macron could announce the measures in his New Year's Eve national address, due to be broadcast at 1900 GMT. His government will submit draft legislation for legal checks in early January, Le Monde and France Info reported.

The Elysee and the prime minister's office did not immediately respond to a request for comment on the reports.

Mobile phones have been banned in French primary and middle schools since 2018, and the reported changes would extend that ban to high schools. Pupils aged 11 to 15 attend middle schools in the French educational system.

France also passed a law in 2023 requiring social platforms to obtain parental consent for under-15s to create accounts, though technical challenges have impeded its enforcement.

Macron said in June he would push for regulation at the level of the European Union to ban access to social media for all under-15s after a fatal stabbing at a school in eastern France shocked the nation.

The European Parliament in November urged the EU to set minimum ages for children to access social media to combat a rise in mental health problems among adolescents linked to excessive exposure, although it is member states that impose age limits. Various other countries have also taken steps to regulate children's access to social media.

Macron heads into the New Year with his domestic legacy in tatters after his gamble on parliamentary elections in 2024 led to a hung parliament, triggering France's worst political crisis in decades that has seen a succession of weak governments.

However, cracking down further on minors' access to social media could prove popular, according to opinion polls. A Harris Interactive survey in 2024 showed 73% of those canvassed supporting a ban on social media access for under-15s.