Firms and Researchers at Odds over Superhuman AI

Three-quarters of respondents to a survey by the US-based Association for the Advancement of Artificial Intelligence agreed that 'scaling up' LLMs was unlikely to produce artificial general intelligence. Joe Klamar / AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction, AFP said.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more skeptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

'Genie out of the bottle'

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Skepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first eliminated any human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

'Biggest thing ever'

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility program at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.



Nvidia, Joining Big Tech Deal Spree, to License Groq Technology, Hire Executives

The Nvidia logo is seen on a graphic card package in this illustration created on August 19, 2025. (Reuters)

Nvidia has agreed to license chip technology from startup Groq and hire away its CEO, a veteran of Alphabet's Google, Groq said in a blog post on Wednesday.

The deal follows a familiar pattern in recent years where the world's biggest technology firms pay large sums in deals with promising startups to take their technology and talent but stop short of formally acquiring the target.

Groq specializes in what is known as inference, where artificial intelligence models that have already been trained respond to requests from users. While Nvidia dominates the market for training AI models, it faces much more competition in inference, where traditional rivals such as Advanced Micro Devices, as well as startups such as Groq and Cerebras Systems, have aimed to challenge it.

Nvidia has agreed to a "non-exclusive" license to Groq's technology, Groq said. It said its founder Jonathan Ross, who helped Google start its AI chip program, as well as Groq President Sunny Madra and other members of its engineering team, will join Nvidia.

A person close to Nvidia confirmed the licensing agreement.

Groq did not disclose financial details of the deal. CNBC reported that Nvidia had agreed to acquire Groq for $20 billion in cash, but neither Nvidia nor Groq commented on the report. Groq said in its blog post that it will continue to operate as an independent company with Simon Edwards as CEO and that its cloud business will continue operating.

In similar recent deals, Microsoft's top AI executive came through a $650 million deal with a startup that was billed as a licensing fee, and Meta spent $15 billion to hire Scale AI's CEO without acquiring the entire firm. Amazon hired away founders from Adept AI, and Nvidia did a similar deal this year. The deals have faced scrutiny from regulators, though none has yet been unwound.

"Antitrust would seem to be the primary risk here, though structuring the deal as a non-exclusive license may keep the fiction of competition alive (even as Groq's leadership and, we would presume, technical talent move over to Nvidia)," Bernstein analyst Stacy Rasgon wrote in a note to clients on Wednesday after Groq's announcement. And Nvidia CEO Jensen Huang's "relationship with the Trump administration appears among the strongest of the key US tech companies."

Groq more than doubled its valuation to $6.9 billion, from $2.8 billion in August last year, following a $750 million funding round in September.

Groq is one of a number of upstarts that do not use external high-bandwidth memory chips, freeing them from the memory crunch affecting the global chip industry. The approach, which uses a form of on-chip memory called SRAM, helps speed up interactions with chatbots and other AI models but also limits the size of the model that can be served.

Groq's primary rival in the approach is Cerebras Systems, which Reuters this month reported plans to go public as soon as next year. Groq and Cerebras have signed large deals in the Middle East.

Nvidia's Huang spent much of his biggest keynote speech of 2025 arguing that Nvidia would be able to maintain its lead as AI markets shift from training to inference.


Italy Watchdog Orders Meta to Halt WhatsApp Terms Barring Rival AI Chatbots

The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. (Reuters)

Italy's antitrust authority (AGCM) on Wednesday ordered Meta Platforms to suspend contractual terms that could shut rival AI chatbots out of WhatsApp, as it investigates the US tech group for suspected abuse of a dominant position.

A spokesperson for Meta called the decision "fundamentally flawed," and said the emergence of AI chatbots "put a strain on our systems that they were not designed to support".

"We will appeal," the spokesperson added.

The move is the latest in a string by European regulators against Big Tech firms, as the EU seeks to balance support for the sector with efforts to curb its expanding influence.

Meta's conduct appeared capable of restricting "output, market access or technical development in the AI chatbot services market", potentially harming consumers, AGCM said.

In July, the Italian regulator opened the investigation into Meta over the suspected abuse of a dominant position related to WhatsApp. It widened the probe in November to cover updated terms for the messaging app's business platform.

"These contractual conditions completely exclude Meta AI's competitors in the AI chatbot services market from the WhatsApp platform," the watchdog said.

EU antitrust regulators launched a parallel investigation into Meta last month over the same allegations.

Europe's tough stance - a marked contrast to more lenient US regulation - has sparked industry pushback, particularly by US tech titans, and led to criticism from the administration of US President Donald Trump.

The Italian watchdog said it was coordinating with the European Commission to ensure Meta's conduct was addressed "in the most effective manner".


Amazon Says Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked over 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a LinkedIn post last week, Amazon Chief Security Officer Stephen Schmidt said North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen nearly a one-third rise in applications by North Koreans in the past year, AFP reported.

The North Koreans typically use "laptop farms" -- computers in the United States operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.