Generative AI's Most Prominent Skeptic Doubles Down

Generative AI critic Gary Marcus speaks at the Web Summit Vancouver 2025 tech conference in Vancouver, Canada. Don MacKinnon / AFP

Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.

Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington, where both men urged politicians to take the technology seriously and consider regulation, AFP reported.

Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.

"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.

Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.

The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.

"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."

His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise.

Many believe humanity stands on the cusp of achieving superintelligence, or artificial general intelligence (AGI): technology that could match and even surpass human capability.

That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.

Yet for all the hype, the practical gains remain limited.

The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.

Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.

"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained.

This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."

'Right answers matter'

Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.
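
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of that division of labor: a statistical component proposes a structured reading of the input, and a symbolic engine then computes the answer by exact rules. Everything in it (the regex "parser", the two operations) is a hypothetical stand-in, not Marcus's actual proposal; the point is only that once the parse is right, the rule-based step cannot get the arithmetic wrong.

# Toy neurosymbolic pipeline: a (stand-in) learned parser maps text to a
# symbolic expression; a rule-based engine then evaluates it exactly.
import re

def neural_parse(question: str):
    """Stand-in for a learned model that maps text to symbols.
    A real system would use a trained parser; a regex suffices here."""
    m = re.search(r"(\d+)\s*(plus|times)\s*(\d+)", question)
    if m is None:
        return None
    a, op, b = m.groups()
    return (op, int(a), int(b))

def symbolic_solve(expr) -> int:
    """Symbolic engine: applies exact arithmetic rules, so the answer is
    guaranteed correct for any expression the parser produces."""
    op, a, b = expr
    return a + b if op == "plus" else a * b

expr = neural_parse("What is 127 times 49?")
print(symbolic_solve(expr) if expr else "cannot parse")  # prints 6223

An LLM answering the same question predicts tokens statistically, which is exactly where confident-sounding mistakes can creep in.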

He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."

This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes.

Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.

Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."

Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.

"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society.

"They have all this private data, so they can sell that as a consolation prize."

Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much.

"They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said.

"But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."



Italy Watchdog Orders Meta to Halt WhatsApp Terms Barring Rival AI Chatbots

The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. (Reuters)

Italy's antitrust authority (AGCM) on Wednesday ordered Meta Platforms to suspend contractual terms that could shut rival AI chatbots out of WhatsApp, as it investigates the US tech group for suspected abuse of a dominant position.

A spokesperson for Meta called the decision "fundamentally flawed," and said the emergence of AI chatbots "put a strain on our systems that they were not designed to support".

"We ‌will ⁠appeal," ​the ‌spokesperson added.

The move is the latest in a string of actions by European regulators against Big Tech firms, as the EU seeks to balance support for the sector with efforts to curb its expanding influence.

Meta's conduct appeared capable of restricting "output, market access or technical development in the AI chatbot services market", potentially harming consumers, AGCM said.

In July, the Italian regulator opened the investigation into Meta over the suspected abuse of a dominant position related to WhatsApp. It widened the probe in November to cover updated terms for the messaging app's business platform.

"These contractual conditions completely exclude Meta AI's competitors in the AI chatbot services ⁠market from the WhatsApp platform," the watchdog said.

EU antitrust regulators launched a parallel investigation into Meta last month over the same allegations.

Europe's tough stance - a marked contrast to more lenient US regulation - has sparked industry pushback, particularly from US tech titans, and drawn criticism from the administration of US President Donald Trump.

The Italian watchdog said it was coordinating with the European Commission to ensure Meta's conduct was addressed "in the most effective manner".


Amazon Says Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked more than 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a post on LinkedIn, Amazon's Chief Security Officer Stephen Schmidt said last week that North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen a nearly one-third rise in applications by North Koreans in the past year, AFP reported.

The North Koreans typically use "laptop farms" -- computers in the United States operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.


KAUST Scientists Develop AI-Generated Data to Improve Environmental Disaster Tracking

King Abdullah University of Science and Technology (KAUST) logo

King Abdullah University of Science and Technology (KAUST) and SARsatX, a Saudi company specializing in Earth observation technologies, have developed computer-generated data to train deep learning models to predict oil spills.

According to KAUST, validating the use of synthetic data is crucial for monitoring environmental disasters, as early detection and rapid response can significantly reduce the risks of environmental damage.

Dr. Matthew McCabe, Dean of the Biological and Environmental Science and Engineering Division at KAUST, noted that one of the biggest challenges in environmental applications of artificial intelligence is the shortage of high-quality training data.

He explained that this challenge can be addressed by using deep learning to generate synthetic data from a very small sample of real data and then training predictive AI models on it.
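
As a rough illustration of that pipeline, the sketch below (Python with NumPy) fits a simple generative model to a handful of real samples, draws many synthetic ones from it, and trains a classifier on the enlarged set. The two-dimensional features, class labels and Gaussian generator are hypothetical stand-ins; KAUST's actual work uses deep learning on Earth-observation data, which this toy does not attempt to reproduce.

# Minimal sketch of synthetic-data augmentation: fit a simple generative
# model to a small real sample, sample from it, and train on the result.
import numpy as np

rng = np.random.default_rng(0)

# A "very small sample of real data": illustrative feature vectors for
# two classes, e.g. summary statistics from satellite image patches.
real_spill = rng.normal([0.2, 0.8], 0.05, size=(5, 2))
real_clean = rng.normal([0.7, 0.3], 0.05, size=(5, 2))

def synthesize(real: np.ndarray, n: int) -> np.ndarray:
    """Draw synthetic samples from a Gaussian fit to the real data.
    A deep generative model would take this role in a real system."""
    mean, cov = real.mean(axis=0), np.cov(real.T)
    return rng.multivariate_normal(mean, cov, size=n)

# Enlarge each class 100x with synthetic data, then train a simple
# nearest-centroid classifier on the combined real + synthetic set.
X_spill = np.vstack([real_spill, synthesize(real_spill, 500)])
X_clean = np.vstack([real_clean, synthesize(real_clean, 500)])
centroids = {"spill": X_spill.mean(axis=0), "clean": X_clean.mean(axis=0)}

def predict(x: np.ndarray) -> str:
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0.25, 0.75])))  # expected: spill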

This approach can significantly enhance efforts to protect the marine environment by enabling faster and more reliable monitoring of oil spills while reducing the logistical and environmental challenges associated with data collection.