US Judge Finds Israel's NSO Group Liable for Hacking in WhatsApp Lawsuit

Israeli cyber firm NSO Group's exhibition stand is seen at "ISDEF 2019", an international defense and homeland security expo, in Tel Aviv, Israel June 4, 2019. REUTERS/Keren Manor/File Photo

A US judge ruled on Friday in favor of Meta Platforms' WhatsApp in a lawsuit accusing Israel's NSO Group of exploiting a bug in the messaging app to install spy software allowing unauthorized surveillance.

US District Judge Phyllis Hamilton in Oakland, California, granted a motion by WhatsApp and found NSO liable for hacking and breach of contract.

The case will now proceed to a trial only on the issue of damages, Hamilton said. NSO Group did not immediately respond to an emailed request for comment, according to Reuters.

Will Cathcart, the head of WhatsApp, said the ruling is a win for privacy.

"We spent five years presenting our case because we firmly believe that spyware companies could not hide behind immunity or avoid accountability for their unlawful actions," Cathcart said in a social media post.

"Surveillance companies should be on notice that illegal spying will not be tolerated."

Cybersecurity experts welcomed the judgment.

John Scott-Railton, a senior researcher with Canadian internet watchdog Citizen Lab — which first brought to light NSO’s Pegasus spyware in 2016 — called the judgment a landmark ruling with “huge implications for the spyware industry.”

“The entire industry has hidden behind the claim that whatever their customers do with their hacking tools, it's not their responsibility,” he said in an instant message. “Today's ruling makes it clear that NSO Group is in fact responsible for breaking numerous laws.”

WhatsApp in 2019 sued NSO seeking an injunction and damages, accusing it of accessing WhatsApp servers without permission six months earlier to install the Pegasus software on victims' mobile devices. The lawsuit alleged the intrusion allowed the surveillance of 1,400 people, including journalists, human rights activists and dissidents.

NSO had argued that Pegasus helps law enforcement and intelligence agencies fight crime and protect national security and that its technology is intended to help catch terrorists, pedophiles and hardened criminals.

NSO appealed a trial judge's 2020 refusal to award it "conduct-based immunity," a common law doctrine protecting foreign officials acting in their official capacity.

Upholding that ruling in 2021, the San Francisco-based 9th US Circuit Court of Appeals called it an "easy case" because NSO's mere licensing of Pegasus and offering technical support did not shield it from liability under a federal law called the Foreign Sovereign Immunities Act, which took precedence over common law.

The US Supreme Court last year turned away NSO's appeal of the lower court's decision, allowing the lawsuit to proceed.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media platform Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified Reddit as a source of high-quality training data.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to hit Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies over the use of their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.