A Former OpenAI Leader Says Safety Has 'Taken a Backseat to Shiny Products' at the AI Company

OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods. - The AP.

A former OpenAI leader who resigned from the company earlier this week said that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.

Jan Leike, who ran OpenAI's “Superalignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday, The AP reported.

An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies. He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”

“OpenAI must become a safety-first AGI company,” wrote Leike, using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.

OpenAI CEO Sam Altman wrote in a reply to Leike's posts that he was “super appreciative” of Leike's contributions to the company and was “very sad to see him leave.”

Leike is “right we have a lot more to do; we are committed to doing it,” Altman said, pledging to write a longer post on the subject in the coming days.

The company also confirmed Friday that it had disbanded Leike's Superalignment team, which was launched last year to focus on AI risks, and is integrating the team's members across its research efforts.

Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade. Sutskever was one of four board members last fall who voted to push out Altman — only to quickly reinstate him. It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.

Sutskever said he is working on a new project that is meaningful to him, without offering additional details. He will be replaced by Jakub Pachocki as chief scientist. Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”



US Judge Finds Israel's NSO Group Liable for Hacking in WhatsApp Lawsuit

Israeli cyber firm NSO Group's exhibition stand is seen at "ISDEF 2019", an international defense and homeland security expo, in Tel Aviv, Israel June 4, 2019. REUTERS/Keren Manor/File Photo

A US judge ruled on Friday in favor of Meta Platforms' WhatsApp in a lawsuit accusing Israel's NSO Group of exploiting a bug in the messaging app to install spy software allowing unauthorized surveillance.

US District Judge Phyllis Hamilton in Oakland, California, granted a motion by WhatsApp and found NSO liable for hacking and breach of contract.

The case will now proceed to a trial only on the issue of damages, Hamilton said. NSO Group did not immediately respond to an emailed request for comment, according to Reuters.

Will Cathcart, the head of WhatsApp, said the ruling is a win for privacy.

"We spent five years presenting our case because we firmly believe that spyware companies could not hide behind immunity or avoid accountability for their unlawful actions," Cathcart said in a social media post.

"Surveillance companies should be on notice that illegal spying will not be tolerated."

Cybersecurity experts welcomed the judgment.

John Scott-Railton, a senior researcher with Canadian internet watchdog Citizen Lab — which first brought to light NSO’s Pegasus spyware in 2016 — called the judgment a landmark ruling with “huge implications for the spyware industry.”

“The entire industry has hidden behind the claim that whatever their customers do with their hacking tools, it's not their responsibility,” he said in an instant message. “Today's ruling makes it clear that NSO Group is in fact responsible for breaking numerous laws.”

WhatsApp in 2019 sued NSO seeking an injunction and damages, accusing it of accessing WhatsApp servers without permission six months earlier to install the Pegasus software on victims' mobile devices. The lawsuit alleged the intrusion allowed the surveillance of 1,400 people, including journalists, human rights activists and dissidents.

NSO had argued that Pegasus helps law enforcement and intelligence agencies fight crime and protect national security and that its technology is intended to help catch terrorists, pedophiles and hardened criminals.

NSO appealed a trial judge's 2020 refusal to award it "conduct-based immunity," a common law doctrine protecting foreign officials acting in their official capacity.

Upholding that ruling in 2021, the San Francisco-based 9th US Circuit Court of Appeals called it an "easy case" because NSO's mere licensing of Pegasus and offering technical support did not shield it from liability under a federal law called the Foreign Sovereign Immunities Act, which took precedence over common law.

The US Supreme Court last year turned away NSO's appeal of the lower court's decision, allowing the lawsuit to proceed.