OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home where they both lived in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death litigation involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. The estate is seeking unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.



Software Companies Fight Back Against Fears that AI Will Kill Them


Oracle's Mike Sicilia is the latest software CEO to wade into the debate on whether artificial intelligence tools that heavily automate human tasks will mean the demise of his industry. His verdict was a resounding "no."

"You've all heard ... that new companies coding quickly using AI will spell the death of SaaS (software as a service)," he told analysts on a conference call on Tuesday. "I don't agree with that at all. I do think that AI tools and their coding capabilities would be a threat if we weren't adopting them, but we are, and very rapidly."

Sicilia was responding to Wall Street concerns that new AI tools can now perform some of the tasks that traditional software companies' products were built for, such as organizing customer information or guiding people through business processes.

Those worries led to a nearly $1 trillion rout in software stocks last month after heavyweight AI startup Anthropic introduced AI plugins for its Claude Cowork agent, a digital assistant that can automate such tasks. CEOs of software companies have since used their post-earnings conference calls to fight back.

Sicilia also laid out a case that Oracle was ahead of its smaller rival Salesforce, saying his company was using AI to actually build new products and automate full business processes, not just add AI features on top of existing tools.

Salesforce, for its part, has offered a different defense, with CEO Marc Benioff last month telling analysts that his company will outlast any so-called SaaS-pocalypse, a term for last month's share rout that hit software-as-a-service companies.

Benioff brought in Salesforce customers who described the company as having transformed itself into an enterprise platform that builds, deploys and governs AI agents, using its mountains of proprietary customer and sales-process data. Even Jensen Huang, an AI pioneer and the CEO of chipmaker Nvidia, last month dismissed fears that AI would replace software and related tools, calling the idea "illogical."

UNIQUE DATA IS THE BEST DEFENSE

Oracle predicted on Tuesday that the AI boom would power its revenue for several quarters to come, sending its shares up 10% on Wednesday. The company owns deep enterprise data across finance, supply chain and human resources, which is hard for AI to replicate.

Oracle offers cheaper, efficient cloud systems and a database that can run on any major cloud, said Rebecca Wettemann, CEO of technology research firm Valoir. "That flexibility gives customers choice - and that's a powerful position to be in as the AI ecosystem evolves," she said.

Nearly a dozen tech analysts and investors surveyed by Reuters said the owners of years of exclusive financial, legal, design, or technical data likely have the best defense.

"Proprietary data is the deepest moat by far," said James St. Aubin, chief investment officer at Ocean Park Asset Management.

In the case of Salesforce, while startups are nibbling away at the company's dominance in the customer-relationship software sector, its software remains deeply embedded in corporate systems, with its real-time data platform managing more than 50 trillion records. It is also trying to reinvent itself as an AI-agent company through its Agentforce service - still a small business.

Some analysts said Salesforce is also hard to replace because businesses have spent years building their day-to-day operations around the company's products and the cost of switching away is high.

But AI is beginning to erode that barrier, making it easier to generate code and build applications with far less human effort and expense.

While businesses experiment with isolated AI tools, Salesforce has built a comprehensive system that helps it stand out, said Madhav Thattai, executive vice president of Salesforce AI, adding that the company benefits from decades of enterprise experience.

Oracle did not return emails seeking comment.

NOT ALL IS DOOM AND GLOOM

But concerns about the demise of traditional software companies have lingered, and analysts said not all data is equal.

Employee data and payroll company Workday has plenty of data, but analysts said its core products run on HR and payroll data, which tend to follow uniform, industry-standard formats. That means an AI company can more easily learn from or replicate tools built on that kind of data.

Workday brought back its founder, Aneel Bhusri, as CEO last month to lead the company "in the rapidly evolving AI era."

But the company's shares have declined by more than a third this year, hitting more than a five-year low last month after a sluggish sales forecast. Bhusri said last month that Workday systems embed two decades of business processes that AI cannot replicate.

"AI, for all of its incredible capabilities, is probabilistic by nature," he told analysts on the post-earnings conference call. "It reasons, predicts and recommends based on patterns and likelihoods. Maybe it will eventually become a state machine - a system that follows the same steps and gets the same result, every time - but it is not there today."

Asked for a comment for this story, a Workday spokesperson referred Reuters to Bhusri's comments on the call.

Some analysts believe the enterprise software industry will prove more resilient than valuations currently indicate, arguing that higher productivity brought by AI could spur hiring and growth.

"I would not write the obituary for some of these companies just yet because there is an opportunity for them to reinvent themselves with AI," Ocean Park's St. Aubin said.


Meta Unveils Plans for Batch of In-house AI Chips

Mark Zuckerberg outside the court where he testified in a landmark trial (Reuters)

Meta Platforms on Wednesday unveiled a roadmap of four new chips that the company is making in-house, as it rapidly expands its data centers.

Like many big tech companies such as Alphabet and Microsoft, Meta has invested heavily in building a team that can design chips in-house in addition to purchasing off-the-shelf products made by Nvidia and Advanced Micro Devices.

Making chips designed to tackle the specific types of data crunching Meta requires can yield designs that use less energy at lower cost.

The new chips are part of the company's Meta Training and Inference Accelerator (MTIA) program. The first of them, called the MTIA 300, is already in use powering the company's ranking and recommendation systems. The other three will be rolled out this year and in 2027, with the final two, the MTIA 450 and 500, designed to perform inference, the process by which an AI model such as the one that powers the ChatGPT app responds to customer queries and requests.

"We see inference demand exploding at the moment and that's what we're currently focused on," Yee Jiun Song, Meta's vice president of engineering, said in an interview.

Meta has had some success with inference chips but has struggled with its long-time ambitions to make a generative AI training chip, capable of building the large models that power AI apps.

Beginning with the MTIA 400, which the company says is on the path to being used in its data centers, Meta has designed an entire system around the chips, which is roughly the size of several server racks and includes a version of liquid cooling.

The company plans to release the new chips at six-month intervals because it is rapidly expanding the number of data centers it uses to run apps like Instagram and Facebook, Song said.

"That is the reality of how quickly our infrastructure is being built out," Song said.

The company said in January it expects capital spending of between $115 billion and $135 billion this year.

Meta contracts with Broadcom to help with some elements of the designs, though Song did not specify which chips. The company uses Taiwan Semiconductor Manufacturing Co to fabricate the processors.

In February, Meta signed big deals with Nvidia and AMD to buy tens of billions of dollars worth of chips.


SDAIA Unveils Logo for Saudi Arabia's Year of Artificial Intelligence 2026

The logo integrates symbolism in its elements

The Saudi Data and AI Authority (SDAIA) has launched the official logo for the Year of Artificial Intelligence 2026, after it was approved by the Cabinet.

This move underscores the Kingdom’s commitment to advancing artificial intelligence, reinforcing its role as a global hub in data and AI, and highlighting key achievements in this cutting-edge sector.

The logo integrates symbolism in its elements: the palm tree signifies the national emblem and the Kingdom’s cultural heritage, while the letters ‘AI’ highlight the technological and innovative aspects central to promoting digital inclusion as part of Vision 2030.

The palm tree’s green color symbolizes the Saudi flag and the Kingdom’s national identity, while the accompanying blue color represents digital technology and the Kingdom’s progression toward advanced technological development.

The logo is accompanied by the official hashtag for the Year of Artificial Intelligence: #SaudiAIYear.