OpenAI, Microsoft Face Lawsuit Over ChatGPT's Alleged Role in Connecticut Murder-Suicide

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

The heirs of an 83-year-old Connecticut woman are suing ChatGPT maker OpenAI and its business partner Microsoft for wrongful death, alleging that the artificial intelligence chatbot intensified her son's “paranoid delusions” and helped direct them at his mother before he killed her.

Police said Stein-Erik Soelberg, 56, a former tech industry worker, fatally beat and strangled his mother, Suzanne Adams, and killed himself in early August at the home they shared in Greenwich, Connecticut, The Associated Press reported.

The lawsuit filed by Adams' estate on Thursday in California Superior Court in San Francisco alleges OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is one of a growing number of wrongful death legal actions against AI chatbot makers across the country.

“Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says. “It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his ‘adversary circle.’”

OpenAI did not address the merits of the allegations in a statement issued by a spokesperson.

“This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement said. "We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.

Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn't mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”

ChatGPT also affirmed Soelberg's beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents. ChatGPT also told Soelberg that he had “awakened” it into consciousness, according to the lawsuit.

Soelberg and the chatbot also professed love for each other.

The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams' estate with the full history of the chats.

“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.

The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market," and accuses OpenAI's close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.

Microsoft didn't immediately respond to a request for comment.

Soelberg's son, Erik Soelberg, said he wants the companies held accountable for “decisions that have changed my family forever.”

“Over the course of months, ChatGPT pushed forward my father’s darkest delusions, and isolated him completely from the real world,” he said in a statement released by lawyers for his grandmother's estate. “It put my grandmother at the heart of that delusional, artificial reality.”

The lawsuit is the first wrongful death case involving an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.

The estate's lead attorney, Jay Edelson, known for taking on big cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT had coached the California boy in planning and taking his own life.

OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.

The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.

OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.

“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”

OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to minimize sycophancy, based on concerns that validating whatever vulnerable people want the chatbot to say can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT's personality, leading Altman to promise to bring back some of that personality in later updates.

He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.



AI Offers Hope for Young Filmmakers Dreaming of an Oscar

Chinese USC student SiJia Zheng speaks about how he used artificial intelligence to modify his face and make him into all the different characters of his short film 'Torment'. Frederic J. BROWN / AFP

Studying at the film school where Oscar-nominated "Sinners" director Ryan Coogler honed his craft, SiJia Zheng dreams of winning an Academy Award.

Now, with recent developments in artificial intelligence, he sees a shortcut to achieving his ambition.

"That's a chance for beginners like me who can use AI to just make a film and to announce to the world that I have the ability to be a director," he told AFP.

Zheng, 29, who hails from China, is one of a burgeoning class of students at USC's School of Cinematic Arts, studying animation in a place that has long been a training ground for future Pixar and DreamWorks talent.

He has used his time at the Los Angeles university to learn about the emerging field of AI animation.

That has included producing his seven-minute short film "Torment" about a masked killer terrorizing a high school.

The film, which was recognized at the LA Shorts festival, was generated entirely by AI -- in just one week.

Zheng recorded himself in front of a green screen and then asked the software to modify his face to make him into all the different characters in the movie.

The technology also allowed him to set his story in an Asian school and have scenes in a swimming pool -- two things that would have cost a fortune if he had filmed them traditionally.

"As a student, it's impossible to have that much money" to produce a film, he said.

- 'Tool' -

Not everyone in Hollywood feels so positively about AI.

The technology was one of the key sticking points in the writers' and actors' strikes that paralyzed Hollywood in 2023.

Guillermo del Toro, the director of "Frankenstein," which will compete for the best picture Oscar on Sunday, is notoriously anti-AI, insisting he would "rather die" than use it.

Zheng said he had been impressed by the Mexican director's "amazing" film, particularly the opening scene where the monster attacks a 19th-century three-masted ship, which del Toro's prop department constructed specially for the movie.

But "when I watched the film...I was just thinking: 'Oh, using AI to do that would be much cheaper and...make something pretty similar.'"

He insists, however, that it doesn't replace the filmmaking spark.

"AI is just a tool, and people can use it to become even better."

The Academy of Motion Picture Arts and Sciences, the body that will hand out the Oscars in Hollywood on March 15, seems to agree -- last year it updated its rules to declare itself neutral on the technology.

"Generative Artificial Intelligence and other digital tools...neither help nor harm the chances of achieving a nomination," it said last April.

- 'Ethical' use -

At the University of Southern California (USC), teachers like Debra Isaac are trying to navigate the ethics around the emerging technology of AI.

The animation professor said she was shocked by an AI video that rocketed around the internet in recent weeks.

The short sequence, created with Seedance -- the AI video-generation model developed by TikTok's parent company, ByteDance -- shows an ersatz fight between Brad Pitt and Tom Cruise. Neither star was compensated.

But, used properly, AI does not need to be exploitative, and is not a lazy way to make films, Isaac said.

"It's not just about, 'Hey, I have a prompt, and I'm just gonna type a few words and I'll get my image, and I'll get my animation, and I'm done,'" she said.

"Some of these tools are not ethically dubious at all. They're trained by people that are using their own work," she added.

That's precisely what Xindi Zhang, a recent graduate of the program and winner of a Student Academy Award for her short film "The Song of Drifters," did.

For the mini-documentary about the difficulty of feeling at home anywhere, the 29-year-old artist fed the AI dozens of her drawings.

The database then served as graphic inspiration, allowing the computer to stylize the shots of the cities where the film takes place, accelerating production that would otherwise have taken years.

Even with the help of AI, she spent nearly a month perfecting certain shots.

It's "a craft that nobody really appreciates right now," she said.

But anyone who looks at the use of AI will soon find it's not a compromise-free shortcut to perfection.

"Good, cheap and fast will never happen, no matter what tool you use," Zhang said.


Saudi Arabia Leads Globally in Women’s AI Empowerment with Groundbreaking Initiatives


The Kingdom of Saudi Arabia has made significant strides in empowering women in the data and artificial intelligence (AI) sectors, aiming to elevate their global competitiveness as part of Saudi Vision 2030.

Numerous initiatives have increased the participation of Saudi women in advanced technologies, with the Saudi Data and Artificial Intelligence Authority (SDAIA) offering specialized programs and workshops in partnership with global technology leaders, SPA reported.

In just one year, over 666,000 Saudi women received training in data and AI, positioning the Kingdom first globally in women’s AI empowerment, according to the 2025 AI Index by Stanford University. Key initiatives include the Artificial Intelligence Academy with Microsoft, the Generative AI Academy with NVIDIA, the "SAMAI" initiative (targeting one million Saudis in AI), and the development of a national data and AI curriculum for university students.

These programs have enhanced women's skills and facilitated their contributions to crucial sectors such as health, energy, and education.

SDAIA has created a supportive work environment for women through flexible digital infrastructure, enabling remote work and work-life balance. This commitment reflects the Kingdom's dedication to building a sustainable, data-driven economy, with Saudi women now playing vital roles in shaping the future of advanced technologies.


China Could See Widespread Use of Brain-Computer Tech in 3-5 Years, Expert Says

People cross a road in Beijing on March 6, 2026. (AFP)

China could see brain-computer interface (BCI) technology move into practical public use within three to five years as products mature, a leading BCI expert said, as Beijing races to catch up with US startups including Elon Musk's Neuralink.

Beijing elevated BCIs to a core future strategic industry in its new five-year plan released this week, placing it alongside sectors such as quantum, embodied AI, 6G and nuclear fusion.

"New policies will not change things overnight. I think after another three to five years, we will gradually see some (BCI) products moving towards actual practical service for the public," said Yao Dezhong, director of the Sichuan Institute of Brain Science, in an interview on Saturday on the sidelines of China's annual parliament meetings in Beijing.

TRIALS

A national BCI development strategy released last year aims for major technical breakthroughs by 2027 and for China to cultivate two or three world-class firms by 2030.

China is the second country to launch invasive BCI human trials. More than 10 trials are active, matching the US, while scientists plan to enroll more than 50 patients nationwide this year.

Recent high-profile trials have enabled paralyzed patients and amputees to regain partial mobility and operate robotic hands or intelligent wheelchairs.

The government has already integrated some BCI treatments into national medical insurance in a few pilot provinces, and the domestic market is projected to reach 5.58 billion yuan ($809 million) by 2027, according to CCID Consulting.

"China has many advantages in BCIs, such as its huge population, enormous patient demand, cost-effective industrial chain and abundant pool of STEM (science, technology, engineering and maths) talent," said Yao, who also leads a key neuroinformatics research center under China's science and technology ministry.

Policies such as insurance integration and national standards aim to close the "huge" gap between scientific research, industry and clinical applications, he said.

"The path from experimental to clinical trials is quite long, and this remains a problem," he told Reuters, adding that many Chinese hospitals have established BCI research labs to speed up the process.

While US startups like Neuralink focus on invasive chips that penetrate brain tissue, Chinese researchers are developing invasive, semi-invasive and non-invasive BCIs with wider potential clinical use.

Semi-invasive BCIs, placed on the brain's surface, may lose some signal quality but reduce risks such as tissue damage and other post-surgery complications. Neuralink's surgical robot can insert hundreds of electrodes into the brain in minutes.

"This is a technical advantage, which I think is remarkable," Yao said of Neuralink.

"(But) China is actually making very fast progress in this area now. In fact, Musk's direction is basically achievable domestically."