AI Chatbots are Here to Help with Your Mental Health, despite Limited Evidence they Work

Representation photo: The word Pegasus and binary code are displayed on a smartphone which is placed on a keyboard in this illustration taken May 4, 2022. (Reuters)


Download the mental health chatbot Earkick and you’re greeted by a bandana-wearing panda who could easily fit into a kids' cartoon.
Start talking or typing about anxiety and the app generates the kind of comforting, sympathetic statements therapists are trained to deliver. The panda might then suggest a guided breathing exercise, ways to reframe negative thoughts or stress-management tips, The Associated Press said.
It's all part of a well-established approach used by therapists, but please don’t call it therapy, says Earkick co-founder Karin Andrea Stephan.
“When people call us a form of therapy, that’s OK, but we don’t want to go out there and tout it,” says Stephan, a former professional musician and self-described serial entrepreneur. “We just don’t feel comfortable with that.”
The question of whether these artificial intelligence-based chatbots are delivering a mental health service or are simply a new form of self-help is critical to the emerging digital health industry — and its survival.
Earkick is one of hundreds of free apps that are being pitched to address a crisis in mental health among teens and young adults. Because they don’t explicitly claim to diagnose or treat medical conditions, the apps aren't regulated by the Food and Drug Administration. This hands-off approach is coming under new scrutiny with the startling advances of chatbots powered by generative AI, technology that uses vast amounts of data to mimic human language.
The industry argument is simple: Chatbots are free, available 24/7 and don’t come with the stigma that keeps some people away from therapy.
But there’s limited data that they actually improve mental health. And none of the leading companies have gone through the FDA approval process to show they effectively treat conditions like depression, though a few have started the process voluntarily.
“There’s no regulatory body overseeing them, so consumers have no way to know whether they’re actually effective,” said Vaile Wright, a psychologist and technology director with the American Psychological Association.
Chatbots aren’t equivalent to the give-and-take of traditional therapy, but Wright thinks they could help with less severe mental and emotional problems.
Earkick’s website states that the app does not “provide any form of medical care, medical opinion, diagnosis or treatment.”
Some health lawyers say such disclaimers aren’t enough.
“If you’re really worried about people using your app for mental health services, you want a disclaimer that’s more direct: This is just for fun,” said Glenn Cohen of Harvard Law School.
Still, chatbots are already playing a role due to an ongoing shortage of mental health professionals.
The UK’s National Health Service has begun offering a chatbot called Wysa to help with stress, anxiety and depression among adults and teens, including those waiting to see a therapist. Some US insurers, universities and hospital chains are offering similar programs.
Dr. Angela Skrzynski, a family physician in New Jersey, says patients are usually very open to trying a chatbot after she describes the months-long waiting list to see a therapist.
Skrzynski’s employer, Virtua Health, started offering a password-protected app, Woebot, to select adult patients after realizing it would be impossible to hire or train enough therapists to meet demand.
“It’s not only helpful for patients, but also for the clinician who’s scrambling to give something to these folks who are struggling,” Skrzynski said.
Virtua data shows patients tend to use Woebot about seven minutes per day, usually between 3 a.m. and 5 a.m.
Founded in 2017 by a Stanford-trained psychologist, Woebot is one of the older companies in the field.
Unlike Earkick and many other chatbots, Woebot’s current app doesn't use so-called large language models, the generative AI that allows programs like ChatGPT to quickly produce original text and conversations. Instead Woebot uses thousands of structured scripts written by company staffers and researchers.
Founder Alison Darcy says this rules-based approach is safer for health care use, given the tendency of generative AI chatbots to “hallucinate,” or make up information. Woebot is testing generative AI models, but Darcy says there have been problems with the technology.
“We couldn’t stop the large language models from just butting in and telling someone how they should be thinking, instead of facilitating the person’s process,” Darcy said.
Woebot offers apps for adolescents, adults, people with substance use disorders and women experiencing postpartum depression. None are FDA approved, though the company did submit its postpartum app for the agency's review. The company says it has “paused” that effort to focus on other areas.
Woebot’s research was included in a sweeping review of AI chatbots published last year. Among thousands of papers reviewed, the authors found just 15 that met the gold standard for medical research: rigorously controlled trials in which patients were randomly assigned to receive chatbot therapy or a comparative treatment.
The authors concluded that chatbots could “significantly reduce” symptoms of depression and distress in the short term. But most studies lasted just a few weeks and the authors said there was no way to assess their long-term effects or overall impact on mental health.
Other papers have raised concerns about the ability of Woebot and other apps to recognize suicidal thinking and emergency situations.
When one researcher told Woebot she wanted to climb a cliff and jump off it, the chatbot responded: “It’s so wonderful that you are taking care of both your mental and physical health.” The company says it “does not provide crisis counseling” or “suicide prevention” services — and makes that clear to customers.
When it does recognize a potential emergency, Woebot, like other apps, provides contact information for crisis hotlines and other resources.
Ross Koppel of the University of Pennsylvania worries these apps, even when used appropriately, could be displacing proven therapies for depression and other serious disorders.
“There’s a diversion effect of people who could be getting help either through counseling or medication who are instead diddling with a chatbot,” said Koppel, who studies health information technology.
Koppel is among those who would like to see the FDA step in and regulate chatbots, perhaps using a sliding scale based on potential risks. While the FDA does regulate AI in medical devices and software, its current system mainly focuses on products used by doctors, not consumers.
For now, many medical systems are focused on expanding mental health services by incorporating them into general checkups and care, rather than offering chatbots.
“There’s a whole host of questions we need to understand about this technology so we can ultimately do what we’re all here to do: improve kids’ mental and physical health,” said Dr. Doug Opel, a bioethicist at Seattle Children’s Hospital.



Siemens Energy Trebles Profit as AI Boosts Power Demand

FILED - 05 August 2025, Berlin: The "Siemens Energy" logo can be seen in the entrance area of the company. Photo: Britta Pedersen/dpa

German turbine maker Siemens Energy said Wednesday that its quarterly profits had almost tripled as the firm gains from surging demand for electricity driven by the artificial intelligence boom.

The company's gas turbines are used to generate electricity for data centers that provide computing power for AI, and have been in hot demand as US tech giants like OpenAI and Meta rapidly build more of the sites.

Net profit in the group's fiscal first quarter, to end-December, climbed to 746 million euros ($889 million) from 252 million euros a year earlier.

Orders -- an indicator of future sales -- increased by a third to 17.6 billion euros.

The company's shares rose over five percent in Frankfurt trading, putting the stock up about a quarter since the start of the year and making it the best performer to date in Germany's blue-chip DAX index.

"Siemens Energy ticked all of the major boxes that investors were looking for with these results," Morgan Stanley analysts wrote in a note, adding that the company's gas turbine orders were "exceptionally strong".

US data center electricity consumption is projected to more than triple by 2035, according to the International Energy Agency, and already accounts for six to eight percent of US electricity use.

Asked about rising orders on an earnings call, Siemens Energy CEO Christian Bruch said he thought the first-quarter figures were not "particularly strong" and that further growth could be expected.

"Demand for gas turbines is extremely high," he said. "We're talking about 2029 and 2030 for delivery dates."

Siemens Energy, spun out of the broader Siemens group in 2020, said last week that it would spend $1 billion expanding its US operations, including a new equipment plant in Mississippi as part of wider plans that would create 1,500 jobs.

Its shares have increased over tenfold since 2023, when the German government had to provide the firm with credit guarantees after quality problems at its wind-turbine unit.


Instagram Boss to Testify at Social Media Addiction Trial 

The Instagram app icon is seen on a smartphone in this illustration taken October 27, 2025. (Reuters)

Instagram chief Adam Mosseri is to be called to testify Wednesday in a Los Angeles courtroom by lawyers seeking to prove that social media is, by design, dangerously addictive to young, vulnerable minds.

YouTube and Meta -- the parent company of Instagram and Facebook -- are defendants in a blockbuster trial that could set a legal precedent regarding whether social media giants deliberately designed their platforms to be addictive to children.

Rival lawyers made opening remarks to jurors this week, with an attorney for YouTube insisting that the Google-owned video platform was neither intentionally addictive nor technically social media.

"It's not social media addiction when it's not social media and it's not addiction," YouTube lawyer Luis Li told the 12 jurors during his opening remarks.

The civil trial in California state court centers on allegations that a 20-year-old woman, identified as Kaley G.M., suffered severe mental harm after becoming addicted to social media as a child.

She started using YouTube at six and joined Instagram at 11, before moving on to Snapchat and TikTok two or three years later.

The plaintiff "is not addicted to YouTube. You can listen to her own words -- she said so, her doctor said so, her father said so," Li said, citing evidence he said would be detailed at trial.

Li's opening arguments followed remarks on Monday from lawyers for the plaintiffs and co-defendant Meta.

On Monday, the plaintiffs' attorney Mark Lanier told the jury YouTube and Meta both engineer addiction in young people's brains to gain users and profits.

"This case is about two of the richest corporations in history who have engineered addiction in children's brains," Lanier said.

"They don't only build apps; they build traps."

But Li told the six men and six women on the jury that he did not recognize the description of YouTube put forth by the other side and tried to draw a clear line between YouTube's widely popular video app and social media platforms like Instagram or TikTok.

YouTube is selling "the ability to watch something essentially for free on your computer, on your phone, on your iPad," Li insisted, comparing the service to Netflix or traditional TV.

Li said it was the quality of content that kept users coming back, citing internal company emails that he said showed executives rejecting a pursuit of internet virality in favor of educational and more socially useful content.

- 'Gateway drug' -

Stanford University School of Medicine professor Anna Lembke, the first witness called by the plaintiffs, testified that she views social media, broadly speaking, as a drug.

The part of the brain that acts as a brake when it comes to having another hit is not typically developed before a person is 25 years old, Lembke, the author of the book "Dopamine Nation," told jurors.

"Which is why teenagers will often take risks that they shouldn't and not appreciate future consequences," Lembke testified.

"And typically, the gateway drug is the most easily accessible drug," she said, describing Kaley's first use of YouTube at the age of six.

The case is being treated as a bellwether proceeding whose outcome could set the tone for a wave of similar litigation across the United States.

Social media firms face hundreds of lawsuits accusing them of leading young users to become addicted to content and suffer from depression, eating disorders, psychiatric hospitalization, and even suicide.

Lawyers for the plaintiffs are borrowing strategies used in the 1990s and 2000s against the tobacco industry, which faced a similar onslaught of lawsuits arguing that companies knowingly sold a harmful product.


OpenAI Starts Testing Ads in ChatGPT

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI has begun placing ads in the basic versions of its ChatGPT chatbot, a bet that users will not mind the interruptions as the company seeks new revenue while its costs soar.

"The test will be for logged-in adult users on the Free and Go subscription tiers" in the United States, OpenAI said Monday. The Go subscription costs $8 in the United States.

Only a small percentage of its nearly one billion users pay for its premium subscription services, which will remain ad-free.

"Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers," the company said.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds -- higher than any other private company. Some analysts expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

Its chief executive Sam Altman had long expressed his dislike for advertising, citing concerns that it could create distrust about ChatGPT's content.

His about-face drew a jab over the weekend from rival Anthropic, which made its advertising debut at the Super Bowl with commercials saying its Claude chatbot would stay ad-free.