Social Media ‘Addicting the Brains of Children,’ Plaintiff’s Lawyer Argues in Landmark Trial

Teenagers pose for a photo while holding smartphones in front of a Meta logo in this illustration taken September 11, 2025. (Reuters)

Comparing social media platforms to casinos and addictive drugs, lawyer Mark Lanier delivered opening statements Monday in a landmark trial in Los Angeles that seeks to hold Instagram owner Meta and Google's YouTube responsible for harms to children who use their products.

The two companies face claims that their platforms addict children through deliberate design choices that keep kids glued to their screens. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants.

Meta lawyer Paul Schmidt spoke of the disagreement within the scientific community over social media addiction, with some researchers believing it doesn’t exist, or that addiction is not the most appropriate way to describe heavy social media use.

‘Addicting the brains of children’

Lanier, the plaintiff's lawyer, delivered lively first remarks where he said the case will be as “easy as ABC” — which stands for “addicting the brains of children.” He said Meta and Google, “two of the richest corporations in history,” have “engineered addiction in children’s brains.”

He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst” in which they surveyed 1,000 teens and their parents about their social media use.

The two major findings, Lanier said, were that Meta knew children who experienced “adverse events” like trauma and stress were particularly vulnerable to addiction, and that parental supervision and controls made little impact.

He also highlighted internal Google documents that likened some company products to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and they are “basically pushers.”

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases that let both sides see how their arguments fare before a jury.

Plaintiff grew up using YouTube, Instagram

KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time describing KGM's childhood, focusing particularly on what her personality was like before she began using social media.

She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.

The outcome of the trial could have profound effects on the companies' businesses and how they will handle children using their platforms.

Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media, which she claims had a detrimental impact on her mental health.

Lanier said that while Meta and YouTube publicly maintain that they work to protect children, their internal documents show an entirely different position, explicitly listing young children as target audiences.

The attorney also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s failure to act proactively on the potential harm its platforms can cause to children and teens.

“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

Meta pushes back

In his opening statement representing Meta, Schmidt said the core question in the case is whether the platforms were a substantial factor in KGM’s mental health struggles. He spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

Schmidt presented a clip from a video deposition from one of KGM‘s mental health providers, Dr. Thomas Suberman, who said social media was “not the through-line of what I recall being her main issues,” adding that her struggles seemed to largely stem from interpersonal conflicts and relationships.

He painted a picture — with KGM’s own text messages and testimony pointing to a volatile home life — of a particularly troubled relationship with her mother.

Schmidt acknowledged that many mental health professionals do believe social media addiction can exist, but said three of KGM’s providers — all of whom believe that form of addiction exists — have never diagnosed her with it or treated her for it.

Schmidt stressed to the jurors that the case is not about whether social media is a good thing, whether teens spend too much time on their phones, or whether the jurors like or dislike Meta, but whether social media was a substantial factor in KGM’s mental health struggles.

A reckoning for social media and youth harms

A slew of trials beginning this year seek to hold social media companies responsible for harming children's mental well-being. Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the Los Angeles trial, which will last six to eight weeks.

Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday. In that trial, Meta is accused of failing to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of the states filed their lawsuits in federal court, but some sued in their own state courts.

TikTok also faces similar lawsuits in more than a dozen states.



Report: Nvidia Nears Deal for Scaled-down Investment in OpenAI

Nvidia chief executive Jensen Huang has insisted that the AI chip powerhouse is committed to a big investment in ChatGPT-maker OpenAI. Lionel BONAVENTURE / AFP

Nvidia is on the cusp of investing $30 billion in OpenAI, scaling back a plan to pump $100 billion into the ChatGPT maker, the Financial Times reported Thursday.

The AI-chip powerhouse will be part of OpenAI's new funding round, with an agreement that could be concluded as early as this weekend, according to the newspaper, which cited unnamed sources close to the matter.

Nvidia declined to comment on the report.

Nvidia chief executive Jensen Huang has insisted that the US tech giant will make a "huge" investment in OpenAI and dismissed as "nonsense" reports that he is unhappy with the generative AI star.

Huang made the remarks late in January after the Wall Street Journal reported that Nvidia's plan to invest up to $100 billion in OpenAI had been put on ice.

Nvidia announced the plan in September, with the investment helping OpenAI build more infrastructure for next-generation artificial intelligence.

The funding round is reported to value OpenAI at some $850 billion.

Huang told journalists that the notion of Nvidia having doubts about a huge investment in OpenAI was "complete nonsense."

Huang insisted that Nvidia was going ahead with its investment in OpenAI, describing it as "one of the most consequential companies of our time".

"Sam is closing the round, and we will absolutely be involved in the round," Huang said, referring to OpenAI chief executive Sam Altman.

"We will invest a great deal of money."

Nvidia has become the most sought-after supplier of the processors needed to train and run the large language models (LLMs) behind chatbots like OpenAI's ChatGPT and Google Gemini.

LLM developers like OpenAI are directing much of the mammoth investment they have received into Nvidia's products, rushing to build GPU-stuffed data centers to serve an anticipated flood of demand for AI services.

The AI rush, and its frenzy of investment in giant data centers and the massive purchase of energy-intensive chips, continues despite signs of concern in the markets.


SDAIA President: Saudi Arabia Is Building an Integrated National AI Ecosystem in Line with Vision 2030 

SDAIA President Abdullah Al-Ghamdi delivers his remarks at Thursday's meeting. (SPA)

President of the Saudi Data and Artificial Intelligence Authority (SDAIA) Abdullah Al-Ghamdi stressed on Thursday that Saudi Arabia, guided by the objectives of its Vision 2030, is moving steadily to establish artificial intelligence (AI) as a trusted national capability.

The goal, he told a high-level ministerial meeting on the sidelines of the India AI Impact Summit 2026, is to use AI to improve government services, enhance competitiveness, build human capacity, and raise the quality of life through a comprehensive strategy built on three main pillars designed to unlock the technology's full potential and deliver sustainable developmental impact.

“The first pillar focuses on building human capacity and enhancing readiness to engage with AI technologies,” he said.

He added that the second is building an integrated national AI ecosystem that drives expansion and innovation by developing advanced digital infrastructure that enables various sectors to adopt AI applications efficiently, consistently, and with effective governance.

The third pillar is governance, which ensures responsible and measurable AI through a national framework aligned with international standards, he explained.

Al-Ghamdi headed the Kingdom’s delegation at the summit, which drew broad participation from heads of state, decision-makers, and technology leaders from around the world.


OpenAI's Altman Says World 'Urgently' Needs AI Regulation

OpenAI’s CEO Sam Altman speaks at the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (AP Photo)

Sam Altman, head of ChatGPT maker OpenAI, told a global artificial intelligence conference on Thursday that the world "urgently" needs to regulate the fast-evolving technology.

An organization could be set up to coordinate these efforts, similar to the International Atomic Energy Agency (IAEA), AFP quoted him as saying.

Altman is among a host of top tech CEOs in New Delhi for the AI Impact Summit, the fourth annual global meeting on how to handle advanced computing power.

Frenzied demand for generative AI has turbocharged profits for many companies while fueling anxiety about the risks to individuals and the planet.

"Democratization of AI is the best way to ensure humanity flourishes," Altman said, adding that "centralization of this technology in one company or country could lead to ruin".

"This is not to suggest that we won't need any regulation or safeguards," he said. "We obviously do, urgently, like we have for other powerful technologies."

Many researchers and campaigners say stronger action is needed to combat emerging issues, ranging from job disruption to sexualized deepfakes and AI-enabled online scams.

"We expect the world may need something like the IAEA for international coordination of AI," with the ability to "rapidly respond to changing circumstances", Altman said.

"The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power," he added.

"Technology always disrupts jobs; we always find new and better things to do."

Generative AI chatbot ChatGPT has 100 million weekly users in India, more than a third of whom are students, he said.

Earlier on Thursday, OpenAI and Indian IT giant Tata Consultancy Services (TCS) announced a plan to build data center infrastructure in the South Asian country.