Meta Buried ‘Causal’ Evidence of Social Media Harm, US Court Filings Allege

Meta and Facebook logos are seen in this illustration taken February 15, 2022. (Reuters)

Meta shut down internal research into Facebook’s mental health effects after finding causal evidence that its products harmed users, according to unredacted filings in a lawsuit brought by US school districts against Meta and other social media platforms.

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

“The Nielsen study does show causal impact on social comparison,” an unnamed staff researcher allegedly wrote, appending an unhappy-face emoji. Another staffer worried that keeping quiet about the negative findings would be akin to the tobacco industry “doing research and knowing cigs were bad and then keeping that info to themselves.”

Despite Meta’s own work documenting a causal link between its products and negative mental health effects, the filing alleges, Meta told Congress that it had no ability to quantify whether its products were harmful to teenage girls.

In a statement Saturday, Meta spokesman Andy Stone said the study was stopped because its methodology was flawed, and that the company had worked diligently to improve the safety of its products.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” he said.

PLAINTIFFS ALLEGE PRODUCT RISKS WERE HIDDEN

The allegation that Meta buried evidence of social media harms is just one of many in a filing made late Friday by Motley Rice, a law firm suing Meta, Google, TikTok and Snapchat on behalf of school districts around the country. Broadly, the plaintiffs argue the companies have intentionally hidden the internally recognized risks of their products from users, parents and teachers.

TikTok, Google and Snapchat did not immediately respond to a request for comment.

Allegations against Meta and its rivals include tacitly encouraging children below the age of 13 to use their platforms, failing to address child sexual abuse content and seeking to expand the use of social media products by teenagers while they were at school. The plaintiffs also allege that the platforms attempted to pay child-focused organizations to defend the safety of their products in public.

In one instance, TikTok sponsored the National PTA and then internally boasted about its ability to influence the child-focused organization. Per the filing, TikTok officials said the PTA would “do whatever we want going forward in the fall... [t]hey’ll announce things publicly, [t]heir CEO will do press statements for us.”

By and large, however, the allegations against the other social media platforms are less detailed than those against Meta. The internal documents cited by the plaintiffs allege:

1. Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.

2. Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold.”

3. Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.

4. For years, Meta stalled internal efforts to prevent child predators from contacting minors due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.

5. In a 2021 text message, Mark Zuckerberg said he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta’s then-head of global public policy, to better fund child safety work.

Meta’s Stone disputed these allegations, saying the company’s teen safety measures are effective and that the company’s current policy is to remove accounts as soon as they are flagged for sex trafficking.

He said the suit misrepresents the company’s efforts to build safety features for teens and parents, and he called its safety work “broadly effective.”

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions,” Stone said.

The underlying Meta documents cited in the filing are not public, and Meta has filed a motion to strike them. Stone said the company objects to the over-broad scope of what the plaintiffs are seeking to unseal, not to unsealing in its entirety.

A hearing on the filing is set for January 26 in the US District Court for the Northern District of California.



OpenAI's Altman Says World 'Urgently' Needs AI Regulation

OpenAI’s CEO Sam Altman speaks at the AI Summit in New Delhi, India, Thursday, Feb. 19, 2026. (AP Photo)

Sam Altman, head of ChatGPT maker OpenAI, told a global artificial intelligence conference on Thursday that the world "urgently" needs to regulate the fast-evolving technology.

An organization could be set up to coordinate these efforts, similar to the International Atomic Energy Agency (IAEA), AFP quoted him as saying.

Altman is among a host of top tech CEOs in New Delhi for the AI Impact Summit, the fourth annual global meeting on how to handle advanced computing power.

Frenzied demand for generative AI has turbocharged profits for many companies while fueling anxiety about the risks to individuals and the planet.

"Democratization of AI is the best way to ensure humanity flourishes," Altman said, adding that "centralization of this technology in one company or country could lead to ruin".

"This is not to suggest that we won't need any regulation or safeguards," he said. "We obviously do, urgently, like we have for other powerful technologies."

Many researchers and campaigners say stronger action is needed to combat emerging issues, ranging from job disruption to sexualized deepfakes and AI-enabled online scams.

"We expect the world may need something like the IAEA for international coordination of AI," with the ability to "rapidly respond to changing circumstances", Altman said.

"The next few years will test global society as this technology continues to improve at a rapid pace. We can choose to either empower people or concentrate power," he added.

"Technology always disrupts jobs; we always find new and better things to do."

Generative AI chatbot ChatGPT has 100 million weekly users in India, more than a third of whom are students, he said.

Earlier on Thursday, OpenAI and Indian IT giant Tata Consultancy Services (TCS) announced a plan to build data center infrastructure in the South Asian country.


Saudi Arabia Showcases Responsible Use of AI at AI Impact Summit in India

Saudi Arabia took part in a high-level session on harnessing artificial intelligence on the sidelines of the AI Impact Summit 2026 hosted by India.

Saudi Arabia, represented by the Saudi Data and Artificial Intelligence Authority (SDAIA), took part in a high-level session on harnessing artificial intelligence for people, planet, and progress on the sidelines of the AI Impact Summit 2026 hosted by India, the Saudi Press Agency reported on Wednesday.

The event drew participation from more than 70 countries and 25 international organizations, as well as senior decision-makers and technology experts.

The Saudi delegation, led by SDAIA President Dr. Abdullah Alghamdi, included Saudi Ambassador to India Haitham Al-Maliki and officials from relevant government entities.

The session aimed to launch a global network of specialized AI scientific institutions, accelerate discovery through advanced technologies, strengthen international cooperation among states and research bodies, and support the deployment of artificial intelligence to address global challenges and advance the United Nations 2030 Sustainable Development Goals (SDGs).

Dr. Abdulrahman Habib, Deputy Chief Strategy Officer at SDAIA, emphasized the need to unify international efforts to promote the responsible and ethical use of artificial intelligence, ensuring a sustainable, positive impact on societies and economies worldwide and supporting the 2030 SDGs.

He also reviewed Saudi Arabia’s data and AI initiatives, highlighting efforts to develop regulatory frameworks and national policies that balance innovation with the governance of emerging technologies, as well as applied models that have enhanced quality of life, improved government service efficiency, and advanced environmental sustainability.

SDAIA's participation in the summit underscores Saudi Arabia’s role in shaping the global future of AI and in strengthening its presence in international forums focused on advanced technologies, in line with the objectives of Saudi Vision 2030, which prioritizes digital transformation and innovation.


Google Says to Build New Subsea Cables from India in AI Push

A logo of Google is on display at Bharat Mandapam, one of the venues for AI Impact Summit, in New Delhi, India, February 17, 2026. REUTERS/Bhawika Chhabra

Google announced Wednesday it would build new subsea cables from India and other locations as part of its existing $15 billion investment in the South Asian nation, which is hosting a major artificial intelligence summit this week.

The US tech giant said it would build "three subsea paths connecting India to Singapore, South Africa, and Australia; and four strategic fiber-optic routes that bolster network resilience and capacity between the United States, India, and multiple locations across the Southern Hemisphere".