Meta Buried ‘Causal’ Evidence of Social Media Harm, US Court Filings Allege

Meta and Facebook logos are seen in this illustration taken February 15, 2022. (Reuters)

Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by US school districts against Meta and other social media platforms.

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

“The Nielsen study does show causal impact on social comparison,” an unnamed staff researcher allegedly wrote, adding an unhappy-face emoji. Another staffer worried that keeping quiet about negative findings would be akin to the tobacco industry “doing research and knowing cigs were bad and then keeping that info to themselves.”

Despite Meta’s own work documenting a causal link between its products and negative mental health effects, the filing alleges, Meta told Congress that it had no ability to quantify whether its products were harmful to teenage girls.

In a statement Saturday, Meta spokesman Andy Stone said the study was stopped because its methodology was flawed and that the company has worked diligently to improve the safety of its products.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” he said.

PLAINTIFFS ALLEGE PRODUCT RISKS WERE HIDDEN

The allegation of Meta burying evidence of social media harms is just one of many in a late Friday filing by Motley Rice, a law firm suing Meta, Google, TikTok and Snapchat on behalf of school districts around the country. Broadly, the plaintiffs argue the companies have intentionally hidden the internally recognized risks of their products from users, parents and teachers.

TikTok, Google and Snapchat did not immediately respond to a request for comment.

Allegations against Meta and its rivals include tacitly encouraging children below the age of 13 to use their platforms, failing to address child sexual abuse content and seeking to expand the use of social media products by teenagers while they were at school. The plaintiffs also allege that the platforms attempted to pay child-focused organizations to defend the safety of their products in public.

In one instance, TikTok sponsored the National PTA and then internally boasted about its ability to influence the child-focused organization. Per the filing, TikTok officials said the PTA would “do whatever we want going forward in the fall... (t)hey’ll announce things publicly, (t)heir CEO will do press statements for us.”

By and large, however, the allegations against the other social media platforms are less detailed than those against Meta. Citing internal Meta documents, the plaintiffs allege:

1. Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.

2. Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold.”

3. Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.

4. Meta stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.

5. In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta's then-head of global public policy, to better fund child safety work.

Meta’s Stone disputed these allegations, saying the company’s teen safety measures are effective and that the company’s current policy is to remove accounts as soon as they are flagged for sex trafficking.

He said the suit misrepresents the company’s efforts to build safety features for teens and parents, and called its safety work “broadly effective.”

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions,” Stone said.

The underlying Meta documents cited in the filing are not public, and Meta has filed a motion to strike them. Stone said the company objects to the over-broad scope of what the plaintiffs are seeking to unseal, not to unsealing in its entirety.

A hearing regarding the filing is set for January 26 in the US District Court for the Northern District of California.



Foundation Stone Laid for World’s Largest Government Data Center in Riyadh

Officials are seen at Thursday's ceremony. (SPA)

The foundation stone was laid in Riyadh Thursday for the Saudi Data and Artificial Intelligence Authority (SDAIA) “Hexagon” Data Center, the world’s largest government data center by megawatt capacity.

Classified as Tier IV, the highest data center rating awarded by the global Uptime Institute, the facility will have a total capacity of 480 megawatts and will be built on a site exceeding 30 million square feet in the Saudi capital.

Designed to the highest international standards, the center will provide maximum availability, security, and operational readiness for government data centers. It will meet the growing needs of government entities and support the increasing reliance on electronic services.

The project will contribute to strengthening the national economy and reinforce the Kingdom’s position as a key player in the future of the global digital economy.

A ceremony was held to mark the occasion, attended by senior officials from various government entities. They were received at the venue by SDAIA President Dr. Abdullah bin Sharaf Alghamdi and other SDAIA officials.

Director of the National Information Center at SDAIA Dr. Issam bin Abdullah Alwagait outlined the project’s details, technical and engineering specifications, and the operational architecture ensuring the highest levels of readiness and availability.

He also reviewed the international accreditations obtained for the center’s solutions and engineering design in line with recognized global standards.

In a press statement, SDAIA President Dr. Abdullah bin Sharaf Alghamdi said the landmark national project comes as part of the continued support of Prince Mohammed bin Salman bin Abdulaziz Al Saud, Crown Prince, Prime Minister and Chairman of SDAIA’s Board of Directors.

This support enables SDAIA, the Kingdom’s competent authority for data, including big data, and artificial intelligence, and the national reference for their regulation, development and use, to help advance the Kingdom toward leadership among data- and AI-driven economies, he noted.

The Kingdom will continue to strengthen its presence in advanced technologies with the ongoing support of the Crown Prince, he stressed.

SDAIA will pursue pioneering projects that reflect its ambitious path toward building an integrated digital ecosystem, strengthening national enablers in data and artificial intelligence, and developing world-class technical infrastructure that boosts the competitiveness of the national economy and attracts investment. This aligns with Saudi Vision 2030’s objectives of building a sustainable knowledge-based economy and achieving global leadership in advanced technologies.


Neuralink Plans ‘High-Volume’ Brain Implant Production by 2026, Musk Says

Elon Musk steps off Air Force One upon arrival at Morristown Municipal Airport in Morristown, New Jersey, US, March 22, 2025. (AFP)

Elon Musk's brain implant company Neuralink will start "high-volume production" of brain-computer interface devices and move to an entirely automated surgical procedure in 2026, Musk said in a post on the social media platform X on Wednesday.

Neuralink did not immediately respond to a Reuters request for comment.

The implant is designed to help people with conditions such as spinal cord injuries. The first patient has used it to play video games, browse the internet, post on social media, and move a cursor on a laptop.

The company began human trials of its brain implant in 2024 after addressing safety concerns raised by the US Food and Drug Administration, which had initially rejected its application in 2022.

Neuralink said in September that 12 people worldwide with severe paralysis had received its brain implants and were using them to control digital and physical tools through thought. It also secured $650 million in a June funding round.


Report: France Aims to Ban Under-15s from Social Media from September 2026

French President Emmanuel Macron holds a press conference during a European Union leaders' summit, in Brussels, Belgium December 19, 2025. (Reuters)

France plans to ban children under 15 from social media sites and to prohibit mobile phones in high schools from September 2026, local media reported on Wednesday, moves that underscore rising public angst over the impact of online harms on minors.

President Emmanuel Macron has often pointed to social media as one of the factors behind violence among young people and has signaled he wants France to follow Australia, whose world-first ban on social media platforms including Facebook, Snapchat, TikTok and YouTube for under-16s came into force in December.

Le Monde newspaper said Macron could announce the measures in his New Year's Eve national address, due to be broadcast at 1900 GMT. His government will submit draft legislation for legal checks in early January, Le Monde and France Info reported.

The Elysee and the prime minister's office did not immediately respond to a request for comment on the reports.

Mobile phones have been banned in French primary and middle schools since 2018, and the reported new changes would extend that ban to high schools. Pupils aged 11 to 15 attend middle schools in the French educational system.

France also passed a law in 2023 requiring social platforms to obtain parental consent for under-15s to create accounts, though technical challenges have impeded its enforcement.

Macron said in June he would push for regulation at the level of the European Union to ban access to social media for all under-15s after a fatal stabbing at a school in eastern France shocked the nation.

The European Parliament in November urged the EU to set minimum ages for children to access social media to combat a rise in mental health problems among adolescents stemming from excessive exposure, although it is member states that impose age limits. Various other countries have also taken steps to regulate children's access to social media.

Macron heads into the New Year with his domestic legacy in tatters after his gamble on parliamentary elections in 2024 led to a hung parliament, triggering France's worst political crisis in decades that has seen a succession of weak governments.

However, cracking down further on minors' access to social media could prove popular, according to opinion polls. A 2024 Harris Interactive survey showed 73% of respondents supported a ban on social media access for under-15s.