Meta Buried ‘Causal’ Evidence of Social Media Harm, US Court Filings Allege

Meta and Facebook logos are seen in this illustration taken February 15, 2022. (Reuters)

Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by US school districts against Meta and other social media platforms.

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

“The Nielsen study does show causal impact on social comparison,” an unnamed staff researcher allegedly wrote, appending an unhappy face emoji. Another staffer worried that keeping quiet about negative findings would be akin to the tobacco industry “doing research and knowing cigs were bad and then keeping that info to themselves.”

Despite Meta’s own work documenting a causal link between its products and negative mental health effects, the filing alleges, Meta told Congress that it had no ability to quantify whether its products were harmful to teenage girls.

In a statement Saturday, Meta spokesman Andy Stone said the study was stopped because its methodology was flawed, and that the company has worked diligently to improve the safety of its products.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” he said.

PLAINTIFFS ALLEGE PRODUCT RISKS WERE HIDDEN

The allegation of Meta burying evidence of social media harms is just one of many in a late Friday filing by Motley Rice, a law firm suing Meta, Google, TikTok and Snapchat on behalf of school districts around the country. Broadly, the plaintiffs argue the companies have intentionally hidden the internally recognized risks of their products from users, parents and teachers.

TikTok, Google and Snapchat did not immediately respond to a request for comment.

Allegations against Meta and its rivals include tacitly encouraging children below the age of 13 to use their platforms, failing to address child sexual abuse content and seeking to expand the use of social media products by teenagers while they were at school. The plaintiffs also allege that the platforms attempted to pay child-focused organizations to defend the safety of their products in public.

In one instance, TikTok sponsored the National PTA and then internally boasted about its ability to influence the child-focused organization. Per the filing, TikTok officials said the PTA would “do whatever we want going forward in the fall... (t)hey’ll announce things publicly, (t)heir CEO will do press statements for us.”

By and large, however, the allegations against the other social media platforms are less detailed than those against Meta. The internal documents cited by the plaintiffs allege:

1. Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.

2. Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold.”

3. Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.

4. Meta stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.

5. In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta's then-head of global public policy, to better fund child safety work.

Meta’s Stone disputed these allegations, saying the company’s teen safety measures are effective and that the company’s current policy is to remove accounts as soon as they are flagged for sex trafficking.

He said the suit misrepresents Meta's efforts to build safety features for teens and parents, and called the company's safety work “broadly effective.”

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions,” Stone said.

The underlying Meta documents cited in the filing are not public, and Meta has filed a motion to strike them. Stone said Meta's objection was to the overly broad scope of what the plaintiffs are seeking to unseal, not to unsealing in its entirety.

A hearing regarding the filing is set for January 26 in Northern California District Court.



AI Offers Hope for Young Filmmakers Dreaming of an Oscar

Chinese USC student SiJia Zheng speaks about how he used artificial intelligence to modify his face and transform himself into all the different characters of his short film 'Torment'. Frederic J. BROWN / AFP

Studying at the film school where Oscar-nominated "Sinners" director Ryan Coogler honed his craft, SiJia Zheng dreams of winning an Academy Award.

Now, with recent developments in artificial intelligence, he sees a shortcut to achieving that ambition.

"That's a chance for beginners like me who can use AI to just make a film and to announce to the world that I have the ability to be a director," he told AFP.

Zheng, 29, who hails from China, is one of a burgeoning class of students at USC's School of Cinematic Arts, studying animation in a place that has long been a training ground for future Pixar and DreamWorks talent.

He has used his time at the Los Angeles university to learn about the emerging field of AI animation.

That has included producing his seven-minute short film "Torment" about a masked killer terrorizing a high school.

The film, which was recognized at the LA Shorts festival, was generated entirely by AI -- in just one week.

Zheng recorded himself in front of a green screen and then asked the software to modify his face to make him into all the different characters in the movie.

The technology also allowed him to set his story in an Asian school and have scenes in a swimming pool -- two things that would have cost a fortune if he had filmed them traditionally.

"As a student, it's impossible to have that much money" to produce a film, he said.

- 'Tool' -

Not everyone in Hollywood feels so positively about AI.

The technology was one of the key sticking points in the writers' and actors' strikes that paralyzed Hollywood in 2023.

Guillermo del Toro, the director of "Frankenstein," which will compete for the best picture Oscar on Sunday, is notoriously anti-AI, insisting he would "rather die" than use it.

Zheng said he had been impressed by the Mexican director's "amazing" film, particularly the opening scene where the monster attacks a 19th-century three-masted ship, which del Toro's prop department constructed specially for the movie.

But "when I watched the film...I was just thinking: 'Oh, using AI to do that would be much cheaper and...make something pretty similar.'"

He insists, however, that it doesn't replace the filmmaking spark.

"AI is just a tool, and people can use it to become even better."

The Academy of Motion Picture Arts and Sciences, the body that will hand out the Oscars in Hollywood on March 15, seems to agree: last year it updated its rules to say it was neutral on the technology.

"Generative Artificial Intelligence and other digital tools...neither help nor harm the chances of achieving a nomination," it said last April.

- 'Ethical' use -

At USC, teachers like Debra Isaac are trying to navigate the ethics of the emerging technology.

The animation professor said she was shocked by an AI video that rocketed around the internet in recent weeks.

The short sequence, created by Seedance -- the AI generation model developed by TikTok's parent company, ByteDance -- shows an ersatz fight between Brad Pitt and Tom Cruise. Neither star was compensated.

But, used properly, AI does not need to be exploitative, and is not a lazy way to make films, Isaac said.

"It's not just about, 'Hey, I have a prompt, and I'm just gonna type a few words and I'll get my image, and I'll get my animation, and I'm done,'" she said.

"Some of these tools are not ethically dubious at all. They're trained by people that are using their own work," she added.

That's precisely what Xindi Zhang, a recent graduate of the program and winner of a Student Academy Award for her short film "The Song of Drifters," did.

For the mini-documentary about the difficulty of feeling at home anywhere, the 29-year-old artist fed the AI dozens of her drawings.

That collection of drawings served as graphic inspiration, allowing the software to stylize the shots of the cities where the film takes place and compressing production that would otherwise have taken years.

Even with the help of AI, she spent nearly a month perfecting certain shots.

It's "a craft that nobody really appreciates right now," she says.

But anyone who works with AI soon finds it is not a compromise-free shortcut to perfection.

"Good, cheap and fast will never happen, no matter what tool you use," Zhang said.


Saudi Arabia Leads Globally in Women’s AI Empowerment with Groundbreaking Initiatives


The Kingdom of Saudi Arabia has made significant strides in empowering women in the data and artificial intelligence (AI) sectors, aiming to elevate their global competitiveness as part of Saudi Vision 2030.

Numerous initiatives have increased the participation of Saudi women in advanced technologies, with the Saudi Data and Artificial Intelligence Authority (SDAIA) offering specialized programs and workshops in partnership with global technology leaders, SPA reported.

In just one year, over 666,000 Saudi women received training in data and AI, positioning the Kingdom first globally in women’s AI empowerment, according to the 2025 AI Index by Stanford University. Key initiatives include the Artificial Intelligence Academy with Microsoft, the Generative AI Academy with NVIDIA, the "SAMAI" initiative (targeting one million Saudis in AI), and the development of a national data and AI curriculum for university students.

These programs have enhanced women's skills and facilitated their contributions to crucial sectors such as health, energy, and education.

SDAIA has created a supportive work environment for women through flexible digital infrastructure, enabling remote work and work-life balance. This commitment reflects the Kingdom's dedication to building a sustainable, data-driven economy, with Saudi women now playing vital roles in shaping the future of advanced technologies.


China Could See Widespread Use of Brain-Computer Tech in 3-5 Years, Expert Says

People cross a road in Beijing on March 6, 2026. (AFP)

China could see brain-computer interface (BCI) technology move into practical public use within three to five years as products mature, a leading BCI expert said, as Beijing races to catch up with US startups including Elon Musk's Neuralink.

Beijing elevated BCIs to a core future strategic industry in its new five-year plan released this week, placing it alongside sectors such as quantum, embodied AI, 6G and nuclear fusion.

"New policies will not change things overnight. I think after another three to five years, we will gradually see some (BCI) products moving ‌towards actual practical ‌service for the public," said Yao Dezhong, Director of ‌the ⁠Sichuan Institute of Brain ⁠Science, in an interview on Saturday on the sidelines of China's annual parliament meetings in Beijing.

TRIALS

A national BCI development strategy released last year aims for major technical breakthroughs by 2027 and for China to cultivate two or three world-class firms by 2030.

China is the second country to launch invasive BCI human trials. More than 10 trials are active, matching the US, while scientists plan to enroll more than 50 patients nationwide this year.

Recent high-profile trials have enabled paralyzed patients and amputees to regain partial mobility and operate robotic hands or intelligent wheelchairs.

The government has already integrated some BCI treatments into national medical insurance in a few pilot provinces, and the domestic market is projected to reach 5.58 billion yuan ($809 million) by 2027, according to CCID Consulting.

"China has many advantages in BCIs, such as its huge population, enormous patient demand, cost-effective industrial chain and abundant pool of STEM (science, technology, engineering and maths) talent," said Yao, who also ‌leads a key neuroinformatics research center under China's science and technology ministry.

Policies such as insurance integration and national standards aim to close the "huge" gap between scientific research, industry and clinical applications, he said.

"The path from experimental to clinical trials is quite long, and this remains a problem," he told Reuters, adding that many Chinese hospitals have established BCI research labs to speed up the process.

While US startups like Neuralink focus on invasive chips that penetrate brain tissue, Chinese researchers are developing invasive, semi-invasive and non-invasive BCIs with wider potential clinical use.

Semi-invasive BCIs, placed on the brain's surface, may lose some signal quality but reduce risks such as tissue damage and other post-surgery complications. Neuralink's surgical robot can insert hundreds of electrodes into the brain in minutes.

"This is a technical advantage, which I think is remarkable," said Yao, of Neuralink.

"(But) China is actually making very fast progress in this area now. In fact, Musk's direction is basically achievable domestically."