EU Warns Musk's X Spreading 'Illegal' Disinfo after Hamas Attack

An EU official said in a letter that concerns over X's moderation practices have heightened after the Hamas attack against Israel. JOEL SAGET / AFP

The EU's digital chief Thierry Breton warned Elon Musk on Tuesday that his platform X, formerly Twitter, is spreading "illegal content and disinformation", in a letter seen by AFP.

The letter said concerns had heightened after the Hamas attack against Israel, and demanded Musk respond to the complaint within 24 hours and contact "relevant law enforcement authorities".

As the European Union's commissioner for industry and the digital economy, Breton is charged with regulating internet giants that trade within the bloc, and can launch legal action.

"Following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU," Breton wrote.

Breton reminded Musk that EU law sets tough rules on moderating content, "especially when it comes to violent and terrorist content that appears to circulate on your platform".

He asked that X respond to his complaint within 24 hours and also get in touch with Europol, the EU police coordinating agency.

"We will include your answer in our assessment file on your compliance with the DSA," Breton said, referring to the new EU Digital Services Act, which regulates online platforms.

"I remind you that following the opening of a potential investigation and a finding of non-compliance, penalties can be imposed," it said.

Musk, responding later on X to a user who had posted the letter, invited Breton to "please list the violations you allude to".

"Our policy is that everything is open source and transparent, an approach that I know the EU supports," Musk wrote.

Hate and violence

Brussels has previously complained that, among the large internet platforms that fall under the DSA's remit, Musk's Twitter, since rebranded as X, spreads the biggest proportion of disinformation.

In August, when the new law came into effect, Musk replied to a post by Breton promising that the platform was "working hard" to comply, but there have been more warning signs.

While the rules were still voluntary, the firm pulled out of an oversight group, and Musk -- a self-styled "free speech absolutist" -- has been dismissive of criticism in his personal posts.

In September, the billionaire tech mogul boasted that he had cut half of the platform's global team dedicated to monitoring and limiting disinformation and fraud around major elections.

Since Saturday's shock attack on Israeli communities by the Hamas group, web platforms have been swamped by posts containing fake or misrepresented reports and footage.

The confirmed death toll in the renewed war has now passed 3,000, while unconfirmed, exaggerated or false reports of atrocities have also proliferated.

Experts fear the platform's moderation cutbacks have increased the risk of misinformation provoking real-world harm and amplifying hate and violence.



Rise in 'Harmful Content' Since Meta Policy Rollbacks, Survey Shows

The logo of Meta is seen at the entrance of the company's temporary stand ahead of the World Economic Forum (WEF) in Davos, Switzerland January 18, 2025. (Reuters)

Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday.

The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Menlo Park-based company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of "mental illness" or "abnormality" based on their gender or sexual orientation.

"These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade," said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out.

"Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship".

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.

Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt "less protected from being exposed to or targeted by" such material on Meta's platforms.

Seventy-seven percent of respondents described feeling "less safe" expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the January changes had had minimal impact.

"Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas," the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

"Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution," Jenna Sherman, campaign director at UltraViolet, told AFP.

"But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

"Facebook and Instagram already had an equity problem. Now, it's out of control," Sherman added.

The groups urged Meta to hire an independent third party to "formally analyze changes in harmful content facilitated by the policy changes" made in January, and to swiftly reinstate the content moderation standards previously in place.

The International Fact-Checking Network has previously warned of devastating consequences if Meta extends its fact-checking policy shift beyond the United States to the company's programs covering more than 100 countries.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.