Report: Musk’s $44 Bln Buyout of Twitter Faces US Antitrust Review

Tesla CEO Elon Musk introduces the Cybertruck at Tesla's design studio Thursday, Nov. 21, 2019, in Hawthorne, Calif. (AP)

The US Federal Trade Commission (FTC) is reviewing Tesla Chief Executive Elon Musk's $44 billion takeover of Twitter Inc, Bloomberg News reported on Thursday, citing a person familiar with the deal.

The FTC declined to comment, while Musk could not be reached for comment.

The agency will decide in the next month whether it will do an in-depth antitrust probe of the proposed transaction, the person told Bloomberg. Such a probe would delay the deal's closing by months.

Antitrust experts have said there is little likelihood the agency will find any evidence that Musk's purchase of Twitter is illegal under antitrust law.

The FTC is already investigating Musk's initial purchase of a 9% stake in Twitter, probing whether he complied with an antitrust reporting requirement when he acquired the shares in early April.

One critic of the deal has been the Open Markets Institute, which said it should be stopped to avoid giving an already powerful man "direct control over one of the world's most important platforms for public communications and debate." It also cited Musk's ownership of the satellite communications company Starlink as a concern.

The deal has the support of Republicans, who hope conservatives banned from the site, like former President Donald Trump, will be allowed to return.

While Musk has tweeted about free speech, when he discusses plans for Twitter he focuses more on boosting revenue by getting more people to use the platform, or on cutting expenses such as executive pay. He has said nothing publicly about allowing banned former users to return.



Rise in 'Harmful Content' Since Meta Policy Rollbacks, Survey Shows

The logo of Meta is seen at the entrance of the company's temporary stand ahead of the World Economic Forum (WEF) in Davos, Switzerland January 18, 2025. (Reuters)

Harmful content including hate speech has surged across Meta's platforms since the company ended third-party fact-checking in the United States and eased moderation policies, a survey showed Monday.

The survey of around 7,000 active users on Instagram, Facebook and Threads comes after the Menlo Park-based company ditched US fact-checkers in January and turned over the task of debunking falsehoods to ordinary users under a model known as "Community Notes," popularized by X.

The decision was widely seen as an attempt to appease President Donald Trump's new administration, whose conservative support base has long complained that fact-checking on tech platforms was a way to curtail free speech and censor right-wing content.

Meta also rolled back restrictions around topics such as gender and sexual identity. The tech giant's updated community guidelines said its platforms would permit users to accuse people of "mental illness" or "abnormality" based on their gender or sexual orientation.

"These policy shifts signified a dramatic reversal of content moderation standards the company had built over nearly a decade," said the survey published by digital and human rights groups including UltraViolet, GLAAD, and All Out.

"Among our survey population of approximately 7,000 active users, we found stark evidence of increased harmful content, decreased freedom of expression, and increased self-censorship".

One in six respondents in the survey reported being the victim of some form of gender-based or sexual violence on Meta platforms, while 66 percent said they had witnessed harmful content such as hateful or violent material.

Ninety-two percent of surveyed users said they were concerned about increasing harmful content and felt "less protected from being exposed to or targeted by" such material on Meta's platforms.

Seventy-seven percent of respondents described feeling "less safe" expressing themselves freely.

The company declined to comment on the survey.

In its most recent quarterly report, published in May, Meta insisted that the January changes had had minimal impact.

"Following the changes announced in January we've cut enforcement mistakes in the US in half, while during that same time period the low prevalence of violating content on the platform remained largely unchanged for most problem areas," the report said.

But the groups behind the survey insisted that the report did not reflect users' experiences of targeted hate and harassment.

"Social media is not just a place we 'go' anymore. It's a place we live, work, and play. That's why it's more crucial than ever to ensure that all people can safely access these spaces and freely express themselves without fear of retribution," Jenna Sherman, campaign director at UltraViolet, told AFP.

"But after helping to set a standard for content moderation online for nearly a decade, (chief executive) Mark Zuckerberg decided to move his company backwards, abandoning vulnerable users in the process.

"Facebook and Instagram already had an equity problem. Now, it's out of control," Sherman added.

The groups urged Meta to hire an independent third party to "formally analyze changes in harmful content facilitated by the policy changes" made in January, and to swiftly reinstate the content moderation standards it previously had in place.

The International Fact-Checking Network has previously warned of devastating consequences if Meta broadens its policy shift related to fact-checkers beyond US borders to the company's programs covering more than 100 countries.

AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.