Twitter, Others Slip on Removing Hate Speech, EU Review Says 

A view of Twitter headquarters in San Francisco, California, USA, 21 November 2022. (EPA)

Twitter took longer to review hateful content and removed less of it in 2022 compared with the previous year, according to European Union data released Thursday. 

The EU figures were published as part of an annual evaluation of online platforms' compliance with the 27-nation bloc's code of conduct on disinformation. 

Twitter wasn't alone — most other tech companies signed up to the voluntary code also scored worse. But the figures could foreshadow trouble for Twitter in complying with the EU's tough new online rules after owner Elon Musk fired many of the platform's 7,500 full-time workers and an untold number of contractors responsible for content moderation and other crucial tasks. 

The EU report found Twitter assessed just over half of the notifications it received about illegal hate speech within 24 hours, down from 82% in 2021. Facebook, Instagram and YouTube also took longer, while TikTok was the only one to improve. 

The amount of hate speech Twitter removed after it was flagged slipped to 45.4% from 49.8% the year before. Removal rates at the other platforms also fell, except at YouTube, where the rate surged.

Twitter didn't respond to a request for comment. Emails to several staff on the company's European communications team bounced back as undeliverable. 

Musk's $44 billion acquisition of Twitter last month fanned widespread concern that purveyors of lies and misinformation would be allowed to flourish on the site. The billionaire Tesla CEO, who has frequently expressed his belief that Twitter had become too restrictive, has been reinstating suspended accounts, including former President Donald Trump's. 

Twitter faces more scrutiny in Europe by the middle of next year, when new EU rules aimed at protecting internet users’ online safety will start applying to the biggest online platforms. Violations could result in fines of up to 6% of a company's annual global revenue.

France's online regulator Arcom said it received a reply from Twitter after writing to the company earlier this week to say it was concerned about the effect that staff departures would have on Twitter's “ability to maintain a safe environment for its users.”

Arcom also asked the company to confirm it can meet its “legal obligations” in fighting online hate speech and that it is committed to implementing the new EU online rules. The regulator said it will “study their response,” without giving more details.

Tech companies that signed up to the EU's disinformation code agree to commit to measures aimed at reducing disinformation and file regular reports on whether they’re living up to their promises, though there’s little in the way of punishment. 



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.