Twitter Expands Research Group to Study Content Moderation

The shadows of people holding mobile phones are cast onto a backdrop projected with the Twitter logo in this illustration picture taken in Warsaw September 27, 2013. REUTERS/Kacper Pempel

Twitter Inc plans to provide more data to external researchers who study online misinformation and moderation, the social media company said Thursday, part of what it says is an effort to increase transparency on the platform.

The company will also open an application process to allow more people working in academia, civil society and journalism to join the Twitter Moderation Research Consortium, a group that Twitter launched as a pilot earlier this year and that has access to the datasets.

While researchers have studied the flow of harmful content on social platforms for years, they have often done so without direct involvement from social media companies.

During a briefing with reporters, Twitter said it hopes the data will lead to new types of studies about how efforts to fight online misinformation work.

Twitter has already shared datasets with researchers about coordinated efforts backed by foreign governments to manipulate information on Twitter, Reuters reported.

The company said it now plans to share information about other content moderation areas, such as tweets that have been labeled as potentially misleading.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs, with the accounts generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.