Internal Bug Promoted Problematic Content on Facebook

Facebook News allows users to access news on the US social media giant’s platform. (AFP)

Content identified as misleading or problematic was mistakenly prioritized in users' Facebook feeds recently, due to a software bug that took six months to fix, according to tech site The Verge.

Facebook disputed the report, which was published Thursday, saying that it "vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content," according to Joe Osborne, a spokesman for parent company Meta.

But the bug was serious enough for a group of Facebook employees to draft an internal report referring to a "massive ranking failure" of content, The Verge reported.

In October, the employees noticed that some content that had been marked as questionable by external media -- members of Facebook's third-party fact-checking program -- was nevertheless being favored by the algorithm for wide distribution in users' News Feeds.

"Unable to find the root cause, the engineers watched the surge subside a few weeks later and then flare up repeatedly until the ranking issue was fixed on March 11," The Verge reported.

But according to Osborne, the bug affected "only a very small number of views" of content.

That's because "the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place," Osborne explained, adding that other mechanisms designed to limit views of "harmful" content remained in place, "including other demotions, fact-checking labels and violating content removals."
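To make the failure mode concrete, here is a minimal sketch of how a demotion step in a feed-ranking pipeline might work. This is not Meta's actual code; the post fields, function names and demotion multipliers are all hypothetical, chosen only to show how skipping the demotion step would let a flagged post score as highly as ordinary content.

```python
# Illustrative sketch only, not Facebook's ranking code. Names and
# demotion factors are hypothetical; the point is to show how skipping
# the demotion step lets a flagged post rank like ordinary content.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    post_id: str
    base_score: float                        # engagement-based relevance score
    fact_check_rating: Optional[str] = None  # e.g. "false", set by fact checkers

# Hypothetical multipliers applied to content flagged by fact checkers.
DEMOTION_FACTORS = {"false": 0.2, "partly_false": 0.5}

def rank_score(post: Post, apply_demotions: bool = True) -> float:
    """Feed score for a post; demotions shrink scores of flagged content."""
    score = post.base_score
    if apply_demotions and post.fact_check_rating in DEMOTION_FACTORS:
        score *= DEMOTION_FACTORS[post.fact_check_rating]
    return score

flagged = Post("p1", base_score=10.0, fact_check_rating="false")
print(rank_score(flagged))                         # 2.0, demoted as intended
print(rank_score(flagged, apply_demotions=False))  # 10.0, the failure mode
```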

AFP currently works with Facebook's fact-checking program in more than 80 countries and 24 languages. Under the program, which started in December 2016, Facebook pays to use fact checks from around 80 organizations, including media outlets and specialized fact checkers, on its platform, on WhatsApp and on Instagram.

Content rated "false" is downgraded in news feeds so fewer people will see it. If someone tries to share that post, they are presented with an article explaining why it is misleading.

Those who still choose to share the post receive a notification with a link to the article. No posts are taken down. Fact checkers are free to choose how and what they wish to investigate.
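The sharing flow described above can be sketched in a few lines of Python. The function and field names here are assumptions for illustration only, not Meta's actual API; the sketch just captures the stated sequence: an explainer before sharing, a notification after, and no removal of the post itself.

```python
# Minimal sketch of the described share flow; names are hypothetical and
# this is not Meta's API. Sequence: explainer before sharing, notification
# after, and the post itself is never taken down.

def attempt_share(post: dict, confirm_share) -> str:
    """Walk a user through sharing a post; returns the outcome."""
    if post.get("rating") == "false":
        # The sharer first sees the fact checkers' article explaining
        # why the post is rated misleading.
        print(f"Interstitial: read {post['fact_check_url']} before sharing.")
        if not confirm_share():
            return "share cancelled"
        # Those who share anyway get a notification linking to the article.
        print(f"Notification sent with link to {post['fact_check_url']}.")
        return "shared with warning"
    return "shared normally"

post = {"rating": "false", "fact_check_url": "https://example.org/fact-check"}
print(attempt_share(post, confirm_share=lambda: True))
```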



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with generated X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.