Meta to Start Using Public Posts on Facebook, Instagram in UK to Train AI

Meta AI logo is seen in this illustration taken May 20, 2024. (Reuters)

Meta Platforms will begin training its AI models using public content shared by adults on Facebook and Instagram in the UK over the coming months, the company said, after it had paused the training in the region following a regulatory backlash.

The company will use public posts including photos, captions and comments to train its generative artificial intelligence models, it said on Friday, adding that the training content will not include private messages or information from accounts of users under the age of 18.

The update follows Meta's decision in mid-June to pause the launch of its AI models in Europe after the Irish privacy regulator told the company to delay its plan to harness data from social media posts.

The company had then said the delay would also allow it to address requests from Britain's Information Commissioner's Office (ICO).

"Since we paused training our generative AI models in the UK to address regulatory feedback, we've engaged positively with the ICO ... this clarity and certainty will help us bring AI at Meta products to the UK much sooner," Meta said on Friday.

Facebook and Instagram users in the UK will start receiving in-app notifications from next week explaining the company's procedure and how users can object to their data being used for the training, Meta added.

In June, the company's plans faced backlash from advocacy group NOYB, which urged national privacy watchdogs across Europe to stop such use of social media content, saying the notifications were insufficient to meet the EU's stringent privacy and transparency rules.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.