Meta Makes End-to-End Encryption a Default on Facebook Messenger

The META logo is seen at the Vivatech show in Paris, France, on June 14, 2023. (AP)

Meta is rolling out end-to-end encryption for calls and messages across its Facebook and Messenger platforms, the company announced Thursday.

Such encryption means that no one other than the sender and the recipient — not even Meta — can decipher people’s messages. Encrypted chats, first introduced as an optional feature in Messenger in 2016, will now be the standard for all users going forward, according to Messenger head Loredana Crisan.

"This has taken years to deliver because we’ve taken our time to get this right," Crisan wrote in a blog post. "Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up."

Meta CEO Mark Zuckerberg promised in 2019 to bring end-to-end encryption to the company's platforms after a string of high-profile scandals, most notably when Cambridge Analytica accessed Facebook user data. Privacy advocates again shined a spotlight on Meta after Nebraska investigators reviewed private Facebook messages while investigating an abortion alleged to have violated the state's 20-week ban.

Meta, whose WhatsApp platform already encrypts messages, said the feature can help keep users safe from hackers, fraudsters and criminals.

Meanwhile, encryption critics, law enforcement and even a Meta report released in 2022 note the risks of enhanced encryption, including users who could abuse the privacy feature to sexually exploit children, facilitate human trafficking and spread hate speech.

"What will Meta’s bosses say to children who have suffered sexual abuse, whose trauma will be compounded by their decision not to preserve their privacy? How will they justify turning a blind eye to this illegal and harmful content being spread via their platforms?" said Internet Watch Foundation chief executive Susie Hargreaves.

"The company has a strong track record in detecting large amounts of child sexual abuse material before it appears on its platforms. We urge Meta to continue this vital protection."

The new features are available immediately, but Crisan wrote that it will take some time for encryption to be rolled out to all of Messenger's users.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as, "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.