Hackers Got User Data from Meta with Forged Request

3D-printed images of the logos of Facebook and parent company Meta Platforms are seen on a laptop keyboard. (File photo: REUTERS/Dado Ruvic)

Facebook owner Meta gave user information to hackers who pretended to be law enforcement officials last year, a company source said Wednesday, highlighting the risks of a measure used in urgent cases.

Imposters were able to get details like physical addresses or phone numbers in response to falsified "emergency data requests," which can slip past privacy barriers, said the source who requested anonymity due to the sensitivity of the matter, AFP said.

Criminal hackers have been compromising email accounts or websites tied to police or government and claiming they can't wait for a judge's order for information because it's an "urgent matter of life and death," cyber expert Brian Krebs wrote Tuesday.

Bloomberg, which first reported that Meta had been targeted, also reported that Apple provided customer data in response to forged requests.

Apple and Meta did not officially confirm the incidents, but provided statements citing their policies in handling information demands.

When US law enforcement officials want data on a social media account's owner or an associated cell phone number, they must submit an official court-ordered warrant or subpoena, Krebs wrote.

But in urgent cases authorities can make an "emergency data request," which "largely bypasses any official review and does not require the requestor to supply any court-approved documents," he added.

Meta, in a statement, said the firm reviews every data request for "legal sufficiency" and uses "advanced systems and processes" to validate law enforcement requests and detect abuse.

"We block known compromised accounts from making requests and work with law enforcement to respond to incidents involving suspected fraudulent requests, as we have done in this case," the statement added.

Apple noted its guidelines, which say that in the case of an emergency application "a supervisor for the government or law enforcement agent who submitted the... request may be contacted and asked to confirm to Apple that the emergency request was legitimate."

Krebs noted that the lack of a single, national system for this type of request is one of the key problems associated with them, as each company ends up deciding on its own how to handle them.

"To make matters more complicated, there are tens of thousands of police jurisdictions around the world — including roughly 18,000 in the United States alone — and all it takes for hackers to succeed is illicit access to a single police email account," he wrote.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating posts on X such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.