How Did Hackers Breach Microsoft’s Security, Create Millions of Fake Accounts?

This file photo from April 12, 2016, shows the Microsoft logo in Issy-les-Moulineaux, outside Paris, France. (AP Photo/Michel Euler, File)

The trustworthiness of the online authentication systems used to verify that a user is human is under scrutiny. In a major development, Microsoft recently uncovered a group of cybercriminals whose operation exposed the weaknesses of the widely used verification technique known as CAPTCHA.

Microsoft identified a group of hackers, "Storm-1152", that had sold roughly 750 million fake Microsoft accounts, enabling cybercriminals to execute their online attacks.

- The beginning

Storm-1152 is a group of hackers operating from Vietnam. It managed to bypass all the authentication steps required to create a Microsoft account.

The group initially targeted CAPTCHA, the widely used challenge window that asks a user to type a series of letters or numbers, or to click on the parts of a picture showing buses or stairs, to verify that they are human and not a bot.
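For illustration only: the article does not describe Microsoft's implementation, but a text CAPTCHA reduces to a server-side challenge/response check. The sketch below, with all names hypothetical, generates a random challenge (which would be rendered as a distorted image) and verifies the user's answer against a stored hash.

```python
import hashlib
import secrets
import string

# Hypothetical sketch of a text-CAPTCHA flow; not Microsoft's actual system.
ALPHABET = string.ascii_uppercase + string.digits

def new_challenge(length: int = 6) -> tuple[str, str]:
    """Return (challenge_text, stored_hash).

    The challenge text would be rendered as a distorted image for the
    user; the server keeps only a hash of the expected answer.
    """
    text = "".join(secrets.choice(ALPHABET) for _ in range(length))
    stored = hashlib.sha256(text.lower().encode()).hexdigest()
    return text, stored

def verify(answer: str, stored_hash: str) -> bool:
    """Check the user's answer case-insensitively against the stored hash."""
    return hashlib.sha256(answer.lower().encode()).hexdigest() == stored_hash
```

Storing only a hash means a database leak does not directly reveal pending answers; the scheme's real weakness, as the article goes on to show, is that the image itself can be solved by a machine.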

But this authentication method is becoming less effective, as Storm-1152 found a way to deceive it and create millions of fake accounts.

The hackers used machine learning to train their tool to click in the right places on the verification images, explained François Deruty, an expert at the cybersecurity firm Sekoia.

The Storm-1152 hackers then sold these fake accounts on a website to actors planning attacks such as phishing emails and ransomware, according to Deruty.

- Famous group

The Vietnamese group is well known. While countries like China, Iran, Russia and North Korea dominate most cybersecurity headlines, Vietnam, like India and Türkiye, is home to many hacking groups that grow more capable every year, added Deruty.

Microsoft has shut down part of the group's websites hosted on US territory, after a federal court ruling authorized the seizure of the servers the group used. “They definitely have other websites somewhere else and an international collaboration is needed to shut them down,” the expert noted.

- Defenses against techniques used by cybercriminals

There are newer techniques such as multifactor authentication, which relies on one-time codes sent via SMS, for example, but it is only a matter of time before hackers find their vulnerabilities.
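The article mentions only SMS codes, but a common variant of multifactor authentication is the time-based one-time password (TOTP) used by authenticator apps, standardized in RFC 6238. As a hedged sketch of that scheme, not of any Microsoft product:

```python
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code per the RFC 6238 scheme.

    The shared secret plus the current 30-second time window yield a
    short numeric code; server and client compute it independently.
    """
    counter = struct.pack(">Q", timestamp // step)      # time window as 8-byte counter
    digest = hmac.new(secret, counter, "sha1").digest() # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a secret never sent over the network, it resists the account-farming attack described above better than a CAPTCHA alone; the trade-off, as the article notes, is the cost of rolling such schemes out widely.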

Other methods include hardware security keys, such as those provided by banks, but rolling out these newer methods requires more time and money, and Microsoft still maintains older versions of its various programs.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to harvest Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.