Self-Proclaimed Bitcoin Inventor Lied ‘Repeatedly’ to Support Claim, Says UK Judge

A man walks past a bitcoin poster in Hong Kong on April 15, 2024. DALE DE LA REY / AFP

An Australian computer scientist who claimed he invented bitcoin lied "extensively and repeatedly" and forged documents "on a grand scale" to support his false claim, a judge at London's High Court ruled on Monday.

Craig Wright had long claimed to have been the author of a 2008 white paper, the foundational text of bitcoin, published under the pseudonym "Satoshi Nakamoto".

But Judge James Mellor ruled in March that the evidence that Wright was not Satoshi was "overwhelming", following a trial in a case brought by the Crypto Open Patent Alliance (COPA) to stop Wright from suing bitcoin developers.

Mellor gave reasons for his conclusions on Monday, stating in a written ruling: "Dr Wright presents himself as an extremely clever person. However, in my judgment, he is not nearly as clever as he thinks he is."

The judge added: "All his lies and forged documents were in support of his biggest lie: his claim to be Satoshi Nakamoto."

Mellor also said that Wright's actions in suing developers, and his expressed views about bitcoin, pointed against him being Satoshi, Reuters reported.

Wright, who denied forging documents when he gave evidence in February, said in a post on X: "I fully intend to appeal the decision of the court on the matter of the identity issue."

COPA – whose members include Twitter founder Jack Dorsey's payments firm Block – described Monday's ruling as "a watershed moment for the open-source community".

"Developers can now continue their important work maintaining, iterating on, and improving the bitcoin network without risking their personal livelihoods or fearing costly and time-consuming litigation from Craig Wright," a COPA spokesperson said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.