As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. Reuters TV, via Reuters

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry is still unable to detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encouraging folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being both overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that hamstrings users trying to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



EU Launches Antitrust Probe into Google’s Use of Online Content for AI Purposes 

01 December 2025, Hamburg: The Google logo shines above the entrance to Google's German headquarters. (dpa)

The European Commission has opened an antitrust probe to assess whether Google is breaching EU competition rules in its use of online content from web publishers and YouTube for artificial intelligence purposes, it said on Tuesday.

"The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage," the Commission said.

It said it was concerned Google may have used content from web publishers to generate AI-powered services on its search results pages without appropriate compensation to publishers and without offering them the possibility to refuse such use of their content.

The Commission said it is also concerned whether Google has used content uploaded to YouTube to train its own generative AI models without offering creators compensation or the possibility to refuse.


US to Allow Nvidia H200 Chip Shipments to China, Trump Says 

A Nvidia logo appears in this illustration taken August 25, 2025. (Reuters) 

The United States will allow Nvidia's H200 processors, its second-best artificial intelligence chips, to be exported to China and collect a 25% fee on such sales, US President Donald Trump said on Monday.

The decision appears to settle a US debate about whether Nvidia and rivals should maintain their global lead in AI chips by selling to China or withhold the exports, though Beijing has told companies not to use US technology, leaving it unclear whether Trump's decision would lead to new sales.

Nvidia shares rose 2% in after-hours trading after Trump made the announcement on Truth Social, following a 3% rise during the day on a report by Semafor.

Trump said in his post that he had informed President Xi Jinping of China, where Nvidia's chips are under government scrutiny, about the move and that he "responded positively."

He said the US Commerce Department was finalizing details of the arrangement and the same approach would apply to other AI chip firms such as Advanced Micro Devices and Intel.

Trump's post said the fee to be paid to the US government was "$25%", and a White House official confirmed he meant 25%, higher than the 15% proposed in August.

"We will protect National Security, create American Jobs, and keep America’s lead in AI," Trump wrote on Truth Social. "NVIDIA’s US Customers are already moving forward with their incredible, highly advanced Blackwell chips, and soon, Rubin, neither of which are part of this deal."

Trump did not say how many H200 chips would be authorized for shipment or what conditions might apply, only that exports would occur "under conditions that allow for continued strong National Security."

Administration officials consider the move a compromise between sending Nvidia's latest Blackwell chips to China, which Trump has declined to allow, and sending China no US chips at all, which officials believe would bolster Huawei's efforts to sell AI chips in China, a person familiar with the matter said.

"Offering H200 to approved commercial customers, vetted by the Department of Commerce, strikes a thoughtful balance that is great for America," Nvidia said in a statement.

Intel declined to comment. The US Commerce Department, which oversees export controls, and AMD did not respond to requests for comment.

A White House official said that the 25% fee would be collected as an import tax from Taiwan, where the chips are made, to the United States, where the chips will undergo a security review by US officials before being exported to China.

FEARS OF CHIPS STRENGTHENING CHINA'S MILITARY

China hawks in Washington are concerned that selling more advanced AI chips to China could help Beijing supercharge its military, fears that had first prompted limits on such exports by the Biden administration.

The Trump administration had been considering greenlighting the sale, sources told Reuters last month. Trump said last week he met with Nvidia CEO Jensen Huang and that the executive was aware of where he stood on export controls.

"It’s a terrible mistake to trade off national security for advantages in trade," said Eric Hirschhorn, who was a senior Commerce Department official during the Obama administration. "It cuts against the consistent policies of Democratic and Republican administrations alike not to assist China’s military modernization."

According to a report released on Sunday by the non-partisan think tank the Institute for Progress (IFP), the H200 would be almost six times as powerful as the H20, the most advanced AI semiconductor that can legally be exported to China, after the Trump administration reversed its short-lived ban on such sales this year.

The Blackwell chip now in use by US AI firms is about 1.5 times faster than H200 chips for training AI systems, the IFP said, and five times faster for inferencing work where AI models are put to use. Nvidia's own research has suggested Blackwell chips are 10 times faster than H200 chips for some tasks.
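Taken together, the IFP figures imply rough relative throughput ratios between the three chips. A back-of-the-envelope sketch, treating the H20 (the most advanced chip previously exportable to China) as the baseline and using only the multipliers quoted above:

```python
# Rough relative throughput, per the IFP figures quoted in the article.
# Baseline: H20 = 1.0; all other values derived from the stated multipliers.
h20 = 1.0
h200 = 6.0 * h20                   # IFP: H200 almost 6x as powerful as H20
blackwell_training = 1.5 * h200    # IFP: Blackwell ~1.5x H200 for training
blackwell_inference = 5.0 * h200   # IFP: Blackwell ~5x H200 for inference

print(f"H200 vs H20: {h200:.0f}x")
print(f"Blackwell vs H20 (training): {blackwell_training:.0f}x")
print(f"Blackwell vs H20 (inference): {blackwell_inference:.0f}x")
```

By this arithmetic, approving H200 exports leaves China roughly a generation behind for training workloads, but the gap widens sharply for inference, where Blackwell's advantage compounds on top of the H200's lead over the H20.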

Several Democratic US senators in a statement described Trump's decision as a "colossal economic and national security failure" that would be a boon to China's industry and military.

Republican Representative John Moolenaar, who chairs the House China Select Committee, said in a statement to Reuters that China would use the chips to strengthen its military capabilities and surveillance.

"Nvidia should be under no illusions - China will rip off its technology, mass-produce it themselves and seek to end Nvidia as a competitor," he said.

CHINA EYES POTENTIAL SECURITY RISKS

The approval, however, comes as China is strengthening its resolve to wean the country off its reliance on Nvidia's chips. China's cyberspace regulator in July also accused Nvidia's H20 chips of potentially carrying backdoor security risks, an allegation Nvidia has denied.

In recent months, Beijing has cautioned Chinese tech companies against buying chips that Nvidia downgraded to sell to the Chinese market, which are the H20, RTX 6000D and L20, two sources said.

"Chinese firms want H200s, but the Chinese state is driven by paranoia and pride," said Craig Singleton, a senior fellow at the Washington think tank Foundation for Defense of Democracies. "Washington may approve the chips, but Beijing still has to let them in."

The H200 change of stance comes the same day that Trump's Justice Department announced it had cracked a China-linked chip smuggling ring that in late 2024 and early 2025 exported and attempted to export at least $160 million worth of controlled Nvidia H100 and H200 chips.

Chris McGuire, an expert on technology and national security who served at the US State Department until this summer, said Chinese firms would likely still buy H200s, given that the chip "is better than every chip the Chinese can make."

China's domestic AI chip companies now include tech giant Huawei Technologies, which in September released a three-year product roadmap, as well as smaller players such as Cambricon and Moore Threads.

China's SSE STAR Chip Index and the CSI Semiconductor Industry Index both dropped more than 1% at market open on Tuesday but soon recovered most of the losses.


NextEra Expands Google Cloud Partnership, Secures Clean Energy Contracts with Meta

Electric power transmission pylon miniatures and Nextera Energy logo are seen in this illustration taken, December 9, 2022. REUTERS/Dado Ruvic/Illustration

NextEra Energy expanded its partnership with Alphabet's Google Cloud to scale up data center capacity, while securing over 2.5 gigawatts of clean energy contracts from Meta across the US, the company said on Monday.

Shares of NextEra were up 2.7% in premarket trading.

Under the expanded deal with Google Cloud, the companies will develop multiple new gigawatt-scale data center campuses, each with accompanying generation and capacity, Reuters reported.

NextEra and Google Cloud plan to launch an AI-powered product by mid-2026 to predict equipment issues, optimize crew scheduling and boost grid reliability amid storms, aging assets, and rising demand.

The deal comes as US electricity demand grows due to rapid AI adoption, prompting cloud companies and utilities to secure land, grid connections and new generation to support large data center loads.

In October, the company had partnered with Google to restart an Iowa nuclear power plant shut down five years ago.

The technology industry's quest for massive amounts of electricity for AI processing has renewed interest in the country's nuclear reactors.

NextEra said it had signed 11 power purchase agreements and two energy storage agreements with Meta, totaling over 2.5 GW of clean energy contracts. The projects are scheduled to come online between 2026 and 2028.

The utility also reached an agreement with WPPI Energy to continue supplying 168 megawatts of the output from the Point Beach Nuclear Plant in Two Rivers into the 2050s.

Separately, NextEra forecast higher adjusted profit for 2026 as well as the current year, as it continues to benefit from the surge in power demand.

NextEra now expects adjusted earnings for 2025 of between $3.62 and $3.70 per share, compared with its prior view of between $3.45 and $3.70 per share.

For 2026, it expects adjusted profit between $3.92 and $4.02 per share, compared with its prior view of between $3.63 and $4.00 per share.