As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. Reuters TV, via Reuters

Deepfake technology, software that allows people to swap faces, voices and other characteristics to create digital forgeries, has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”
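
The signature half of that mandate maps onto standard public-key cryptography. The hypothetical Python sketch below, written against the open-source "cryptography" package, shows one way a provider could sign a hash of a synthetic file so that a platform holding the provider's public key can later verify its origin; the file name and key handling are placeholders, not a description of China's actual scheme.

    # Hypothetical sketch: a provider signs a SHA-256 hash of a synthetic
    # file so a platform holding the provider's public key can verify its
    # origin later. File name and key handling are illustrative only.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    provider_key = Ed25519PrivateKey.generate()

    with open("synthetic_clip.mp4", "rb") as f:  # placeholder file name
        media_hash = hashlib.sha256(f.read()).digest()

    signature = provider_key.sign(media_hash)

    # verify() raises InvalidSignature if the file or signature was altered.
    provider_key.public_key().verify(signature, media_hash)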

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry is still unable to detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encouraging folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being both overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, released an update that restricts users’ ability to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.
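
The fragility is easy to demonstrate. The toy Python sketch below (assuming NumPy and Pillow are installed) hides a label in the least significant bits of an image's pixels, a naive form of watermarking, and then shows that a single lossy JPEG save erases it; production schemes are far more robust, but the same cat-and-mouse dynamic applies.

    # A toy least-significant-bit (LSB) watermark, assuming NumPy and Pillow.
    # Real provenance schemes are far more robust; this sketch mainly shows
    # how easily a naive mark is erased by ordinary re-encoding.
    import numpy as np
    from PIL import Image

    def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
        """Hide one bit in the LSB of each of the first bits.size bytes."""
        flat = pixels.flatten()
        flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
        return flat.reshape(pixels.shape)

    def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
        """Read back the first n_bits least significant bits."""
        return pixels.flatten()[:n_bits] & 1

    mark = np.unpackbits(np.frombuffer(b"SYNTHETIC", dtype=np.uint8))
    image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in frame

    stamped = embed(image, mark)
    assert np.array_equal(extract(stamped, mark.size), mark)  # survives lossless storage

    # One lossy save quantizes pixel values, and the mark is gone.
    Image.fromarray(stamped).save("stamped.jpg", quality=90)
    reloaded = np.asarray(Image.open("stamped.jpg"))
    print(np.array_equal(extract(reloaded, mark.size), mark))  # almost surely False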

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



Meta Reportedly Delays Release of Phoenix Mixed-reality Glasses to 2027

FILE PHOTO: The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. REUTERS/Gonzalo Fuentes/File Photo

Meta is delaying the release of its Phoenix mixed-reality glasses until 2027, aiming to get the details right, Business Insider reported on Friday, citing an internal memo.

The device had initially been planned for release in the second half of 2026; the delay is meant to give the company time to deliver a fully polished product, the report said.

Meta did not immediately respond to a Reuters request for comment on the report.

Meta executives Gabriel Aul and Ryan Cairns said moving the release date back is "going to give us a lot more breathing room to get the details right," the report added.

The goggles, previously code-named Puffin, weigh around 100 grams (3.5 ounces) and have lower-resolution displays and weaker computing performance than high-end headsets like Apple’s Vision Pro, The Information reported in July.

Mixed reality merges augmented and virtual reality and allows real-world and digital objects to interact.

Meta is expected to make budget cuts of up to 30% for its metaverse initiative, Bloomberg News reported on Thursday.

The metaverse group sits within Reality Labs, which produces the company's Quest mixed-reality headsets, smart glasses made with EssilorLuxottica's Ray-Ban and upcoming augmented-reality glasses.


Apple, Google Send New Round of Cyber Threat Notifications to Users Around World

The Apple logo is seen in this illustration taken September 24, 2025. (Reuters)

Apple and Google have sent a new round of cyber threat notifications to users around the world, the companies said this week, in their latest effort to insulate customers against surveillance threats.

Apple and the Alphabet-owned Google are two of several tech companies that regularly issue warnings to users when they determine they may have been targeted by state-backed hackers.

Apple said the warnings were issued on Dec. 2 but gave few further details about the alleged hacking activity and did not address questions about the number of users targeted or say who was thought to be conducting the surveillance.

Apple said that "to date we have notified users in over 150 countries in total."

Apple's statement follows Google's Dec. 3 announcement that it was warning all known users targeted using Intellexa spyware, which it said spanned "several hundred accounts across various countries, including Pakistan, Kazakhstan, Angola, Egypt, Uzbekistan, Saudi Arabia, and Tajikistan."

Google said in its announcement that Intellexa, a cyber intelligence company that is sanctioned by the US government, was "evading restrictions and thriving."

Executives tied to Intellexa did not immediately return messages.

Previous waves of warnings have triggered headlines and prompted investigations by government bodies, including the European Union, whose senior officials have previously been targeted using spyware.

Threat notifications impose costs on cyber spies by alerting victims, said John Scott-Railton, a researcher with the Canadian digital watchdog group Citizen Lab.

He said they were "also often the first step in a string of investigations and discoveries that can lead to real accountability around spyware abuses."


AI Bubble to Be Short-lived, Rebound Stronger, NTT DATA Chief Says

FILE PHOTO: Figurines with computers and smartphones are seen in front of the words "Artificial Intelligence AI" in this illustration taken February 19, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

A potential artificial intelligence bubble will deflate faster than past tech cycles but give way to an even stronger rebound as corporate adoption catches up with infrastructure spending, the head of Japanese IT company NTT DATA Inc. said.

Despite worries around supply chains, the direction of travel is clear, CEO Abhijit Dubey said in an interview with the Reuters Global Markets Forum.

"There is absolutely no doubt that in the medium- to long-term, AI is a massive secular trend," he said.

"Over the next 12 months, I think we're going to have a bit of a normalization ... It'll be a short-lived bubble, and (AI) will come out of it stronger."

With demand for compute still running ahead of supply, "supply chains are almost spoken for" over the next two to three years, he said. Pricing power is already tilting toward chipmakers and hyperscalers, mirroring their stretched valuations in public markets, he added.

AI has triggered the biggest technological shake-up since the advent of the internet, fueling trillions of dollars of investment and eye-watering equity gains. But it has caused shortages of memory chips, drawn regulatory scrutiny, and created growing unease over the future of work.

Dubey, who is also the firm's chief AI officer, said his company has begun rethinking recruitment strategies as AI reshapes labor markets.

"There will clearly be an impact ... Over a five- to 25-year horizon, there will likely be dislocation," he said. However, he added that NTT DATA continues to hire across locations.

Speakers at the Reuters NEXT conference in New York discussed how AI may upend work and job growth.

AI startup Writer Inc.'s CEO May Habib said customers are focused on slowing headcount growth.

"You close a customer, you get on the phone with the CEO to kick off the project, and it's like, 'Great, how soon can I whack 30% of my team?'," she said.

Still, a PwC survey of the global workforce released in November suggests the reality of generative AI usage has yet to match boardroom expectations.

Daily use of GenAI remains "significantly lower" than widely touted by executives, PwC said, even as workers with AI skills commanded an average wage premium of 56% — more than double last year's figure.

PwC also flagged a widening skills gap, with about half of non-managers reporting access to training resources, compared with roughly three-quarters of senior executives.