As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. (Reuters TV, via Reuters)

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry is still unable to detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encouraging folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being both overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that hamstrings users trying to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



Indian PM, President of Saudi Arabia’s SDAIA Discuss AI Cooperation 

Indian Prime Minister Narendra Modi and Saudi Data and Artificial Intelligence Authority (SDAIA) President Dr. Abdullah Al-Ghamdi meet on the sidelines of the India AI Impact Summit 2026. (SPA)

Indian Prime Minister Narendra Modi held talks with Saudi Data and Artificial Intelligence Authority (SDAIA) President Dr. Abdullah Al-Ghamdi on the sidelines of the India AI Impact Summit 2026, the Saudi Press Agency reported on Friday.

Discussions focused on knowledge transfer and the exchange of expertise to accelerate digital development in both nations. They also tackled expanding bilateral cooperation in data and AI.

Al-Ghamdi commended India’s leadership in hosting the summit, noting that such international partnerships are essential for harnessing advanced technology to benefit humanity and achieve shared strategic goals.


India Chases 'DeepSeek Moment' with Homegrown AI

A handout photo made available by the Press Information Bureau (PIB) of Indian Prime Minister Narendra Modi speaking with global leaders at the AI Impact Summit 2026 at Bharat Mandapam in New Delhi, India, 19 February 2026. (EPA/Press Information Bureau handout)

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underpinning big dreams of becoming a global AI power.

But analysts said the country was unlikely to have a "DeepSeek moment" -- the sort of boom China had last year with a high-performance, low-cost chatbot -- any time soon, AFP reported.

Still, building custom AI tools could bring benefits to the world's most populous nation.

At the AI Impact Summit, Prime Minister Narendra Modi lauded new Indian AI models, along with other examples of the country's rising profile in the field.

"All the solutions that have been presented here demonstrate the power of 'Made in India' and India's innovative qualities," Modi said Thursday.

One of the startups generating buzz at the five-day summit was Sarvam AI, which this week released two large language models it says were trained from scratch in India.

Its models are optimized to work across 22 Indian languages, says the company, which received government-subsidized access to advanced computer processors.

The five-day summit, which wraps up Friday, is the fourth annual international meeting to discuss the risks and rewards of the fast-growing AI sector.

It is the largest yet and the first in a developing country, with Indian businesses striking deals with US tech giants to build large-scale data center infrastructure to help train and run AI systems.

On Friday, Abu Dhabi-based tech group G42 said the United Arab Emirates would deploy an AI supercomputer system in India, in a project "designed to lower barriers to AI innovation".

So-called sovereign AI has become a priority for many countries hoping to reduce dependence on US and Chinese platforms while ensuring that systems respect local regulations, including on data privacy.

AI models that succeed in India "can be deployed all over the world", Modi said on Thursday.

But experts said the sheer computational might of the United States would be hard to match.

"Despite the headline pledges, we don't expect India to emerge as a frontier AI innovation hub in the near term," said Reema Bhattacharya, head of Asia research at risk intelligence company Verisk Maplecroft.

"Its more realistic trajectory is to become the world's largest AI adoption market, embedding AI at scale through digital public infrastructure and cost-efficient applications," she said.

Another Indian company that drew attention with product debuts this week was the Bengaluru-based Gnani.ai, which introduced its Vachana speech models at the summit.

Trained on more than a million hours of audio, Vachana models generate natural-sounding voices in Indian languages that can process customer interactions and allow people to interact with digital services out loud.

Job disruption and redundancies, including in India's huge call center industry, have been a key focus of discussions at the Delhi summit.

Prihesh Ratnayake, head of AI initiatives at think-tank Factum, told AFP that the new Indian AI models were "not really meant to be global".

"They're India-specific models, and hopefully we'll see their impact over the coming year," he said.

"Why does India need to build for the global scale? India itself is the biggest market."
And Nanubala Gnana Sai at the Cambridge AI Safety Hub said that homegrown models could bring other benefits.

Existing models, even those developed in China, "have intrinsic bias towards Western values, culture and ethos -- as a product of being trained heavily on that consensus", Sai told AFP.

India already has some major strengths, including "technology diffusion, eager talent pool and cheap labor", and dedicated efforts can help startups pivot to artificial intelligence, he said.

"The end-product may not 'rival' ChatGPT or DeepSeek on benchmarks, but will provide leverage for the Global South to have its own stand in an increasingly polarized world."


Report: Nvidia Nears Deal for Scaled-down Investment in OpenAI

Nvidia chief executive Jensen Huang has insisted that the AI chip powerhouse is committed to a big investment in ChatGPT-maker OpenAI. (Lionel BONAVENTURE / AFP)

Nvidia is on the cusp of investing $30 billion in OpenAI, scaling back a plan to pump $100 billion into the ChatGPT maker, the Financial Times reported Thursday.

The AI-chip powerhouse will be part of OpenAI's new funding round under an agreement that could be concluded as early as this weekend, according to the newspaper, which cited unnamed sources close to the matter.

Nvidia declined to comment on the report.

Nvidia chief executive Jensen Huang has insisted that the US tech giant will make a "huge" investment in OpenAI and dismissed as "nonsense" reports that he is unhappy with the generative AI star.

Huang made the remarks late in January after the Wall Street Journal reported that Nvidia's plan to invest up to $100 billion in OpenAI had been put on ice.

Nvidia announced the plan in September, with the investment helping OpenAI build more infrastructure for next-generation artificial intelligence.

The funding round is reported to value OpenAI at some $850 billion.

Huang told journalists that the notion of Nvidia having doubts about a huge investment in OpenAI was "complete nonsense."

Huang insisted that Nvidia was going ahead with its investment in OpenAI, describing it as "one of the most consequential companies of our time".

"Sam is closing the round, and we will absolutely be involved in the round," Huang said, referring to OpenAI chief executive Sam Altman.

"We will invest a great deal of money."

Nvidia has become the go-to supplier of the processors needed for training and operating the large language models (LLMs) behind chatbots like OpenAI's ChatGPT or Google Gemini.

LLM developers like OpenAI are directing much of the mammoth investment they have received into Nvidia's products, rushing to build GPU-stuffed data centers to serve an anticipated flood of demand for AI services.

The AI rush, and its frenzy of investment in giant data centers and the massive purchase of energy-intensive chips, continues despite signs of concern in the markets.