As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. Reuters TV, via Reuters

Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry is still unable to reliably detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encouraging folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being at once overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that hamstrings users trying to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.
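The watermarks at issue in these rules and industry initiatives range from visible labels to marks hidden in the media itself. A minimal, purely illustrative sketch of one of the simplest hidden schemes — a fragile least-significant-bit (LSB) watermark — shows both how such a label can be embedded and why, as the article notes, it is easy for a determined creator to strip. Real provenance systems (such as C2PA-style signed metadata) are considerably more robust; all function names and the `b"AI-GEN"` tag below are invented for this example.

```python
def embed_watermark(pixels: bytes, tag: bytes) -> bytes:
    """Hide `tag` in the lowest bit of each byte of `pixels` (MSB-first)."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for tag")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)


def extract_watermark(pixels: bytes, tag_len: int) -> bytes:
    """Read back `tag_len` bytes from the low bits of the carrier."""
    bits = [p & 1 for p in pixels[: tag_len * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8 : b * 8 + 8]))
        for b in range(tag_len)
    )


carrier = bytes(range(256)) * 4           # stand-in for raw image bytes
stamped = embed_watermark(carrier, b"AI-GEN")
assert extract_watermark(stamped, 6) == b"AI-GEN"
# Re-encoding, resizing or otherwise re-compressing the image scrambles
# these low-order bits, destroying the mark -- the cat-and-mouse dynamic
# the detection initiatives below are trying to escape.
```

The fragility is the point of the illustration: any mandate built on marks like this depends on abusers not taking the trivial step of re-encoding the file.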

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



India Eyes $200B in Data Center Investments as It Ramps Up Its AI Hub Ambitions

FILE - Google CEO Sundar Pichai, right, interacts with India's Minister for Information and Technology Ashwini Vaishnaw during the Google for India 2022 event in New Delhi, Dec. 19, 2022. (AP Photo/Manish Swarup, File)

India is hoping to garner as much as $200 billion in investments for data centers over the next few years as it scales up its ambitions to become a hub for artificial intelligence, the country’s minister for electronics and information technology said Tuesday.

The investments underscore the reliance of tech titans on India as a key technology and talent base in the global race for AI dominance. For New Delhi, they bring in high-value infrastructure and foreign capital at a scale that can accelerate its digital transformation ambitions.

The push comes as governments worldwide race to harness AI's economic potential while grappling with job disruption, regulation and the growing concentration of computing power in a few rich countries and companies.

“Today, India is being seen as a trusted AI partner to the Global South nations seeking open, affordable and development-focused solutions,” Ashwini Vaishnaw told The Associated Press in an email interview, as New Delhi hosts a major AI Impact Summit this week drawing participation from at least 20 global leaders and a who’s who of the tech industry.

In October, Google announced a $15 billion investment plan in India over the next five years to establish its first artificial intelligence hub in the South Asian country. Microsoft followed two months later with its biggest-ever Asia investment announcement of $17.5 billion to advance India’s cloud and artificial intelligence infrastructure over the next four years.

Amazon, too, has committed $35 billion in investment in India by 2030 to expand its business, specifically targeting AI-driven digitization. These commitments are part of the $200 billion in pipeline investments that New Delhi hopes will flow in.

Vaishnaw said India’s pitch is that artificial intelligence must deliver measurable impacts at scale rather than remain an elite technology.

“A trusted AI ecosystem will attract investment and accelerate adoption,” he said, adding that a central pillar of India’s strategy to capitalize on the use of AI is building infrastructure.

The government recently announced a long-term tax holiday for data centers as it hopes to provide policy certainty and attract global capital.

Vaishnaw said the government has already operationalized a shared computing facility with more than 38,000 graphics processing units, or GPUs, allowing startups, researchers and public institutions to access high-end computing without heavy upfront costs.

“AI must not become exclusive. It must remain widely accessible,” he said.

Alongside the infrastructure drive, India is backing the development of sovereign foundational AI models trained on Indian languages and local contexts. Some of these models meet global benchmarks and in certain tasks rival widely used large language models, Vaishnaw said.

India is also seeking a larger role in shaping how AI is built and deployed globally as the country doesn’t see itself strictly as a “rule maker or rule taker,” according to Vaishnaw, but an active participant in setting practical, workable norms while expanding its AI services footprint worldwide.

“India will become a major provider of AI services in the near future,” he said, describing a strategy that is “self-reliant yet globally integrated” across applications, models, chips, infrastructure and energy.

Investor confidence is another focus area for New Delhi as global tech funding becomes more cautious.

Vaishnaw said the technology push is backed by execution, pointing to the Indian government's AI Mission program, which emphasizes sector-specific solutions through public-private partnerships.

The government is also betting on reskilling its workforce as global concerns grow that AI could disrupt white-collar and technology jobs. New Delhi is scaling AI education across universities, skilling programs and online platforms to build a large AI-ready talent pool, the minister said.

Widespread 5G connectivity across the country and a young, tech-savvy population are expected to help with the adoption of AI at a faster pace, he added.

Balancing innovation with safeguards remains a challenge though, as AI expands into sensitive sectors such as governance, health care and finance.

Vaishnaw outlined a fourfold strategy that includes implementable global frameworks, trusted AI infrastructure, regulation of harmful misinformation and stronger human and technical capacity to manage the technology's impact.

“The future of AI should be inclusive, distributed and development-focused,” he said.


Report: SpaceX Competing to Produce Autonomous Drone Tech for Pentagon 

The SpaceX logo is seen in this illustration taken March 10, 2025. (Reuters)

Elon Musk's SpaceX and its wholly-owned subsidiary xAI are competing in a secret new Pentagon contest to produce voice-controlled, autonomous drone swarming technology, Bloomberg News reported on Monday, citing people familiar with the matter.

SpaceX, xAI and the Pentagon's defense innovation unit did not immediately respond to requests for comment. Reuters could not independently verify the report.

Texas-based SpaceX recently acquired xAI in a deal that combined Musk's major space and defense contractor with the billionaire entrepreneur's artificial intelligence startup. It occurred ahead of SpaceX's planned initial public offering this year.

Musk's companies are reportedly among a select few chosen to participate in the $100 million prize challenge initiated in January, according to the Bloomberg report.

The six-month competition aims to produce advanced swarming technology that can translate voice commands into digital instructions and run multiple drones, the report said.

Musk was among a group of AI and robotics researchers who wrote an open letter in 2015 that advocated a global ban on “offensive autonomous weapons,” arguing against making “new tools for killing people.”

The US also has been seeking safe and cost-effective ways to neutralize drones, particularly around airports and large sporting events - a concern that has become more urgent ahead of the FIFA World Cup and America250 anniversary celebrations this summer.

The US military, along with its allies, is now racing to deploy so-called “loyal wingman” drones, AI-powered aircraft designed to integrate with manned aircraft and anti-drone systems to neutralize enemy drones.

In June 2025, US President Donald Trump issued the executive order “Unleashing American Drone Dominance,” which accelerated the development and commercialization of drone and AI technologies.


SVC Develops AI Intelligence Platform to Strengthen Private Capital Ecosystem

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights - SPA

Saudi Venture Capital Company (SVC) announced the launch of Aian, a proprietary intelligence platform developed in-house using Saudi national expertise. The platform is intended to enhance SVC's institutional role in developing the Kingdom's private capital ecosystem and to support its mandate as a market maker guided by data-driven growth principles.

According to a press release issued by the SVC today, Aian is a custom-built AI-powered market intelligence capability that transforms SVC’s accumulated institutional expertise and detailed private market data into structured, actionable insights on market dynamics, sector evolution, and capital formation. The platform converts institutional memory into compounding intelligence, enabling decisions that integrate both current market signals and long-term historical trends, SPA reported.

Deputy CEO and Chief Investment Officer Nora Alsarhan stated that as Saudi Arabia’s private capital market expands, clarity, transparency, and data integrity become as critical as capital itself. She noted that Aian represents a new layer of national market infrastructure, strengthening institutional confidence, enabling evidence-based decision-making, and supporting sustainable growth.

By transforming data into actionable intelligence, she said, the platform reinforces the Kingdom’s position as a leading regional private capital hub under Vision 2030.

She added that market making extends beyond capital deployment to shaping the conditions under which capital flows efficiently, emphasizing that the next phase of market development will be driven by intelligence and analytical insight alongside investment.

Through Aian, SVC is building the knowledge backbone of Saudi Arabia’s private capital ecosystem, enabling clearer visibility, greater precision in decision-making, and capital formation guided by insight rather than assumption.

Chief Strategy Officer Athary Almubarak said that in private capital markets, access to reliable insight increasingly represents the primary constraint, particularly in emerging and fast-scaling markets where disclosures vary and institutional knowledge is fragmented.

She explained that for development-focused investment institutions, inconsistent data presents a structural challenge that directly impacts capital allocation efficiency and the ability to crowd in private investment at scale.

She noted that SVC was established to address such market frictions and that, as a government-backed investor with an explicit market-making mandate, its role extends beyond financing to building the enabling environment in which private capital can grow sustainably.

By integrating SVC’s proprietary portfolio data with selected external market sources, Aian enables continuous consolidation and validation of market activity, producing a dynamic representation of capital deployment over time rather than relying solely on static reporting.

The platform offers customizable analytical dashboards that deliver frequent updates and predictive insights, enabling SVC to identify priority market gaps, recalibrate capital allocation, design targeted ecosystem interventions, and anchor policy dialogue in evidence.

The release added that Aian also features predictive analytics capabilities that anticipate upcoming funding activity, including projected investment rounds and estimated ticket sizes. In addition, it incorporates institutional benchmarking tools that enable structured comparisons across peers, sectors, and interventions, supporting more precise, data-driven ecosystem development.