As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. Reuters TV, via Reuters

Deepfake software, which allows people to swap faces, voices and other characteristics to create digital forgeries, has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”
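As a rough illustration of what such labeling could look like in practice, the short Python sketch below stamps a visible “AI-generated” notice onto an image using the Pillow library. The file names and label text are hypothetical placeholders, not anything prescribed by the Chinese rules.

# Illustrative only: one way a deepfake service might add a visible
# synthetic-content label to an image. Paths and label text are hypothetical.
from PIL import Image, ImageDraw

def label_synthetic_image(in_path, out_path, text="AI-generated"):
    """Overlay a simple visible label in the bottom-right corner of an image."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Measure the label so it can be anchored to the bottom-right corner.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    w, h = right - left, bottom - top
    x, y = img.width - w - 10, img.height - h - 10
    draw.rectangle((x - 4, y - 4, x + w + 4, y + h + 4), fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    img.save(out_path)

# Example usage with a hypothetical file:
label_synthetic_image("generated_face.png", "generated_face_labeled.png")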

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry is still unable to detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encourage folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being both overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that hamstrings users trying to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.
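The general idea behind such authentication efforts is that a publisher commits to a digest of the original file, and anyone can later check whether a copy still matches it. The sketch below illustrates only that concept, using a stand-alone HMAC in Python; it is not the scheme Microsoft or Adobe actually use (real provenance systems embed cryptographically signed metadata in the file itself), and the key handling is purely hypothetical.

# Conceptual sketch of media authentication: sign a digest of the original file,
# then verify that a copy still matches it. The shared key here is hypothetical;
# real systems rely on public-key signatures and embedded provenance metadata.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical placeholder

def sign_media(path):
    """Return a hex HMAC over the file's SHA-256 digest."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(path, published_tag):
    """True only if the file is byte-for-byte what the publisher signed."""
    return hmac.compare_digest(sign_media(path), published_tag)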

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



New Process for Stable, Long-Lasting Batteries

The image shows a test cell used to fabricate and test the all-solid-state battery developed at PSI. (Paul Scherrer Institute PSI/Mahir Dzambegovic)

Researchers at the Paul Scherrer Institute PSI have achieved a breakthrough on the path to practical application of lithium metal all-solid-state batteries.

The team expects the next generation of batteries to store more energy, operate more safely, and charge faster than conventional lithium-ion batteries.

The team has reported these results in the journal Advanced Science.

All-solid-state batteries are considered a promising solution for electromobility, mobile electronics, and stationary energy storage – in part because they do not require flammable liquid electrolytes and therefore are inherently safer than conventional lithium-ion batteries.

Two key problems, however, stand in the way of market readiness: On the one hand, the formation of lithium dendrites at the anode remains a critical point.

On the other hand, an electrochemical instability – at the interface between the lithium metal anode and the solid electrolyte – can impair the battery’s long-term performance and reliability.

To overcome these two obstacles, the team led by Mario El Kazzi, head of the Battery Materials and Diagnostics group at the Paul Scherrer Institute PSI, developed a new production process:

“We combined two approaches that, together, both densify the electrolyte and stabilize the interface with the lithium,” the scientist explained.

Central to the PSI study is the argyrodite-type LPSCl, a sulphide-based solid electrolyte made of lithium, phosphorus, sulphur, and chlorine. The mineral exhibits high lithium-ion conductivity, enabling rapid ion transport within the battery – a crucial prerequisite for high performance and efficient charging processes.

To densify the argyrodite into a homogeneous electrolyte, El Kazzi and his team also applied heat, but in a gentler way: instead of the classic sintering process, they compressed the mineral under moderate pressure at a temperature of only about 80 degrees Celsius.

The result is a compact, dense microstructure that resists the penetration of lithium dendrites. In this form, the solid electrolyte is already ideally suited for rapid lithium-ion transport.

To ensure reliable operation even at high current densities, such as those encountered during rapid charging and discharging, the all-solid-state cell required further modification.

For this purpose, a coating of lithium fluoride (LiF), only 65 nanometres thick, was evaporated under vacuum and applied uniformly to the lithium surface – serving as an ultra-thin passivation layer at the interface between the anode and the solid electrolyte.

In laboratory tests with button cells, the battery demonstrated extraordinary performance under demanding conditions.

“Its cycle stability at high voltage was remarkable,” said doctoral candidate Jinsong Zhang, lead author of the study.

After 1,500 charge and discharge cycles, the cell still retained approximately 75% of its original capacity.

This means that three-quarters of the lithium ions were still migrating from the cathode to the anode. “An outstanding result. These values are among the best reported to date.”
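For a rough sense of what that figure implies, the short calculation below converts 75% retention over 1,500 cycles into an average per-cycle fade, assuming purely for illustration that the loss is spread uniformly across cycles.

# Back-of-the-envelope check: 75% capacity after 1,500 cycles corresponds to an
# average per-cycle retention of 0.75**(1/1500), i.e. roughly 0.02% loss per cycle,
# under the illustrative assumption of uniform fade.
retention_total = 0.75
cycles = 1500

per_cycle_retention = retention_total ** (1 / cycles)
per_cycle_fade_pct = (1 - per_cycle_retention) * 100

print(f"average retention per cycle: {per_cycle_retention:.5f}")      # ~0.99981
print(f"average capacity fade per cycle: {per_cycle_fade_pct:.4f}%")  # ~0.0192%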

Zhang therefore sees a good chance that all-solid-state batteries could soon surpass conventional lithium-ion batteries with liquid electrolyte in terms of energy density and durability.

Thus El Kazzi and his team have demonstrated for the first time that combining mild sintering of the solid electrolyte with a thin passivation layer on the lithium anode effectively suppresses both dendrite formation and interfacial instability.

This combined solution marks an important advance for all-solid-state battery research – not least because it offers ecological and economic advantages: Due to the low temperatures, the process saves energy and therefore costs.

“Our approach is a practical solution for the industrial production of argyrodite-based all-solid-state batteries,” said El Kazzi. “A few more adjustments – and we could get started.”


Meta Urges Australia to Change Teen Social Media Ban

Meta has called for Australia's social media ban for under-16s to target app stores. Saeed KHAN / AFP

Tech giant Meta urged Australia on Monday to rethink its world-first social media ban for under-16s, while reporting that it has blocked more than 544,000 accounts under the new law.

Australia has required big platforms including Meta, TikTok and YouTube to stop underage users from holding accounts since the legislation came into force on December 10 last year.

Companies face fines of Aus $49.5 million (US$33 million) if they fail to take "reasonable steps" to comply.

Billionaire Mark Zuckerberg's Meta said it had removed 331,000 underage accounts from Instagram, 173,000 from Facebook, and 40,000 from Threads in the week to December 11.
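The per-platform figures add up to the headline total, as the quick check below shows.

# Sanity check: the per-platform removals Meta reported sum to the 544,000 total.
removed = {"Instagram": 331_000, "Facebook": 173_000, "Threads": 40_000}
print(sum(removed.values()))  # 544000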

The company said it was committed to complying with the law.

"That said, we call on the Australian government to engage with industry constructively to find a better way forward, such as incentivizing all of industry to raise the standard in providing safe, privacy-preserving, age appropriate experiences online, instead of blanket bans," it said in statement.

Meta renewed an earlier call for app stores to be required to verify people's ages and get parental approval before under-16s can download an app.

This was the only way to avoid a "whack-a-mole" race to stop teens migrating to new apps to avoid the ban, the company said.

The government said it was holding social media companies to account for the harm they cause young Australians.

"Platforms like Meta collect a huge amount of data on their users for commercial purposes. They can and must use that information to comply with Australian law and ensure people under 16 are not on their platforms," a government spokesperson said.

Meta said parents and experts were worried about the ban isolating young people from online communities, and driving some to less regulated apps and darker corners of the internet.

Initial impacts of the legislation "suggest it is not meeting its objectives of increasing the safety and well-being of young Australians", it said.

While raising concern over the lack of an industry standard for determining age online, Meta said its compliance with the Australian law would be a "multilayered process".

Since the ban, the California-based firm said it had helped found the OpenAge Initiative, a non-profit group that has launched age-verification tools called AgeKeys to be used with participating platforms.


China Is Closing in on US Technology Lead Despite Constraints, AI Researchers Say

Visitors look at robots on display at robotics company Unitree's first retail store in Beijing on January 9, 2026. (AFP)

China can narrow its technological gap with the US, driven by growing risk-taking and innovation, though a lack of advanced chipmaking tools is hobbling the sector, the country's leading artificial intelligence researchers said on Saturday.

China's so-called "AI tiger" startups MiniMax and Zhipu AI had strong debuts on the Hong Kong Stock Exchange this week, reflecting growing confidence in the sector as Beijing fast-tracks AI and chip listings to bolster domestic alternatives to advanced US technology.

Yao Shunyu, a former senior researcher at ChatGPT maker OpenAI who was named technology giant Tencent's chief AI scientist in December, said there was a high likelihood of a Chinese firm becoming the world's leading AI company in the next three to five years but said the lack of advanced chipmaking machines was the main technical hurdle.

"Currently, we have a significant advantage in electricity and infrastructure. The main bottlenecks are production capacity, including lithography machines, and the software ecosystem," Yao said at an AI conference in Beijing.

China has completed a working prototype of an extreme-ultraviolet lithography machine potentially capable of producing cutting-edge semiconductor chips that rival the West's, Reuters reported last month. However, the machine has not yet produced working chips and may not do so until 2030, people with knowledge of the matter told Reuters.

MIND THE INVESTMENT GAP

Yao and other Chinese industry leaders at the Beijing conference on Saturday also acknowledged that the US maintains an advantage in computing power due to its hefty investments in infrastructure.

"The US computer infrastructure is likely one to two orders of magnitude larger than ours. But I see that whether it's OpenAI or other platforms, they're investing heavily in next-generation research," said Lin Junyang, technical lead for Alibaba's flagship Qwen large language model.

"We, ⁠on the other hand, are relatively strapped for cash; delivery alone likely consumes the majority of our computer infrastructure," Lin said during a panel discussion at the AGI-Next Frontier Summit held by the Beijing Key Laboratory of Foundational Models at Tsinghua University.

Lin said China's limited resources have spurred its researchers to be innovative, particularly through algorithm-hardware co-design, which enables AI firms to run large models on smaller, inexpensive hardware.

Tang Jie, founder of Zhipu AI, which raised HK$4.35 billion in its IPO, also highlighted the willingness of younger Chinese AI entrepreneurs to embrace high-risk ventures - a trait traditionally associated with Silicon Valley - as a positive development.

"I think if we can improve this environment, ‌allowing more time for these risk-taking, intelligent individuals to engage in innovative endeavors ... this is something our government and the country can help improve," said Tang.