As Deepfakes Flourish, Countries Struggle With Response

A face covered by a wireframe, which is used to create a deepfake image. Reuters TV, via Reuters
Deepfake technology — software that allows people to swap faces, voices and other characteristics to create digital forgeries — has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, to digitally “undress” more than 100,000 women on Telegram and to steal millions of dollars from companies by mimicking their executives’ voices on the phone.

In most of the world, the authorities can’t do much about it. Even as the software grows more sophisticated and accessible, few laws exist to manage its spread.

China hopes to be the exception. This month, the country adopted expansive rules requiring that manipulated material have the subject’s consent and bear digital signatures or watermarks, and that deepfake service providers offer ways to “refute rumors.”

But China faces the same hurdles that have stymied other efforts to govern deepfakes: The worst abusers of the technology tend to be the hardest to catch, operating anonymously, adapting quickly and sharing their synthetic creations through borderless online platforms. China’s move has also highlighted another reason that few countries have adopted rules: Many people worry that the government could use the rules to curtail free speech.

But simply by forging ahead with its mandates, tech experts said, Beijing could influence how other governments deal with the machine learning and artificial intelligence that power deepfake technology. With limited precedent in the field, lawmakers around the world are looking for test cases to mimic or reject.

“The A.I. scene is an interesting place for global politics, because countries are competing with one another on who’s going to set the tone,” said Ravit Dotan, a postdoctoral researcher who runs the Collaborative A.I. Responsibility Lab at the University of Pittsburgh. “We know that laws are coming, but we don’t know what they are yet, so there’s a lot of unpredictability.”

Deepfakes hold great promise in many industries. Last year, the Dutch police revived a 2003 cold case by creating a digital avatar of the 13-year-old murder victim and publicizing footage of him walking through a group of his family and friends in the present day. The technology is also used for parody and satire, for online shoppers trying on clothes in virtual fitting rooms, for dynamic museum dioramas and for actors hoping to speak multiple languages in international movie releases. Researchers at the M.I.T. Media Lab and UNICEF used similar techniques to study empathy by transforming images of North American and European cities into the battle-scarred landscapes caused by the Syrian war.

But problematic applications are also plentiful. Legal experts worry that deepfakes could be misused to erode trust in surveillance videos, body cameras and other evidence. (A doctored recording submitted in a British child custody case in 2019 appeared to show a parent making violent threats, according to the parent’s lawyer.) Digital forgeries could discredit or incite violence against police officers, or send them on wild goose chases. The Department of Homeland Security has also identified risks including cyberbullying, blackmail, stock manipulation and political instability.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” the European law enforcement agency Europol wrote in a report last year.

British officials last year cited threats such as a website that “virtually strips women naked” and that was visited 38 million times in the first eight months of 2021. But there and in the European Union, proposals to set guardrails for the technology have yet to become law.

Attempts in the United States to create a federal task force to examine deepfake technology have stalled. Representative Yvette D. Clarke, a New York Democrat, proposed a bill in 2019 and again in 2021 — the Defending Each and Every Person From False Appearances by Keeping Exploitation Subject to Accountability Act — that has yet to come to a vote. She said she planned to reintroduce the bill this year.

Ms. Clarke said her bill, which would require deepfakes to bear watermarks or identifying labels, was “a protective measure.” By contrast, she described the new Chinese rules as “more of a control mechanism.”

“Many of the sophisticated civil societies recognize how this can be weaponized and destructive,” she said, adding that the United States should be bolder in setting its own standards rather than trailing another front-runner.

“We don’t want the Chinese eating our lunch in the tech space at all,” Ms. Clarke said. “We want to be able to set the baseline for our expectations around the tech industry, around consumer protections in that space.”

But law enforcement officials have said the industry still cannot reliably detect deepfakes and struggles to manage malicious uses of the technology. A lawyer in California wrote in a law journal in 2021 that certain deepfake rules had “an almost insurmountable feasibility problem” and were “functionally unenforceable” because (usually anonymous) abusers can easily cover their tracks.

The rules that do exist in the United States are largely aimed at political or pornographic deepfakes. Marc Berman, a Democrat in California’s State Assembly who represents parts of Silicon Valley and has sponsored such legislation, said he was unaware of any efforts to enforce his laws via lawsuits or fines. But he said that, in deference to one of his laws, a deepfaking app had removed the ability to mimic President Donald J. Trump before the 2020 election.

Only a handful of other states, including New York, restrict deepfake pornography. While running for re-election in 2019, Houston’s mayor said a critical ad from a fellow candidate broke a Texas law that bans certain misleading political deepfakes.

“Half of the value is causing more people to be a little bit more skeptical about what they’re seeing on social media platforms and encouraging folks not to take everything at face value,” Mr. Berman said.

But even as technology experts, lawmakers and victims call for stronger protections, they also urge caution. Deepfake laws, they said, risk being at once overreaching and toothless. Forcing labels or disclaimers onto deepfakes designed as valid commentary on politics or culture could also make the content appear less trustworthy, they added.

Digital rights groups such as the Electronic Frontier Foundation are pushing legislators to relinquish deepfake policing to tech companies, or to use an existing legal framework that addresses issues such as fraud, copyright infringement, obscenity and defamation.

“That’s the best remedy against harms, rather than the governmental interference, which in its implementation is almost always going to capture material that is not harmful, that chills people from legitimate, productive speech,” said David Greene, a civil liberties lawyer for the Electronic Frontier Foundation.

Several months ago, Google began prohibiting people from using its Colaboratory platform, a data analysis tool, to train A.I. systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that hamstrings users trying to create nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to be misleading.

But laws or bans may struggle to contain a technology that is designed to continually adapt and improve. Last year, researchers from the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a set of videos to more than 3,000 test subjects and asked them to identify the ones that were manipulated (such as a deepfake of the climate activist Greta Thunberg disavowing the existence of climate change).

The group was wrong more than a third of the time. Even a subset of several dozen students studying machine learning at Carnegie Mellon University were wrong more than 20 percent of the time.

Initiatives from companies such as Microsoft and Adobe now try to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to outpace deepfake creators who often discover new ways to fix defects, remove watermarks and alter metadata to cover their tracks.

“There is a technological arms race between deepfake creators and deepfake detectors,” said Jared Mondschein, a physical scientist at RAND. “Until we start coming up with ways to better detect deepfakes, it’ll be really hard for any amount of legislation to have any teeth.”

The New York Times



OpenAI Starts Testing Ads in ChatGPT

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

OpenAI has begun placing ads in the basic versions of its ChatGPT chatbot, a bet that users will not mind the interruptions as the company seeks revenue as its costs soar.

"The test will be for logged-in adult users on the Free and Go subscription tiers" in the United States, OpenAI said Monday. The Go subscription costs $8 in the United States.

Only a small percentage of its nearly one billion users pay for its premium subscription services, which will remain ad-free.

"Ads do not influence the answers ChatGPT gives you, and we keep your conversations with ChatGPT private from advertisers," the company said.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds -- higher than any other private company. Some analysts expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

Its chief executive Sam Altman had long expressed his dislike for advertising, citing concerns that it could create distrust about ChatGPT's content.

His about-face drew a jab from its rival Anthropic over the weekend, which made its advertising debut during the Super Bowl with commercials saying its Claude chatbot would stay ad-free.


Social Media ‘Addicting the Brains of Children,’ Plaintiff’s Lawyer Argues in Landmark Trial

Teenagers pose for a photo while holding smartphones in front of a Meta logo in this illustration taken September 11, 2025. (Reuters)

Comparing social media platforms to casinos and addictive drugs, lawyer Mark Lanier delivered opening statements Monday in a landmark trial in Los Angeles that seeks to hold Instagram owner Meta and Google's YouTube responsible for harms to children who use their products.

Instagram's parent company Meta and Google's YouTube face claims that their platforms addict children through deliberate design choices that keep kids glued to their screens. TikTok and Snap, which were originally named in the lawsuit, settled for undisclosed sums.

Jurors got their first glimpse into what will be a lengthy trial characterized by dueling narratives from the plaintiffs and the two remaining defendants.

Meta lawyer Paul Schmidt spoke of the disagreement within the scientific community over social media addiction, with some researchers believing it doesn’t exist, or that addiction is not the most appropriate way to describe heavy social media use.

‘Addicting the brains of children’

Lanier, the plaintiff's lawyer, delivered lively first remarks where he said the case will be as “easy as ABC” — which stands for “addicting the brains of children.” He said Meta and Google, “two of the richest corporations in history,” have “engineered addiction in children’s brains.”

He presented jurors with a slew of internal emails, documents and studies conducted by Meta and YouTube, as well as YouTube’s parent company, Google. He emphasized the findings of a study Meta conducted called “Project Myst” in which they surveyed 1,000 teens and their parents about their social media use.

The two major findings, Lanier said, were that Meta knew children who had experienced “adverse events” like trauma and stress were particularly vulnerable to addiction, and that parental supervision and controls made little impact.

He also highlighted internal Google documents that likened some company products to a casino, and internal communication between Meta employees in which one person said Instagram is “like a drug” and they are “basically pushers.”

At the core of the Los Angeles case is a 20-year-old identified only by the initials “KGM,” whose case could determine how thousands of other, similar lawsuits against social media companies will play out. She and two other plaintiffs have been selected for bellwether trials — essentially test cases for both sides to see how their arguments play out before a jury.

Plaintiff grew up using YouTube, Instagram

KGM made a brief appearance after a break during Lanier’s statement and she will return to testify later in the trial. Lanier spent time describing KGM's childhood, focusing particularly on what her personality was like before she began using social media.

She started using YouTube at age 6 and Instagram at age 9, Lanier said. Before she graduated elementary school, she had posted 284 videos on YouTube.

The outcome of the trial could have profound effects on the companies' businesses and how they will handle children using their platforms.

Lanier said the companies’ lawyers will “try to blame the little girl and her parents for the trap they built,” referencing the plaintiff. She was a minor when she said she became addicted to social media, which she claims had a detrimental impact on her mental health.

Lanier said that despite the public position of Meta and YouTube being that they work to protect children, their internal documents show an entirely different position, with explicit references to young children being listed as their target audiences.

The attorney also drew comparisons between the social media companies and tobacco firms, citing internal communication between Meta employees who were concerned about the company’s lack of proactive action about the potential harm their platforms can have on children and teens.

“For a teenager, social validation is survival,” Lanier said. The defendants “engineered a feature that caters to a minor’s craving for social validation,” he added, speaking about “like” buttons and similar features.

Meta pushes back

In his opening statement representing Meta, Schmidt said the core question in the case is whether the platforms were a substantial factor in KGM’s mental health struggles. He spent much of his time going through the plaintiff’s health records, emphasizing that she had experienced many difficult circumstances in her childhood, including emotional abuse, body image issues and bullying.

Schmidt presented a clip from a video deposition from one of KGM‘s mental health providers, Dr. Thomas Suberman, who said social media was “not the through-line of what I recall being her main issues,” adding that her struggles seemed to largely stem from interpersonal conflicts and relationships.

He painted a picture — with KGM’s own text messages and testimony pointing to a volatile home life — of a particularly troubled relationship with her mother.

Schmidt acknowledged that many mental health professionals do believe social media addiction can exist, but said that three of KGM’s providers, all of whom believe that form of addiction exists, have never diagnosed her with it or treated her for it.

Schmidt stressed to the jurors that the case is not about whether social media is a good thing or whether teens spend too much time on their phones or whether the jurors like or dislike Meta, but whether social media was a substantial factor in KGM’s mental health struggles.

A reckoning for social media and youth harms

A slew of trials beginning this year seek to hold social media companies responsible for harming children's mental well-being. Executives, including Meta CEO Mark Zuckerberg, are expected to testify at the Los Angeles trial, which will last six to eight weeks.

Experts have drawn similarities to the Big Tobacco trials that led to a 1998 settlement requiring cigarette companies to pay billions in health care costs and restrict marketing targeting minors.

A separate trial in New Mexico, meanwhile, also kicked off with opening statements on Monday. In that trial, Meta is accused of failing to protect young users from sexual exploitation, following an undercover online investigation. Attorney General Raúl Torrez in late 2023 sued Meta and Zuckerberg, who was later dropped from the suit.

A federal bellwether trial beginning in June in Oakland, California, will be the first to represent school districts that have sued social media platforms over harms to children.

In addition, more than 40 state attorneys general have filed lawsuits against Meta, claiming it is harming young people and contributing to the youth mental health crisis by deliberately designing features on Instagram and Facebook that addict children to its platforms. Most of the attorneys general filed their lawsuits in federal court, but some sued in their respective state courts.

TikTok also faces similar lawsuits in more than a dozen states.


AI No Better Than Other Methods for Patients Seeking Medical Advice, Study Shows

AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration created on June 23, 2023. (Reuters)

Asking AI about medical symptoms does not help patients make better decisions about their health than other methods, such as a standard internet search, according to a new study published in Nature Medicine.

The authors said the study was important as people were increasingly turning to AI and chatbots for advice on their health, but without evidence that this was necessarily the best and safest approach.

Researchers led by the Oxford Internet Institute at the University of Oxford worked alongside a group of doctors to draw up 10 different medical scenarios, ranging from a common cold to a life-threatening hemorrhage causing bleeding on the brain.

When tested without human participants, three large language models – OpenAI's GPT-4o, Meta's Llama 3 and Cohere's Command R+ – identified the conditions in 94.9% of cases and chose the correct course of action, such as calling an ambulance or going to the doctor, in an average of 56.3% of cases. The companies did not respond to requests for comment.

'HUGE GAP' BETWEEN AI'S POTENTIAL AND ACTUAL PERFORMANCE

The researchers then recruited 1,298 participants in Britain to investigate the symptoms and decide their next step using either AI or their usual resources, such as an internet search, their own experience or the National Health Service website.

When the participants did this, relevant conditions were identified in less than 34.5% of cases, and the right course of action was chosen in less than 44.2% – no better than the control group using more traditional tools.

Adam Mahdi, co-author of the paper and associate professor at Oxford, said the study showed the “huge gap” between the potential of AI and the pitfalls when it was used by people.

“The knowledge may be in those bots; however, this knowledge doesn’t always translate when interacting with humans,” he said, meaning that more work was needed to identify why this was happening.

HUMANS OFTEN GIVING INCOMPLETE INFORMATION

The team studied around 30 of the interactions in detail and concluded that humans often provided incomplete or wrong information, while the LLMs also sometimes generated misleading or incorrect responses.

For example, one participant reporting the symptoms of a subarachnoid hemorrhage – a life-threatening condition causing bleeding on the brain – was correctly told by the AI to go to hospital after describing a stiff neck, light sensitivity and the "worst headache ever". Another described the same symptoms but called the headache merely "terrible", and was told to lie down in a darkened room.

The team now plans similar studies in different countries and languages, and over time, to test whether those factors affect AI's performance.

The study was supported by the data company Prolific, the German non-profit Dieter Schwarz Stiftung, and the UK and US governments.