White House Pushes Tech Industry to Shut Down Market for Abusive AI Deepfakes

Arati Prabhakar, left photo, Director of the White House Office of Science and Technology Policy, and Jennifer Klein, Director of the White House Gender Policy Council, are shown in 2023 file photos. Klein and Prabhakar are co-authors of a Thursday announcement calling on the tech industry and financial institutions to commit to new measures to curb the creation of AI-generated nonconsensual sexual imagery. (AP Photo, file)

President Joe Biden's administration is pushing the tech industry and financial institutions to shut down a growing market of abusive sexual images made with artificial intelligence technology.

New generative AI tools have made it easy to transform someone's likeness into a sexually explicit AI deepfake and share those realistic images across chatrooms or social media. The victims — be they celebrities or children — have little recourse to stop it, The AP reported.

The White House is putting out a call Thursday looking for voluntary cooperation from companies in the absence of federal legislation. By committing to a set of specific measures, officials hope the private sector can curb the creation, spread and monetization of such nonconsensual AI images, including explicit images of children.

“As generative AI broke on the scene, everyone was speculating about where the first real harms would come. And I think we have the answer,” said Biden's chief science adviser Arati Prabhakar, director of the White House's Office of Science and Technology Policy.

She described to The Associated Press a “phenomenal acceleration” of nonconsensual imagery fueled by AI tools and largely targeting women and girls in a way that can upend their lives.

“We’ve seen an acceleration because of generative AI that’s moving really fast. And the fastest thing that can happen is for companies to step up and take responsibility.”

A document shared with AP ahead of its Thursday release calls for action from not just AI developers but payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers — namely Apple and Google — that control what makes it onto mobile app stores.

The private sector should step up to “disrupt the monetization” of image-based sexual abuse, restricting payment access particularly to sites that advertise explicit images of minors, the administration said.

Prabhakar said many payment platforms and financial institutions already say that they won't support the kinds of businesses promoting abusive imagery.

“But sometimes it’s not enforced; sometimes they don’t have those terms of service,” she said. “And so that’s an example of something that could be done much more rigorously.”

Cloud service providers and mobile app stores could also "curb web services and mobile applications that are marketed for the purpose of creating or altering sexual images without individuals' consent," the document says.

And whether it is AI-generated or a real nude photo put on the internet, survivors should more easily be able to get online platforms to remove them.

The most widely known victim of deepfake images is Taylor Swift, whose ardent fanbase fought back in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its safeguards after some of the Swift images were traced to its AI visual design tool.

A growing number of schools in the US and elsewhere are also grappling with AI-generated deepfake photos depicting their students. In some cases, fellow teenagers were found to be creating AI-manipulated images and sharing them with classmates.



Blogs to Bluesky: Social Media Shifts Responses after 2004 Tsunami

Teuku Hafid Hududillah, 28, an officer with Indonesia's Meteorology, Climatology and Geophysics Agency (BMKG), shows the seismograph system that recorded the 9.1-magnitude quake behind the 2004 Indian Ocean tsunami, at the monitoring station in Aceh Besar, Aceh, Indonesia, December 23, 2024. (Reuters)

The world's deadliest tsunami hit nations around the Indian Ocean two decades ago before social media platforms flourished, but they have since transformed how we understand and respond to disasters -- from finding the missing to swift crowdfunding.

When a 9.1-magnitude quake caused a tsunami that smashed into coastal areas on December 26, 2004, killing more than 220,000 people, broadcasters, newspapers and wire agencies were the main media bringing news of the calamity to the world.

Yet in some places, the sheer scale took days to emerge.

Survivor Mark Oberle was holidaying in Thailand's Phuket when the giant waves hit Patong beach, and penned a blog post to fend off questions from family, friends and strangers in the days after the disaster.

"The first hints of the extent were from European visitors who got text messages from friends back home," said Oberle, adding people initially thought the quake was local and small, when its epicenter was actually near western Indonesia, hundreds of miles away.

"I wrote the blog because there were so many friends and family who wanted to know more. Plus, I was getting many queries from strangers. People were desperate for good news tales," said the US-based physician, who helped the injured.

The blog included images of cars ploughed into hotels, water-filled roads and locals fleeing on scooters because rumors produced "a stampede from the beach to higher ground".

Bloggers were named "People of the Year" by ABC News in 2004 because of the intimacy of first-hand accounts like those published in the days following the tsunami.

But today billions can follow major events in real-time on social media, enabling citizen journalism and assistance from afar, despite the real risk of rumor and misinformation.

During Spain's worst floods for decades in October, people voluntarily managed social media accounts to assist relatives trying to locate their missing loved ones.

After Türkiye's devastating earthquake last year, a 20-year-old student was rescued thanks to a post of his location while buried under the rubble.

- 'Fast picture' -

Two decades ago, the online social media landscape was vastly different.

Facebook was launched early in 2004 but was not yet widely used when the tsunami hit.

One of YouTube's founders reportedly said an inspiration for the platform's founding in early 2005 was an inability to find footage of the tsunami in its aftermath.

Some tsunami images were posted on photo site Flickr. But X, Instagram and Bluesky now allow for instant sharing.

Experts are clear that more information saves lives -- hours passed between the tremor off Indonesia and the giant waves that crashed into coastal areas of Sri Lanka, India and Thailand.

Daniel Aldrich, a professor at Northeastern University, conducted interviews in India's Tamil Nadu where many said they had no idea what a tsunami was and had no warnings in 2004.

"In India alone nearly 6,000 people were taken by surprise and drowned in that event," he said.

Mobile apps and online accounts now quickly publicize information about hospitals, evacuation routes or shelters.

"Social media would have provided an immediate way to help locate other survivors and get information," said Jeffrey Blevins, head of journalism at the University of Cincinnati.

Oberle also noted that "knowing what help was locally available... would have provided a clearer perspective of what to expect in the days to come".

- Citizen science -

Beyond emergency rescue, social media clips can also be a boon to understanding a disaster's cause.

When giant waves crashed into Indonesia's Aceh province, footage remained largely confined to handheld camcorders capturing the carnage.

Fast forward to 2018, when a quake and tsunami hit Indonesia's Palu city, killing more than 4,000 people: enough videos were taken on smartphones that scientists researching seismic activity were later able to use the clips to reconstruct the tsunami's path and the time between waves.

That piece of citizen science, carried out in 2020, used amateur videos to conclude the waves arrived so fast because of underwater landslides close to shore.

But it's not all good news.

Scholars warn that disinformation and rumors have also hindered disaster responses.

When Hurricane Helene struck North Carolina in September, relief efforts were disrupted as tensions between locals and emergency workers rose over unfounded rumors, including claims of a hidden, higher death toll and diverted aid.

Workers reportedly faced threats from armed local militias.

"This information was so malicious that FEMA (Federal Emergency Management Agency) had to withdraw its teams from the area," said Aldrich.

"Social media has absolutely altered the field of disaster response for the good and the bad."

Yet perhaps the biggest change -- the free flow of information to the vulnerable -- has been beneficial.

Laura Kong of the Honolulu-based International Tsunami Information Center recently recalled how "2004 was such a tragedy".

"Because... we might have known there was an event, but we didn't have a way to tell anyone."