White House Pushes Tech Industry to Shut Down Market for Abusive AI Deepfakes

Arati Prabhakar, left photo, Director of the White House Office of Science and Technology Policy, and Jennifer Klein, Director of the White House Gender Policy Council, are shown in 2023 file photos. Klein and Prabhakar are co-authors of a Thursday announcement calling on the tech industry and financial institutions to commit to new measures to curb the creation of AI-generated nonconsensual sexual imagery. (AP Photo, file)

President Joe Biden's administration is pushing the tech industry and financial institutions to shut down a growing market of abusive sexual images made with artificial intelligence technology.

New generative AI tools have made it easy to transform someone's likeness into a sexually explicit AI deepfake and share those realistic images across chatrooms or social media. The victims — be they celebrities or children — have little recourse to stop it, The AP reported.

The White House is putting out a call Thursday looking for voluntary cooperation from companies in the absence of federal legislation. By committing to a set of specific measures, officials hope the private sector can curb the creation, spread and monetization of such nonconsensual AI images, including explicit images of children.

“As generative AI broke on the scene, everyone was speculating about where the first real harms would come. And I think we have the answer,” said Biden's chief science adviser Arati Prabhakar, director of the White House's Office of Science and Technology Policy.

She described to The Associated Press a “phenomenal acceleration” of nonconsensual imagery fueled by AI tools and largely targeting women and girls in a way that can upend their lives.

“We’ve seen an acceleration because of generative AI that’s moving really fast. And the fastest thing that can happen is for companies to step up and take responsibility.”

A document shared with AP ahead of its Thursday release calls for action from not just AI developers but payment processors, financial institutions, cloud computing providers, search engines and the gatekeepers — namely Apple and Google — that control what makes it onto mobile app stores.

The private sector should step up to “disrupt the monetization” of image-based sexual abuse, restricting payment access particularly to sites that advertise explicit images of minors, the administration said.

Prabhakar said many payment platforms and financial institutions already say that they won't support the kinds of businesses promoting abusive imagery.

“But sometimes it’s not enforced; sometimes they don’t have those terms of service,” she said. “And so that’s an example of something that could be done much more rigorously.”

Cloud service providers and mobile app stores could also “curb web services and mobile applications that are marketed for the purpose of creating or altering sexual images without individuals’ consent,” the document says.

And whether the images are AI-generated or real nude photos put on the internet, survivors should more easily be able to get online platforms to remove them.

The most widely known victim of deepfake images is Taylor Swift, whose ardent fanbase fought back in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft promised to strengthen its safeguards after some of the Swift images were traced to its AI visual design tool.

A growing number of schools in the US and elsewhere are also grappling with AI-generated deepfake photos depicting their students. In some cases, fellow teenagers were found to be creating AI-manipulated images and sharing them with classmates.



Mozilla Hit with Privacy Complaint Over Firefox User Tracking

FILE PHOTO: The Firefox logo is seen at a Mozilla stand during the Mobile World Congress in Barcelona, February 28, 2013. REUTERS/Albert Gea/File Photo

Vienna-based advocacy group NOYB said on Wednesday that it has filed a complaint with the Austrian data protection authority against Mozilla, accusing the Firefox browser maker of tracking user behavior on websites without consent.

NOYB (None Of Your Business), the digital rights group founded by privacy activist Max Schrems, said Mozilla has enabled a so-called “privacy preserving attribution” feature that turned the browser into a tracking tool for websites without directly telling its users, Reuters reported.

Mozilla has defended the feature, saying it wants to help websites understand how their ads perform without collecting data about individual people. By offering what it calls a non-invasive alternative to cross-site tracking, the company hopes to significantly reduce the collection of individual users’ information.

While this may be less invasive than unlimited tracking, it still interferes with user rights under the EU’s privacy laws, NOYB said, adding that Firefox turned the feature on by default.

“It’s a shame that an organization like Mozilla believes that users are too dumb to say yes or no,” said Felix Mikolasch, data protection lawyer at NOYB. “Users should be able to make a choice and the feature should have been turned off by default.”

Open-source Firefox was once a top browser choice among users due to its privacy features, but it now lags market leader Google’s Chrome, Apple’s Safari and Microsoft’s Edge with a low single-digit market share.

NOYB wants Mozilla to inform users about its data processing activities, switch to an opt-in system and delete all unlawfully processed data of the millions of affected users.

NOYB, which in June filed a complaint against Alphabet for allegedly tracking users of its Chrome browser, has filed hundreds of complaints against big tech companies, some of which have led to large fines.