White House: OpenAI, Google, Others Pledge to Watermark AI Content for Safety

This illustration picture shows icons of Google's AI (Artificial Intelligence) app BardAI (or ChatBot) (C-L), OpenAI's app ChatGPT (C-R) and other AI apps on a smartphone screen in Oslo, on July 12, 2023. (Photo by OLIVIER MORIN / AFP)

Top AI companies including OpenAI, Alphabet and Meta Platforms have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the Biden administration said.
The companies - which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft - pledged to thoroughly test systems before releasing them and share information about how to reduce risks and invest in cybersecurity.
The move is seen as a win for the Biden administration's effort to regulate the technology, which has experienced a boom in investment and consumer popularity, Reuters reported.
Since generative AI, which uses data to create new content like ChatGPT's human-sounding prose, became wildly popular this year, lawmakers around the world have been considering how to mitigate the dangers the emerging technology poses to national security and the economy.
US Senate Majority Leader Chuck Schumer in June called for "comprehensive legislation" to advance and ensure safeguards on artificial intelligence.
Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.
President Joe Biden, who is hosting executives from the seven companies at the White House on Friday, is also working on developing an executive order and bipartisan legislation on AI technology.
As part of the effort, the seven companies committed to developing a system to "watermark" all forms of AI-generated content, from text and images to audio and video, so that users will know when the technology has been used.
This watermark, embedded in the content in a technical manner, presumably will make it easier for users to spot deepfake images or audio that may, for example, depict violence that has not occurred, enable a more convincing scam, or distort a photo of a politician to put the person in an unflattering light.
It is unclear how the watermark will remain evident when the content is shared.
The companies also pledged to focus on protecting users' privacy as AI develops and on ensuring that the technology is free of bias and not used to discriminate against vulnerable groups.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media platform Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified such content as high-quality training data.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to scrape Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.