Adobe to Offer Free App to Help with Labeling AI-generated Content

Adobe logo is seen on smartphone in this illustration taken June 13, 2022. (Reuters)

Adobe said on Tuesday it will offer a free web-based app starting next year, aimed at helping the creators of images and videos get credit when their work is used in AI systems.

Since 2019, Adobe and other technology companies have been working on what they call "Content Credentials," a sort of digital stamp for photos and videos around the web that denotes how they were created.

TikTok, which is owned by China's ByteDance, has already said it will use Content Credentials to help label AI-generated content, Reuters reported.

San Jose, California-based Adobe said it will offer a free service to allow the creators of photos and videos to affix Content Credentials to their work.

In addition to indicating that they authored the content, creators can use the free app to signal that they do not want their work used to train AI systems, which ingest huge amounts of data, the company said.

The use of data to train AI systems has sparked legal fights across multiple industries, with publishers such as the New York Times suing OpenAI, while other firms have opted to strike licensing deals.

As yet, no large AI company has agreed to abide by Adobe's system for transparency. In a release, Adobe said it was "actively working to drive industry-wide adoption" of its standards.

"By offering creators a simple, free and easy way to attach Content Credentials to what they create, we are helping them preserve the integrity of their work, while enabling a new era of transparency and trust online," Scott Belsky, chief strategy officer and executive vice president for design and emerging products at Adobe, said in a statement.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as using its models to create and debug malware or to generate fake content for websites and social media.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, with AI-generated X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.