OpenAI Assigns New Project to AI Safety Leader Madry in Revamp

The OpenAI logo is displayed on a cell phone with an image on a computer monitor generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. (AP)

OpenAI Chief Executive Sam Altman said on Tuesday that the ChatGPT maker's AI safety leader, Aleksander Madry, was working on a new research project as the startup reorganizes its preparedness team.

"Aleksander is working on a new and v(ery) important research project," Altman said in a post on X, adding that OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team in the meantime.

The preparedness team helps evaluate the artificial general intelligence readiness of the company's AI models, an OpenAI spokesperson said in a statement, adding that Madry will take on a bigger role within the research organization following the move.

Madry did not immediately respond to requests for comment, according to Reuters.

"Joaquin and Lilian are taking over the preparedness team as part of unifying our safety work," Altman wrote in the post.

The Information, which was the first to report Madry's move, had said researcher Tejal Patwardhan will manage much of the work of the team.

The moves come as OpenAI's chatbots, which can engage in human-like conversations and create videos and images from text prompts, have grown increasingly powerful and stirred safety concerns.

Earlier this year, the Microsoft-backed company formed a Safety and Security Committee led by board members, including Altman, in the run-up to training its next artificial intelligence model.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, including generated X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.