OpenAI Names Members to Its Nonprofit Commission 

The OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

ChatGPT maker OpenAI on Tuesday named members to its newly formed nonprofit commission, which will guide the company's philanthropic efforts.

Microsoft-backed OpenAI in December outlined a plan to revamp its corporate structure, saying it would create a public benefit corporation to manage its growing business and ease the restrictions imposed by its existing nonprofit parent.

OpenAI, which last month said it would raise up to $40 billion in a new funding round valuing the company at $300 billion, named Daniel Zingale, who has held senior leadership roles across California, as the commission's convener.

Dolores Huerta, Monica Lozano, Robert Ross and Jack Oliver, all of whom have prior experience with community-based organizations, have been appointed as advisors to the new commission, formed earlier this month.

"The advisors will receive learnings and input from the community on how OpenAI's philanthropy can address long-term systemic issues, while also considering both the promise and risks of AI," OpenAI said in a blog post.

They will advise OpenAI's board on directing community engagement processes, drawing insights from people and organizations involved in health, science, education, and public services. The commission is expected to submit its findings to the board within 90 days.

Last year, Elon Musk, who co-founded OpenAI in 2015, sued the AI startup and its CEO, Sam Altman. Musk accused OpenAI of straying from its original mission of developing AI for the benefit of humanity and focusing on corporate profits instead.

A dozen former OpenAI employees last week filed a legal brief backing Musk's lawsuit.

OpenAI countersued Musk last week, citing a pattern of harassment and asking a federal judge to bar him from any "further unlawful and unfair action" against the company in the court case over the future structure of the firm that helped launch the AI revolution.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs in X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.