OpenAI CEO Says Possible to Get Regulation Wrong, but Should Not Fear it

FILE PHOTO: A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023. REUTERS/Dado Ruvic/Illustration/File Photo


The CEO of ChatGPT maker OpenAI said on Monday that it is possible to get regulation wrong, but that regulation is important and should not be feared, amid global concerns about rapid advances in artificial intelligence (AI).

Many countries are planning AI regulation, and Britain is hosting a global AI safety summit in November, focusing on understanding the risks posed by the frontier technology and how national and international frameworks could be supported.
Sam Altman, CEO and public face of the Microsoft-backed startup, said during a visit to Taipei that while he was not especially worried about government over-regulation, it could happen.
"I also worry about under-regulation. People in our industry bash regulation a lot. We've been calling for regulation, but only of the most powerful systems," he said.
"Models that are like 10,000 times the power of GPT4, models that are like as smart as human civilization, whatever, those probably deserve some regulation," added Altman, speaking at an AI event hosted by the charitable foundation of Terry Gou, the founder of major Apple supplier Foxconn.
According to Reuters, Altman said that in the tech industry there is a "reflexive anti-regulation thing".
"Regulation has been not a pure good, but it's been good in a lot of ways. I don't want to have to make an opinion about every time I step on an airplane how safe it's going to be, but I trust that they're pretty safe and I think regulation has been a positive good there," he said.
"It is possible to get regulation wrong, but I don't think we sit around and fear it. In fact we think some version of it is important."



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo


OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as using its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs, with X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.