OpenAI CEO Warns ‘Societal Misalignments’ Could Make AI Dangerous 

OpenAI CEO Sam Altman (on screen) speaks via video call with Omar al-Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, during the World Government Summit in Dubai on February 13, 2024. (AFP)

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

“There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

However, Altman stressed that the AI industry, including companies like OpenAI, shouldn't be in the driver's seat when it comes to making the regulations that govern it.

“We’re still in the stage of a lot of discussion. So, there’s you know, everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK,” Altman said. “I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal giving OpenAI access to its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI’s rapid commercialization — and the fears over what may come from the new technology.

He said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

“I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen,” Altman said. “So, give us some time. But I will say I think in a few more years it’ll be much better than it is now. And in a decade, it should be pretty remarkable.”



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs, with X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.