OpenAI CEO Warns ‘Societal Misalignments’ Could Make AI Dangerous 

Sam Altman, OpenAI CEO (on screen) speaks in a videocall with Omar al-Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, during the World Government Summit in Dubai on February 13, 2024. (AFP)


The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

“There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

However, Altman stressed that AI companies like OpenAI shouldn't be in the driver's seat when it comes to making the regulations governing the industry.

“We’re still in the stage of a lot of discussion. So, there’s you know, everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK,” Altman said. “I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI’s rapid commercialization — and the fears over what may come from the new technology.

He said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

“I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen,” Altman said. “So, give us some time. But I will say I think in a few more years it’ll be much better than it is now. And in a decade, it should be pretty remarkable.”



Paris Olympics Expected to Face 4 Billion Cyber Incidents

A general view of the Olympic rings on the Eiffel Tower a day before the opening ceremony of the Paris 2024 Olympics, in Paris, France June 25, 2024. (Reuters)


As the Paris 2024 Olympic Games approach, cybersecurity officials are bracing for over 4 billion cyber incidents. They are setting up a new centralized cybersecurity center for the Games, supported by advanced intelligence teams and artificial intelligence (AI) models.

Eric Greffier, the technical director for Paris 2024 at Cisco France, told Asharq Al-Awsat that the Tokyo 2020 Games saw around 450 million cyber incidents. He added that the number of incidents expected for Paris is at least ten times higher, requiring a more efficient response.

Greffier explained that a single cybersecurity center allows for better coordination and a faster response to incidents.

This approach has proven effective in other areas, such as banking and the NFL, where his company also handles cybersecurity, he added.

The Extended Detection and Response (XDR) system is central to the company’s security strategy.

Greffier described it as a “comprehensive dashboard” that gathers data from various sources, links events, and automates threat responses.

It offers a complete view of cybersecurity and helps manage threats proactively, he affirmed.

The system covers all aspects of the Olympic Games’ digital security, from network and cloud protection to application security and end-user safety.

In cybersecurity, AI is vital for managing large amounts of data and spotting potential threats. Greffier noted that with 4 billion expected incidents, filtering out irrelevant data is crucial.

The Olympic cybersecurity center uses AI and machine learning to automate threat responses, letting analysts focus on real issues, he explained.

One example is a network analytics tool that monitors traffic to find unusual patterns.

Greffier said that by creating models of normal behavior, the system can detect anomalies that might indicate a potential attack. While this might generate false alarms, it helps ensure that unusual activity is flagged for further review.
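The baseline-and-flag approach Greffier describes can be illustrated with a minimal sketch. This is a hypothetical example, not the actual tooling used for the Games: it models "normal" traffic volume with a simple mean and standard deviation, then flags any reading that strays too far from that baseline for review.

```python
# Hypothetical sketch of anomaly detection via a baseline of normal behavior.
# Real network-analytics systems use far richer models; this shows the idea.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarize normal behavior as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Requests per minute observed during normal operation (made-up figures)
normal_traffic = [980, 1010, 995, 1005, 990, 1000, 1015, 985]
baseline = build_baseline(normal_traffic)

print(is_anomalous(1002, baseline))  # within the normal range -> False
print(is_anomalous(5000, baseline))  # sudden spike -> True
```

As the article notes, a crude threshold like this will produce false alarms; the trade-off is that genuinely unusual activity is surfaced for an analyst rather than silently ignored.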