OpenAI CEO Warns ‘Societal Misalignments’ Could Make AI Dangerous 

Sam Altman, OpenAI CEO (on screen) speaks in a videocall with Omar al-Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, during the World Government Summit in Dubai on February 13, 2024. (AFP)

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI, which is likely advancing faster than the world expects.

“There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

However, Altman stressed that AI companies like OpenAI shouldn't be in the driver's seat when it comes to writing the regulations governing the industry.

“We’re still in the stage of a lot of discussion. So, there’s you know, everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK,” Altman said. “I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI’s rapid commercialization — and the fears over what may come from the new technology.

He said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

“I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen,” Altman said. “So, give us some time. But I will say I think in a few more years it’ll be much better than it is now. And in a decade, it should be pretty remarkable.”



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to hit Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.