OpenAI CEO Warns ‘Societal Misalignments’ Could Make AI Dangerous 

Sam Altman, OpenAI CEO (on screen) speaks in a videocall with Omar al-Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, during the World Government Summit in Dubai on February 13, 2024. (AFP)

The CEO of ChatGPT-maker OpenAI said Tuesday that the dangers that keep him awake at night regarding artificial intelligence are the “very subtle societal misalignments” that could make the systems wreak havoc.

Sam Altman, speaking at the World Government Summit in Dubai via a video call, reiterated his call for a body like the International Atomic Energy Agency to be created to oversee AI that's likely advancing faster than the world expects.

“There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong,” Altman said. “I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

However, Altman stressed that AI companies such as OpenAI shouldn't be in the driver's seat when it comes to writing the regulations governing the industry.

“We’re still in the stage of a lot of discussion. So, there’s you know, everybody in the world is having a conference. Everyone’s got an idea, a policy paper, and that’s OK,” Altman said. “I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.”

OpenAI, a San Francisco-based artificial intelligence startup, is one of the leaders in the field. Microsoft has invested some $1 billion in OpenAI. The Associated Press has signed a deal with OpenAI for it to access its news archive. Meanwhile, The New York Times has sued OpenAI and Microsoft over the use of its stories without permission to train OpenAI's chatbots.

OpenAI's success has made Altman the public face for generative AI’s rapid commercialization — and the fears over what may come from the new technology.

He said he was heartened to see that schools, where teachers feared students would use AI to write papers, now embrace the technology as crucial for the future. But he added that AI remains in its infancy.

“I think the reason is the current technology that we have is like ... that very first cellphone with a black-and-white screen,” Altman said. “So, give us some time. But I will say I think in a few more years it’ll be much better than it is now. And in a decade, it should be pretty remarkable.”



Justice at Stake as Generative AI Enters the Courtroom

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court. Jefferson Siegel / POOL/AFP

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.

Judges use the technology for research, lawyers use it to draft appeals, and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.

"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.

"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic.

"I knew it would be powerful," Wales said, "that it would humanize Chris in the eyes of the judge."

The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied.

"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.

"Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.

"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.

"We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.

In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.

"Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna sees the potential for GenAI to be part of the solution, though, giving more people the ability to seek justice in courts made more efficient.

"We have a huge number of people who don't have access to legal services," Linna said.

"These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.

"Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.

Facts or case law pointed out by GenAI might sway a judge's decision, and could be different from what a law clerk would have come up with.

But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.