OpenAI Researchers Warned Board of AI Breakthrough ahead of CEO Ouster, Sources Say

FILE PHOTO: A keyboard is placed in front of a displayed OpenAI logo in this illustration taken February 21, 2023. REUTERS/Dado Ruvic/Illustration/File Photo/File Photo

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.
The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman's firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model was only performing math at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.



Justice at Stake as Generative AI Enters the Courtroom

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court. Jefferson Siegel / POOL/AFP

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.

Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.

"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it”.

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.

"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic.

"I knew it would be powerful," Wales told , "that that it would humanize Chris in the eyes of the judge."

The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied.

"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.

"Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.

"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.

"We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.

In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.

"Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient.

"We have a huge number of people who don't have access to legal services," Linna said.

"These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.

"Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.

Facts or case law pointed out by GenAI might sway a judge's decision, and could differ from what a human law clerk would have come up with.

But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.