Justice at Stake as Generative AI Enters the Courtroom

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court. Jefferson Siegel / POOL/AFP

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.

Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.

"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it."

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.

"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic.

"I knew it would be powerful," Wales said, "that it would humanize Chris in the eyes of the judge."

The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied.

"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.

"Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.

"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.

"We are all aware of a horror story where AI comes up with mixed-up case citations."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.

In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The technology is also being relied on by some who skip lawyers and represent themselves in court, often leading to legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.

"Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient.

"We have a huge number of people who don't have access to legal services," Linna said.

"These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.

"Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.

Facts or case law pointed out by GenAI might sway a judge's decision, and could differ from what a human clerk would have come up with.

But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.



Major Publishers Sue Meta for Copyright Infringement Over AI Training

Cars drive past a sign of Meta, the new name for the company formerly known as Facebook, at its headquarters in Menlo Park, California, US, October 28, 2021. (Reuters)

Publishers Elsevier, Cengage, Hachette, Macmillan and McGraw Hill sued Meta Platforms in Manhattan federal court on Tuesday, alleging that the tech giant misused their books and journal articles to train its artificial intelligence model Llama.

The publishers, as well as author Scott Turow, alleged in the proposed class action complaint that Meta pirated millions of their works and used them without permission to train its large language models to respond to human prompts.

"AI is powering transformative innovations, productivity and creativity for individuals and companies, and courts have rightly found that training AI on copyrighted material can qualify as fair use," a Meta spokesperson responded in a statement on Tuesday.

"We will fight this lawsuit aggressively."

The publishers allege that Meta pirated works ranging from textbooks to scientific articles to novels including "The Fifth Season" by N.K. Jemisin and "The Wild Robot" by Peter Brown for its AI training.

They asked the court for permission to represent a larger class of copyright owners and an unspecified amount of monetary damages.

"Meta's mass-scale infringement isn't public progress, and AI will never be properly realized if tech companies prioritize pirate sites over scholarship and imagination," Maria Pallante, president of the Association of American Publishers, said in a statement.

The lawsuit opens a new front in the ongoing copyright battle between creators and tech companies over AI training, in which dozens of authors, news outlets, visual artists and other plaintiffs have sued companies including Meta, OpenAI and Anthropic for infringement.

All of the pending cases will likely revolve around whether AI systems make fair use of copyrighted material by using it to create new, transformative content.

The first two judges to consider the matter issued diverging rulings last year.

Amazon- and Google-backed Anthropic was the first major AI company to settle one of the cases, agreeing last year to pay a group of authors $1.5 billion to resolve a class-action lawsuit that could have cost the company billions more in damages for alleged piracy.


Microsoft, Google and xAI to Give US Govt Early Access to AI Models for Security Checks

A Google logo is seen at a company research facility in Mountain View, California, US, May 13, 2025. (Reuters)

Microsoft, Google and Elon Musk’s xAI agreed to give the US government early access to new artificial intelligence models for national security testing, as US officials grow alarmed by the hacking capabilities of Anthropic’s newly unveiled Mythos.

The Center for AI Standards and Innovation at the Department of Commerce said on Tuesday that the agreement would allow it to evaluate the models before deployment and conduct research to assess their capabilities and security risks.

The agreement fulfills a pledge the Trump administration made in July 2025 to partner with technology companies to vet their AI models for "national security risks."

Microsoft will work with US government scientists to test AI systems "in ways that probe unexpected behaviors," the company said in a statement. Together they will develop shared datasets and workflows for testing the company's models, it said. Microsoft signed a similar agreement with the UK's AI Security Institute, according to the statement.

Concern is growing in Washington over the national security risks posed by powerful AI systems. By securing early access to frontier models, US officials are aiming to identify threats ranging from cyberattacks to military misuse before the tools are widely deployed.

The development of advanced AI systems including Anthropic's Mythos has in recent weeks created a stir globally, including among US officials and corporate America, over their ability to supercharge hackers.

"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said in a statement.

The move builds on previous agreements with OpenAI and Anthropic, established in 2024 under the Biden administration when CAISI was known as the US Artificial Intelligence Safety Institute.

Under former President Joe Biden, the institute focused on developing AI tests, definitions and voluntary safety standards. It was led by Biden tech adviser Elizabeth Kelly, who has since joined Anthropic, according to her LinkedIn profile.

CAISI, which serves as the government's main hub for AI model testing, said it had already completed more than 40 evaluations, including on cutting-edge models not yet available to the public.

Developers frequently hand over versions of their models with safety guardrails stripped back so the center can probe for national security risks, the agency said.

xAI did not immediately respond to a request for comment. Google declined to comment.

Last week, the Pentagon said it had reached agreements with seven AI companies to deploy their advanced capabilities on the Defense Department's classified networks as it seeks to broaden the range of AI providers working across the military.

The Pentagon announcement did not include Anthropic, which has been embroiled in a dispute with the Pentagon over guardrails on the military's use of its AI tools.


Samsung Electronics Appoints New TV Chief amid Mounting Competition

FILE PHOTO: The logo of Samsung Electronics is seen at the company's store in Seoul, South Korea, April 15, 2025. REUTERS/Kim Hong-Ji/File Photo

Samsung Electronics, the world's No. 1 TV maker, has replaced its TV head for the first time in more than two years, as it faces mounting competition from Chinese rivals at home and abroad.

Samsung said in a statement on Monday that it has appointed Lee Won-jin, who was previously head of the Global Marketing Office, as the new head of its Visual Display Business, succeeding Yong Seok-woo, who will serve as an adviser.

Samsung usually carries out its annual management reshuffle around December, and the company did not disclose the reason for the replacement.

A Samsung Electronics official told Reuters the new leader is expected to bring a fresh perspective and the change needed for the TV business, which is facing intensifying market competition.

In March, China's TCL Electronics and Japan's Sony signed binding agreements for a strategic partnership in the home entertainment field, increasing pressure on rivals.

The Nikkei newspaper previously reported Samsung was considering discontinuing sales of home appliances and TVs in China within this year in the face of competition from Chinese companies that have undercut rivals.

Samsung said last month its TV profit declined in the first quarter because of stagnating demand and rising raw-material costs.

Lee previously worked at Google before moving to Samsung in 2014.