Anthropic Releases AI to Automate Mouse Clicks for Coders

Anthropic logo is seen in this illustration taken May 20, 2024. (Reuters)

Anthropic, a startup backed by Alphabet and Amazon.com, released a pair of updated artificial intelligence models on Tuesday, along with a new capability to autonomously perform computer tasks and save users keystrokes.

The new "computer use" feature can tell AI "where to move the mouse, where to click, what to type, in order to do quite complicated tasks," Anthropic's Chief Science Officer Jared Kaplan said in an interview.

The capability is tailored to software developers and represents a move toward AI agents, programs that require little human intervention to carry out multi-step actions. Researchers have touted agents as a frontier for AI development beyond chatbots, which readily generate prose or computer code but do not take actions on a user's behalf.

Anthropic demonstrated a use case for the feature that entailed coding a basic website, and another that used various programs including Google Search and Apple Maps to plan a sunrise outing.

Anthropic offers software developers three versions of Claude, its family of AI models, at price points that vary based on their performance. This week's updates come to Sonnet, the mid-tier model, and Haiku, the cheapest.

The new 3.5 Haiku can generate computer code in a manner "almost comparable" to the version of Sonnet released in June, according to Kaplan. CEO Dario Amodei told Reuters at the time that the company intended to update Opus, the most capable model, by the end of the year.

The computer use feature is currently limited to the new version of Claude 3.5 Sonnet and comes with safeguards to prevent its application toward spam, fraud and election-related misuse, Anthropic said. Kaplan said the AI still makes mistakes.

Mike Krieger, a co-founder of Instagram who joined Anthropic this spring as chief product officer, said the company wants feedback from business customers to learn where to focus development of the feature. Meanwhile, a labs team inside Anthropic is exploring how to make the capability available for consumers, something Krieger said he personally wants.

"I was booking flights," he said. "I really just want this to be completely automated."

Microsoft on Monday unveiled an application for its clients to build their own agents that can handle queries, identify sales leads and manage inventory.



Justice at Stake as Generative AI Enters the Courtroom

Generative artificial intelligence has been used in the US legal system by judges performing research, lawyers filing appeals and parties involved in cases who wanted help expressing themselves in court. Jefferson Siegel / POOL/AFP

Generative artificial intelligence (GenAI) is making its way into courts despite early stumbles, raising questions about how it will influence the legal system and justice itself.

Judges use the technology for research, lawyers utilize it for appeals and parties involved in cases have relied on GenAI to help express themselves in court.

"It's probably used more than people expect," said Daniel Linna, a professor at the Northwestern Pritzker School of Law, about GenAI in the US legal system.

"Judges don't necessarily raise their hand and talk about this to a whole room of judges, but I have people who come to me afterward and say they are experimenting with it”.

In one prominent instance, GenAI enabled murder victim Chris Pelkey to address an Arizona courtroom -- in the form of a video avatar -- at the sentencing of the man convicted of shooting him dead in 2021 during a clash between motorists.

"I believe in forgiveness," said a digital proxy of Pelkey created by his sister, Stacey Wales.

The judge voiced appreciation for the avatar, saying it seemed authentic.

"I knew it would be powerful," Wales told , "that that it would humanize Chris in the eyes of the judge."

The AI testimony, a first of its kind, ended the sentencing hearing at which Wales and other members of the slain man's family spoke about the impact of the loss.

Since the hearing, examples of GenAI being used in US legal cases have multiplied.

"It is a helpful tool and it is time-saving, as long as the accuracy is confirmed," said attorney Stephen Schwartz, who practices in the northeastern state of Maine.

"Overall, it's a positive development in jurisprudence."

Schwartz described using ChatGPT as well as GenAI legal assistants, such as LexisNexis Protege and CoCounsel from Thomson Reuters, for researching case law and other tasks.

"You can't completely rely on it," Schwartz cautioned, recommending that cases proffered by GenAI be read to ensure accuracy.

"We are all aware of a horror story where AI comes up with mixed-up case things."

The technology has been the culprit behind false legal citations, far-fetched case precedents, and flat-out fabrications.

In early May, a federal judge in Los Angeles imposed $31,100 in fines and damages on two law firms for an error-riddled petition drafted with the help of GenAI, blasting it as a "collective debacle."

The tech is also being relied on by some who skip lawyers and represent themselves in court, often causing legal errors.

And as GenAI makes it easier and cheaper to draft legal complaints, courts already overburdened by caseloads could see them climb higher, said Shay Cleary of the National Center for State Courts.

"Courts need to be prepared to handle that," Cleary said.

Transformation

Law professor Linna, though, sees the potential for GenAI to be part of the solution, giving more people the ability to seek justice in courts made more efficient by the technology.

"We have a huge number of people who don't have access to legal services," Linna said.

"These tools can be transformative; of course we need to be thoughtful about how we integrate them."

Federal judges in the US capital have written decisions noting their use of ChatGPT in laying out their opinions.

"Judges need to be technologically up-to-date and trained in AI," Linna said.

GenAI assistants already have the potential to influence the outcome of cases the same way a human law clerk might, reasoned the professor.

Facts or case law pointed out by GenAI might sway a judge's decision, and could be different from what a human law clerk would have come up with.

But if GenAI lives up to its potential and excels at finding the best information for judges to consider, that could make for well-grounded rulings less likely to be overturned on appeal, according to Linna.