OpenAI Unveils 'Operator' Agent that Handles Web Tasks

OpenAI says its new artificial intelligence agent capable of tending to online tasks is trained to check with users when it encounters CAPTCHA puzzles intended to distinguish people from software. Kirill KUDRYAVTSEV / AFP

OpenAI on Thursday introduced an artificial intelligence program called "Operator" that can tend to online tasks such as ordering items or filling out forms.

Operator can look up web pages and interact with them by typing, clicking, or scrolling the way a person might, according to OpenAI, AFP reported.

"Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes," OpenAI said in an online post.

"The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses."

An AI "agent," the latest Silicon Valley trend, is a digital helper that is supposed to sense surroundings, make decisions, and take actions to achieve specific goals.

Google in December announced agent capabilities with the launch of Gemini 2.0, its most advanced artificial intelligence model to date.

AI race rival Anthropic two months earlier added a "computer use" feature to its Claude frontier AI model in an experimental public beta phase.

"Developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text," Anthropic said in a post at the time, cautioning that it was a work in progress.

OpenAI described Operator as one of its first AI agents capable of working independently to complete the tasks it is given.

Operator is available only to US users who pay for Pro subscriptions to the OpenAI service "to ensure a safe and iterative rollout," OpenAI said.

"If it encounters challenges or makes mistakes, Operator can leverage its reasoning capabilities to self-correct," OpenAI said.

"When it gets stuck and needs assistance, it simply hands control back to the user."

Operator is trained to ask the user to take over for tasks that require login, payment details, or when solving "CAPTCHA" security challenges intended to distinguish between people and software online, according to OpenAI.

"Users can have Operator run multiple tasks simultaneously by creating new conversations, like ordering a personalized enamel mug on Etsy while booking a campsite on Hipcamp," OpenAI said.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to hit Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies for using their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.