OpenAI Unveils 'Operator' Agent that Handles Web Tasks

OpenAI says its new artificial intelligence agent capable of tending to online tasks is trained to check with users when it encounters CAPTCHA puzzles intended to distinguish people from software. Kirill KUDRYAVTSEV / AFP

OpenAI on Thursday introduced an artificial intelligence program called "Operator" that can tend to online tasks such as ordering items or filling out forms.

Operator can look up web pages and interact with them by typing, clicking, or scrolling the way a person might, according to OpenAI, AFP reported.

"Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes," OpenAI said in an online post.

"The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses."

An AI "agent," the latest Silicon Valley trend, is a digital helper that is supposed to sense surroundings, make decisions, and take actions to achieve specific goals.

Google in December announced agent capabilities with the launch of Gemini 2.0, its most advanced artificial intelligence model to date.

AI race rival Anthropic two months earlier added a "computer use" feature to its Claude frontier AI model in an experimental public beta phase.

"Developers can direct Claude to use computers the way people do—by looking at a screen, moving a cursor, clicking buttons, and typing text," Anthropic said in a post at the time, cautioning that it was a work in progress.

OpenAI described Operator as one of its first AI agents capable of doing work for people independently, designed to complete tasks it is given.

Operator is available only to US users who pay for Pro subscriptions to the OpenAI service "to ensure a safe and iterative rollout," OpenAI said.

"If it encounters challenges or makes mistakes, Operator can leverage its reasoning capabilities to self-correct," OpenAI said.

"When it gets stuck and needs assistance, it simply hands control back to the user."

Operator is trained to ask the user to take over for tasks that require login, payment details, or when solving "CAPTCHA" security challenges intended to distinguish between people and software online, according to OpenAI.

"Users can have Operator run multiple tasks simultaneously by creating new conversations, like ordering a personalized enamel mug on Etsy while booking a campsite on Hipcamp," OpenAI said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, generating X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.