Anthropic Launches Newest AI Model, Three Months after its Last

The arrival of ChatGPT sent shockwaves through the journalism industry. Kirill KUDRYAVTSEV / AFP

Anthropic, a startup backed by Google and Amazon.com, on Thursday released an updated artificial intelligence model and a new layout to boost user productivity, continuing an industry sprint to push technology's frontier.

Three months after rolling out its Claude 3 family of AI models, Anthropic said it was launching Claude 3.5 Sonnet.

Compared with Claude 3 Opus - which CEO Dario Amodei in March called the "Rolls-Royce of models" - Anthropic's latest system scores higher on benchmark exams, runs about twice as fast, and is priced for software developers at a fifth the cost.

AI "models are a bit more fungible than cars," Amodei told Reuters. "I don't have to buy them and hold onto them for 20 years. That's one advantage of our field."

ChatGPT creator OpenAI, Google and others are touting AI advances at a similarly breakneck pace.

For consumers, Anthropic has made its latest technology available for free at Claude.ai and in an iOS app. It is also letting web users opt into a setting called "Artifacts," which organizes the content users prompt Claude to generate - whether the outline for a novel or a simple computer game - in a window alongside their chat with the AI.

Coupled with a new group subscription plan, Amodei said Artifacts was a step towards "being able to work collaboratively" and "being able to use your model to produce finished products."

Anthropic plans to release more AI models this year, including Claude 3.5 Opus, it said. "We want to have as fast a release cycle as we can, again, subject to our safety values," Amodei said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, which the ChatGPT maker described in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as creating and debugging malware, or generating fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs in posts generated for X, such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.