Cerebras Launches AI Inference Tool to Challenge Nvidia

Cerebras Systems logo is seen in this illustration taken March 31, 2023. (Reuters)

Cerebras Systems on Tuesday launched a tool that lets AI developers access the startup's outsized chips to run applications, offering what it says is a much cheaper option than industry-standard Nvidia processors.

Access to Nvidia graphics processing units (GPUs), often via a cloud computing provider, can be difficult to obtain and expensive, both for training large artificial intelligence models and for deploying them to serve applications such as OpenAI's ChatGPT, a step developers refer to as inference.

"We're delivering performance that cannot be achieved by a GPU," Cerebras CEO Andrew Feldman told Reuters in an interview. "We're doing it at the highest accuracy, and we're offering it at the lowest price."

The inference portion of the AI market is expected to be fast-growing and attractive - ultimately worth tens of billions of dollars if consumers and businesses adopt AI tools.

The Sunnyvale, California-based company plans to offer several types of the inference product via a developer key and its cloud. The company will also sell its AI systems to customers who prefer to operate their own data centers.

Cerebras' chips, each the size of a dinner plate and called Wafer Scale Engines, sidestep one of the core issues in AI computing: the data processed by the large models that power AI applications typically won't fit on a single conventional chip, which can force developers to string together hundreds or thousands of chips.

That means Cerebras' chips can achieve speedier performances, Feldman said.

It plans to charge users as little as 10 cents per million tokens, a unit companies use to measure the amount of output data a large model generates.
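As a rough illustration of what that floor price implies, the sketch below estimates the cost of a given output volume at a flat rate of 10 cents per million tokens. The function name and the flat-rate assumption are illustrative, not a Cerebras API; real pricing may vary by model and volume.

```python
def inference_cost_usd(output_tokens: int, usd_per_million: float = 0.10) -> float:
    """Estimate inference cost at a flat per-million-token rate.

    Assumes the quoted floor price of $0.10 per million output tokens;
    actual tiers and per-model rates may differ.
    """
    return output_tokens / 1_000_000 * usd_per_million

# Generating 5 million output tokens at the quoted floor price:
print(f"${inference_cost_usd(5_000_000):.2f}")  # $0.50
```

At this rate, even a billion output tokens would cost on the order of $100, which is the kind of economics the company is pitching against GPU-based inference.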

Cerebras is aiming to go public and filed a confidential prospectus with the Securities and Exchange Commission this month, the company said.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some of the content also criticized US President Donald Trump's sweeping tariffs, including X posts such as: "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.