Nations Building Their Own AI Models Add to Nvidia's Growing Chip Demand

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nations building artificial intelligence models in their own languages are turning to Nvidia's chips, adding to already booming demand as generative AI takes center stage for businesses and governments, a senior executive said on Wednesday.
Nvidia's third-quarter forecast for rising sales of its chips that power AI technology such as OpenAI's ChatGPT failed to meet investors' towering expectations. But the company described new customers coming from around the world, including governments that are now seeking their own AI models and the hardware to support them, Reuters reported.
Countries adopting their own AI applications and models will contribute revenue in the low double-digit billions of dollars to Nvidia in the financial year ending in January 2025, Chief Financial Officer Colette Kress said on a call with analysts after Nvidia's earnings report.
That's up from an earlier forecast that such sales would contribute in the high single-digit billions of dollars to total revenue. Nvidia forecast about $32.5 billion in total revenue for the third quarter ending in October.
"Countries around the world (desire) to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country," Kress said, describing AI expertise and infrastructure as "national imperatives."
She offered the example of Japan's National Institute of Advanced Industrial Science and Technology, which is building an AI supercomputer featuring thousands of Nvidia H200 graphics processors.
Governments are also turning to AI as a measure to strengthen national security.
"AI models are trained on data and for political entities - particularly nations - their data are secret and their models need to be customized to their unique political, economic, cultural, and scientific needs," said IDC computing semiconductors analyst Shane Rau.
"Therefore, they need to have their own AI models and a custom underlying arrangement of hardware and software."
Washington tightened its controls on exports of cutting-edge chips to China in 2023 as it sought to prevent breakthroughs in AI that would aid China's military, hampering Nvidia's sales in the region.
Businesses have been working to tap into government pushes to build AI platforms in regional languages.
IBM said in May that Saudi Arabia's Data and Artificial Intelligence Authority would train its "ALLaM" Arabic language model using the company's AI platform Watsonx.
Nations that want to create their own AI models can drive growth opportunities for Nvidia's GPUs, on top of the significant investments in the company's hardware from large cloud providers like Microsoft, said Bob O'Donnell, chief analyst at TECHnalysis Research.



Cerebras Launches AI Inference Tool to Challenge Nvidia

Cerebras Systems logo is seen in this illustration taken March 31, 2023. (Reuters)

Cerebras Systems launched on Tuesday a tool for AI developers that allows them to access the startup's outsized chips to run applications, offering what it says is a much cheaper option than industry-standard Nvidia processors.

Access to Nvidia graphics processing units (GPUs) - often obtained via a cloud computing provider - to train and deploy large artificial intelligence models for applications such as OpenAI's ChatGPT can be difficult to secure and expensive to run. Running a trained model to generate responses is a process developers refer to as inference.

"We're delivering performance that cannot be achieved by a GPU," Cerebras CEO Andrew Feldman told Reuters in an interview. "We're doing it at the highest accuracy, and we're offering it at the lowest price."

The inference portion of the AI market is expected to be fast-growing and attractive - ultimately worth tens of billions of dollars if consumers and businesses adopt AI tools.

The Sunnyvale, California-based company plans to offer several types of the inference product via a developer key and its cloud. The company will also sell its AI systems to customers who prefer to operate their own data centers.

Cerebras' chips - each the size of a dinner plate and called Wafer Scale Engines - sidestep one of the main bottlenecks of AI computing: the data processed by the large models that power AI applications typically won't fit on a single conventional chip, which can require hundreds or thousands of chips strung together.

That means Cerebras' chips can achieve speedier performances, Feldman said.

It plans to charge users as little as 10 cents per million tokens; tokens are a common unit for measuring the amount of output a large model generates.
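As a rough illustration of how per-token pricing scales (a minimal sketch; the function name is hypothetical and the 10-cent figure is only the quoted floor price, not Cerebras's actual billing scheme):

```python
def inference_cost(tokens: int, price_per_million_usd: float = 0.10) -> float:
    """Return the cost in dollars to generate `tokens` output tokens
    at a flat per-million-token rate."""
    return tokens / 1_000_000 * price_per_million_usd

# At 10 cents per million tokens, generating 5 million output tokens
# would cost about 50 cents.
print(inference_cost(5_000_000))
```

In other words, at that floor price the billing grows linearly with output volume, which is what makes per-token pricing easy to compare across providers.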

Cerebras is aiming to go public and filed a confidential prospectus with the Securities and Exchange Commission this month, the company said.