Nvidia Unveils Latest Chips, Technology to Speed up AI Computing

Nvidia's new Grace CPU Superchip, unveiled at the chipmaker's AI developer conference, is seen in this undated handout image obtained by Reuters. (Nvidia/Handout via Reuters)

Nvidia Corp on Tuesday announced several new chips and technologies that it said will boost the computing speed of increasingly complicated artificial intelligence algorithms, stepping up competition against rival chipmakers vying for lucrative data center business.

Nvidia's graphics chips (GPUs), which initially made their name boosting video quality in the gaming market, have become the dominant chips companies use for AI workloads. The latest GPU, called the H100, can cut computing times from weeks to days for some work involving training AI models, the company said.

The announcements were made at Nvidia's AI developers conference online.

"Data centers are becoming AI factories - processing and refining mountains of data to produce intelligence," said Nvidia Chief Executive Officer Jensen Huang in a statement, calling the H100 chip the "engine" of AI infrastructure.

Companies have been using AI and machine learning for everything from recommending the next video to watch to discovering new drugs, and the technology is increasingly becoming an important tool for business.

The H100 chip, which packs 80 billion transistors, will be produced on Taiwan Semiconductor Manufacturing Company's cutting-edge four-nanometer process and will be available in the third quarter, Nvidia said.

The H100 will also be used to build Nvidia's new "Eos" supercomputer, which Nvidia said will be the world's fastest AI system when it begins operation later this year.

Facebook parent Meta announced in January that it would build the world's fastest AI supercomputer this year and it would perform at nearly 5 exaflops. Nvidia on Tuesday said its supercomputer will run at over 18 exaflops.

Exaflop performance is the ability to perform 1 quintillion - or 1,000,000,000,000,000,000 - calculations per second.
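To put those figures in perspective, a short back-of-the-envelope calculation (illustrative only, using the performance claims quoted above rather than any formal benchmark):

```python
# One exaflop = 10**18 floating-point calculations per second.
EXAFLOP = 10**18

# Claimed AI-workload performance figures from the article.
eos_flops = 18 * EXAFLOP   # Nvidia's stated figure for Eos ("over 18 exaflops")
meta_flops = 5 * EXAFLOP   # Meta's announced target ("nearly 5 exaflops")

# Ratio of the two claimed figures.
print(eos_flops / meta_flops)  # 3.6
```

By these stated numbers, Eos would be roughly 3.6 times faster than the system Meta announced, though the two claims may rest on different precision formats and measurement assumptions.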

In addition to the GPU, Nvidia introduced a new central processing unit (CPU) called the Grace CPU Superchip that is based on Arm technology. It is the first new Arm-based chip Nvidia has announced since the company's deal to buy Arm Ltd fell apart last month due to regulatory hurdles.

The Grace CPU Superchip, which will be available in the first half of next year, connects two CPU chips and will focus on AI and other tasks that require intensive computing power.

More companies are connecting chips using technology that allows faster data flow between them. Earlier this month Apple Inc unveiled its M1 Ultra chip connecting two M1 Max chips.

Nvidia said the two CPU chips were connected using its NVLink-C2C technology, which was also unveiled on Tuesday.

Nvidia shares were up more than 1% in midday trade.



Nations Building Their Own AI Models Add to Nvidia's Growing Chip Demand

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nations building artificial intelligence models in their own languages are turning to Nvidia's chips, adding to already booming demand as generative AI takes center stage for businesses and governments, a senior executive said on Wednesday.

Nvidia's third-quarter forecast for rising sales of its chips that power AI technology such as OpenAI's ChatGPT failed to meet investors' towering expectations. But the company described new customers coming from around the world, including governments that are now seeking their own AI models and the hardware to support them, Reuters said.

Countries adopting their own AI applications and models will contribute low double-digit billions of dollars to Nvidia's revenue in the financial year ending in January 2025, Chief Financial Officer Colette Kress said on a call with analysts after Nvidia's earnings report.

That is up from an earlier forecast that such sales would contribute high single-digit billions to total revenue. Nvidia forecast about $32.5 billion in total revenue for the third quarter ending in October.

"Countries around the world (desire) to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country," Kress said, describing AI expertise and infrastructure as "national imperatives."

She offered the example of Japan's National Institute of Advanced Industrial Science and Technology, which is building an AI supercomputer featuring thousands of Nvidia H200 graphics processors.

Governments are also turning to AI as a way to strengthen national security.

"AI models are trained on data, and for political entities - particularly nations - their data are secret and their models need to be customized to their unique political, economic, cultural, and scientific needs," said IDC computing semiconductors analyst Shane Rau.

"Therefore, they need to have their own AI models and a custom underlying arrangement of hardware and software."

Washington tightened its controls on exports of cutting-edge chips to China in 2023 as it sought to prevent breakthroughs in AI that would aid China's military, hampering Nvidia's sales in the region.

Businesses have been working to tap into government pushes to build AI platforms in regional languages.

IBM said in May that Saudi Arabia's Data and Artificial Intelligence Authority would train its "ALLaM" Arabic language model using the company's AI platform Watsonx.

Nations that want to create their own AI models can drive growth opportunities for Nvidia's GPUs, on top of the significant investments in the company's hardware from large cloud providers like Microsoft, said Bob O'Donnell, chief analyst at TECHnalysis Research.