Nvidia Unveils Latest Chips, Technology to Speed up AI Computing

Nvidia's new Grace CPU Superchip, unveiled at the chipmaker's AI developer conference, is seen in this undated handout image obtained by Reuters. (Nvidia/Handout via Reuters)

Nvidia Corp on Tuesday announced several new chips and technologies that it said will boost the computing speed of increasingly complicated artificial intelligence algorithms, stepping up competition against rival chipmakers vying for lucrative data center business.

Nvidia's graphics processing units (GPUs), which initially helped propel and enhance the quality of video games, have become the dominant chips companies use for AI workloads. The latest GPU, called the H100, can cut computing times from weeks to days for some work involving training AI models, the company said.

The announcements were made at Nvidia's AI developers conference online.

"Data centers are becoming AI factories - processing and refining mountains of data to produce intelligence," said Nvidia Chief Executive Officer Jensen Huang in a statement, calling the H100 chip the "engine" of AI infrastructure.

Companies have been using AI and machine learning for everything from making recommendations of the next video to watch to new drug discovery, and the technology is increasingly becoming an important tool for business.

The H100 chip will be produced on Taiwan Semiconductor Manufacturing Company's cutting-edge four-nanometer process with 80 billion transistors and will be available in the third quarter, Nvidia said.

The H100 will also be used to build Nvidia's new "Eos" supercomputer, which Nvidia said will be the world's fastest AI system when it begins operation later this year.

Facebook parent Meta announced in January that it would build the world's fastest AI supercomputer this year and it would perform at nearly 5 exaflops. Nvidia on Tuesday said its supercomputer will run at over 18 exaflops.

One exaflop is the ability to perform one quintillion - or 1,000,000,000,000,000,000 - calculations per second.
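For a sense of the scale of these figures, the two announced numbers can be sketched in a few lines (an illustration only; the 5 and 18 exaflop values are the companies' stated targets, not measured benchmarks):

```python
# One exaflop = 10**18 floating-point operations per second.
EXAFLOP = 10**18

meta_system = 5 * EXAFLOP    # Meta's announced target (~5 exaflops)
nvidia_eos = 18 * EXAFLOP    # Nvidia's stated Eos figure (over 18 exaflops)

# Total operations per second at 18 exaflops, in scientific notation:
print(f"{nvidia_eos:.2e} ops/sec")  # 1.80e+19 ops/sec

# Ratio between the two announced figures:
print(round(nvidia_eos / meta_system, 1))  # 3.6
```

By these stated numbers, Eos would be roughly 3.6 times faster than Meta's planned system.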

In addition to the GPU, Nvidia introduced a new central processor (CPU) called the Grace CPU Superchip, which is based on Arm technology. It is the first new Arm-based chip Nvidia has announced since its deal to buy Arm Ltd collapsed last month amid regulatory hurdles.

The Grace CPU Superchip, which will be available in the first half of next year, connects two CPU chips and will focus on AI and other tasks that require intensive computing power.

More companies are connecting chips using technology that allows faster data flow between them. Earlier this month Apple Inc unveiled its M1 Ultra chip connecting two M1 Max chips.

Nvidia said the two CPU chips were connected using its NVLink-C2C technology, which was also unveiled on Tuesday.

Nvidia shares were up more than 1% in midday trade.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.