Nvidia Rivals Focus on Building a Different Kind of Chip to Power AI Products

The NVIDIA logo is seen near a computer motherboard in this illustration taken January 8, 2024. (Reuters)

Building the current crop of artificial intelligence chatbots has relied on specialized computer chips pioneered by Nvidia, which dominates the market and has made itself the poster child of the AI boom.

But the same qualities that make those graphics processor chips, or GPUs, so effective at creating powerful AI systems from scratch make them less efficient at putting AI products to work.

That's opened up the AI chip industry to rivals who think they can compete with Nvidia in selling so-called AI inference chips that are more attuned to the day-to-day running of AI tools and designed to reduce some of the huge computing costs of generative AI.

“These companies are seeing opportunity for that kind of specialized hardware,” said Jacob Feldgoise, an analyst at Georgetown University's Center for Security and Emerging Technology. “The broader the adoption of these models, the more compute will be needed for inference and the more demand there will be for inference chips.”

What is AI inference?

It takes a lot of computing power to make an AI chatbot. It starts with a process called training or pretraining — the “P” in ChatGPT — that involves AI systems “learning” from the patterns of huge troves of data. GPUs are good at doing that work because they can run many calculations at a time on a network of devices in communication with each other.

However, once trained, a generative AI tool still needs chips to do the work — such as when you ask a chatbot to compose a document or generate an image. That's where inferencing comes in. A trained AI model must take in new information and make inferences from what it already knows to produce a response.
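The distinction can be illustrated with a toy sketch (not tied to any product or model mentioned in this article): training repeatedly adjusts a model's parameters over many passes through data, while inference is a single, much cheaper pass that applies the already-fixed parameters to a new input.

```python
# Toy illustration of training vs. inference with a one-parameter-pair linear model.

def forward(w, b, x):
    """The inference computation: apply fixed parameters to a new input."""
    return w * x + b

# --- Training: many passes over the data, updating w and b each time ---
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # samples of y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    for x, y in data:
        err = forward(w, b, x) - y  # prediction error
        w -= lr * err * x           # gradient step on the weight
        b -= lr * err               # gradient step on the bias

# --- Inference: one forward pass per query, no parameter updates ---
print(round(forward(w, b, 4.0), 2))  # close to 9.0, since the model learned y = 2x + 1
```

The training loop touches every sample thousands of times; the inference call at the end is a single multiply-and-add. Real chatbots scale both sides up by many orders of magnitude, but the asymmetry is the same, which is why inference can be served by lighter-weight hardware.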

GPUs can do that work, too. But it can be a bit like taking a sledgehammer to crack a nut.

“With training, you’re doing a lot heavier, a lot more work. With inferencing, that’s a lighter weight,” said Forrester analyst Alvin Nguyen.

That's led startups like Cerebras, Groq and d-Matrix as well as Nvidia's traditional chipmaking rivals — such as AMD and Intel — to pitch more inference-friendly chips as Nvidia focuses on meeting the huge demand from bigger tech companies for its higher-end hardware.

Inside an AI inference chip lab

D-Matrix, which is launching its first product this week, was founded in 2019 — a bit late to the AI chip game, as CEO Sid Sheth explained during a recent interview at the company’s headquarters in Santa Clara, California, the same Silicon Valley city that's also home to AMD, Intel and Nvidia.

“There were already 100-plus companies. So when we went out there, the first reaction we got was ‘you’re too late,’” he said. The pandemic's arrival six months later didn't help as the tech industry pivoted to a focus on software to serve remote work.

Now, however, Sheth sees a big market in AI inferencing, comparing that later stage of machine learning to how human beings apply the knowledge they acquired in school.

“We spent the first 20 years of our lives going to school, educating ourselves. That’s training, right?” he said. “And then the next 40 years of your life, you kind of go out there and apply that knowledge — and then you get rewarded for being efficient.”

The product, called Corsair, consists of two chips with four chiplets each, made by Taiwan Semiconductor Manufacturing Company — which also manufactures most of Nvidia's chips — and packaged together in a way that helps to keep them cool.

The chips are designed in Santa Clara, assembled in Taiwan and then tested back in California. Testing is a long process and can take six months — if anything is off, it can be sent back to Taiwan.

During a recent visit, D-Matrix workers were doing final testing on the chips in a laboratory with blue metal desks covered with cables, motherboards and computers, with a cold server room next door.

Who wants AI inference chips?

While tech giants like Amazon, Google, Meta and Microsoft have been gobbling up the supply of costly GPUs in a race to outdo each other in AI development, makers of AI inference chips are aiming for a broader clientele.

Forrester's Nguyen said that could include Fortune 500 companies that want to make use of new generative AI technology without having to build their own AI infrastructure. Sheth said he expects a strong interest in AI video generation.

“The dream of AI for a lot of these enterprise companies is you can use your own enterprise data,” Nguyen said. “Buying (AI inference chips) should be cheaper than buying the ultimate GPUs from Nvidia and others. But I think there’s going to be a learning curve in terms of integrating it.”

Feldgoise said that, unlike training-focused chips, AI inference work prioritizes how fast a person will get a chatbot's response.

He said another whole set of companies is developing AI hardware for inference that can run not just in big data centers but locally on desktop computers, laptops and phones.

Why does this matter?

Better-designed chips could bring down the huge costs of running AI for businesses. That could also reduce the environmental and energy costs for everyone else.

Sheth says the big concern right now is, “are we going to burn the planet down in our quest for what people call AGI — human-like intelligence?”

It’s still fuzzy when AI might get to the point of artificial general intelligence — predictions range from a few years to decades. But, Sheth notes, only a handful of tech giants are on that quest.

“But then what about the rest?” he said. “They cannot be put on the same path.”

That other set of companies, he said, doesn’t want to use very large AI models — it’s too costly and uses too much energy.

“I don’t know if people truly, really appreciate that inference is actually really going to be a much bigger opportunity than training. I don’t think they appreciate that. It’s still training that is really grabbing all the headlines,” Sheth said.



Russia Confirms Ban on WhatsApp, Says No Plans to Block Google

Men pose with smartphones in front of displayed Whatsapp logo in this illustration September 14, 2017. REUTERS/Dado Ruvic/File Photo

Russia has blocked the popular messaging service WhatsApp over its failure to comply with local legislation, the Kremlin said Thursday, urging its 100 million Russian users to switch to a domestic alternative.

Moscow has for months been trying to shift Russian users onto Max, a domestic messaging service that lacks end-to-end encryption and that activists have called a potential tool for surveillance.

"As for the blocking of WhatsApp ... such a decision was indeed made and implemented," Kremlin spokesman Dmitry Peskov told reporters.

Peskov said the decision was due to WhatsApp's "reluctance to comply with the norms and letter of Russian law".

"Max is an accessible alternative, a developing messenger, a national messenger. And it is an alternative available on the market for citizens," he said.

Anton Gorelkin, a member of the Russian parliament and vice chair of its IT committee, said on Thursday that there were no plans to block Google in Russia.

WhatsApp, owned by US social media giant Meta, said Wednesday that it believed Russia was attempting to fully block the service in a bid to force users onto Max.

"We continue to do everything we can to keep users connected," it said.


Samsung Starts Mass Production of Next-gen AI Memory Chip

A man walks past the logo of Samsung Electronics displayed on a glass door at the company's Seocho building in Seoul on January 29, 2026. (Photo by Jung Yeon-je / AFP)

Samsung Electronics has started mass production of a next-generation memory chip to power artificial intelligence, the South Korean firm announced Thursday, touting an "industry-leading" breakthrough.

The high-bandwidth "HBM4" chips are a key component for AI data centers, with US tech giant Nvidia -- now the world's most valuable company -- widely expected to be one of Samsung's main customers.

Samsung said it had "begun mass production of its industry-leading HBM4 and has shipped commercial products to customers".

"This achievement marks a first in the industry, securing an early leadership position in the HBM4 market," AFP quoted it as saying in a statement.

A global frenzy to build AI data centers has sent orders for advanced, high-bandwidth memory microchips soaring.

South Korea's two chip giants, SK hynix and Samsung, have been racing to start HBM4 production.

Taipei-based research firm TrendForce predicts that memory chip industry revenue will surge to a global peak of more than $840 billion in 2027.

The South Korean government has pledged to become one of the world's top three AI powers, alongside the United States and China.

Samsung and SK hynix are among the leading producers of high-performance memory chips.


Siemens Energy Trebles Profit as AI Boosts Power Demand

FILED - 05 August 2025, Berlin: The "Siemens Energy" logo can be seen in the entrance area of the company. Photo: Britta Pedersen/dpa

German turbine maker Siemens Energy said Wednesday that its quarterly profits had almost tripled as the firm gains from surging demand for electricity driven by the artificial intelligence boom.

The company's gas turbines are used to generate electricity for data centers that provide computing power for AI, and have been in hot demand as US tech giants like OpenAI and Meta rapidly build more of the sites.

Net profit in the group's fiscal first quarter, to end-December, climbed to 746 million euros ($889 million) from 252 million euros a year earlier.

Orders -- an indicator of future sales -- increased by a third to 17.6 billion euros.

The company's shares rose over five percent in Frankfurt trading, putting the stock up about a quarter since the start of the year and making it the best performer to date in Germany's blue-chip DAX index.

"Siemens Energy ticked all of the major boxes that investors were looking for with these results," Morgan Stanley analysts wrote in a note, adding that the company's gas turbine orders were "exceptionally strong".

US data center electricity consumption is projected to more than triple by 2035, according to the International Energy Agency, and already accounts for six to eight percent of US electricity use.

Asked about rising orders on an earnings call, Siemens Energy CEO Christian Bruch said he thought the first-quarter figures were not "particularly strong" and that further growth could be expected.

"Demand for gas turbines is extremely high," he said. "We're talking about 2029 and 2030 for delivery dates."

Siemens Energy, spun out of the broader Siemens group in 2020, said last week that it would spend $1 billion expanding its US operations, including a new equipment plant in Mississippi as part of wider plans that would create 1,500 jobs.

Its shares have increased over tenfold since 2023, when the German government had to provide the firm with credit guarantees after quality problems at its wind-turbine unit.