Nvidia Rivals Focus on Building a Different Kind of Chip to Power AI Products

The NVIDIA logo is seen near a computer motherboard in this illustration taken January 8, 2024. (Reuters)

Building the current crop of artificial intelligence chatbots has relied on specialized computer chips pioneered by Nvidia, which dominates the market and has made itself the poster child of the AI boom.

But the same qualities that make those graphics processor chips, or GPUs, so effective at creating powerful AI systems from scratch make them less efficient at putting AI products to work.

That's opened up the AI chip industry to rivals who think they can compete with Nvidia in selling so-called AI inference chips that are more attuned to the day-to-day running of AI tools and designed to reduce some of the huge computing costs of generative AI.

“These companies are seeing opportunity for that kind of specialized hardware,” said Jacob Feldgoise, an analyst at Georgetown University's Center for Security and Emerging Technology. “The broader the adoption of these models, the more compute will be needed for inference and the more demand there will be for inference chips.”

What is AI inference?

It takes a lot of computing power to make an AI chatbot. It starts with a process called training or pretraining — the “P” in ChatGPT — that involves AI systems “learning” from the patterns of huge troves of data. GPUs are good at doing that work because they can run many calculations at a time on a network of devices in communication with each other.

However, once trained, a generative AI tool still needs chips to do the work — such as when you ask a chatbot to compose a document or generate an image. That's where inferencing comes in. A trained AI model must take in new information and make inferences from what it already knows to produce a response.
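For a more concrete picture of the difference, the sketch below is a toy illustration rather than code from Nvidia, D-Matrix or any chatbot maker; it assumes the open-source PyTorch library and a tiny stand-in model, but the shape of the workload is the same: training loops over huge amounts of data and computes gradients, while inference is a single, gradient-free pass per request.

import torch
import torch.nn as nn

# Tiny stand-in for a large language model (illustrative only).
model = nn.Linear(512, 512)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Training: repeated passes over piles of data, each computing gradients.
for batch in torch.randn(100, 32, 512):    # 100 mock batches of "training data"
    loss = model(batch).pow(2).mean()      # placeholder loss
    loss.backward()                        # backpropagation: the heavy, parallel work GPUs excel at
    optimizer.step()
    optimizer.zero_grad()

# Inference: the trained weights are frozen; each user request is one forward pass.
with torch.no_grad():
    prompt = torch.randn(1, 512)           # one incoming query
    response = model(prompt)               # no gradients, far lighter computation

Inference chips are pitched at that second, lighter half of the workload.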

GPUs can do that work, too. But it can be a bit like taking a sledgehammer to crack a nut.

“With training, you’re doing a lot heavier, a lot more work. With inferencing, that’s a lighter weight,” said Forrester analyst Alvin Nguyen.

That's led startups like Cerebras, Groq and d-Matrix as well as Nvidia's traditional chipmaking rivals — such as AMD and Intel — to pitch more inference-friendly chips as Nvidia focuses on meeting the huge demand from bigger tech companies for its higher-end hardware.

Inside an AI inference chip lab

D-Matrix, which is launching its first product this week, was founded in 2019 — a bit late to the AI chip game, as CEO Sid Sheth explained during a recent interview at the company’s headquarters in Santa Clara, California, the same Silicon Valley city that's also home to AMD, Intel and Nvidia.

“There were already 100-plus companies. So when we went out there, the first reaction we got was ‘you’re too late,’” he said. The pandemic's arrival six months later didn't help as the tech industry pivoted to a focus on software to serve remote work.

Now, however, Sheth sees a big market in AI inferencing, comparing that later stage of machine learning to how human beings apply the knowledge they acquired in school.

“We spent the first 20 years of our lives going to school, educating ourselves. That’s training, right?” he said. “And then the next 40 years of your life, you kind of go out there and apply that knowledge — and then you get rewarded for being efficient.”

The product, called Corsair, consists of two chips with four chiplets each, made by Taiwan Semiconductor Manufacturing Company — the same company that manufactures most of Nvidia's chips — and packaged together in a way that helps to keep them cool.

The chips are designed in Santa Clara, assembled in Taiwan and then tested back in California. Testing is a long process and can take six months — if anything is off, it can be sent back to Taiwan.

During a recent visit, D-Matrix workers were doing final testing on the chips in a laboratory with blue metal desks covered with cables, motherboards and computers, with a cold server room next door.

Who wants AI inference chips?

While tech giants like Amazon, Google, Meta and Microsoft have been gobbling up the supply of costly GPUs in a race to outdo each other in AI development, makers of AI inference chips are aiming for a broader clientele.

Forrester's Nguyen said that could include Fortune 500 companies that want to make use of new generative AI technology without having to build their own AI infrastructure. Sheth said he expects a strong interest in AI video generation.

“The dream of AI for a lot of these enterprise companies is you can use your own enterprise data,” Nguyen said. “Buying (AI inference chips) should be cheaper than buying the ultimate GPUs from Nvidia and others. But I think there’s going to be a learning curve in terms of integrating it.”

Feldgoise said that, unlike training-focused chips, AI inference work prioritizes how fast a person will get a chatbot's response.

He said another whole set of companies is developing AI hardware for inference that can run not just in big data centers but locally on desktop computers, laptops and phones.

Why does this matter?

Better-designed chips could bring down the huge costs of running AI for businesses. That could also affect the environmental and energy costs for everyone else.

Sheth says the big concern right now is, “are we going to burn the planet down in our quest for what people call AGI — human-like intelligence?”

It’s still fuzzy when AI might get to the point of artificial general intelligence — predictions range from a few years to decades. But, Sheth notes, only a handful of tech giants are on that quest.

“But then what about the rest?” he said. “They cannot be put on the same path.”

That other group of companies doesn't want to use very large AI models, which are too costly to run and use too much energy.

“I don’t know if people truly, really appreciate that inference is actually really going to be a much bigger opportunity than training. I don’t think they appreciate that. It’s still training that is really grabbing all the headlines,” Sheth said.



Xbox Boss Phil Spencer Retires as Microsoft Shakes Up Gaming Unit

During 12 years leading Xbox, Phil Spencer oversaw blockbuster studio buys and an evolution to video games being played just about anywhere players can get online. KEVORK DJANSEZIAN / GETTY IMAGES NORTH AMERICA/AFP

Microsoft on Friday put out word that Xbox stalwart Phil Spencer is retiring, in a shakeup of leadership at the tech titan's video game unit.

Former Instacart chief operating officer Asha Sharma will take over as head of Microsoft Gaming, with Matt Booty becoming executive vice president and chief content officer, said AFP.

"As we celebrate Xbox's 25th year, the opportunity and innovation agenda in front of us is expansive," Microsoft chief executive Satya Nadella said in a message to employees.

"I am long on gaming and its role at the center of our consumer ambition."

Changes to the gaming team include Sarah Bond leaving her job as Xbox president "to begin a new chapter" away from Microsoft, according to the company.

The shakeup comes as cloud computing and artificial intelligence have become priorities at Microsoft, driving revenue growth but also massive spending on infrastructure to power the technology.

"When I walked through Microsoft's doors as an intern in June of 1988, I could never have imagined the products I'd help build, the players and customers we'd serve or the extraordinary teams I'd be lucky enough to join," Spencer said in a message to colleagues.

"It's been an epic ride and truly the privilege of a lifetime."

Spencer headed the Xbox unit for 12 of his 38 years at Microsoft, nearly tripling the size of the business as video games evolved from packaged software for consoles to subscription services and digital downloads on an array of devices.

Spencer also guided the Xbox team through acquisitions of Activision Blizzard, ZeniMax, and Minecraft.

Xbox boasts more than 500 million monthly users and a vast stable of game studios, along with a subscription gaming service.

"We are witnessing the reinvention of play," Sharma said in a blog post announcing the leadership changes.

"To meet the moment, we will invent new business models and new ways to play by leaning into what we already have: iconic teams, characters and worlds that people love."


Indian PM, President of Saudi Arabia’s SDAIA Discuss AI Cooperation 

Indian Prime Minister Narendra Modi and Saudi Data and Artificial Intelligence Authority (SDAIA) President Dr. Abdullah Al-Ghamdi meet on the sidelines of the India AI Impact Summit 2026. (SPA)

Indian Prime Minister Narendra Modi held talks with Saudi Data and Artificial Intelligence Authority (SDAIA) President Dr. Abdullah Al-Ghamdi on the sidelines of the India AI Impact Summit 2026, the Saudi Press Agency reported on Friday.

Discussions focused on knowledge transfer and the exchange of expertise to accelerate digital development in both nations. They also tackled expanding bilateral cooperation in data and AI.

Al-Ghamdi commended India’s leadership in hosting the summit, noting that such international partnerships are essential for harnessing advanced technology to benefit humanity and achieve shared strategic goals.


India Chases 'DeepSeek Moment' with Homegrown AI

A handout photo made available by the Press Information Bureau (PIB) of Indian Prime Minister Narendra Modi speaking with global leaders at the AI Impact Summit 2026 at Bharat Mandapam in New Delhi, India, 19 February 2026. EPA/PRESS INFORMATION BUREAU HANDOUT

Fledgling Indian artificial intelligence companies showcased homegrown technologies this week at a major summit in New Delhi, underpinning big dreams of becoming a global AI power.

But analysts said the country was unlikely to have a "DeepSeek moment" -- the sort of boom China had last year with a high-performance, low-cost chatbot -- any time soon, AFP reported.

Still, building custom AI tools could bring benefits to the world's most populous nation.

At the AI Impact Summit, Prime Minister Narendra Modi lauded new Indian AI models, along with other examples of the country's rising profile in the field.

"All the solutions that have been presented here demonstrate the power of 'Made in India' and India's innovative qualities," Modi said Thursday.

One of the startups generating buzz at the five-day summit was Sarvam AI, which this week released two large language models it says were trained from scratch in India.

Its models are optimized to work across 22 Indian languages, says the company, which received government-subsidized access to advanced computer processors.

The five-day summit, which wraps up Friday, is the fourth annual international meeting to discuss the risks and rewards of the fast-growing AI sector.

It is the largest yet and the first in a developing country, with Indian businesses striking deals with US tech giants to build large-scale data center infrastructure to help train and run AI systems.

On Friday, Abu Dhabi-based tech group G42 said the United Arab Emirates would deploy an AI supercomputer system in India, in a project "designed to lower barriers to AI innovation".

So-called sovereign AI has become a priority for many countries hoping to reduce dependence on US and Chinese platforms while ensuring that systems respect local regulations, including on data privacy.

AI models that succeed in India "can be deployed all over the world", Modi said on Thursday.

But experts said the sheer computational might of the United States would be hard to match.

"Despite the headline pledges, we don't expect India to emerge as a frontier AI innovation hub in the near term," said Reema Bhattacharya, head of Asia research at risk intelligence company Verisk Maplecroft.

"Its more realistic trajectory is to become the world's largest AI adoption market, embedding AI at scale through digital public infrastructure and cost-efficient applications," she said.

Another Indian company that drew attention with product debuts this week was the Bengaluru-based Gnani.ai, which introduced its Vachana speech models at the summit.

Trained on more than a million hours of audio, Vachana models generate natural-sounding voices in Indian languages that can process customer interactions and allow people to interact with digital services out loud.

Job disruption and redundancies, including in India's huge call center industry, have been a key focus of discussions at the Delhi summit.

Prihesh Ratnayake, head of AI initiatives at think-tank Factum, told AFP that the new Indian AI models were "not really meant to be global".

"They're India-specific models, and hopefully we'll see their impact over the coming year," he said.

“Why does India need to build for the global scale? India itself is the biggest market.”

And Nanubala Gnana Sai at the Cambridge AI Safety Hub said that homegrown models could bring other benefits.

Existing models, even those developed in China, "have intrinsic bias towards Western values, culture and ethos -- as a product of being trained heavily on that consensus", Sai told AFP.

India already has some major strengths, including "technology diffusion, eager talent pool and cheap labor", and dedicated efforts can help startups pivot to artificial intelligence, he said.

"The end-product may not 'rival' ChatGPT or DeepSeek on benchmarks, but will provide leverage for the Global South to have its own stand in an increasingly polarized world."