Nvidia Rivals Focus on Building a Different Kind of Chip to Power AI Products

The NVIDIA logo is seen near a computer motherboard in this illustration taken January 8, 2024. (Reuters)

Building the current crop of artificial intelligence chatbots has relied on specialized computer chips pioneered by Nvidia, which dominates the market and has made itself the poster child of the AI boom.

But the same qualities that make those graphics processor chips, or GPUs, so effective at creating powerful AI systems from scratch make them less efficient at putting AI products to work.

That's opened up the AI chip industry to rivals who think they can compete with Nvidia in selling so-called AI inference chips that are more attuned to the day-to-day running of AI tools and designed to reduce some of the huge computing costs of generative AI.

“These companies are seeing opportunity for that kind of specialized hardware,” said Jacob Feldgoise, an analyst at Georgetown University's Center for Security and Emerging Technology. “The broader the adoption of these models, the more compute will be needed for inference and the more demand there will be for inference chips.”

What is AI inference?

It takes a lot of computing power to make an AI chatbot. It starts with a process called training or pretraining — the “P” in ChatGPT — that involves AI systems “learning” from the patterns of huge troves of data. GPUs are good at doing that work because they can run many calculations at a time on a network of devices in communication with each other.

However, once trained, a generative AI tool still needs chips to do the work — such as when you ask a chatbot to compose a document or generate an image. That's where inferencing comes in. A trained AI model must take in new information and make inferences from what it already knows to produce a response.

GPUs can do that work, too. But it can be a bit like taking a sledgehammer to crack a nut.

“With training, you’re doing a lot heavier, a lot more work. With inferencing, that’s a lighter weight,” said Forrester analyst Alvin Nguyen.

That's led startups like Cerebras, Groq and d-Matrix as well as Nvidia's traditional chipmaking rivals — such as AMD and Intel — to pitch more inference-friendly chips as Nvidia focuses on meeting the huge demand from bigger tech companies for its higher-end hardware.

Inside an AI inference chip lab

D-Matrix, which is launching its first product this week, was founded in 2019 — a bit late to the AI chip game, as CEO Sid Sheth explained during a recent interview at the company’s headquarters in Santa Clara, California, the Silicon Valley city that is also home to AMD, Intel and Nvidia.

“There were already 100-plus companies. So when we went out there, the first reaction we got was ‘you’re too late,’” he said. The pandemic's arrival six months later didn't help as the tech industry pivoted to a focus on software to serve remote work.

Now, however, Sheth sees a big market in AI inferencing, comparing that later stage of machine learning to how human beings apply the knowledge they acquired in school.

“We spent the first 20 years of our lives going to school, educating ourselves. That’s training, right?” he said. “And then the next 40 years of your life, you kind of go out there and apply that knowledge — and then you get rewarded for being efficient.”

The product, called Corsair, consists of two chips with four chiplets each, made by Taiwan Semiconductor Manufacturing Company — which also manufactures most of Nvidia's chips — and packaged together in a way that helps to keep them cool.

The chips are designed in Santa Clara, assembled in Taiwan and then tested back in California. Testing is a long process that can take six months — if anything is off, a chip can be sent back to Taiwan.

During a recent visit to the laboratory, with its blue metal desks covered with cables, motherboards and computers, and a cold server room next door, D-Matrix workers were doing final testing on the chips.

Who wants AI inference chips?

While tech giants like Amazon, Google, Meta and Microsoft have been gobbling up the supply of costly GPUs in a race to outdo each other in AI development, makers of AI inference chips are aiming for a broader clientele.

Forrester's Nguyen said that could include Fortune 500 companies that want to make use of new generative AI technology without having to build their own AI infrastructure. Sheth said he expects a strong interest in AI video generation.

“The dream of AI for a lot of these enterprise companies is you can use your own enterprise data,” Nguyen said. “Buying (AI inference chips) should be cheaper than buying the ultimate GPUs from Nvidia and others. But I think there’s going to be a learning curve in terms of integrating it.”

Feldgoise said that, unlike training-focused chips, AI inference work prioritizes how fast a person will get a chatbot's response.

He said another whole set of companies is developing AI hardware for inference that can run not just in big data centers but locally on desktop computers, laptops and phones.

Why does this matter?

Better-designed chips could bring down the huge cost of running AI for businesses. That could also reduce the environmental and energy costs for everyone else.

Sheth says the big concern right now is, “are we going to burn the planet down in our quest for what people call AGI — human-like intelligence?”

It’s still fuzzy when AI might get to the point of artificial general intelligence — predictions range from a few years to decades. But, Sheth notes, only a handful of tech giants are on that quest.

“But then what about the rest?” he said. “They cannot be put on the same path.”

Those other companies don’t want to use very large AI models — doing so is too costly and uses too much energy.

“I don’t know if people truly, really appreciate that inference is actually really going to be a much bigger opportunity than training. I don’t think they appreciate that. It’s still training that is really grabbing all the headlines,” Sheth said.



Italy Watchdog Orders Meta to Halt WhatsApp Terms Barring Rival AI Chatbots

The logo of Meta is seen at Porte de Versailles exhibition center in Paris, France, June 11, 2025. (Reuters)

Italy's antitrust authority (AGCM) on Wednesday ordered Meta Platforms to suspend contractual terms that could shut rival AI chatbots out of WhatsApp, as it investigates the US tech group for suspected abuse of a dominant position.

A spokesperson for Meta called the decision "fundamentally flawed," and said the emergence of AI chatbots "put a strain on our systems that they were not designed to support".

"We will appeal," the spokesperson added.

The move is the latest in a string by European regulators against Big Tech firms, as the EU seeks to balance support for the sector with efforts to curb its expanding influence.

Meta's conduct appeared capable of restricting "output, market access or technical development in the AI chatbot services market", potentially harming consumers, AGCM said.

In July, the Italian regulator opened the investigation into Meta over the suspected abuse of a dominant position related to WhatsApp. It widened the probe in November to cover updated terms for the messaging app's business platform.

"These contractual conditions completely exclude Meta AI's competitors in the AI chatbot services market from the WhatsApp platform," the watchdog said.

EU antitrust regulators launched a parallel investigation into Meta last month over the same allegations.

Europe's tough stance - a marked contrast to more lenient US regulation - has sparked industry pushback, particularly from US tech titans, and drawn criticism from the administration of US President Donald Trump.

The Italian watchdog said it was coordinating with the European Commission to ensure Meta's conduct was addressed "in the most effective manner".


Amazon Says Blocked 1,800 North Koreans from Applying for Jobs

Amazon logo (Reuters)

US tech giant Amazon said it has blocked over 1,800 North Koreans from joining the company, as Pyongyang sends large numbers of IT workers overseas to earn and launder funds.

In a post on LinkedIn, Amazon's Chief Security Officer Stephen Schmidt said last week that North Korean workers had been "attempting to secure remote IT jobs with companies worldwide, particularly in the US".

He said the firm had seen nearly a one-third rise in applications by North Koreans in the past year, AFP reported.

The North Koreans typically use "laptop farms" -- computers in the United States operated remotely from outside the country, he said.

He warned the problem wasn't specific to Amazon and "is likely happening at scale across the industry".

Tell-tale signs of North Korean workers, Schmidt said, included wrongly formatted phone numbers and dodgy academic credentials.

In July, a woman in Arizona was sentenced to more than eight years in prison for running a laptop farm helping North Korean IT workers secure remote jobs at more than 300 US companies.

The scheme generated more than $17 million in revenue for her and North Korea, officials said.

Last year, Seoul's intelligence agency warned that North Korean operatives had used LinkedIn to pose as recruiters and approach South Koreans working at defense firms to obtain information on their technologies.

"North Korea is actively training cyber personnel and infiltrating key locations worldwide," Hong Min, an analyst at the Korea Institute for National Unification, told AFP.

"Given Amazon's business nature, the motive seems largely economic, with a high likelihood that the operation was planned to steal financial assets," he added.

North Korea's cyber-warfare program dates back to at least the mid-1990s.

It has since grown into a 6,000-strong cyber unit known as Bureau 121, which operates from several countries, according to a 2020 US military report.

In November, Washington announced sanctions on eight individuals accused of being "state-sponsored hackers", whose illicit operations were conducted "to fund the regime's nuclear weapons program" by stealing and laundering money.

The US Department of the Treasury has accused North Korea-affiliated cybercriminals of stealing over $3 billion over the past three years, primarily in cryptocurrency.


KAUST Scientists Develop AI-Generated Data to Improve Environmental Disaster Tracking

King Abdullah University of Science and Technology (KAUST) logo

King Abdullah University of Science and Technology (KAUST) and SARsatX, a Saudi company specializing in Earth observation technologies, have developed computer-generated data to train deep learning models to predict oil spills.

According to KAUST, validating the use of synthetic data is crucial for monitoring environmental disasters, as early detection and rapid response can significantly reduce the risks of environmental damage.

Dr. Matthew McCabe, Dean of the Biological and Environmental Science and Engineering Division at KAUST, noted that one of the biggest challenges in environmental applications of artificial intelligence is the shortage of high-quality training data.

He explained that this challenge can be addressed by using deep learning to generate synthetic data from a very small sample of real data and then training predictive AI models on it.

This approach can significantly enhance efforts to protect the marine environment by enabling faster and more reliable monitoring of oil spills while reducing the logistical and environmental challenges associated with data collection.