Meta Begins Testing Its First In-House AI Training Chip

The Meta logo, a keyboard, and robot hands are seen in this illustration taken January 27, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

Facebook owner Meta (META.O) is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia (NVDA.O), two sources told Reuters.

The world's biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.

The push to develop in-house chips is part of a long-term plan at Meta to bring down its mammoth infrastructure costs as the company places expensive bets on AI tools to drive growth.

Meta, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure.

One of the sources said Meta's new training chip is a dedicated accelerator, meaning it is designed to handle only AI-specific tasks. This can make it more power-efficient than the integrated graphics processing units (GPUs) generally used for AI workloads.

Meta is working with Taiwan-based chip manufacturer TSMC (2330.TW) to produce the chip, this person said.

The test deployment began after Meta finished its first "tape-out" of the chip, a significant marker of success in silicon development work that involves sending an initial design through a chip factory, the other source said.

A typical tape-out costs tens of millions of dollars and takes roughly three to six months to complete, with no guarantee the test will succeed. A failure would require Meta to diagnose the problem and repeat the tape-out step.

The chip is the latest in the company's Meta Training and Inference Accelerator (MTIA) series. The program has had a wobbly start over the years, and Meta at one point scrapped a chip at a similar phase of development.

However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds.

Meta executives have said they want to start using their own chips by 2026 for training, or the compute-intensive process of feeding the AI system reams of data to "teach" it how to perform.

As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said.

"We're working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI," Meta's Chief Product Officer Chris Cox said at the Morgan Stanley technology, media and telecom conference last week.

Cox described Meta's chip development efforts as "kind of a walk, crawl, run situation" so far, but said executives considered the first-generation inference chip for recommendations to be a "big success."

Meta previously pulled the plug on an in-house custom inference chip after it flopped in a small-scale test deployment similar to the one it is now running for the training chip, reversing course and placing orders for billions of dollars' worth of Nvidia GPUs in 2022.

The social media company has remained one of Nvidia's biggest customers since then, amassing an arsenal of GPUs to train its models, including for recommendations and ads systems and its Llama foundation model series. The units also perform inference for the more than 3 billion people who use its apps each day.

The value of those GPUs has been thrown into question this year as AI researchers increasingly express doubts about how much more progress can be made by continuing to "scale up" large language models by adding ever more data and computing power.

Those doubts were reinforced with the late-January launch of new low-cost models from Chinese startup DeepSeek, which optimize computational efficiency by relying more heavily on inference than most incumbent models.

In a DeepSeek-induced global rout in AI stocks, Nvidia shares lost as much as a fifth of their value at one point. They subsequently regained most of that ground, with investors wagering the company's chips will remain the industry standard for training and inference, although they have dropped again on broader trade concerns.



OpenAI Introducing Ads to ChatGPT

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI announced Thursday it will begin testing advertisements on ChatGPT in the coming weeks, as the wildly popular artificial intelligence chatbot seeks to increase revenue to cover its soaring costs.

The ads will initially appear in the United States for users on free and lower-tier plans, the company said in a blog post outlining its long-anticipated move into advertising.

The integration of advertising has been a key question for generative AI chatbots, with companies largely reluctant to interrupt the user experience with ads.

But the exorbitant costs of running AI services may have forced OpenAI's hand.
Only a small percentage of its nearly one billion users pay for subscription services, putting pressure on the company to find new revenue sources.

Since ChatGPT's launch in 2022, OpenAI's valuation has soared to $500 billion in funding rounds -- higher than any other private company. Some expect it could go public with a trillion-dollar valuation.

But the ChatGPT maker burns through cash at a furious rate, mostly on the powerful computing required to deliver its services.

With its move, OpenAI brings its business model closer to tech giants Google and Meta, which have built advertising empires on the back of their free-to-use services.

Unlike OpenAI, those companies have massive advertising revenue to fund AI innovation -- with Amazon also building a solid ad business on its shopping and video streaming platforms.

"Ads aren't a distraction from the gen AI race; they're how OpenAI stays in it," said Jeremy Goldman, an analyst at Emarketer.

"If ChatGPT turns on ads, OpenAI is admitting something simple and consequential: the race isn't just about model quality anymore; it's about monetizing attention without poisoning trust," he added.

OpenAI's pivot comes as Google gains ground in the generative AI race, infusing services including Gmail, Maps and YouTube with AI features that -- in addition to its Gemini chatbot -- compete directly with ChatGPT.

To address concerns about its pivot into advertising, OpenAI pledged that ads would never influence ChatGPT's answers and that user conversations would remain private from advertisers.

"Ads do not influence the answers ChatGPT gives you," the company stated, according to AFP. "Answers are optimized based on what's most helpful to you. Ads are always separate and clearly labeled."

In an apparent reference to Meta, TikTok and Google's YouTube -- platforms accused of maximizing user engagement to boost ad views -- OpenAI said it would "not optimize for time spent in ChatGPT."

"We prioritize user trust and user experience over revenue," it added.

The commitment to user well-being is a sensitive issue for OpenAI, which has faced accusations of allowing ChatGPT to prioritize emotional engagement over safety, allegedly contributing to mental distress among some users.


US Allows Nvidia to Send Advanced AI Chips to China with Restrictions

An Nvidia logo and a computer motherboard appear in this illustration taken August 25, 2025. (Reuters)

The US Commerce Department on Tuesday opened the door for Nvidia to sell advanced artificial intelligence chips in China with restrictions, following through on a policy shift announced last month by President Donald Trump.

The change would permit Nvidia to sell its powerful H200 chip to Chinese buyers if certain conditions are met -- including proof of "sufficient" US supply -- while sales of its most advanced processors would still be blocked.

However, uncertainty has grown over how much demand there will be from Chinese companies, as Beijing has reportedly been encouraging tech companies to use homegrown chips.

Chinese officials have informed some firms they would only approve buying H200 chips under special circumstances, such as development labs or university research, news website The Information reported Tuesday, citing people with knowledge of the situation.

The Information had previously reported that Chinese officials were calling on companies there to pause H200 purchases while they deliberated requiring them to buy a certain ratio of AI chips made by Nvidia rivals in China.

In its official update on Tuesday, the US Commerce Department's Bureau of Industry and Security said it had changed its licensing review policy for the H200 and similar chips from a presumption of denial to case-by-case review.

Trump announced in December an agreement with Chinese President Xi Jinping to allow Nvidia to export its H200 chips to China, with the US government getting a 25-percent cut of sales.

The move marked a significant shift in US export policy for advanced AI chips, which Joe Biden's administration had heavily restricted over national security concerns about Chinese military applications.

Democrats in Congress have criticized the move as a huge mistake that will help China's military and economy.

- Chinese chips -

Nvidia chief executive Jensen Huang has advocated for the company to be allowed to sell some of its more advanced chips in China, arguing that it is important for AI systems around the world to be built on US technology.

The chips -- graphics processing units, or GPUs -- are used to train the AI models that are the bedrock of the generative AI revolution launched with the release of ChatGPT in 2022.

The GPU sector is dominated by Nvidia, now the world's most valuable company thanks to frenzied global demand and optimism for AI.

H200s are roughly 18 months behind the US company's most advanced offerings, which will still be off-limits to China.

Nvidia's Huang has repeatedly warned that China is just "nanoseconds behind" the United States as it accelerates the development of domestically produced advanced chips.

On Wednesday, leading Chinese AI startup Zhipu said it had used homegrown Huawei chips to train its new image generator.

Zhipu AI described its tool as "the first state-of-the-art multimodal model to complete the entire training process on a domestically produced chip".

The startup went public in Hong Kong last week and its shares have since soared 75 percent -- one of several dazzling recent initial public offerings by Chinese chip and generative AI companies, as high hopes for the sector outweigh concerns about a potential market crash.


Apple Rolls Out Creator Studio to Boost Services Push, Adds AI Features

A customer compares his old iPhone with the newly launched iPhone 17 pro max at an Apple retail store in Delhi, India, September 19, 2025. (Reuters)

Apple on Tuesday unveiled Apple Creator Studio, a new subscription bundle of professional creative software priced at $12.99 a month or $129 a year, as the iPhone maker steps up its push into paid services for creators, students and professionals.

The company has used its services business, which includes Apple Music and iCloud, to drive growth in recent years, helping counter slower hardware growth and generate recurring revenue.

Apple Creator Studio bundles some of the company's best-known creative tools into a single subscription, including Final Cut Pro, Logic Pro and Pixelmator Pro across Mac and iPad.

The package also adds premium content and new AI-powered features to Apple's productivity apps Keynote, Pages and Numbers, while the digital whiteboarding app Freeform will gain enhanced features later.

Final Cut Pro will offer new tools such as transcript-based search, visual search and beat detection to speed up video editing, while Logic Pro introduces AI-powered features like Synth Player and Chord ID to assist with music creation.

The company's Photoshop-alternative Pixelmator Pro will be available on iPad for the first time and will offer Apple Pencil support.

The subscription launches January 28 on the App Store, Apple said.