Nations Building Their Own AI Models Add to Nvidia's Growing Chip Demand

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nations building artificial intelligence models in their own languages are turning to Nvidia's chips, adding to already booming demand as generative AI takes center stage for businesses and governments, a senior executive said on Wednesday.
Nvidia's third-quarter forecast for rising sales of the chips that power AI technology such as OpenAI's ChatGPT failed to meet investors' towering expectations. But the company described new customers coming from around the world, including governments now seeking their own AI models and the hardware to support them, Reuters reported.
Countries adopting their own AI applications and models will contribute revenue in the low double-digit billions of dollars in the financial year ending in January 2025, Chief Financial Officer Colette Kress said on a call with analysts after Nvidia's earnings report.
That is up from an earlier forecast that such sales would contribute high single-digit billions of dollars to total revenue. Nvidia forecast about $32.5 billion in total revenue for the third quarter ending in October.
"Countries around the world (desire) to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country," Kress said, describing AI expertise and infrastructure as "national imperatives."
She offered the example of Japan's National Institute of Advanced Industrial Science and Technology, which is building an AI supercomputer featuring thousands of Nvidia H200 graphics processors.
Governments are also turning to AI as a way to strengthen national security.
"AI models are trained on data and for political entities -particularly nations - their data are secret and their models need to be customized to their unique political, economic, cultural, and scientific needs," said IDC computing semiconductors analyst Shane Rau.
"Therefore, they need to have their own AI models and a custom underlying arrangement of hardware and software."
Washington tightened its controls on exports of cutting-edge chips to China in 2023 as it sought to prevent breakthroughs in AI that would aid China's military, hampering Nvidia's sales in the region.
Businesses have been working to tap into government pushes to build AI platforms in regional languages.
IBM said in May that Saudi Arabia's Data and Artificial Intelligence Authority would train its "ALLaM" Arabic language model using the company's AI platform Watsonx.
Nations that want to create their own AI models can drive growth opportunities for Nvidia's GPUs, on top of the significant investments in the company's hardware from large cloud providers like Microsoft, said Bob O'Donnell, chief analyst at TECHnalysis Research.



Reddit Sues AI Giant Anthropic Over Content Use

Dario Amodei, co-founder and CEO of Anthropic. JULIEN DE ROSA / AFP

Social media outlet Reddit filed a lawsuit Wednesday against artificial intelligence company Anthropic, accusing the startup of illegally scraping millions of user comments to train its Claude chatbot without permission or compensation.

The lawsuit in a California state court represents the latest front in the growing battle between content providers and AI companies over the use of data to train increasingly sophisticated language models that power the generative AI revolution.

Anthropic, valued at $61.5 billion and heavily backed by Amazon, was founded in 2021 by former executives from OpenAI, the creator of ChatGPT.

The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.

"This case is about the two faces of Anthropic: the public face that attempts to ingratiate itself into the consumer's consciousness with claims of righteousness and respect for boundaries and the law, and the private face that ignores any rules that interfere with its attempts to further line its pockets," the suit said.

According to the complaint, Anthropic has been training its models on Reddit content since at least December 2021, with CEO Dario Amodei co-authoring research papers that specifically identified high-quality content for data training.

The lawsuit alleges that despite Anthropic's public claims that it had blocked its bots from accessing Reddit, the company's automated systems continued to access Reddit's servers more than 100,000 times in subsequent months.

Reddit is seeking monetary damages and a court injunction to force Anthropic to comply with its user agreement terms. The company has requested a jury trial.

In an email to AFP, Anthropic said, "We disagree with Reddit's claims and will defend ourselves vigorously."

Reddit has entered into licensing agreements with other AI giants including Google and OpenAI, which allow those companies to use Reddit content under terms that protect user privacy and provide compensation to the platform.

Those deals have helped lift Reddit's share price since it went public in 2024.

Reddit shares closed up more than six percent on Wednesday following news of the lawsuit.

Musicians, book authors, visual artists and news publications have sued various AI companies over the use of their data without permission or payment.

AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally changes the original content and is necessary for innovation.

Though most of these lawsuits are still in early stages, their outcomes could have a profound effect on the shape of the AI industry.