OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.



Nations Building Their Own AI Models Add to Nvidia's Growing Chip Demand

FILE PHOTO: AI (Artificial Intelligence) letters and robot hand miniature in this illustration, taken June 23, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

Nations building artificial intelligence models in their own languages are turning to Nvidia's chips, adding to already booming demand as generative AI takes center stage for businesses and governments, a senior executive said on Wednesday.

Nvidia's third-quarter forecast for rising sales of its chips that power AI technology such as OpenAI's ChatGPT failed to meet investors' towering expectations. But the company described new customers coming from around the world, including governments that are now seeking their own AI models and the hardware to support them, Reuters said.

Countries adopting their own AI applications and models will contribute revenue in the low double-digit billions of dollars to Nvidia in the financial year ending in January 2025, Chief Financial Officer Colette Kress said on a call with analysts after Nvidia's earnings report.

That is up from an earlier forecast that such sales would contribute in the high single-digit billions. Nvidia forecast about $32.5 billion in total revenue for the third quarter ending in October.

"Countries around the world (desire) to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country," Kress said, describing AI expertise and infrastructure as "national imperatives."

She offered the example of Japan's National Institute of Advanced Industrial Science and Technology, which is building an AI supercomputer featuring thousands of Nvidia H200 graphics processors.

Governments are also turning to AI as a means of strengthening national security.

"AI models are trained on data, and for political entities, particularly nations, their data are secret and their models need to be customized to their unique political, economic, cultural, and scientific needs," said IDC computing semiconductors analyst Shane Rau.

"Therefore, they need to have their own AI models and a custom underlying arrangement of hardware and software."

Washington tightened its controls on exports of cutting-edge chips to China in 2023 as it sought to prevent breakthroughs in AI that would aid China's military, hampering Nvidia's sales in the region.

Businesses have been working to tap into government pushes to build AI platforms in regional languages.

IBM said in May that Saudi Arabia's Data and Artificial Intelligence Authority would train its "ALLaM" Arabic language model using the company's AI platform Watsonx.

Nations that want to create their own AI models can drive growth opportunities for Nvidia's GPUs, on top of the significant investments in the company's hardware from large cloud providers like Microsoft, said Bob O'Donnell, chief analyst at TECHnalysis Research.