OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)
AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, a part of the US commerce department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.



Alphabet to Roll out Image Generation of People on Gemini after Pause

A large Google logo is seen at Google's Bay View campus in Mountain View, California on August 13, 2024. (AFP)

Alphabet's Google said on Wednesday it has updated Gemini's AI image-creation model and would roll out the generation of visuals of people in the coming days, after a months-long pause of the capability.

In February, Google paused the tool's ability to create images of people after it generated historically inaccurate depictions, drawing criticism from users.

The company said it has worked to improve the product, adhered to its "product principles" and run simulated scenarios to find weaknesses.

The feature will first be made available to paid users of the Gemini AI chatbot, starting in English, before rolling out to more users and languages.

Google said it has improved the Imagen 3 model to create better images of people, but it would not generate images of specific people, children or graphic content.

OpenAI's Dall-E, Microsoft's Copilot and, more recently, xAI's Grok are among other AI chatbots that can generate images.

The search engine giant also said that over the coming days, subscribers to Gemini Advanced, Business and Enterprise would gain access to "Gems", chatbots customized for specific purposes.

Users can write specific instructions to create a Gem for a particular purpose, saving them from rewriting prompts for repetitive use cases.