Meta Unveils More Cautious Approach to ChatGPT Frenzy

A logo of Meta Platforms Inc. is seen at its booth, at the Viva Technology conference dedicated to innovation and startups, at Porte de Versailles exhibition center in Paris, France June 17, 2022. REUTERS/Benoit Tessier

Facebook-owner Meta on Friday unveiled its own version of the artificial intelligence behind apps such as ChatGPT, saying it would give access to researchers to find fixes to the technology's potential dangers.

Meta described its own AI, called LLaMA, as a "smaller, more performant" model designed to "help researchers advance their work," in what could be seen as veiled criticism of OpenAI's decision to release the technology widely while keeping the programming code secret.

Microsoft-backed ChatGPT has taken the world by storm with its ability to generate finely crafted texts such as essays or poems in just seconds, using technology known as large language models (LLMs).

LLMs are part of a field known as generative AI, which also includes systems that can generate images, designs or programming code almost instantaneously from a simple request.

Usually the more staid actor in big tech, Microsoft has deepened its partnership with OpenAI, the creator of ChatGPT, and earlier this month announced the technology would be integrated into its Bing search engine as well as the Edge browser.

Google, seeing a sudden threat to the dominance of its search engine, quickly announced it would soon release its own language AI, known as Bard, AFP reported.

But reports of disturbing exchanges with Microsoft's Bing chatbot -- including it issuing threats and speaking of desires to steal nuclear codes or lure one user away from his wife -- went viral, raising alarm bells that the technology was not ready.

Meta said these problems, sometimes called hallucinations, could be better remedied if researchers had improved access to the expensive technology.

Thorough research "remains limited because of the resources that are required to train and run such large models," the company said.

This was hindering efforts "to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation," Meta said.

OpenAI and Microsoft strictly limit access to the technology behind their chatbots, drawing criticism that they are choosing potential profits over improving the technology more quickly for society.

"By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems," Meta said.



OpenAI, Anthropic Sign Deals with US Govt for AI Research and Testing

OpenAI logo is seen in this illustration taken May 20, 2024. (Reuters)

AI startups OpenAI and Anthropic have signed deals with the United States government for research, testing and evaluation of their artificial intelligence models, the US Artificial Intelligence Safety Institute said on Thursday.

The first-of-their-kind agreements come at a time when the companies are facing regulatory scrutiny over safe and ethical use of AI technologies.

California legislators are set to vote on a bill as soon as this week to broadly regulate how AI is developed and deployed in the state.

Under the deals, the US AI Safety Institute will have access to major new models from both OpenAI and Anthropic prior to and following their public release.

The agreements will also enable collaborative research to evaluate capabilities of the AI models and risks associated with them, Reuters reported.

"We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on," said Jason Kwon, chief strategy officer at ChatGPT maker OpenAI.

Anthropic, which is backed by Amazon and Alphabet, did not immediately respond to a Reuters request for comment.

"These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI," said Elizabeth Kelly, director of the US AI Safety Institute.

The institute, part of the US Commerce Department's National Institute of Standards and Technology (NIST), will also collaborate with the UK AI Safety Institute and provide feedback to the companies on potential safety improvements.

The US AI Safety Institute was launched last year as part of an executive order by President Joe Biden's administration to evaluate known and emerging risks of artificial intelligence models.