Meta Unveils More Cautious Approach to ChatGPT Frenzy

A logo of Meta Platforms Inc. is seen at its booth, at the Viva Technology conference dedicated to innovation and startups, at Porte de Versailles exhibition center in Paris, France June 17, 2022. REUTERS/Benoit Tessier

Facebook-owner Meta on Friday unveiled its own version of the artificial intelligence behind apps such as ChatGPT, saying it would give researchers access to the model to help find fixes for the technology's potential dangers.

Meta described its own AI, called LLaMA, as a "smaller, more performant" model designed to "help researchers advance their work," in what could be seen as veiled criticism of Microsoft's decision to release the technology widely, while keeping the programming code secret.

Microsoft-backed ChatGPT has taken the world by storm with its ability to generate finely crafted texts such as essays or poems in just seconds, using technology known as large language models (LLMs).

LLMs are part of a field known as generative AI, which also includes models that can generate images, designs or programming code almost instantaneously upon a simple request.

Usually the more staid actor in big tech, Microsoft has deepened its partnership with OpenAI, the creator of ChatGPT, and earlier this month announced the technology would be integrated into its Bing search engine as well as the Edge browser.

Google, seeing a sudden threat to the dominance of its search engine, quickly announced it would soon release its own language AI, known as Bard, AFP reported.

But reports of disturbing exchanges with Microsoft's Bing chatbot -- including threats and professed desires to steal nuclear codes or lure one user away from his wife -- went viral, raising alarm bells that the technology was not ready.

Meta said these problems, sometimes called hallucinations, could be better remedied if researchers had improved access to the expensive technology.

Thorough research "remains limited because of the resources that are required to train and run such large models," the company said.

This was hindering efforts "to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation," Meta said.

OpenAI and Microsoft strictly limit access to the technology behind their chatbots, drawing criticism that they are choosing potential profits over improving the technology more quickly for society.

"By sharing the code for LLaMA, other researchers can more easily test new approaches to limiting or eliminating these problems," Meta said.



Australia Bans YouTube Accounts for Children Under 16 in Reversal of Previous Stance 

The YouTube app is displayed on an iPad in Baltimore. (AP)

The Australian government announced that YouTube will be among the social media platforms required to ensure account holders are at least 16 years old from December, reversing a position taken months ago on the popular video-sharing service.

YouTube was listed as an exemption in November last year when the Parliament passed world-first laws that will ban Australian children younger than 16 from platforms including Facebook, Instagram, Snapchat, TikTok and X.

Communications Minister Anika Wells released rules Wednesday that decide which online services are defined as “age-restricted social media platforms” and which avoid the age limit.

The age restrictions take effect Dec. 10 and platforms will face fines of up to 50 million Australian dollars ($33 million) for “failing to take responsible steps” to exclude underage account holders, a government statement said. The steps are not defined.

Wells defended applying the restrictions to YouTube and said the government would not be intimidated by threats of legal action from the platform’s US owner, Alphabet Inc.

“The evidence cannot be ignored that four out of 10 Australian kids report that their most recent harm was on YouTube,” Wells told reporters, referring to government research. “We will not be intimidated by legal threats when this is a genuine fight for the wellbeing of Australian kids.”

Children will be able to access YouTube but will not be allowed to have their own YouTube accounts.

YouTube said the government’s decision “reverses a clear, public commitment to exclude YouTube from this ban.”

“We share the government’s goal of addressing and reducing online harms. Our position remains clear: YouTube is a video sharing platform with a library of free, high-quality content, increasingly viewed on TV screens. It’s not social media,” a YouTube statement said, noting it will consider next steps and engage with the government.

Prime Minister Anthony Albanese said Australia would campaign at a United Nations forum in New York in September for international support for banning children from social media.

“I know from the discussions I’ve had with other leaders that they are looking at this and they are considering what impact social media is having on young people in their respective nations,” Albanese said. “It is a common experience. This is not an Australian experience."

Last year, the government commissioned an evaluation of age assurance technologies that was to report last month on how young children could be excluded from social media.

The government had yet to receive that evaluation’s final recommendations, Wells said. But she added that platform users won’t have to upload documents such as passports and driver’s licenses to prove their age.

“Platforms have to provide an alternative to providing your own personal identification documents to satisfy themselves of age,” Wells said. “These platforms know with deadly accuracy who we are, what we do and when we do it. And they know that you’ve had a Facebook account since 2009, so they know that you are over 16."

Exempt services include online gaming, messaging, education and health apps. They are excluded because they are considered less harmful to children.

The minimum age is intended to address harmful impacts on children including addictive behaviors caused by persuasive or manipulative platform design features, social isolation, sleep interference, poor mental and physical health, low life-satisfaction and exposure to inappropriate and harmful content, government documents say.