AI Companies Will Need to Start Reporting their Safety Tests to the US Government

AI (Artificial Intelligence) letters are placed on computer motherboard in this illustration taken June 23, 2023. (Reuters)

The Biden administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government.
The White House AI Council is scheduled to meet Monday to review progress made on the executive order that President Joe Biden signed three months ago to manage the fast-evolving technology.
Chief among the 90-day goals from the order was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests.
Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants "to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”
The software companies have committed to a set of categories for the safety tests, but they do not yet have to comply with a common testing standard. The government's National Institute of Standards and Technology will develop a uniform framework for assessing safety as part of the order Biden signed in October.
AI has emerged as a leading economic and national security consideration for the federal government, given the investments and uncertainties caused by the launch of new AI tools such as ChatGPT that can generate text, images and sounds. The Biden administration also is looking at congressional legislation and working with other countries and the European Union on rules for managing the technology.
The Commerce Department has developed a draft rule on US cloud companies that provide servers to foreign AI developers.
Nine federal agencies, including the departments of Defense, Transportation, Treasury and Health and Human Services, have completed risk assessments regarding AI's use in critical national infrastructure such as the electric grid.
The government also has scaled up the hiring of AI experts and data scientists at federal agencies.
“We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”



Facebook-Parent Meta Settles with Australia’s Privacy Watchdog over Cambridge Analytica Lawsuit

The logo of Meta Platforms' business group is seen in Brussels, Belgium December 6, 2022. (Reuters)

Meta Platforms has agreed to a A$50 million ($31.85 million) settlement, Australia's privacy watchdog said on Tuesday, ending long-running, costly legal proceedings for the Facebook parent over the Cambridge Analytica scandal.

The Office of the Australian Information Commissioner had alleged that the personal information of some users was disclosed to Facebook's personality quiz app, This is Your Digital Life, as part of the broader scandal.

The breaches were first reported by the Guardian in early 2018, and Facebook received fines from regulators in the United States and the UK in 2019.

Australia's privacy regulator has been locked in a legal battle with Meta since 2020. According to the regulator's 2020 statement, the personal data of 311,127 Australian Facebook users was "exposed to the risk of being disclosed" to consulting firm Cambridge Analytica and used for profiling purposes.

In March 2023, the regulator persuaded the High Court not to hear an appeal, a win that allowed the watchdog to continue its prosecution.

In June 2023, the country's federal court ordered Meta and the privacy commissioner to enter mediation.

"Today's settlement represents the largest ever payment dedicated to addressing concerns about the privacy of individuals in Australia," the Australian Information Commissioner Elizabeth Tydd said.

Cambridge Analytica, a British consulting firm, was known to have kept personal data of millions of Facebook users without their permission, before using the data predominantly for political advertising, including assisting Donald Trump and the Brexit campaign in the UK.

A Meta spokesperson told Reuters that the company had settled the lawsuit in Australia on a no-admissions basis, closing a chapter on allegations regarding the firm's past practices.