AI Companies Will Need to Start Reporting their Safety Tests to the US Government

AI (Artificial Intelligence) letters are placed on a computer motherboard in this illustration taken June 23, 2023. (Reuters)

The Biden administration will start implementing a new requirement for the developers of major artificial intelligence systems to disclose their safety test results to the government.
The White House AI Council is scheduled to meet Monday to review progress made on the executive order that President Joe Biden signed three months ago to manage the fast-evolving technology.
Chief among the 90-day goals from the order was a mandate under the Defense Production Act that AI companies share vital information with the Commerce Department, including safety tests.
Ben Buchanan, the White House special adviser on AI, said in an interview that the government wants “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”
The software companies have committed to a set of categories for the safety tests, but they do not yet have to comply with a common standard. The government's National Institute of Standards and Technology will develop a uniform framework for assessing safety as part of the order Biden signed in October.
AI has emerged as a leading economic and national security consideration for the federal government, given the investments and uncertainties caused by the launch of new AI tools such as ChatGPT that can generate text, images and sounds. The Biden administration also is looking at congressional legislation and working with other countries and the European Union on rules for managing the technology.
The Commerce Department has developed a draft rule on US cloud companies that provide servers to foreign AI developers.
Nine federal agencies, including the departments of Defense, Transportation, Treasury and Health and Human Services, have completed risk assessments regarding AI's use in critical national infrastructure such as the electric grid.
The government also has scaled up the hiring of AI experts and data scientists at federal agencies.
“We know that AI has transformative effects and potential,” Buchanan said. “We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”



Italy Fines OpenAI over ChatGPT Privacy Rules Breach

The Italian watchdog also ordered OpenAI to launch a six-month campaign on Italian media to raise public awareness about how ChatGPT works. (Reuters)

Italy's data protection agency said on Friday it fined ChatGPT maker OpenAI 15 million euros ($15.58 million) after closing an investigation into the use of personal data by the generative artificial intelligence application.

The fine comes after the authority found OpenAI processed users' personal data to "train ChatGPT without having an adequate legal basis and violated the principle of transparency and the related information obligations towards users".

OpenAI said the decision was "disproportionate" and that it would appeal.

The investigation, which began in 2023, also concluded that the US-based company did not have an adequate age verification system in place to prevent children under the age of 13 from being exposed to inappropriate AI-generated content, the authority said.

The Italian watchdog also ordered OpenAI to run a six-month campaign in Italian media to raise public awareness of how ChatGPT works, particularly regarding the collection of data from users and non-users to train its algorithms.

Italy's authority, known as Garante, is one of the European Union's most proactive regulators in assessing AI platform compliance with the bloc's data privacy regime.

Last year it briefly banned the use of ChatGPT in Italy over alleged breaches of EU privacy rules.

The service was reactivated after Microsoft-backed OpenAI addressed issues concerning, among other things, the right of users to refuse consent for the use of personal data to train the algorithms.

"They've since recognised our industry-leading approach to protecting privacy in AI, yet this fine is nearly twenty times the revenue we made in Italy during the relevant period," OpenAI said, adding the Garante's approach "undermines Italy's AI ambitions".

The regulator said the size of its 15-million-euro fine was calculated taking into account OpenAI's "cooperative stance", suggesting the fine could have been even bigger.

Under the EU's General Data Protection Regulation (GDPR), introduced in 2018, any company found to have broken the rules faces fines of up to 20 million euros or 4% of its global turnover, whichever is higher.
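
For illustration only (not from the article): the GDPR ceiling described above is simply the higher of the fixed 20-million-euro cap and 4% of a company's global annual turnover. A minimal Python sketch of that arithmetic, using a hypothetical turnover figure:

    # Illustrative sketch of the GDPR fine ceiling (Article 83(5)):
    # the higher of a fixed 20 million euros or 4% of global annual turnover.
    # The turnover figure below is hypothetical, not OpenAI's.

    GDPR_FIXED_CAP_EUR = 20_000_000
    GDPR_TURNOVER_SHARE = 0.04

    def gdpr_fine_ceiling(global_turnover_eur: float) -> float:
        """Maximum fine allowed for a given global annual turnover."""
        return max(GDPR_FIXED_CAP_EUR, GDPR_TURNOVER_SHARE * global_turnover_eur)

    # Example: at 2 billion euros of turnover, 4% (80 million) exceeds the 20M floor.
    print(f"{gdpr_fine_ceiling(2_000_000_000):,.0f} EUR")  # -> 80,000,000 EUR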