US Proposes Requiring Reporting for Advanced AI, Cloud Providers

A sign in front of Department of Commerce building is seen before an expected report of new home sales numbers in Washington, US, January 26, 2022. REUTERS/Joshua Roberts/File Photo Purchase Licensing Rights


The US Commerce Department said Monday it is proposing detailed reporting requirements for advanced artificial intelligence developers and cloud computing providers, to ensure the technologies are safe and can withstand cyberattacks.

The proposal from the department's Bureau of Industry and Security would set mandatory reporting to the federal government about development activities of "frontier" AI models and computing clusters.

It would also require reporting on cybersecurity measures and on the outcomes of so-called red-teaming efforts, such as testing for dangerous capabilities, including the ability to assist in cyberattacks or to lower barriers to entry for non-experts seeking to develop chemical, biological, radiological, or nuclear weapons.

External red-teaming has been used for years in cybersecurity to identify new risks, with the term referring to US Cold War simulations where the enemy was termed the "red team."

Generative AI - which can create text, photos and videos in response to open-ended prompts - has spurred excitement as well as fears it could make some jobs obsolete, upend elections and potentially overpower humans and have catastrophic effects, Reuters reported.

Commerce said the information collected under the proposal "will be vital for ensuring these technologies meet stringent standards for safety and reliability, can withstand cyberattacks, and have limited risk of misuse by foreign adversaries or non-state actors."

President Joe Biden in October 2023 signed an executive order requiring developers of AI systems that pose risks to US national security, the economy, public health or safety to share the results of safety tests with the US government before they are released to the public.

The rule would establish reporting requirements for advanced artificial intelligence (AI) models and computing clusters.

The regulatory push comes as legislative action in Congress on AI has stalled.

Earlier this year, the BIS conducted a pilot survey of AI developers. The Biden administration has taken a series of steps to prevent China from using US technology for AI, as the burgeoning sector raises security concerns.

Top cloud providers include Amazon.com's AWS, Alphabet's Google Cloud and Microsoft's Azure unit.



South Korea Summit to Target ‘Blueprint’ for Using AI in the Military 

Guests attend the opening of an international conference on the responsible use of artificial intelligence (AI) in the military domain, in Seoul, South Korea, 09 September 2024. (EPA/Yonhap)


South Korea convened an international summit on Monday seeking to establish a blueprint for the responsible use of artificial intelligence (AI) in the military, though any agreement reached is not expected to carry binding enforcement powers.

More than 90 countries including the United States and China have sent government representatives to the two-day summit in Seoul, which is the second such gathering.

The first summit was held in Amsterdam last year, where the United States, China and other nations endorsed a modest "call to action" without legal commitment.

"Recently, in the Russia-Ukraine war, an AI-applied Ukrainian drone functioned as David's slingshot," South Korean Defense Minister Kim Yong-hyun said in an opening address.

He was referring to Ukraine's efforts to gain a technological edge over Russia by rolling out AI-enabled drones, in the hope that they will help overcome signal jamming and enable unmanned aerial vehicles (UAVs) to operate in larger groups.

"As AI is applied to the military domain, the military's operational capabilities are dramatically improved. However, it is like a double-edged sword, as it can cause damage from abuse," Kim said.

South Korean Foreign Minister Cho Tae-yul said discussions would cover areas such as a legal review to ensure compliance with international law and mechanisms to prevent autonomous weapons from making life-and-death decisions without appropriate human oversight.

The Seoul summit aims to agree on a blueprint for action that would establish a minimum level of guardrails for AI in the military and suggest principles for responsible use, reflecting principles laid out by NATO, the US and a number of other countries, according to a senior South Korean official.

It was unclear how many of the nations attending the summit would endorse the document on Tuesday; it represents a more detailed attempt to set boundaries on military AI use but will still likely lack legal commitments.

The summit is not the only international set of discussions on AI use in the military.

UN countries that belong to the 1983 Convention on Certain Conventional Weapons (CCW) are discussing potential restrictions on lethal autonomous weapons systems for compliance with international humanitarian law.

The US government last year also launched a declaration on the responsible use of AI in the military, which covers military applications of AI beyond weapons. As of August, 55 countries had endorsed the declaration.

The Seoul summit, co-hosted by the Netherlands, Singapore, Kenya and the United Kingdom, aims to ensure ongoing multi-stakeholder discussions in a field where technological developments are primarily driven by the private sector, but governments are the main decision makers.

About 2,000 people globally have registered to take part in the summit, including representatives from international organizations, academia and the private sector, with discussions on topics such as civilian protection and AI use in the control of nuclear weapons.