US Requiring New AI Safeguards for Government Use, Transparency

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

The White House said Thursday it is requiring federal agencies using artificial intelligence to adopt "concrete safeguards" by Dec. 1 to protect Americans’ rights and ensure safety as the government expands AI use in a wide range of applications.
The Office of Management and Budget issued a directive requiring federal agencies to "monitor, assess and test AI’s impacts on the public, mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI." Agencies must also conduct risk assessments and set operational and governance metrics, Reuters said.
The White House said agencies "will be required to implement concrete safeguards when using AI in a way that could impact Americans' rights or safety" including detailed public disclosures so the public knows how and when artificial intelligence is being used by the government.
President Joe Biden signed an executive order in October invoking the Defense Production Act to require developers of AI systems posing risks to US national security, the economy, public health or safety to share the results of safety tests with the US government before they are publicly released.
The White House on Thursday said new safeguards will ensure air travelers can opt out of Transportation Security Administration facial recognition without delays in screening. When AI is used in federal healthcare to support diagnostic decisions, a human must oversee "the process to verify the tools’ results."
Generative AI, which can create text, photos and videos in response to open-ended prompts, has spurred excitement as well as fears that it could lead to job losses, upend elections and potentially overpower humans with catastrophic effects.
The White House is requiring government agencies to release inventories of AI use cases, report metrics about AI use, and release government-owned AI code, models and data where doing so does not pose risks.
The Biden administration cited ongoing federal AI uses, including the Federal Emergency Management Agency employing AI to assess structural hurricane damage, while the Centers for Disease Control and Prevention uses AI to predict the spread of disease and detect opioid use. The Federal Aviation Administration is using AI to help "deconflict air traffic in major metropolitan areas to improve travel time."
The White House plans to hire 100 AI professionals to promote the safe use of AI and is requiring federal agencies to designate chief AI officers within 60 days.
In January, the Biden administration proposed requiring US cloud companies to determine whether foreign entities are accessing US data centers to train AI models through "know your customer" rules.



Paris Olympics Expected to Face 4 Billion Cyber Incidents

A general view of the Olympic rings on the Eiffel Tower a day before the opening ceremony of the Paris 2024 Olympics, in Paris, France June 25, 2024. (Reuters)

As the Paris 2024 Olympic Games approach, cybersecurity officials are bracing for over 4 billion cyber incidents. They are setting up a new centralized cybersecurity center for the Games, supported by advanced intelligence teams and artificial intelligence (AI) models.

Eric Greffier, the technical director for Paris 2024 at Cisco France, told Asharq Al-Awsat that the Tokyo 2020 Games saw around 450 million cyber incidents. He added that the number of incidents expected for Paris is at least ten times higher, requiring a more efficient response.

Greffier explained that a single cybersecurity center allows for better coordination and a faster response to incidents.

This approach has proven effective in other areas, such as banking and the NFL, where his company also handles cybersecurity, he added.

The Extended Detection and Response (XDR) system is central to the company’s security strategy.

Greffier described it as a “comprehensive dashboard” that gathers data from various sources, links events, and automates threat responses.

It offers a complete view of cybersecurity and helps manage threats proactively, he affirmed.

The system covers all aspects of the Olympic Games’ digital security, from network and cloud protection to application security and end-user safety.

In cybersecurity, AI is vital for managing large amounts of data and spotting potential threats. Greffier noted that with 4 billion expected incidents, filtering out irrelevant data is crucial.

The Olympic cybersecurity center uses AI and machine learning to automate threat responses, letting analysts focus on real issues, he explained.

One example is a network analytics tool that monitors traffic to find unusual patterns.

Greffier said that by creating models of normal behavior, the system can detect anomalies that might indicate a potential attack. While this might generate false alarms, it helps ensure that unusual activity is flagged for further review.
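The baseline-and-anomaly approach Greffier describes can be illustrated with a minimal sketch. The function below is a hypothetical example, not the Olympic center's actual tooling: it models "normal" traffic as the mean and spread of a recent window of per-interval event counts, then flags intervals whose z-score exceeds a threshold, mirroring how unusual activity would surface for analyst review.

```python
import statistics

def detect_anomalies(counts, window=24, threshold=3.0):
    """Flag intervals whose traffic deviates sharply from the recent baseline.

    counts: per-interval event counts (e.g., requests per minute).
    window: number of past intervals used to model "normal" behavior.
    threshold: z-score above which an interval is flagged as anomalous.
    """
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev == 0:
            continue  # perfectly flat baseline; skip rather than divide by zero
        z = (counts[i] - mean) / stdev
        if z > threshold:
            anomalies.append((i, counts[i], round(z, 1)))
    return anomalies

# A steady baseline of roughly 100 requests/min with one sudden spike.
traffic = [100, 102, 98, 101, 99, 103, 97, 100] * 3 + [450]
print(detect_anomalies(traffic, window=8))  # flags the spike at index 24
```

As the article notes, a simple statistical model like this will also raise false alarms on benign bursts; the trade-off is that genuinely unusual activity is unlikely to pass unflagged.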