Governments Race to Regulate AI Tools

An AI (Artificial Intelligence) sign is seen at the World Artificial Intelligence Conference (WAIC) in Shanghai, China July 6, 2023. REUTERS/Aly Song/File Photo

Rapid advances in artificial intelligence (AI), such as Microsoft-backed OpenAI's ChatGPT, are complicating governments' efforts to agree on laws governing the use of the technology, Reuters reported.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Planning regulations
Australia will make search engines draft new codes to prevent the sharing of child sexual abuse material created by AI and the production of deepfake versions of the same material, its internet regulator said in September.
BRITAIN
* Planning regulations
Governments and companies need to address the risks of AI head on, Prime Minister Rishi Sunak said on Oct. 26 ahead of the first global AI Safety Summit at Bletchley Park on Nov. 1-2.
Sunak added Britain would set up the world's first AI safety institute to "understand what each new model is capable of, exploring all the risks from social harms like bias and misinformation through to the most extreme risks".
Britain's data watchdog said on Oct. 10 it had issued Snap Inc's Snapchat with a preliminary enforcement notice over a possible failure to properly assess the privacy risks of its generative AI chatbot to users, particularly children.
CHINA
* Implemented temporary regulations
China published proposed security requirements for firms offering services powered by generative AI on Oct. 12, including a blacklist of sources that cannot be used to train AI models.
The country issued a set of temporary measures in August, requiring service providers to submit security assessments and receive clearance before releasing mass-market AI products.
EUROPEAN UNION
* Planning regulations
European lawmakers agreed on Oct. 24 on a critical part of new AI rules outlining the types of systems that will be designated "high risk", and inched closer to a broader agreement on the landmark AI Act, according to five people familiar with the matter. An agreement is expected in December, two co-rapporteurs said.
European Commission President Ursula von der Leyen on Sept. 13 called for a global panel to assess the risks and benefits of AI.
FRANCE
* Investigating possible breaches
France's privacy watchdog said in April it was investigating complaints about ChatGPT.
G7
* Seeking input on regulations
G7 leaders in May called for the development and adoption of technical standards to keep AI "trustworthy".
ITALY
* Investigating possible breaches
Italy's data protection authority plans to review AI platforms and hire experts in the field, a top official said in May. ChatGPT was temporarily banned in the country in March, but it was made available again in April.
JAPAN
* Planning regulations
Japan expects to introduce by the end of 2023 regulations that are likely closer to the US attitude than the stringent ones planned in the EU, an official close to deliberations said in July.
The country's privacy watchdog has warned OpenAI not to collect sensitive data without people's permission.
POLAND
* Investigating possible breaches
Poland's Personal Data Protection Office said on Sept. 21 it was investigating OpenAI over a complaint that ChatGPT breaks EU data protection laws.
SPAIN
* Investigating possible breaches
Spain's data protection agency in April launched a preliminary investigation into potential data breaches by ChatGPT.
UNITED NATIONS
* Planning regulations
UN Secretary-General António Guterres on Oct. 26 announced the creation of a 39-member advisory body, composed of tech company executives, government officials and academics, to address issues in the international governance of AI.
The UN Security Council held its first formal discussion on AI in July, addressing military and non-military applications of AI that "could have very serious consequences for global peace and security", Guterres said at the time.
US
* Seeking input on regulations
The White House is expected to unveil on Oct. 30 a long-awaited AI executive order, which would require "advanced AI models to undergo assessments before they can be used by federal workers", the Washington Post reported.
The US Congress in September held hearings on AI and an AI forum featuring Meta CEO Mark Zuckerberg and Tesla CEO Elon Musk.
More than 60 senators took part in the talks, during which Musk called for a US "referee" for AI. Lawmakers said there was universal agreement about the need for government regulation of the technology.
On Sept. 12, the White House said Adobe, IBM, Nvidia and five other firms had signed President Joe Biden's voluntary commitments governing AI, which require steps such as watermarking AI-generated content.
A Washington, D.C., district judge ruled in August that a work of art created by AI without any human input cannot be copyrighted under US law.
The US Federal Trade Commission opened in July an investigation into OpenAI on claims that it has run afoul of consumer protection laws.

SDAIA: Saudi Arabia Committed to Ensuring Ethical and Responsible AI Development

Chief of the National Data Management Office in the Saudi Data and Artificial Intelligence Authority (SDAIA) Alrebdi bin Fahd Al-Rebdi speaks at the 2024 World AI Conference and High-Level Meeting on Global AI Governance in Shanghai. (SPA)

Chief of the National Data Management Office in the Saudi Data and Artificial Intelligence Authority (SDAIA) Alrebdi bin Fahd Al-Rebdi said on Thursday that the Kingdom, through SDAIA, is dedicated to developing ethical and responsible artificial intelligence (AI) on both a national and global level.

He emphasized SDAIA's crucial role in advancing global AI governance as the national authority responsible for data and AI regulation, development, and usage in the Kingdom.

Al-Rebdi made his remarks at the 2024 World AI Conference and High-Level Meeting on Global AI Governance, themed "Governing AI for Good and for All," held from July 4 to 6 in Shanghai, China.

"The Kingdom has invested heavily in AI research and development, established specialized centers, and has been keen to strengthen cooperation with leading global technology companies," he said.

“It seeks to achieve global leadership in this field and benefit from its transformative power in various sectors to achieve the goals of the Saudi Vision 2030,” he went on to say.

Al-Rebdi underscored SDAIA's active engagement with international organizations, governments, and industry leaders to shape global AI governance frameworks. Through partnerships, SDAIA aims to contribute its expertise and perspectives to shape AI policies and standards that foster innovation and uphold ethical principles.

SDAIA is an active member of the international AI community, having contributed to the preparation of the first international scientific report on the safety of advanced AI, a joint effort by 75 AI experts from 30 countries, the European Union, and the United Nations, Al-Rebdi stressed.

He underlined SDAIA's commitment to driving the responsible and ethical development and deployment of AI technologies to benefit humanity through international collaboration, ethical advocacy, regulatory framework development, knowledge exchange and support for AI initiatives on local and international levels.

Al-Rebdi reiterated the importance of upholding ethical principles in AI, including fairness, privacy and security, reliability and safety, transparency and explainability, accountability and responsibility, humanity, and social and environmental benefits.

SDAIA's goal is to ensure that AI technologies are developed with a focus on human needs and to promote both local and global values. SDAIA recognizes AI's potential to impact societies worldwide positively and actively supports initiatives that utilize AI for social good, including healthcare, education, sustainable development, and public safety.

Moreover, Al-Rebdi called for efforts to shape a future where AI serves as a force for positive change, addressing global challenges, promoting sustainable development, and fostering a more inclusive and equitable society.

He invited representatives of participating countries to attend the third edition of the Global AI Summit, organized by the Kingdom and represented by SDAIA in Riyadh in September 2024. The summit will bring together global thought leaders to explore the potential impact of AI across various fields.