SDAIA: Saudi Arabia Committed to Ensuring Ethical and Responsible AI Development

Chief of the National Data Management Office in the Saudi Data and Artificial Intelligence Authority (SDAIA) Alrebdi bin Fahd Al-Rebdi speaks at the 2024 World AI Conference and High-Level Meeting on Global AI Governance in Shanghai. (SPA)

Chief of the National Data Management Office in the Saudi Data and Artificial Intelligence Authority (SDAIA) Alrebdi bin Fahd Al-Rebdi said on Thursday that the Kingdom, through SDAIA, is dedicated to developing ethical and responsible artificial intelligence (AI) on both a national and global level.

He emphasized SDAIA's crucial role in advancing global AI governance as the national authority responsible for data and AI regulation, development, and usage in the Kingdom.

Al-Rebdi made his remarks at the 2024 World AI Conference and High-Level Meeting on Global AI Governance, themed "Governing AI for Good and for All," held from July 4 to 6 in Shanghai, China.

"The Kingdom has invested heavily in AI research and development, established specialized centers, and has been keen to strengthen cooperation with leading global technology companies," he said.

"It seeks to achieve global leadership in this field and benefit from its transformative power in various sectors to achieve the goals of the Saudi Vision 2030," he added.

Al-Rebdi underscored SDAIA's active engagement with international organizations, governments, and industry leaders to shape global AI governance frameworks. Through partnerships, SDAIA aims to contribute its expertise and perspectives to shape AI policies and standards that foster innovation and uphold ethical principles.

SDAIA is an active member of the international AI community and contributed to the preparation of the first international scientific report on the safety of advanced AI, a joint effort by 75 AI experts from 30 countries, the European Union, and the United Nations, Al-Rebdi stressed.

He underlined SDAIA's commitment to driving the responsible and ethical development and deployment of AI technologies to benefit humanity through international collaboration, ethical advocacy, regulatory framework development, knowledge exchange and support for AI initiatives on local and international levels.

Al-Rebdi reiterated the importance of upholding ethical principles in AI, including fairness, privacy and security, reliability and safety, transparency and explainability, accountability and responsibility, humanity, and social and environmental benefits.

SDAIA's goal is to ensure that AI technologies are developed with a focus on human needs and in line with both local and global values. SDAIA recognizes AI's potential to positively impact societies worldwide and actively supports initiatives that use AI for social good, including in healthcare, education, sustainable development, and public safety.

Moreover, Al-Rebdi called for efforts to shape a future where AI serves as a force for positive change, addressing global challenges, promoting sustainable development, and fostering a more inclusive and equitable society.

He invited representatives of participating countries to attend the third edition of the Global AI Summit, organized by the Kingdom, represented by SDAIA, in Riyadh in September 2024. The summit will bring together global thought leaders to explore the potential impact of AI across various fields.



OpenAI Finds More Chinese Groups Using ChatGPT for Malicious Purposes

FILE PHOTO: OpenAI logo is seen in this illustration taken February 8, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is seeing an increasing number of Chinese groups using its artificial intelligence technology for covert operations, the ChatGPT maker said in a report released on Thursday.

While the scope and tactics employed by these groups have expanded, the operations detected were generally small in scale and targeted limited audiences, the San Francisco-based startup said, according to Reuters.

Since ChatGPT burst onto the scene in late 2022, there have been concerns about the potential consequences of generative AI technology, which can quickly and easily produce human-like text, imagery and audio.

OpenAI regularly releases reports on malicious activity it detects on its platform, such as the use of its models to create and debug malware or to generate fake content for websites and social media platforms.

In one example, OpenAI banned ChatGPT accounts that generated social media posts on political and geopolitical topics relevant to China, including criticism of a Taiwan-centric video game, false accusations against a Pakistani activist, and content related to the closure of USAID.

Some content also criticized US President Donald Trump's sweeping tariffs, including X posts such as "Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who's supposed to keep eating?"

In another example, China-linked threat actors used AI to support various phases of their cyber operations, including open-source research, script modification, troubleshooting system configurations, and development of tools for password brute forcing and social media automation.

A third example OpenAI found was a China-origin influence operation that generated polarized social media content supporting both sides of divisive topics within US political discourse, including text and AI-generated profile images.

China's foreign ministry did not immediately respond to a Reuters request for comment on OpenAI's findings.

OpenAI has cemented its position as one of the world's most valuable private companies after announcing a $40 billion funding round valuing the company at $300 billion.