SDAIA, NTP Launch Saudi Arabia’s 1st National Data Index

The Saudi Data and Artificial Intelligence Authority (SDAIA) and the National Transformation Program (NTP) launched on Monday the first National Data Index (Nudei), the developed version of the Open Data Platform, as well as the Data Governance Platform, in a first for the Kingdom.

The move aims to promote transparency, create a national data-based economy, and contribute to assessing data maturity in government entities, objectives set out in Saudi Vision 2030.

The launch was made during the Saudi Data Forum, organized by the SDAIA and NTP, which kicked off in Riyadh on Monday.

Attending the event were Assistant Minister of Interior for Technology Affairs Prince Bandar bin Abdullah bin Mishari, SDAIA President Dr. Abdullah bin Sharaf Al-Ghamdi, and several ministers and senior data officials from public departments and major local and international institutions and companies.

The National Data Index is the result of collaboration between SDAIA and NTP. It is a dynamic, results-based indicator for follow-up and evaluation, developed to assess and track government agencies' progress in data management through compliance and operational indicators.

The indicator provides government entities with enabling tools that effectively help measure data management practices and achieve advanced evaluation levels. It covers 14 areas of data management through three key components: data management maturity measurement questionnaire, measurement of compliance with national data management controls and specifications, and measurement of operational indicators.

The indicator aims to establish a robust data governance framework and policies in order to control data management practices, measure data management maturity, ensure compliance, improve the effectiveness of data management operational processes, and develop compliance and investigation-reporting mechanisms.

It also aims to track and control regulatory compliance and to improve data life cycle management processes, ensuring data is accurate, complete, and coordinated, and handled in a standard-compliant manner from creation to disposal.

It will promote a culture of data management through training programs for government employees and help carry out awareness campaigns for beneficiary groups.

The indicator enhances transparency in all government agencies and tracks their progress in implementing data management practices. The results and recommendations help improve data quality, credibility, and integrity.

SDAIA conducted 15 training workshops for 189 participants from 52 government agencies, followed by 12 virtual workshops that benefited 436 participants. They were aimed at raising awareness among the entities subject to measurement.

An upgraded version of the open data platform was launched during the ceremony. It allows individuals, government, and non-government agencies to publish their open data and make it available to beneficiaries, such as entrepreneurs.

This initiative contributes to building a digital economy in the Kingdom. The platform has so far attracted more than 7,000 open data sets, more than 190 open data publishers, and more than 35 use cases.

The data governance platform that was launched aims to register entities covered by the Personal Data Protection Law. It is intended to raise the level of these entities' commitment to the law's provisions by providing support and advice on preserving the privacy of personal data holders and protecting their rights.

The platform aims to create a unified national registry and enable entities to comply with their obligations under the law. It also develops indicators that measure the extent of compliance with laws and regulations.

Government agencies can benefit from the platform in easy steps: fill out the registration form, log in through the national unified access platform, complete the entity's profile, and submit data for evaluation. Once the entity obtains the official registration certificate, it can benefit from the various services offered on the platform.

The data governance platform provides government agencies with several services, including notification about a possible data leak, privacy impact assessment, legal support, and a self-assessment tool for compliance with the Personal Data Protection Law and its regulations. It also offers compliance assessment, thus helping promote correct practices and identify and address areas of non-compliance.

The platform provides corrective action follow-up services to ensure that issues do not recur and to achieve the highest levels of responsibility and transparency.

In January 2022, SDAIA and NTP signed a memorandum of understanding to launch new strategic partnerships and smart business solutions that support the strategic objectives of Saudi Vision 2030 assigned to NTP. SDAIA will also develop quality digital initiatives related to data and artificial intelligence, which will be employed to achieve NTP goals and enable digital transformation in the Kingdom.



Fake Images, Videos in Wartime: How to Tell Fact from Deepfakes

Misinformation spreads rapidly on social media during crises and conflicts (Shutterstock)

As tensions escalate across several fronts in the Middle East, information is spreading almost as quickly as the events themselves.

Social media platforms are often the first place where images, videos, and reports of alleged attacks or military developments appear.

But alongside legitimate information, a wave of misleading or fabricated content is also circulating online, making it increasingly difficult to separate fact from fiction.

A Growing Digital Challenge

Cybersecurity experts warn that the rapid spread of misinformation, particularly through manipulated videos and deepfake technologies, has become a growing digital threat during periods of geopolitical instability.

Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that distinguishing reliable information from false narratives becomes especially critical during emergencies, when emotions run high, and people tend to share content quickly without verifying it.

“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” he said.

“Fake news, misleading or inaccurate information presented as real news, becomes more dangerous during emergencies.”

Misinformation Spreads Fast

Fake news is not new, but its scale and speed have changed dramatically with the rise of social media and artificial intelligence tools. During periods of geopolitical tension, unverified reports or manipulated videos can spread within minutes, reaching millions before fact-checkers can respond.

Experts generally divide fake news into two main categories. The first involves fully fabricated content designed to influence public opinion or attract traffic to specific websites. The second contains elements of truth but presents them inaccurately because the author failed to verify all the facts or exaggerated certain details.

Both can confuse audiences during crises, particularly when users rely on social media rather than trusted news outlets for updates.

Authorities in several countries have also warned that sharing inaccurate information, even unintentionally, may expose users to legal accountability.

Governments and digital security experts are therefore urging greater digital awareness and responsibility when sharing information during sensitive periods.

AI-Powered Deception

Artificial intelligence has added a new layer to the misinformation problem through so-called deepfake technologies, fabricated videos created using machine learning techniques such as face swapping or synthetic visual generation.

In some cases, authentic footage can be altered to appear as if it documents events that never occurred.

Yamout said verifying information has become more important than ever with the spread of deepfakes.

“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” he said.

Such technology can make manipulated videos appear convincing and potentially mislead users, especially when they circulate in emotionally charged contexts. Edited clips may appear to show attacks, military movements, or political statements that never took place.

Even when these videos are later debunked, their initial spread can still trigger confusion or public anxiety.

How to Verify Information

Cybersecurity experts say users themselves play a key role in limiting the spread of misinformation. While platforms and regulators are developing tools to detect fake content, individuals can take simple steps to verify information before sharing it.

The first step is checking the source. Websites that publish false information may contain spelling errors in their web addresses or use unusual domains that mimic well-known media outlets.

Yamout advises carefully reviewing the website address and checking the “About Us” section on unfamiliar sites. It is generally safer to rely on official sources such as government websites or trusted media organizations.
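The domain-mimicry check described above can even be automated. The sketch below flags web addresses that closely resemble, but do not exactly match, a list of known outlets; the outlet list and distance threshold are illustrative assumptions, not part of any official verification tool.

```python
# Flag domains that are near-misses of known outlets (possible typosquatting).
# The outlet list and edit-distance threshold are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance computed with a rolling dynamic-programming row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical whitelist for the sake of the example.
KNOWN_OUTLETS = {"aawsat.com", "reuters.com", "afp.com"}

def looks_suspicious(domain: str, max_dist: int = 2) -> bool:
    """True if the domain nearly matches a known outlet without being one."""
    if domain in KNOWN_OUTLETS:
        return False  # exact match: the genuine site
    return any(edit_distance(domain, known) <= max_dist
               for known in KNOWN_OUTLETS)
```

For instance, `looks_suspicious("reuiers.com")` is true (one character away from a real outlet), while an unrelated domain such as `example.org` is not flagged. Real verification still requires the human checks the article describes: the "About Us" page, the author, and corroboration from other outlets.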

Users should also verify the identity of the author or the organization behind the report. If the author is unknown or lacks clear expertise in the subject, the information should be treated cautiously.

Comparing reports with other credible sources is also important. Professional news organizations follow editorial guidelines and verification procedures, meaning major events are typically reported by multiple reputable outlets.

Yamout also highlighted the importance of checking dates and timelines, noting that some misleading content recirculates old events and presents them as recent developments.

He added that social media algorithms can create so-called “echo chambers,” where users are shown content that aligns with their existing views and interests. This makes it essential to consult diverse and reliable sources before forming conclusions.

Playing on Emotions

Many fake news stories are designed to provoke strong emotional reactions. Sensational headlines or dramatic clips are often crafted to trigger fear, anger, or shock, emotions that increase the likelihood that users will quickly share the content.

“Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout said.

Maintaining critical thinking and asking a simple question — why was this story written? — can help users avoid spreading misinformation, he added.

This dynamic is amplified on social media platforms, where algorithms tend to promote content that generates strong engagement. Emotionally charged posts can therefore spread faster than balanced reporting.

Spotting Signs of Manipulation

Images and videos themselves may provide clues that they have been altered. Edited photos may display distorted background lines, unnatural shadows, or unrealistic skin tones.

In manipulated videos, inconsistencies may appear in lighting, eye movement, or facial expressions. While these signs are not always easy to detect, particularly on smartphones, they can raise doubts about the authenticity of widely shared clips.

A Shared Digital Responsibility

Experts say limiting the spread of misinformation during crises requires cooperation among governments, technology companies, media organizations, and users.

Yamout said the simplest rule may also be the most effective: “If you are not sure the content is accurate, do not share it.”

Responsible sharing can help curb the spread of misinformation and protect digital communities.

As digital platforms continue to shape how information travels across borders, the ability to critically evaluate online content is becoming an essential skill.

During periods of geopolitical tension and conflict, when rumors and facts can blur, the challenge is not only cybersecurity but also protecting the credibility of information itself.


Adobe Shares Drop after CEO Exit Adds to AI-disruption Concerns

FILE PHOTO: Signage for Adobe is displayed at National Retail Federation (NRF) 2026: Retail's Big Show, in New York City, US, January 12, 2026. REUTERS/Kylie Cooper/File Photo

Adobe's shares plunged 9% in premarket trading on Friday after the Photoshop maker said CEO Shantanu Narayen would step down after 18 years at the helm, unsettling investors already wary of AI-driven disruptions to the design software market.

The longtime CEO's exit comes at a critical juncture as Adobe works to reassure investors it can keep pace with sweeping changes brought by artificial intelligence in the software landscape.

It follows a broader slide in software stocks, after fears that AI agents could supplant some traditional applications triggered a nearly $1 trillion global rout last month.

"The loss of an iconic leader at a time of peak uncertainty around the future of software more broadly, and the positioning of Adobe ⁠specifically in this new GenAI world is bound to further investor uncertainty and anxiety around the shares," said analysts at Morgan Stanley.

Adobe's shares are down about 23% so far this year, extending a slide that has stretched over the past two years.

The company, which makes Illustrator, Premiere Pro and other tools for creative professionals, is among a group of SaaS providers including Salesforce that have struggled to win new clients amid a wave of AI start-ups.

On Thursday, Adobe reported double-digit growth in total revenue and customer subscription segments in the first quarter, reflecting resilient spending on its product suite.

"After steering the Adobe ship through rough seas over the past several years, several data points from the most recent quarter suggest the captain (Narayen) may have brought this franchise into a safe harbor, from which it can continue to thrive," Morgan Stanley analysts said.


AI Agent 'Lobster Fever' Grips China Despite Risks

A man wears a lobster hat that represents the OpenClaw logo, an open-source AI assistant at the Baidu headquarters in Beijing on March 11, 2026. (Photo by ADEK BERRY / AFP)

Chinese entrepreneur Frank Gao used to spend long hours running his social media accounts but now outsources the chore to AI agent tool OpenClaw, which is taking the country by storm despite official warnings over cybersecurity.

OpenClaw, created in November by an Austrian coder, differs from bots like ChatGPT because it can execute real-life tasks such as sending emails, organizing files or even booking flight tickets.

"Since January, I've spent hours on the lobster every day," Gao told AFP, referring to OpenClaw's red crustacean mascot. "We're family."

After downloading OpenClaw, users connect it to existing artificial intelligence models of their choice, then give it simple instructions through instant messaging apps, as if to a friend or colleague.

The tool has fascinated tech circles worldwide but particularly in China, gripping tech-savvy companies and individuals keen to keep up with the next big thing in AI.

Hundreds of people queued at tech giant Baidu's Beijing headquarters this week for an OpenClaw event where engineers helped attendees set up their "little lobsters".

It was one of many similar meetups to experiment with the tool, which are drawing crowds from Shanghai to Shenzhen.

Some municipalities, including the eastern cities of Wuxi and Hangzhou, have pledged hundreds of thousands of dollars to support the adoption and development of OpenClaw and other AI agents.

But the lobster fever, as it has been dubbed, has also sparked security concerns.

"What's truly scary about agents like OpenClaw is this: once they have your digital keys, they can theoretically access all the services you've authorized, and can autonomously decide when to activate them," Gao warned.

"The attacker effectively gains a 'master key' to your digital identity," said the engineer, who has named his OpenClaw agent "Q" after his business name QLab.

- 'Use with caution' -

Chinese national cybersecurity authorities and Beijing's ministry of industry and IT have warned of the risks of OpenClaw hacks.

"Use intelligent agents such as 'lobster' with caution," national IT research institute expert Wei Liang advised government agencies, public institutions, companies and individuals in a message on state media.

This mixed approach of rolling out policy incentives while issuing warnings "reflects the authorities' cautious tolerance towards 'lobster fever'," Zhang Yi, founder of tech consultancy iiMedia, told AFP.

Austrian programmer Peter Steinberger, who built OpenClaw to help organize his digital life, was hired last month by ChatGPT maker OpenAI.

Meanwhile, a separate team of coders that made Moltbook, a Reddit-like pseudo social network where OpenClaw agents converse, is joining Meta.

Top Chinese tech companies have also been quick to get involved.

The likes of Tencent, Alibaba, ByteDance and Baidu are offering simplified installation and affordable coding plans to help users who want to host OpenClaw agents on their cloud servers -- seen as safer than downloading it onto a personal computer.

In recent days AI companies big and small have also launched their own competing agent tools, such as ByteDance's ArkClaw, Tencent's WorkBuddy and Zhipu AI's AutoClaw.

The relatively low cost for cloud deployment of OpenClaw in China, subsidised by big tech firms, is one factor behind its popularity, said Gao Rui, a senior product manager at Baidu AI Cloud.

"For most people, it's likely just the price of a cup of coffee... which is why people will probably be keen to give it a try," she told AFP.

- FOMO -

Fear of missing out is also a big driver behind OpenClaw's success in China, said Chen Yunfei, an AI developer who created a popular online guide for using the tool.

"Most Chinese people are quite studious and forward-looking, so when confronted with new things, they might have stronger feelings" of so-called FOMO, he said.

Xie Manrui, a programmer whose latest project is a visualized system for managing OpenClaw agents, said the tool had arrived "at the right moment" to change perceptions in China of what AI can do.

"For many, AI is merely a clever chatbot that talks all the time but cannot act," he said.

Either way, it has piqued the curiosity of many young users.

At the Baidu event in Beijing, 24-year-old college student Zheng Huimin was waiting patiently in line with her friends.

"I'd like to give it a go to see what tasks it can actually help me accomplish," she told AFP.