Fake Images, Videos in Wartime: How to Tell Fact from Deepfakes

Misinformation spreads rapidly on social media during crises and conflicts (Shutterstock)

As tensions escalate across several fronts in the Middle East, information is spreading almost as quickly as the events themselves.

Social media platforms are often the first place where images, videos, and reports of alleged attacks or military developments appear.

But alongside legitimate information, a wave of misleading or fabricated content is also circulating online, making it increasingly difficult to separate fact from fiction.

A Growing Digital Challenge

Cybersecurity experts warn that the rapid spread of misinformation, particularly through manipulated videos and deepfake technologies, has become a growing digital threat during periods of geopolitical instability.

Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that distinguishing reliable information from false narratives becomes especially critical during emergencies, when emotions run high and people tend to share content quickly without verifying it.

“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” he said.

“Fake news, misleading or inaccurate information presented as real news, becomes more dangerous during emergencies.”

Misinformation Spreads Fast

Fake news is not new, but its scale and speed have changed dramatically with the rise of social media and artificial intelligence tools. During periods of geopolitical tension, unverified reports or manipulated videos can spread within minutes, reaching millions before fact-checkers can respond.

Experts generally divide fake news into two main categories. The first involves fully fabricated content designed to influence public opinion or attract traffic to specific websites. The second contains elements of truth but presents them inaccurately because the author failed to verify all the facts or exaggerated certain details.

Both can confuse audiences during crises, particularly when users rely on social media rather than trusted news outlets for updates.

Authorities in several countries have also warned that sharing inaccurate information, even unintentionally, may expose users to legal accountability.

Governments and digital security experts are therefore urging greater digital awareness and responsibility when sharing information during sensitive periods.

AI-Powered Deception

Artificial intelligence has added a new layer to the misinformation problem through so-called deepfake technologies: fabricated videos created using machine learning techniques such as face swapping or synthetic visual generation.

In some cases, authentic footage can be altered to appear as if it documents events that never occurred.

Yamout said verifying information has become more important than ever with the spread of deepfakes.

“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” he said.

Such technology can make manipulated videos appear convincing and potentially mislead users, especially when they circulate in emotionally charged contexts. Edited clips may appear to show attacks, military movements, or political statements that never took place.

Even when these videos are later debunked, their initial spread can still trigger confusion or public anxiety.

How to Verify Information

Cybersecurity experts say users themselves play a key role in limiting the spread of misinformation. While platforms and regulators are developing tools to detect fake content, individuals can take simple steps to verify information before sharing it.

The first step is checking the source. Websites that publish false information may contain spelling errors in their web addresses or use unusual domains that mimic well-known media outlets.

Yamout advises carefully reviewing the website address and checking the “About Us” section on unfamiliar sites. It is generally safer to rely on official sources such as government websites or trusted media organizations.
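The address check described above can be sketched as a simple lookalike-domain test. This is an illustration only, not a Kaspersky tool: the trusted-domain list and similarity threshold are assumptions chosen for the example.

```python
# Minimal sketch: flag domains that closely resemble, but do not exactly
# match, a list of trusted outlets (a common typosquatting pattern).
from difflib import SequenceMatcher

# Illustrative list only; a real check would use a maintained registry.
TRUSTED = {"aawsat.com", "reuters.com", "apnews.com"}

def looks_like_typosquat(domain: str, threshold: float = 0.85) -> bool:
    """Return True if `domain` is suspiciously similar to a trusted domain."""
    domain = domain.lower()
    if domain in TRUSTED:
        return False  # exact match: a genuinely trusted address
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED
    )

print(looks_like_typosquat("reuters.com"))   # exact match -> False
print(looks_like_typosquat("reutters.com"))  # one-letter lookalike -> True
print(looks_like_typosquat("example.org"))   # unrelated domain -> False
```

A similarity score just below 1.0 is the telltale sign: the address is almost, but not quite, a known outlet.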

Users should also verify the identity of the author or the organization behind the report. If the author is unknown or lacks clear expertise in the subject, the information should be treated cautiously.

Comparing reports with other credible sources is also important. Professional news organizations follow editorial guidelines and verification procedures, meaning major events are typically reported by multiple reputable outlets.

Yamout also highlighted the importance of checking dates and timelines, noting that some misleading content recirculates old events and presents them as recent developments.

He added that social media algorithms can create so-called “echo chambers,” where users are shown content that aligns with their existing views and interests. This makes it essential to consult diverse and reliable sources before forming conclusions.

Playing on Emotions

Many fake news stories are designed to provoke strong emotional reactions. Sensational headlines or dramatic clips are often crafted to trigger fear, anger, or shock, emotions that increase the likelihood that users will quickly share the content.

“Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout said.

Maintaining critical thinking and asking a simple question — why was this story written? — can help users avoid spreading misinformation, he added.

This dynamic is amplified on social media platforms, where algorithms tend to promote content that generates strong engagement. Emotionally charged posts can therefore spread faster than balanced reporting.

Spotting Signs of Manipulation

Images and videos themselves may provide clues that they have been altered. Edited photos may display distorted background lines, unnatural shadows, or unrealistic skin tones.

In manipulated videos, inconsistencies may appear in lighting, eye movement, or facial expressions. While these signs are not always easy to detect, particularly on smartphones, they can raise doubts about the authenticity of widely shared clips.

A Shared Digital Responsibility

Experts say limiting the spread of misinformation during crises requires cooperation among governments, technology companies, media organizations, and users.

Yamout said the simplest rule may also be the most effective: “If you are not sure the content is accurate, do not share it.”

Responsible sharing can help curb the spread of misinformation and protect digital communities.

As digital platforms continue to shape how information travels across borders, the ability to critically evaluate online content is becoming an essential skill.

During periods of geopolitical tension and conflict, when rumors and facts can blur, the challenge is not only cybersecurity but also protecting the credibility of information itself.



Billionaire Elon Musk Enters Courtroom Showdown with OpenAI

Elon Musk arrives at the 10th Breakthrough Prize Ceremony on April 13, 2024, at the Academy Museum of Motion Pictures in Los Angeles. (AP)

Jury selection is to begin Monday in a high-profile legal battle between billionaire Elon Musk and artificial intelligence startup OpenAI, which he accuses of betraying its non-profit mission.

The clash in a courtroom across the bay from San Francisco pits the world's richest man against a startup that Musk once backed and now competes against in the booming AI sector.

OpenAI's ChatGPT is a formidable rival to the Grok chatbot made by Musk's xAI lab.

While the lawsuit filed by Musk is part of a feud between him and OpenAI chief executive Sam Altman, it spotlights a debate over whether AI should ultimately benefit the privileged few or society as a whole.

Court filings lay out how Altman tried in 2015 to convince Musk to back OpenAI as a co-founder of a non-profit lab whose technology "would belong to the world."

Musk pumped some $38 million into the lab before he left.

OpenAI is now valued at $852 billion, with Microsoft among its backers, and is preparing to go public on the stock market.

The judge presiding over the trial is aiming for a jury to decide by late May whether OpenAI broke a promise to Musk in its drive to be a leader in AI or just smartly rode the technology to glory.

- Musk duped? -

Musk argues in his lawsuit that he was deceived about OpenAI's mission being altruistic.

The tycoon cites an email from Altman in 2017 claiming that he remained "enthusiastic about the non-profit structure" of their AI venture after Musk threatened to cut off funding for the lab.

Just a few months later, however, OpenAI established a commercial subsidiary, facing the need to invest hundreds of billions of dollars in data centers to power its technology.

Over the course of the following two years, Microsoft pumped billions of dollars into OpenAI, and the tech stalwart's stake in the startup is now valued at about $135 billion.

Microsoft chief executive Satya Nadella is among those slated to testify at the trial.

- Aimed at Altman -

Along with calling for OpenAI to be forced to revert to a pure nonprofit, Musk's suit urges the ousting of Altman and OpenAI co-founder and president Greg Brockman.

Musk is also seeking as much as $134 billion in damages and to have the court make OpenAI sever ties with Microsoft.

During pre-trial hearings, US Judge Yvonne Gonzalez Rogers mused that Musk's team seemed to be "pulling numbers out of the air" when it came to calculating damages.

If the jury sides with Musk, it will be left to Rogers to determine any remedies or payment.

In what OpenAI has dismissed as a public relations stunt, Musk has vowed that any damages awarded in the suit will go to the startup's nonprofit foundation.

- Quest for control? -

OpenAI internal communications brought to light by the lawsuit reveal tensions that culminated with the temporary ouster of Altman as OpenAI chief executive in late 2023.

Musk's legal team highlighted a 2017 entry in Brockman's personal journal reasoning that it would be a lie for Altman to publicly assert that OpenAI would stay a nonprofit if it became a corporation a short time later.

OpenAI now has a hybrid governance structure giving its nonprofit foundation control over a for-profit arm.

In court filings, OpenAI countered that its break-up with Musk was due to his quest for absolute control rather than its nonprofit status.

"This case has always been about Elon generating more power and more money for what he wants," OpenAI said in a post on X, a platform Musk owns.

"His lawsuit remains nothing more than a harassment campaign that's driven by ego, jealousy and a desire to slow down a competitor."

The startup noted that days after Musk entered the AI race in 2023 he called for a 6-month moratorium on development of advanced AI.


SDAIA Showcases Saudi Arabia’s AI Governance Model at UN Session in Geneva

The Saudi Authority for Data and Artificial Intelligence (SDAIA)

The Saudi Data and Artificial Intelligence Authority (SDAIA) participated in the 29th session of the United Nations Commission on Science and Technology for Development, held in Geneva from April 20 to 24. Under the theme "Science, Technology, and Innovation in the Age of AI," the session gathered global representatives from governments, international organizations, and the private sector.

During the session, SDAIA presented the Saudi model for regulating and developing the AI sector, highlighting the Kingdom's leadership in data governance and the creation of reliable AI systems. SDAIA emphasized Saudi Arabia's active role in shaping international governance frameworks and its commitment to utilizing AI to achieve the UN Sustainable Development Goals (SDGs) 2030.

Coinciding with the Year of AI 2026, this participation reinforces the Kingdom’s position as a global hub for emerging technologies.

By sharing national expertise and expanding international cooperation, SDAIA continues to support the adoption of responsible AI practices, aligning with Saudi Vision 2030 to build an integrated national system driven by data and innovation.


China's DeepSeek Releases Long-awaited New AI Model

A man takes photos of a DeepSeek display at a shopping mall in Hangzhou, in China's eastern Zhejiang province on April 23, 2026. (Photo by CN-STR / AFP)

Chinese startup DeepSeek released a new artificial intelligence model with "drastically reduced" costs Friday, more than a year after it stunned the world with a low-cost reasoning model that matched the capabilities of US rivals.

The AI race has intensified the rivalry between China and the United States, with the White House on Thursday accusing Chinese entities of a massive effort to steal artificial intelligence technology. Beijing called the claim "baseless".

Hangzhou-based DeepSeek burst onto the scene in January last year with a generative AI chatbot, powered by its R1 reasoning model, that upended assumptions of US dominance in the strategic sector.

DeepSeek-V4 "features an ultra-long context", the company said in a statement on social media platform WeChat, hailing it as "world-leading... with drastically reduced compute (and) memory costs" in a separate announcement on X.

V4 supports a context length of one million "tokens" -- small components of text including words or punctuation -- putting it on par with Google's Gemini.

Context length determines how much input a model is able to absorb to help it complete tasks.
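To give a sense of scale, a one-million-token window can be budgeted with a rough heuristic. The commonly cited figure of roughly four characters per token for English text is an approximation, not an exact tokenizer count, and the numbers below are illustrative only.

```python
# Rough sketch: estimate whether a text fits a model's context window.
# Assumes ~4 characters per token, a common heuristic for English text;
# an actual tokenizer would give the precise count.
def estimated_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, context_limit: int = 1_000_000) -> bool:
    return estimated_tokens(text) <= context_limit

sample = "word " * 200_000            # about one million characters
print(estimated_tokens(sample))       # -> 250000 (rough estimate)
print(fits_context(sample))           # -> True, well under 1M tokens
```

By this estimate, a one-million-token window corresponds to several thousand pages of plain text, which is why long-context support matters for tasks like analyzing entire codebases or document archives.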

The new V4 is released as two versions, DeepSeek-V4-Pro and DeepSeek-V4-Flash, with the latter being "a more efficient and economical choice" because it has fewer parameters.

In terms of "world knowledge", a benchmark for reasoning, V4-Pro trails only the latest Gemini model, DeepSeek said.

A "preview version" of the open source model is now available, the company said, without indicating when a final version would be released.

Experts say V4's arrival marks an "inflection point" in terms of hardware and cost.

"This addresses the long-standing issues of slower performance and higher costs associated with long context lengths, marking a genuine inflection point for the industry," Zhang Yi, the founder of tech research firm iiMedia, told AFP.

"For end users, this will bring widespread, accessible benefits. For instance, if ultra-long context support becomes a standard feature, long-text processing is expected to move beyond high-end research labs and enter mainstream commercial applications," he said.

V4-Pro has 1.6 trillion parameters while V4-Flash has 284 billion; parameters are the learned internal values that determine a model's behavior.

The model has also been "optimized" for popular AI Agent products such as Claude Code, OpenClaw, OpenCode and CodeBuddy, the DeepSeek statement said.

It can also run on chips manufactured by Chinese tech giant Huawei, the company added.

Huawei -- sanctioned by the US since 2019 over national security concerns -- said in a statement Friday that the full range of its Ascend SuperPoD products are supporting DeepSeek's V4 series.

DeepSeek's latest release is a "milestone" for Chinese firms, said veteran AI industry analyst Max Liu.

"It's a good thing for the entire domestic AI industry. It can provide better models for domestic users and we can now expect a lot more things -- more products (and a) more competitive market," he told AFP.

"This is no less shocking than when DeepSeek first came out" if its new model indeed matches the performance of leading models from Western labs, he added.

Last year's so-called "DeepSeek shock" sparked a sell-off of AI-related shares and a reckoning on business strategy in what was also described as a "Sputnik moment" for the industry.

The chatbot performed at a similar level to ChatGPT and other top American offerings, but the company said it had taken significantly less computing power to develop.

However, its sudden popularity raised questions over data privacy and censorship, with the chatbot often refusing to answer questions on sensitive topics such as the 1989 Tiananmen crackdown.

DeepSeek's AI tools have been widely adopted by Chinese municipalities and healthcare institutions as well as the financial sector and other businesses.

This has been partly driven by DeepSeek's decision to make its systems open source, with their inner workings public -- in contrast to the proprietary models sold by OpenAI and other Western rivals.

But the White House has accused Chinese firms of vying to "steal" American technology, ahead of an expected summit between Donald Trump and Xi Jinping in Beijing next month.

"The US has evidence that foreign entities, primarily in China, are running industrial-scale distillation campaigns to steal American AI," Trump's science and technology chief advisor Michael Kratsios said in a post on X.

Distillation is a common practice within AI development, often used by companies to create cheaper, smaller versions of their own models.
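The core idea of distillation can be sketched in a few lines: a smaller "student" model is trained to match the softened output distribution of a larger "teacher." The snippet below is a generic illustration of that loss term, not any company's pipeline; the logits and temperature are made-up values for the example.

```python
# Illustrative sketch of knowledge distillation's core loss: the KL
# divergence between temperature-softened teacher and student outputs.
# Real training would backpropagate this loss into the student's weights.
import math

def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    p = softmax(teacher_logits, temperature)  # teacher: soft targets
    q = softmax(student_logits, temperature)  # student: predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # hypothetical teacher logits
student = [2.5, 1.2, 0.4]   # hypothetical student logits
print(distillation_loss(teacher, student))  # small positive value;
                                            # 0.0 only if they match
```

The temperature softens both distributions so the student learns from the teacher's full ranking over outputs, not just its top answer. The legal dispute is over running this kind of process against another company's model outputs, not over the technique itself.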

"The US claims are entirely baseless," Chinese foreign ministry spokesman Guo Jiakun told a news conference in Beijing. "They are a slanderous smear against the achievements of China's artificial intelligence industry."