Fake Images, Videos in Wartime: How to Tell Fact from Deepfakes

Misinformation spreads rapidly on social media during crises and conflicts (Shutterstock)

As tensions escalate across several fronts in the Middle East, information is spreading almost as quickly as the events themselves.

Social media platforms are often the first place where images, videos, and reports of alleged attacks or military developments appear.

But alongside legitimate information, a wave of misleading or fabricated content is also circulating online, making it increasingly difficult to separate fact from fiction.

A Growing Digital Challenge

Cybersecurity experts warn that the rapid spread of misinformation, particularly through manipulated videos and deepfake technologies, has become a growing digital threat during periods of geopolitical instability.

Maher Yamout, Lead Security Researcher at Kaspersky, told Asharq Al-Awsat that distinguishing reliable information from false narratives becomes especially critical during emergencies, when emotions run high and people tend to share content quickly without verifying it.

“With developments unfolding in the Middle East, government authorities in Gulf Cooperation Council countries have warned against publishing or circulating information from unknown sources,” he said.

“Fake news, misleading or inaccurate information presented as real news, becomes more dangerous during emergencies.”

Misinformation Spreads Fast

Fake news is not new, but its scale and speed have changed dramatically with the rise of social media and artificial intelligence tools. During periods of geopolitical tension, unverified reports or manipulated videos can spread within minutes, reaching millions before fact-checkers can respond.

Experts generally divide fake news into two main categories. The first involves fully fabricated content designed to influence public opinion or attract traffic to specific websites. The second contains elements of truth but presents them inaccurately because the author failed to verify all the facts or exaggerated certain details.

Both can confuse audiences during crises, particularly when users rely on social media rather than trusted news outlets for updates.

Authorities in several countries have also warned that sharing inaccurate information, even unintentionally, may expose users to legal accountability.

Governments and digital security experts are therefore urging greater digital awareness and responsibility when sharing information during sensitive periods.

AI-Powered Deception

Artificial intelligence has added a new layer to the misinformation problem through so-called deepfakes: fabricated videos created using machine learning techniques such as face swapping or synthetic visual generation.

In some cases, authentic footage can be altered to appear as if it documents events that never occurred.

Yamout said verifying information has become more important than ever with the spread of deepfakes.

“Artificial intelligence makes it possible to combine different video clips to produce new scenes showing events or actions that never happened in reality, often with highly realistic results,” he said.

Such technology can make manipulated videos appear convincing and potentially mislead users, especially when they circulate in emotionally charged contexts. Edited clips may appear to show attacks, military movements, or political statements that never took place.

Even when these videos are later debunked, their initial spread can still trigger confusion or public anxiety.

How to Verify Information

Cybersecurity experts say users themselves play a key role in limiting the spread of misinformation. While platforms and regulators are developing tools to detect fake content, individuals can take simple steps to verify information before sharing it.

The first step is checking the source. Websites that publish false information may contain spelling errors in their web addresses or use unusual domains that mimic well-known media outlets.

Yamout advises carefully reviewing the website address and checking the “About Us” section on unfamiliar sites. It is generally safer to rely on official sources such as government websites or trusted media organizations.
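Part of this source check can even be automated: comparing an unfamiliar domain against outlets the reader already trusts and flagging near-miss spellings. The sketch below is illustrative only; the trusted-domain list and the similarity threshold are assumptions, not a vetted allowlist or a substitute for editorial judgment.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of outlets the reader already trusts (assumption).
TRUSTED_DOMAINS = ["aawsat.com", "reuters.com", "apnews.com"]

def lookalike_score(domain: str) -> tuple[str, float]:
    """Return the closest trusted domain and its similarity ratio (0-1)."""
    best = max(TRUSTED_DOMAINS,
               key=lambda t: SequenceMatcher(None, domain, t).ratio())
    return best, SequenceMatcher(None, domain, best).ratio()

def flag_suspicious(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that closely mimics a trusted outlet without matching it."""
    closest, score = lookalike_score(domain)
    return domain != closest and score >= threshold

print(flag_suspicious("reuters.com"))  # → False (exact match with a trusted outlet)
print(flag_suspicious("reutrs.com"))   # → True (near-miss spelling of reuters.com)
```

A high similarity score on a non-matching domain is exactly the kind of subtle misspelling Yamout describes, and worth treating with caution before sharing.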

Users should also verify the identity of the author or the organization behind the report. If the author is unknown or lacks clear expertise in the subject, the information should be treated cautiously.

Comparing reports with other credible sources is also important. Professional news organizations follow editorial guidelines and verification procedures, meaning major events are typically reported by multiple reputable outlets.

Yamout also highlighted the importance of checking dates and timelines, noting that some misleading content recirculates old events and presents them as recent developments.

He added that social media algorithms can create so-called “echo chambers,” where users are shown content that aligns with their existing views and interests. This makes it essential to consult diverse and reliable sources before forming conclusions.

Playing on Emotions

Many fake news stories are designed to provoke strong emotional reactions. Sensational headlines or dramatic clips are often crafted to trigger fear, anger, or shock, emotions that increase the likelihood that users will quickly share the content.

“Many fake news stories are written in a clever way to provoke strong emotional reactions,” Yamout said.

Maintaining critical thinking and asking a simple question — why was this story written? — can help users avoid spreading misinformation, he added.

This dynamic is amplified on social media platforms, where algorithms tend to promote content that generates strong engagement. Emotionally charged posts can therefore spread faster than balanced reporting.

Spotting Signs of Manipulation

Images and videos themselves may provide clues that they have been altered. Edited photos may display distorted background lines, unnatural shadows, or unrealistic skin tones.

In manipulated videos, inconsistencies may appear in lighting, eye movement, or facial expressions. While these signs are not always easy to detect, particularly on smartphones, they can raise doubts about the authenticity of widely shared clips.

A Shared Digital Responsibility

Experts say limiting the spread of misinformation during crises requires cooperation among governments, technology companies, media organizations, and users.

Yamout said the simplest rule may also be the most effective: “If you are not sure the content is accurate, do not share it.”

Responsible sharing can help curb the spread of misinformation and protect digital communities.

As digital platforms continue to shape how information travels across borders, the ability to critically evaluate online content is becoming an essential skill.

During periods of geopolitical tension and conflict, when rumors and facts can blur, the challenge is not only cybersecurity but also protecting the credibility of information itself.



China's DeepSeek Slashes Prices for New AI Model

This photograph shows screens displaying the logo of DeepSeek, a Chinese artificial intelligence company which develops open-source large language models, in Toulouse, southwestern France on January 29, 2025. (AFP)

China's DeepSeek is offering developers a 75% discount on its newly unveiled AI model, DeepSeek-V4-Pro, until May 5.

The company is also cutting prices for input cache hits across its entire DeepSeek API lineup to one-tenth of the original price, it said in a post on X.
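The two cuts compound differently, which a quick calculation makes concrete. The base rate below is hypothetical; the article does not give DeepSeek's actual per-token prices, only the discount percentages.

```python
# Hypothetical base rate (assumption): DeepSeek's real prices are not
# stated in the article, only the size of the cuts.
base_per_m_tokens = 1.00  # USD per million input tokens (assumed)

# 75% launch discount on the new V4-Pro model, valid until May 5.
promo_price = base_per_m_tokens * (1 - 0.75)

# Input cache hits across the API lineup drop to one-tenth of the original.
cache_hit_price = base_per_m_tokens * 0.10

print(promo_price)      # → 0.25
print(cache_hit_price)  # → 0.1
```

In other words, under this assumed base rate, cached input would be cheaper than even the discounted promotional rate.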

On Friday, DeepSeek launched a preview of its highly anticipated V4 model, which has been adapted for Huawei's chip technology.

V4 comes in two versions: the more powerful and higher-priced Pro, and the lighter, cheaper Flash variant.

The Pro version outperforms other open-source models in world-knowledge benchmarks, trailing only Google's closed-source Gemini-Pro-3.1, DeepSeek said.

According to the Chinese startup, the V4 models are particularly suited to AI agents, which can execute more complex tasks than chatbots but require greater computing power.


Billionaire Elon Musk Enters Courtroom Showdown with OpenAI

Elon Musk arrives at the 10th Breakthrough Prize Ceremony on April 13, 2024, at the Academy Museum of Motion Pictures in Los Angeles. (AP)

Jury selection is to begin Monday in a high-profile legal battle between billionaire Elon Musk and artificial intelligence startup OpenAI, which he accuses of betraying its non-profit mission.

The clash in a courtroom across the bay from San Francisco pits the world's richest man against a startup that Musk once backed and now competes against in the booming AI sector.

OpenAI's ChatGPT is a formidable rival to the Grok chatbot made by Musk's xAI lab.

While the lawsuit filed by Musk is part of a feud between him and OpenAI chief executive Sam Altman, it spotlights a debate over whether AI should ultimately benefit the privileged few or society as a whole.

Court filings lay out how Altman in 2015 convinced Musk to back OpenAI as a co-founder of a non-profit lab whose technology "would belong to the world."

Musk pumped some $38 million into the lab before he left.

OpenAI is now valued at $852 billion, with Microsoft among its backers, and is preparing to go public on the stock market.

The judge presiding over the trial is aiming for a jury to decide by late May whether OpenAI broke a promise to Musk in its drive to be a leader in AI or just smartly rode the technology to glory.

- Musk duped? -

Musk argues in his lawsuit that he was deceived about OpenAI's mission being altruistic.

The tycoon cites an email from Altman in 2017 claiming that he remained "enthusiastic about the non-profit structure" of their AI venture after Musk threatened to cut off funding for the lab.

Just a few months later, however, OpenAI established a commercial subsidiary, faced with the need to invest hundreds of billions of dollars in data centers to power its technology.

Over the following two years, Microsoft pumped billions of dollars into OpenAI, and the tech stalwart's stake in the startup is now valued at about $135 billion.

Microsoft chief executive Satya Nadella is among those slated to testify at the trial.

- Aimed at Altman -

Along with calling for OpenAI to be forced to revert to a pure nonprofit, Musk's suit urges the ousting of Altman and OpenAI co-founder and president Greg Brockman.

Musk is also seeking as much as $134 billion in damages and to have the court make OpenAI sever ties with Microsoft.

During pre-trial hearings, US Judge Yvonne Gonzalez Rogers mused that Musk's team seemed to be "pulling numbers out of the air" when it came to calculating damages.

If the jury sides with Musk, it will be left to Rogers to determine any remedies or payment.

In what OpenAI has dismissed as a public relations stunt, Musk has vowed that any damages awarded in the suit will go to the startup's nonprofit foundation.

- Quest for control? -

OpenAI internal communications brought to light by the lawsuit reveal tensions that culminated in the temporary ouster of Altman as OpenAI's chief executive in late 2023.

Musk's legal team highlighted a 2017 entry in Brockman's personal journal reasoning that it would be lying if Altman publicly asserted OpenAI would remain a nonprofit and then converted it into a corporation a short time later.

OpenAI now has a hybrid governance structure giving its nonprofit foundation control over a for-profit arm.

In court filings, OpenAI countered that its break-up with Musk was due to his quest for absolute control rather than its nonprofit status.

"This case has always been about Elon generating more power and more money for what he wants," OpenAI said in a post on X, a platform Musk owns.

"His lawsuit remains nothing more than a harassment campaign that's driven by ego, jealousy and a desire to slow down a competitor."

The startup noted that days after Musk entered the AI race in 2023, he called for a six-month moratorium on the development of advanced AI.


SDAIA Showcases Saudi Arabia’s AI Governance Model at UN Session in Geneva

The Saudi Authority for Data and Artificial Intelligence (SDAIA)

The Saudi Data and Artificial Intelligence Authority (SDAIA) participated in the 29th session of the United Nations Commission on Science and Technology for Development, held in Geneva from April 20 to 24. Under the theme "Science, Technology, and Innovation in the Age of AI," the session gathered global representatives from governments, international organizations, and the private sector.

During the session, SDAIA presented the Saudi model for regulating and developing the AI sector, highlighting the Kingdom's leadership in data governance and the creation of reliable AI systems. SDAIA emphasized Saudi Arabia's active role in shaping international governance frameworks and its commitment to using AI to advance the UN 2030 Sustainable Development Goals (SDGs).

Coinciding with the Year of AI 2026, this participation reinforces the Kingdom’s position as a global hub for emerging technologies.

By sharing national expertise and expanding international cooperation, SDAIA continues to support the adoption of responsible AI practices, aligning with Saudi Vision 2030 to build an integrated national system driven by data and innovation.