How Safe Is Signal Messaging App Used by Trump Aides to Share War Plans?

The Signal messaging app logo is seen on a smartphone, in front of the same logo displayed in the background, in this illustration taken January 13, 2021. (Reuters)

Top Trump administration officials used messaging app Signal to share war plans and mistakenly included a journalist in the encrypted chat, spurring calls by Democratic lawmakers for a congressional investigation into the security breach.

Under US law, it can be a crime to mishandle, misuse or abuse classified information, though it is unclear whether those provisions might have been violated in this case.

Below are some of the main facts about Signal:

HOW SAFE IS IT?

Signal is an open-source and fully encrypted messaging service that runs on centralized servers maintained by Signal Messenger.

The only user data it stores on its servers are phone numbers, the date a user joined the service, and the last login information.

Users' contacts, chats and other communications are instead stored on the user's phone, with the possibility of setting the option to automatically delete conversations after a certain amount of time.

The company uses no ads or affiliate marketers, and doesn't track users' data, as stated on its website.

Signal also gives users the option to hide their phone number from others, and provides a "safety number" for each conversation so contacts can verify that their communication channel has not been tampered with, it adds.
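
The verification idea can be illustrated in a few lines: hash both parties' public identity keys in a canonical order so that each side independently computes the same human-comparable digit string. The function below is a hypothetical sketch for illustration only, not Signal's actual safety-number algorithm.

```python
import hashlib

def toy_safety_number(pub_a: bytes, pub_b: bytes) -> str:
    """Illustrative fingerprint: hash both public keys in a canonical
    (sorted) order so both sides compute the same digit string.
    NOT Signal's real safety-number algorithm."""
    material = b"".join(sorted([pub_a, pub_b]))
    digest = hashlib.sha512(material).digest()
    # Render the first bytes as groups of five decimal digits, similar
    # in spirit to the numeric strings users compare in person.
    digits = "".join(f"{b:03d}" for b in digest[:20])[:30]
    return " ".join(digits[i:i + 5] for i in range(0, 30, 5))

alice_view = toy_safety_number(b"alice-pubkey", b"bob-pubkey")
bob_view = toy_safety_number(b"bob-pubkey", b"alice-pubkey")
assert alice_view == bob_view  # both users see the same number
```

If the two users read out matching numbers, no intermediary has swapped in its own keys; a mismatch signals a possible interception.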

Signal does not use US government encryption or that of any other governments, and is not hosted on government servers.

The messaging app has a "stellar reputation and is widely used and trusted in the security community", said Rocky Cole, whose cybersecurity firm iVerify helps protect smartphone users from hackers.

"The risk of discussing highly sensitive national security information on Signal isn't so much that Signal itself is insecure," Cole added.

Actors who pose threats to nation states, he said, "have a demonstrated ability to remotely compromise the entire mobile phone itself. If the phone itself isn't secure, all the Signal messages on that device can be read."

HOW DOES SIGNAL WORK?

Signal is a secure messaging service that uses end-to-end encryption, meaning the service provider cannot read users' private conversations or listen to their calls, protecting its users' privacy.

Signal's software is available across platforms, both on smartphones and computers, and enables messaging, voice and video calls. A telephone number is necessary to register and create an account.

Unlike other messaging apps, Signal does not track or store user data, and its code is publicly available, so security experts can verify how it works and ensure it remains safe.
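
The end-to-end principle described above can be sketched in a few lines: each device holds a private key, the two sides agree on a shared secret the server never sees, and messages are encrypted under a key derived from that secret. The toy Diffie-Hellman exchange below uses deliberately simplified, insecure parameters purely as an illustration; Signal's real protocol (X3DH key agreement plus the Double Ratchet) is far more sophisticated.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters: a small Mersenne prime keeps the demo
# readable. Real deployments use vetted groups or elliptic curves.
P = (1 << 127) - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1       # private key stays on the device
    return priv, pow(G, priv, P)              # only the public half is shared

def shared_key(my_priv: int, their_pub: int) -> bytes:
    secret = pow(their_pub, my_priv, P)       # same value on both devices
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # One-shot keystream from the key; real protocols ratchet keys per message.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
# Each side derives the same key from its own private and the peer's public key;
# the server relaying a_pub and b_pub cannot recover it.
k_a = shared_key(a_priv, b_pub)
k_b = shared_key(b_priv, a_pub)
assert k_a == k_b
ct = xor_encrypt(k_a, b"hello over signal")
assert xor_encrypt(k_b, ct) == b"hello over signal"
```

The key point is the last assertion: decryption requires a key that only the two endpoints can derive, which is why a relay server, even a compromised one, sees only ciphertext.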

Signal President Meredith Whittaker on Tuesday defended the app's security: "Signal is the gold standard in private comms."

She added in a post on X: "WhatsApp licenses Signal’s cryptography to protect message contents for consumer WhatsApp."

WHO FOUNDED SIGNAL?

Signal was founded in 2012 by entrepreneur Moxie Marlinspike and Whittaker, according to the company's website.

In February 2018, Marlinspike alongside WhatsApp co-founder Brian Acton started the non-profit Signal Foundation, which currently oversees the app.

Acton provided initial funding of $50 million. He had left WhatsApp in 2017 over differences with its parent company Facebook around the use of customer data and targeted advertising.

Signal is not tied to any major tech companies and will never be acquired by one, it says on its website.

WHO USES SIGNAL?

Widely used by privacy advocates and political activists, Signal has evolved from a niche messaging app used by dissidents into a whisper network for journalists and media, and more recently a messaging tool for government agencies and organizations.

Signal saw "unprecedented" growth in 2021 after a disputed change in rival WhatsApp's privacy terms, as privacy-conscious users abandoned WhatsApp over fears their data would be shared with Facebook and Instagram.

Reuters lists Signal as one of the tools tipsters can use to share confidential news tips with its journalists, while noting that "no system is 100 percent secure".

Signal's community forum, an unofficial group which states that its administration is composed of Signal employees, also lists the European Commission as a user of the tool. In 2017, the US Senate Sergeant at Arms approved the use of Signal for Senate staff.

"Although Signal is widely regarded as offering very secure communications for consumers due to its end-to-end encryption and because it collects very little user data, it is hard to believe it is suitable for exchanging messages related to national security," said Ben Wood, chief analyst at CCS Insight - alluding to the breach involving top Trump aides discussing plans for military strikes on Yemeni Houthi militants.

Google's message services Google Messages and Google Allo, as well as Meta's Facebook Messenger and WhatsApp, use the Signal Protocol, according to Signal's website.



Saudi Arabia Leads Globally in Women’s AI Empowerment with Groundbreaking Initiatives


The Kingdom of Saudi Arabia has made significant strides in empowering women in the data and artificial intelligence (AI) sectors, aiming to elevate their global competitiveness as part of Saudi Vision 2030.

Numerous initiatives have increased the participation of Saudi women in advanced technologies, with the Saudi Data and Artificial Intelligence Authority (SDAIA) offering specialized programs and workshops in partnership with global technology leaders, SPA reported.

In just one year, over 666,000 Saudi women received training in data and AI, positioning the Kingdom first globally in women’s AI empowerment, according to the 2025 AI Index by Stanford University. Key initiatives include the Artificial Intelligence Academy with Microsoft, the Generative AI Academy with NVIDIA, the "SAMAI" initiative (targeting one million Saudis in AI), and the development of a national data and AI curriculum for university students.

These programs have enhanced women's skills and facilitated their contributions to crucial sectors such as health, energy, and education.

SDAIA has created a supportive work environment for women through flexible digital infrastructure, enabling remote work and work-life balance. This commitment reflects the Kingdom's dedication to building a sustainable, data-driven economy, with Saudi women now playing vital roles in shaping the future of advanced technologies.


China Could See Widespread Use of Brain-Computer Tech in 3-5 Years, Expert Says

People cross a road in Beijing on March 6, 2026. (AFP)

China could see brain-computer interface (BCI) technology move into practical public use within three to five years as products mature, a leading BCI expert said, as Beijing races to catch up with US startups including Elon Musk's Neuralink.

Beijing elevated BCIs to a core future strategic industry in its new five-year plan released this week, placing it alongside sectors such as quantum, embodied AI, 6G and nuclear fusion.

"New policies will not change things overnight. I think after another three to five years, we will gradually see some (BCI) products moving towards actual practical service for the public," said Yao Dezhong, Director of the Sichuan Institute of Brain Science, in an interview on Saturday on the sidelines of China's annual parliament meetings in Beijing.

TRIALS

A national BCI development strategy released last year aims for major technical breakthroughs by 2027 and for China to cultivate two or three world-class firms by 2030.

China is the second country to launch invasive BCI human trials. More than 10 trials are active, matching the US, while scientists plan to enroll more than 50 patients nationwide this year.

Recent high-profile trials have enabled paralyzed patients and amputees to regain partial mobility and operate robotic hands or intelligent wheelchairs.

The government has already integrated some BCI treatments into national medical insurance in a few pilot provinces, and the domestic market is projected to reach 5.58 billion yuan ($809 million) by 2027, according to CCID Consulting.

"China has many advantages in BCIs, such as its huge population, enormous patient demand, cost-effective industrial chain and abundant pool of STEM (science, technology, engineering and maths) talent," said Yao, who also leads a key neuroinformatics research center under China's science and technology ministry.

Policies such as insurance integration and national standards aim to close the "huge" gap between scientific research, industry and clinical applications, he said.

"The path from experimental to clinical trials is quite long, and this remains a problem," he told Reuters, adding that many Chinese hospitals have established BCI research labs to speed up the process.

While US startups like Neuralink focus on invasive chips that penetrate brain tissue, Chinese researchers are developing invasive, semi-invasive and non-invasive BCIs with wider potential clinical use.

Semi-invasive BCIs, placed on the brain's surface, may lose some signal quality but reduce risks such as tissue damage and other post-surgery complications. Neuralink's surgical robot can insert hundreds of electrodes into the brain in minutes.

"This is a technical advantage, which I think is remarkable," said Yao, of Neuralink.

"(But) China is actually making very fast progress in this area now. In fact, Musk's direction is basically achievable domestically."


Questions over AI Capability as Tech Guides Iran Strikes

Artificial intelligence tools can also be found built into semi-autonomous attack drones and other weapons. ATTA KENARE / AFP

The latest bout of fighting between the United States, Israel and Iran has seen AI deployed as never before to sift intelligence and select targets, although the technology's use in war remains hotly debated.

Different forms of artificial intelligence have reportedly been used to guide the Israeli campaign in Gaza and the capture of Venezuelan leader Nicolas Maduro in an American raid.

And experts believe the technology has helped select targets for the thousands of US and Israeli strikes on Iran since February 28 -- although exact uses have yet to be confirmed.

Today "every military power of any significance invests hugely in military applications of AI," said Laure de Roucy-Rochegonde of French think tank IFRI.

"Almost any military function can be boosted with AI," from "logistics to reconnaissance, observation, information warfare, electronic warfare and cybersecurity," she added.

AI tools can also be found built into semi-autonomous attack drones and other weapons.

But one of their best-known uses is in shortening the so-called "kill chain", the time and decision-making between detecting a target and striking it.

US forces use the Maven Smart System (MSS) built by Palantir, which the company says can identify and prioritize potential targets.

The Washington Post reported this week that Anthropic's Claude generative AI model has been integrated with Maven to boost the tool's detection and simulation capabilities.

Palantir and Anthropic did not respond to AFP's requests for comment.

AI algorithms "allow us to move much faster in handling information, and above all to be more comprehensive," said Bertrand Rondepierre, head of the French army's AI agency AMIAD.

The technology can sift through vast quantities of data, including "satellite images, radar, electromagnetic waves, sound, drone images and sometimes real-time video," he added.

Human control

AI's deployment in war poses a slew of moral and legal questions, notably on the extent of human control over their actions.

The debate was brought to the fore during the fighting in Gaza, where Israeli forces used a program dubbed "Lavender" to identify targets -- within a certain margin of error.

That application worked "because it covered a very limited area", de Roucy-Rochegonde said.

Israel also has a "mass surveillance system" that could feed data about the enclave's inhabitants into Lavender.

"It seems less likely that such a system has been set up in Iran," she added.

"If something does go wrong, then who's responsible?" Peter Asaro, chair of the International Committee for Robot Arms Control (ICRAC), said in an interview with AFP.

The widely reported bombing of an Iranian school -- which authorities there say killed 150 people -- could be a case of mistaken AI targeting, he added.

Neither the United States nor Israel has acknowledged responsibility for the strike.

AFP was unable to reach the scene of the school to verify what happened there.

But the site was close to two facilities controlled by the Iranian Revolutionary Guard Corps (IRGC), Tehran's powerful ideological elite.

"They didn't distinguish it from the military base as they should have, (but) who is they?" he asked -- human or machine?

If AI was used, he argued that the key question is "how old was the data" used for the targeting, and whether the misdirected strike stemmed from "a database error".

Step by step

Rondepierre said that AIs "operating without anyone being in control" are "science fiction".

In France, at least, "military commanders are at the heart of the action and the design of these systems," he insisted.

"No military decision-maker would agree to use an AI if he didn't have trust in and control over what it's doing," Rondepierre added.

"They know what the risks involved are, what the capabilities of these systems are and what contexts they can use them in, with what level of trust."

Today is just the "beginning" of the use of AI by the world's armed forces, said Benjamin Jensen of Washington-based think tank CSIS, who has taken part in tests of AI in military decision-making over the past decade.

The world's armies "haven't fundamentally rethought how we plan, how we conduct operations, to take advantage" of AI's capabilities, he added.

"It's going to take a generation for us to really figure this out."