'Vibe Hacking' Puts Chatbots to Work for Cybercriminals

OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software. Kirill KUDRYAVTSEV / AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programs.

So-called "vibe hacking" -- a twist on the more positive "vibe coding" that generative AI tools supposedly enable those without extensive expertise to achieve -- marks "a concerning evolution in AI-assisted cybercrime" according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Cases like these confirm fears that have troubled the cybersecurity industry since widespread generative AI tools emerged, and they are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

Dodging safeguards

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing the generative AI that it was taking part in a "detailed fictional world" in which creating malware is seen as an art form, then asking the chatbot to play the role of one of its characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but he was able to get around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.

In future, such workarounds mean even non-coders "will pose a greater threat to organizations, because now they can... without skills, develop malware," Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analyzing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.



Poland Urges Brussels to Probe TikTok Over AI-Generated Content

The TikTok logo is pictured outside the company's US head office in Culver City, California, US, September 15, 2020. (Reuters)

Poland has asked the European Commission to investigate TikTok after the social media platform hosted AI-generated content, including calls for Poland to withdraw from the EU, the Polish government said on Tuesday, adding that the content was almost certainly Russian disinformation.

"The disclosed content poses a threat to public order, information security, and the integrity of democratic processes in Poland and across the European Union," Deputy Digitalization Minister Dariusz Standerski said in a letter sent to the Commission.

"The nature of ‌the narratives, ‌the manner in which they ‌are distributed, ⁠and the ‌use of synthetic audiovisual materials indicate that the platform is failing to comply with the obligations imposed on it as a Very Large Online Platform (VLOP)," he added.

A Polish government spokesperson said on Tuesday the content was undoubtedly Russian disinformation as the recordings contained Russian syntax.

TikTok, representatives of the Commission and of the Russian embassy in Warsaw did not immediately respond to Reuters' requests for comment.

EU countries are taking measures to head off any foreign state attempts to influence elections and local politics after warning of Russian-sponsored espionage and sabotage. Russia has repeatedly denied interfering in foreign elections.

Last year, the Commission opened formal proceedings against social media firm TikTok, owned by China's ByteDance, over its suspected failure to limit election interference, notably in the Romanian presidential vote in November 2024.

Poland called on the Commission to initiate proceedings in connection with suspected breaches of the bloc's sweeping Digital Services Act, which regulates how the world's biggest social media companies operate in Europe.

Under the Act, large internet platforms like X, Facebook, TikTok and others must moderate and remove harmful content like hate speech, racism or xenophobia. If they do not, the Commission can impose fines of up to 6% of their worldwide annual turnover.


Saudi National Cybersecurity Authority Launches Service to Verify Suspicious Links

Saudi Arabia's National Cybersecurity Authority has launched the "Tahqaq" service, which enables members of the public to safely and instantly verify the reliability of circulated links before visiting them.

This initiative comes within the authority’s strategic programs designed to empower individuals to enhance their cybersecurity, SPA reported.

The authority noted that the "Tahqaq" service allows users to scan circulated links, reducing the risk that suspicious links lead to unauthorized access to data. The service also provides cybersecurity guidance to users, mitigating emerging cyber risks and boosting cybersecurity awareness across all segments of society.

The “Tahqaq” service is offered as part of the National Portal for Cybersecurity Services (Haseen) in partnership with the authority’s technical arm, the Saudi Information Technology Company (SITE). The service is available through the unified number on WhatsApp (+966118136644), as well as via the Haseen portal website at tahqaq.haseen.gov.sa.


Saudi Arabia’s Space Sector: A Strategic Pillar of a Knowledge-Based Economy

The Kingdom is developing an integrated sovereign space system encompassing infrastructure and applications, led by national expertise - SPA

Saudi Arabia is undergoing significant transformations toward an innovation-driven knowledge economy, with the space sector emerging as a crucial pillar of Saudi Vision 2030. This sector has evolved from a scientific domain into a strategic driver for economic development, focusing on investing in talent, developing infrastructure, and strengthening international partnerships.

Saudi Space Agency CEO Dr. Mohammed Al-Tamimi emphasized that space is a vital tool for human development. He noted that space exploration has yielded significant benefits in telecommunications, navigation, and Earth observation, with many everyday technologies stemming from space research, SPA reported.

Dr. Al-Tamimi highlighted a notable shift with the private sector's entry into the space industry, which is generating new opportunities. He stressed that Saudi Arabia aims not just to participate but to lead in creating an integrated space ecosystem encompassing legislation, investment, and innovation.

He also noted the sector's role in fostering national identity among young people, who are key drivers of the industry, and said that investing in them is crucial to the Kingdom's future and to building a space sector that empowers Saudi citizens.

In alignment with international efforts, the Saudi Space Agency signed an agreement with NASA for the first Saudi satellite dedicated to studying space weather, part of the Artemis II mission under a scientific cooperation framework established in July 2024.

According to SPA, the Kingdom is developing an integrated sovereign space system encompassing infrastructure and applications, led by national expertise. This initiative is supported by strategic investments and advanced technologies within a governance framework that meets international standards. Central to this vision is the Neo Space Group, owned by the Public Investment Fund, which aims to establish Saudi Arabia as a space leader.

Saudi Arabia views space as a strategic frontier for human development. Vision 2030 transforms space into a bridge between dreams and achievements, empowering Saudi youth to shape their futures. Space represents not just data and satellites but a national journey connecting ambition with innovation.