'Vibe hacking' puts chatbots to work for cybercriminals
The potential abuse of consumer AI tools is raising concern, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg up in producing malicious programs.
So-called "vibe hacking" -- a twist on the more positive "vibe coding" that generative AI tools supposedly enable those without extensive expertise to achieve -- marks "a concerning evolution in AI-assisted cybercrime" according to American company Anthropic.
The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".
Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".
The attacker has since been banned by Anthropic.
Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as high as $500,000.
Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.
Cases like these confirm fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and they are far from limited to Anthropic.
"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.
- Dodging safeguards -
Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.
The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.
But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.
He announced in March that he had found a technique for getting chatbots to produce code that their built-in limits would normally block.
The approach involved convincing the generative AI that it was taking part in a "detailed fictional world" in which creating malware is seen as an art form, then asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.
"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.
His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.
In future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.
Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.
"We're not going to see very sophisticated code created directly by chatbots," he said.
Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.