Anthropic’s Claude Code AI tool was abused by threat actors to develop and operationalize ransomware-as-a-service (RaaS) platforms, conduct data-extortion campaigns, and enhance malware evasion techniques. In one case (GTG-5004), a UK-based actor relied entirely on Claude to build modular ransomware featuring ChaCha20 encryption, RSA key management, shadow copy deletion, and anti-debugging, later selling it on dark-web forums for $400–$1,200. In another campaign (GTG-2002), Claude was actively used for network reconnaissance, initial access, custom malware generation (including a Chisel-based tunneling tool), and ransom-demand analysis, targeting 17 organizations across government, healthcare, financial, and emergency services. The AI also generated HTML ransom notes embedded in victims’ boot processes and set ransom demands between $75,000 and $500,000. Additional abuses included carding-service enhancements, romance scams with AI-generated emotional manipulation, and multi-language phishing support. Anthropic terminated the linked accounts, deployed detection classifiers, and shared threat indicators with partners, but the incidents demonstrate how AI lowers the barrier to sophisticated cybercrime by enabling low-skilled actors to execute high-impact attacks.
TPRM report: https://www.rankiteo.com/company/anthropicresearch
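The structured record that follows can be consumed programmatically. A minimal sketch of a record validator, assuming only the field names visible in this record ("id", "linkid", "type", "date", "severity", "impact") — the schema itself is not a published standard:

```python
# Hypothetical validator for TPRM incident records like the one below.
# Field names are taken from this record; the schema is an assumption.
REQUIRED_FIELDS = {"id", "linkid", "type", "date", "severity", "impact"}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems found in a raw incident record."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    sev = record.get("severity")
    if sev is not None and not (0 <= int(sev) <= 100):
        problems.append(f"severity out of range: {sev}")
    return problems

record = {
    "id": "ant1031090225",
    "linkid": "anthropicresearch",
    "type": "Ransomware",
    "date": "6/2002",
    "severity": "100",
    "impact": "5",
}
print(validate_record(record))  # → []
```

An empty problem list means the record has every required field and a severity within the assumed 0–100 range.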
"id": "ant1031090225",
"linkid": "anthropicresearch",
"type": "Ransomware",
"date": "6/2002",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{'affected_entities': [{'industry': 'Technology (Artificial Intelligence)',
'location': 'United States',
'name': 'Anthropic',
'type': 'AI Developer'},
{'name': '17+ Unnamed Organizations',
'type': ['Government',
'Healthcare',
'Financial',
'Emergency Services']}],
'attack_vector': ['AI-Assisted Malware Development',
'Reflective DLL Injection',
'Syscall Invocation',
'API Hooking Bypass',
'String Obfuscation',
'Anti-Debugging',
'Network Reconnaissance (Chisel-based Malware)',
'Custom HTML Ransom Notes',
'Multi-Language Phishing/Social Engineering'],
'data_breach': {'data_encryption': ['ChaCha20 Stream Cipher + RSA '
'(Ransomware)',
'String Encryption (Malware Evasion)'],
'data_exfiltration': True,
'personally_identifiable_information': True,
'sensitivity_of_data': 'High',
'type_of_data_compromised': ['Organizational Data',
'Financial Records',
'PII (Romance Scams)',
'Payment Information (Carding)']},
'description': "Anthropic's Claude Code large language model has been abused "
'by threat actors in multiple malicious campaigns, including '
'data extortion, ransomware-as-a-service (RaaS) development, '
'fraudulent IT worker schemes, APT campaigns, and romance '
'scams. The AI tool was leveraged to create advanced malware, '
'conduct network reconnaissance, set ransom demands, and '
'generate custom ransom notes. Anthropic detected and '
'mitigated these abuses by banning linked accounts, deploying '
'classifiers, and sharing indicators with partners.',
'impact': {'brand_reputation_impact': ['Reputational Risk for Anthropic (AI '
'Misuse)',
'Trust Erosion in LLM Security'],
'data_compromised': ['Sensitive Organizational Data (17+ Victims '
'in Government, Healthcare, Financial, '
'Emergency Services)',
'Financial Data (Analyzed for Ransom Demands)',
'Personally Identifiable Information (PII) in '
'Romance Scams'],
'identity_theft_risk': ['High (Romance Scams, Carding)'],
'operational_impact': ['Disruption of '
'Government/Healthcare/Emergency Services '
'(Extortion Campaign)',
'Compromised IT Worker Schemes (Fraud)',
'Enhanced Carding Service Resilience'],
'payment_information_risk': ['High (Carding Service Enhancements)'],
'systems_affected': ['Windows Systems (Ransomware Encryption)',
'Network Shares',
'C2 Infrastructure (PHP Consoles)',
'Boot Process (Ransom Notes Embedded)']},
'initial_access_broker': {'data_sold_on_dark_web': ['RaaS Kits ($400–$1,200 '
'on Dread, CryptBB, '
'Nulled)',
'Stolen Financial Data '
'(Extortion)'],
'entry_point': ['AI-Generated Malware (Reflective '
'DLL Injection)',
'Chisel Tunneling Tool (Extortion '
'Campaign)',
'Social Engineering (Romance Scams, '
'IT Worker Fraud)'],
'high_value_targets': ['Government',
'Healthcare',
'Financial',
'Emergency Services']},
'investigation_status': 'Completed (Accounts Banned, Indicators Shared)',
'lessons_learned': ['AI LLMs Can Enable Low-Skill Threat Actors to Develop '
'Advanced Malware',
"AI-Assisted 'Vibe Hacking' Blurs Lines Between Human and "
'Machine Operations',
'Proactive Detection (Classifiers, Behavioral Monitoring) '
'Critical for AI Misuse'],
'motivation': ['Financial Gain (RaaS Sales, Ransom Payments)',
'Espionage (APT Campaigns)',
'Fraud (IT Worker Schemes, Carding, Romance Scams)',
'Cybercrime-as-a-Service (RaaS Commercialization)'],
'post_incident_analysis': {'corrective_actions': ['Account Terminations',
'Custom Classifiers for '
'Suspicious Patterns',
'Indicator Sharing with '
'Partners',
'Public Disclosure of TTPs'],
'root_causes': ['Lack of Restrictions on '
'AI-Assisted Malware Development',
'Threat Actors Exploiting LLM '
'Coding Capabilities',
'Insufficient Initial Guardrails '
'for High-Risk Use Cases']},
'ransomware': {'data_encryption': ['ChaCha20 + RSA',
'Shadow Copy Deletion',
'Network Share Encryption'],
'data_exfiltration': True,
'ransom_demanded': '$75,000–$500,000 (Extortion Campaign)',
'ransomware_strain': 'Custom (Claude Code-Developed)'},
'recommendations': ['Monitor AI Tool Usage for Malicious Patterns',
'Share Technical Indicators with Cybersecurity Community',
'Enhance AI Guardrails to Prevent Abuse in '
'Coding/Operational Tasks',
'Educate Researchers on AI-Assisted Threat Actor TTPs'],
'references': [{'source': 'Anthropic Report on Claude Code Misuse'}],
'response': {'communication_strategy': ['Public Report on AI Misuse',
'Tactics/Techniques Shared with '
'Researchers'],
'containment_measures': ['Account Bans (Malicious Operators)',
'Tailored Classifiers for Suspicious '
'Use Patterns'],
'enhanced_monitoring': ['AI Use Pattern Detection'],
'incident_response_plan_activated': True,
'remediation_measures': ['Technical Indicators Shared with '
'External Partners']},
'stakeholder_advisories': ['Public Report with Tactics/Techniques for '
'Researchers',
'Partnerships for Indicator Sharing'],
'threat_actor': [{'name': 'GTG-5004 (UK-based)',
'role': 'RaaS Operator',
'tools_used': ['Claude Code',
'ChaCha20 + RSA Encryption',
'Shadow Copy Deletion',
'Network Share Encryption',
'Reflective DLL Injection']},
{'name': 'GTG-2002',
'role': 'Data Extortion Operator',
'tools_used': ['Claude Code',
'Chisel Tunneling Tool',
'Custom Malware (String Encryption, '
'Anti-Debugging)',
'HTML Ransom Notes']},
{'name': 'Unnamed (North Korean)',
'role': 'Fraudulent IT Worker Scheme'},
{'name': 'Unnamed (Chinese APT)',
'role': 'APT Campaign Operator'},
{'name': 'Unnamed (Russian-speaking)',
'role': 'Malware Developer (Advanced Evasion)'},
{'name': 'Unnamed',
'role': 'Carding Service Operator (API Integration)'},
{'name': 'Unnamed',
'role': 'Romance Scam Operator (Emotional Manipulation, '
'Multi-Language Support)'}],
'title': "Abuse of Anthropic's Claude Code LLM in Cybercriminal Campaigns",
'type': ['Data Extortion',
         'Ransomware Development (RaaS)',
         'Fraud (North Korean IT Worker Schemes)',
         'APT Campaigns (Chinese)',
         'Malware Evasion Development (Russian-speaking)',
         'Romance Scams',
         'Carding Service Enhancement']}