Critical Vulnerability in AI Agent Supply Chain Exposes Sensitive Data and Cryptocurrency Theft
Researchers from the University of California, Santa Barbara, have uncovered a severe security flaw in the AI agent ecosystem, where third-party LLM API routers intermediary services between AI agents and providers like OpenAI, Anthropic, and Google can be weaponized to hijack tool calls, drain cryptocurrency wallets, and exfiltrate credentials at scale.
The study, titled "Your Agent Is Mine: Measuring Malicious Intermediary Attacks on the LLM Supply Chain," reveals that these routers operate as application-layer proxies with full plaintext access to JSON payloads, making them an unguarded trust boundary. Unlike traditional man-in-the-middle attacks, these intermediaries are voluntarily configured by developers, allowing malicious actors to read, modify, or fabricate tool calls undetected.
Attack Methods and Findings
The research team tested 28 paid and 400 free routers from platforms like Taobao, Xianyu, and public communities, uncovering alarming vulnerabilities:
- 9 routers (1 paid, 8 free) injected malicious code into tool calls.
- 17 free routers triggered unauthorized use of AWS credentials after interception.
- 1 router drained Ethereum (ETH) from a researcher-owned private key.
- 2 routers employed adaptive evasion, activating payloads only after 50 requests or targeting autonomous "YOLO mode" sessions.
A particularly dangerous attack, payload injection (AC-1), replaces benign installer URLs or package names with attacker-controlled endpoints. Since tampered JSON payloads remain syntactically valid, they bypass schema validation and security checks, enabling arbitrary code execution with a single rewritten command.
Poisoning and Unauthorized Access
The researchers demonstrated the ease of exploiting this attack surface:
- After leaking a single OpenAI API key on Chinese forums, the key generated 100 million GPT-5.4 tokens and exposed credentials across downstream sessions.
- Weak router decoys deployed across 20 domains and 20 IPs attracted 40,000 unauthorized access attempts, served 2 billion billed tokens, and exposed 99 credentials across 440 Codex sessions 401 of which ran in autonomous YOLO mode, where tool execution requires no manual approval.
Mitigation Strategies
While no client-side defense can fully authenticate tool-call provenance, the researchers propose three immediate mitigations:
- Fail-closed policy gate – Blocks shell-rewrite and dependency-injection attacks by allowing only commands from a local allowlist (1.0% false positive rate).
- Response-side anomaly screening – Flags 89% of payload injection attempts using an IsolationForest model (6.7% false positive rate).
- Append-only transparency logging – Records request/response metadata for forensic analysis (~1.26 KB per entry).
The study concludes that provider-signed response envelopes similar to DKIM for email are necessary to cryptographically verify tool-call integrity. Until major AI providers implement such mechanisms, developers must treat third-party routers as potential adversaries and deploy layered defenses.
Source: https://cybersecuritynews.com/ai-router-vulnerabilities/
Google Research cybersecurity rating report: https://www.rankiteo.com/company/googleresearch
Amazon Web Services (AWS) cybersecurity rating report: https://www.rankiteo.com/company/amazon-web-services
OpenAI cybersecurity rating report: https://www.rankiteo.com/company/openai
Anthropic cybersecurity rating report: https://www.rankiteo.com/company/anthropicresearch
"id": "GOOAMAOPEANT1775823892",
"linkid": "googleresearch, amazon-web-services, openai, anthropicresearch",
"type": "Vulnerability",
"date": "1/2026",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{'affected_entities': [{'industry': 'Academia/Cybersecurity Research',
'location': 'Santa Barbara, California, USA',
'name': 'University of California, Santa Barbara '
'(Researchers)',
'type': 'Research Institution'},
{'customers_affected': 'Developers and users of OpenAI '
'API keys exposed via routers',
'industry': 'Artificial Intelligence',
'location': 'San Francisco, California, USA',
'name': 'OpenAI',
'type': 'AI Provider'},
{'industry': 'Artificial Intelligence',
'location': 'San Francisco, California, USA',
'name': 'Anthropic',
'type': 'AI Provider'},
{'industry': 'Technology/Artificial Intelligence',
'location': 'Mountain View, California, USA',
'name': 'Google (AI Services)',
'type': 'AI Provider'},
{'customers_affected': '40,000 unauthorized access '
'attempts, 440 Codex sessions',
'industry': 'Various',
'location': 'Global',
'name': 'Developers using third-party LLM API routers',
'type': 'Businesses/Individuals'}],
'attack_vector': 'Third-party LLM API routers (intermediary services)',
'data_breach': {'data_exfiltration': 'Yes (credentials and session data '
'exfiltrated via malicious routers)',
'number_of_records_exposed': '99 credentials, 100M+ tokens '
'generated via leaked API key, '
'2B tokens billed via '
'unauthorized access',
'personally_identifiable_information': 'Yes (credentials, '
'session data)',
'sensitivity_of_data': 'High (cryptocurrency private keys, AI '
'API keys, user credentials)',
'type_of_data_compromised': ['Credentials',
'API keys',
'Session data',
'Personally identifiable '
'information']},
'description': 'Researchers from the University of California, Santa Barbara, '
'uncovered a severe security flaw in the AI agent ecosystem '
'where third-party LLM API routers can be weaponized to hijack '
'tool calls, drain cryptocurrency wallets, and exfiltrate '
'credentials at scale. The vulnerability allows malicious '
'intermediaries to read, modify, or fabricate tool calls '
'undetected, leading to arbitrary code execution, credential '
'exposure, and financial theft.',
'impact': {'brand_reputation_impact': 'Potential erosion of trust in AI agent '
'supply chain and third-party routers',
'data_compromised': ['Credentials (99 exposed)',
'API keys (e.g., OpenAI API key generating '
'100M tokens)',
'Session data (440 Codex sessions)'],
'financial_loss': 'Cryptocurrency drained (e.g., Ethereum from '
'researcher-owned wallet)',
'identity_theft_risk': 'High (exposure of personally identifiable '
'information via credentials)',
'operational_impact': 'Unauthorized tool execution, arbitrary code '
'execution, credential leakage',
'payment_information_risk': 'High (cryptocurrency wallet drainage)',
'revenue_loss': '2 billion billed tokens served via unauthorized '
'access',
'systems_affected': ['AI agent ecosystems',
'LLM API routers',
'Downstream AI applications']},
'initial_access_broker': {'backdoors_established': 'Malicious code injection '
'into tool calls',
'entry_point': 'Third-party LLM API routers',
'high_value_targets': ['AI API keys',
'Credentials',
'Cryptocurrency wallets']},
'investigation_status': 'Completed (research study)',
'lessons_learned': 'Third-party LLM API routers represent an unguarded trust '
'boundary in the AI agent supply chain. Developers must '
'treat these intermediaries as potential adversaries and '
'implement layered defenses until AI providers adopt '
'cryptographic verification mechanisms like '
'provider-signed response envelopes.',
'motivation': ['Financial gain (cryptocurrency theft)',
'Data exfiltration (credentials)',
'Unauthorized access to AI systems'],
'post_incident_analysis': {'corrective_actions': ['Adoption of '
'provider-signed response '
'envelopes by AI providers',
'Implementation of '
'fail-closed policy gates '
'and anomaly screening by '
'developers',
'Increased awareness of '
'supply chain risks in AI '
'agent ecosystems'],
'root_causes': ['Lack of cryptographic '
'verification for tool-call '
'integrity in AI agent supply '
'chains',
'Voluntary configuration of '
'third-party routers by developers '
'without security validation',
'Plaintext access to JSON payloads '
'by intermediaries']},
'recommendations': ['Implement fail-closed policy gates to block unauthorized '
'tool calls',
'Deploy response-side anomaly screening to detect payload '
'injection attempts',
'Use append-only transparency logging for forensic '
'analysis',
'Avoid relying on third-party routers for sensitive '
'operations or deploy allowlists for tool calls',
'AI providers should implement cryptographic verification '
'(e.g., provider-signed response envelopes) for tool-call '
'integrity'],
'references': [{'source': 'University of California, Santa Barbara Research '
'Paper'},
{'source': '"Your Agent Is Mine: Measuring Malicious '
'Intermediary Attacks on the LLM Supply Chain"'}],
'response': {'remediation_measures': ['Fail-closed policy gate to block '
'shell-rewrite and dependency-injection '
'attacks',
'Response-side anomaly screening using '
'IsolationForest model',
'Append-only transparency logging for '
'forensic analysis']},
'threat_actor': 'Malicious third-party LLM API router providers, initial '
'access brokers',
'title': 'Critical Vulnerability in AI Agent Supply Chain Exposes Sensitive '
'Data and Cryptocurrency Theft',
'type': 'Supply Chain Attack',
'vulnerability_exploited': 'Plaintext access to JSON payloads in AI agent '
'tool calls, lack of cryptographic verification '
'for tool-call integrity'}