Cybersecurity researchers at Noma Security disclosed a critical vulnerability dubbed AgentSmith in LangChain‘s LangSmith platform, specifically affecting its public Prompt Hub. This flaw, with a CVSS score of 8.8, could allow malicious AI agents to steal sensitive user data, including OpenAI API keys, and manipulate responses from large language models (LLMs). The vulnerability exploited harmful proxy configurations, enabling attackers to gain unauthorized access to victims' OpenAI accounts, potentially downloading sensitive datasets, inferring confidential information, or causing financial losses by exhausting API usage quotas. In advanced attacks, the malicious proxy could alter LLM responses, leading to fraud or incorrect automated decisions. LangChain confirmed the issue and deployed a fix, introducing new safety measures to prevent future exploitation.
Source: https://hackread.com/agentsmith-flaw-langsmith-prompt-hub-api-keys-data/
TPRM report: https://scoringcyber.rankiteo.com/company/langchain
"id": "lan901061825",
"linkid": "langchain",
"type": "Vulnerability",
"date": "6/2025",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{'affected_entities': [{'industry': 'AI and Technology',
'name': 'LangChain',
'type': 'Company'}],
'attack_vector': 'Man-in-the-Middle (MITM)',
'data_breach': {'data_exfiltration': True,
'sensitivity_of_data': 'High',
'type_of_data_compromised': ['OpenAI API keys',
'Uploaded files',
'Voice inputs']},
'date_detected': '2024-10-29',
'date_publicly_disclosed': '2024-11-06',
'date_resolved': '2024-11-06',
'description': 'Cybersecurity researchers at Noma Security disclosed a '
'critical vulnerability within LangChain‘s LangSmith platform, '
'affecting its public Prompt Hub. This flaw, dubbed AgentSmith '
'with a CVSS score of 8.8, allows malicious AI agents to steal '
'sensitive user data and manipulate responses from large '
'language models (LLMs).',
'impact': {'data_compromised': ['OpenAI API keys',
'Uploaded files',
'Voice inputs'],
'systems_affected': "LangSmith's Prompt Hub"},
'initial_access_broker': {'entry_point': 'Public Prompt Hub'},
'investigation_status': 'Resolved',
'lessons_learned': 'Need for organizations to enhance AI security practices, '
'maintain a centralized inventory of AI agents, implement '
'runtime protections, and enforce strong security '
'governance.',
'motivation': ['Data theft', 'Unauthorized access', 'Financial loss', 'Fraud'],
'post_incident_analysis': {'corrective_actions': ['Deploying a fix',
'Introducing new safety '
'measures',
'Warning messages',
'Persistent banner on agent '
'description pages'],
'root_causes': 'Vulnerability in public Prompt Hub '
'allowing malicious proxy '
'configurations'},
'recommendations': ['Maintain a centralized inventory of AI agents using an '
'AI BOM',
'Implement runtime protections',
'Enforce strong security governance'],
'references': [{'source': 'Noma Security'}, {'source': 'Hackread.com'}],
'response': {'containment_measures': 'Swift deployment of a fix',
'remediation_measures': ['New safety measures',
'Warning messages',
'Persistent banner on agent description '
'pages'],
'third_party_assistance': 'Noma Security'},
'title': "AgentSmith Vulnerability in LangChain's LangSmith Platform",
'type': 'Vulnerability Exploit',
'vulnerability_exploited': 'Hidden malicious proxy in AI agents'}