**Critical ‘LangGrinch’ Vulnerability in LangChain-Core Exposes AI Agent Secrets**
A recently disclosed high-severity vulnerability, dubbed LangGrinch, affects langchain-core, a foundational library powering AI agents in production environments. The flaw, which carries a CVSS score of 9.3, stems from insecure serialization and deserialization logic in the library’s helper functions. Attackers can exploit it via prompt injection, tricking AI agents into generating structured outputs that contain LangChain’s internal marker key. Because the marker key is improperly escaped during serialization, that attacker-controlled data can later be deserialized as a trusted object, enabling malicious actions.
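To make the mechanism concrete, the sketch below shows the shape of the problem and one defensive screen. It is illustrative only: it assumes the marker key is the `"lc"` tag used in langchain-core's serialized-object format, and `reject_lc_markers` is a hypothetical helper, not a langchain-core API.

```python
# Minimal sketch of the injection pattern, assuming the "lc" marker key
# used by langchain-core's serialized-object format. reject_lc_markers()
# is a hypothetical defensive helper, not part of the library.
from typing import Any

LC_MARKER = "lc"  # internal marker key tagging serialized LangChain objects

def reject_lc_markers(value: Any) -> Any:
    """Recursively refuse untrusted structures that imitate serialized
    LangChain objects before they are persisted and later deserialized."""
    if isinstance(value, dict):
        if LC_MARKER in value:
            raise ValueError("untrusted payload contains LangChain marker key")
        return {k: reject_lc_markers(v) for k, v in value.items()}
    if isinstance(value, list):
        return [reject_lc_markers(v) for v in value]
    return value

# Under prompt injection, an agent's structured output can imitate a
# serialized constructor call instead of plain data:
malicious_output = {
    "summary": "done",
    "result": {"lc": 1, "type": "constructor", "id": ["..."], "kwargs": {}},
}

try:
    reject_lc_markers(malicious_output)
except ValueError as exc:
    print(f"blocked: {exc}")  # screening keeps the payload inert
```

Upgrading to a patched release remains the actual fix; a screen like this is only defense in depth for data that crosses a serialization boundary.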
The vulnerability resides in langchain-core itself, a core dependency for frameworks and applications across the AI ecosystem. With 847 million total downloads and tens of millions monthly, the library is deeply embedded in modern AI workflows. Exploitation could lead to full environment variable exfiltration, including cloud credentials, database connection strings, vector database secrets, and LLM API keys—potentially escalating to remote code execution under certain conditions.
Security researchers at Cyata identified 12 distinct exploit flows, demonstrating how routine agent operations (persisting, streaming, or reconstructing structured data) can inadvertently open attack paths. Notably, the root cause lies in the serialization path rather than in deserialization, which expands the attack surface reachable from a single prompt. Cyata emphasized that because the vulnerability resides in the ecosystem's "plumbing layer," it is particularly dangerous for production systems.
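The round-trip below illustrates why those plumbing operations matter. It uses the real `dumps`/`loads` helpers from `langchain_core.load`; on unpatched versions, this ordinary persist-and-reconstruct path is what a smuggled payload would ride.

```python
# Sketch of the routine persist/reconstruct round-trip the exploit flows
# target. dumps/loads are real langchain-core helpers; the message content
# here is illustrative.
from langchain_core.load import dumps, loads
from langchain_core.messages import AIMessage

# An agent message whose content is (in part) model-generated.
msg = AIMessage(content="model output that may embed injected structure")

serialized = dumps(msg)       # serialization path: where escaping must happen
restored = loads(serialized)  # reconstruction: trusted objects come back to life

assert isinstance(restored, AIMessage)
# On unpatched versions, a marker-key payload hidden in model output could
# survive this round-trip as a constructor call rather than inert text.
```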
Patches are available in langchain-core versions 1.2.5 and 0.3.81, following ethical disclosure to LangChain’s maintainers, who implemented remediation and additional security hardening. The discovery underscores the evolving security challenges of agentic AI, where governance and permission boundaries become critical as systems move into production.
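For teams triaging exposure, a quick check against the patched releases named above can be scripted. This sketch assumes the third-party `packaging` library is installed.

```python
# Flag installs older than the patched releases (0.3.81 for the 0.x line,
# 1.2.5 for the 1.x line, per the advisory). Requires `packaging`.
from importlib.metadata import version
from packaging.version import Version

installed = Version(version("langchain-core"))
patched = installed >= Version("1.2.5") or (
    installed.major == 0 and installed >= Version("0.3.81")
)
print(f"langchain-core {installed}: {'patched' if patched else 'UPDATE REQUIRED'}")
```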
LangChain cybersecurity rating report: https://www.rankiteo.com/company/langchain
"id": "LAN1766986573",
"linkid": "langchain",
"type": "Vulnerability",
"date": "12/2025",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{
  "affected_entities": [
    {
      "industry": "Artificial Intelligence, Technology",
      "location": "Global",
      "name": "LangChain-based AI applications and frameworks",
      "size": "Tens of millions of downloads (847M total downloads of langchain-core)",
      "type": "AI Framework/Library"
    }
  ],
  "attack_vector": "Prompt Injection",
  "data_breach": {
    "data_exfiltration": "Yes (via outbound HTTP requests)",
    "sensitivity_of_data": "High (cloud provider credentials, database secrets, LLM API keys)",
    "type_of_data_compromised": "Environment variables, credentials, API keys"
  },
  "description": "A critical vulnerability in langchain-core, dubbed 'LangGrinch,' allows attackers to exploit a serialization and deserialization injection vulnerability. This flaw enables the exfiltration of sensitive secrets, including environment variables, cloud provider credentials, and API keys, and could potentially escalate to remote code execution under certain conditions.",
  "impact": {
    "data_compromised": "Environment variables, cloud provider credentials, database and RAG connection strings, vector database secrets, LLM API keys",
    "operational_impact": "Potential full environment variable exfiltration, remote code execution under certain conditions",
    "systems_affected": "AI agent frameworks and applications using langchain-core"
  },
  "investigation_status": "Vulnerability patched and publicly disclosed",
  "lessons_learned": "Agentic AI systems require tight defaults, clear boundaries, and the ability to reduce blast radius. Security must shift from 'what code do we run' to 'what effective permissions does this system end up exercising.'",
  "post_incident_analysis": {
    "corrective_actions": "Patches released, security hardening steps implemented, and ethical disclosure process followed.",
    "root_causes": "Improper escaping of LangChain’s internal marker key during serialization, allowing untrusted user input to be interpreted as trusted LangChain objects during deserialization."
  },
  "recommendations": "Update to langchain-core versions 1.2.5 or 0.3.81 immediately. Implement security hardening steps for agentic AI systems.",
  "references": [{"source": "SiliconANGLE Media"}, {"source": "Cyata"}],
  "response": {
    "communication_strategy": "Ethical disclosure to LangChain maintainers, public advisory by Cyata",
    "containment_measures": "Patches released in langchain-core versions 1.2.5 and 0.3.81",
    "remediation_measures": "Security hardening steps beyond the immediate fix",
    "third_party_assistance": "Cyata (security researchers)"
  },
  "stakeholder_advisories": "Urgent update recommended for all organizations using langchain-core",
  "title": "LangGrinch Vulnerability in langchain-core",
  "type": "Serialization/Deserialization Injection",
  "vulnerability_exploited": "Improper escaping of LangChain’s internal marker key during serialization"
}
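One concrete example of the hardening the recommendations above point to is shrinking what an exfiltration primitive can reach. The sketch below launches an agent worker with an allowlisted environment; the `ALLOWED_ENV` names and the `agent_worker.py` entrypoint are illustrative assumptions, not LangChain APIs.

```python
# Hedged "reduce blast radius" example: run agent processes with only the
# environment variables they need, so a stolen environment yields little.
# ALLOWED_ENV and agent_worker.py are illustrative, not part of LangChain.
import os
import subprocess

ALLOWED_ENV = {"PATH", "HOME", "LANG", "OPENAI_API_KEY"}

def scrubbed_env() -> dict[str, str]:
    """Keep only allowlisted variables for the agent process."""
    return {k: v for k, v in os.environ.items() if k in ALLOWED_ENV}

# Cloud credentials, database connection strings, and vector DB secrets
# stay out of the agent's process environment entirely.
subprocess.run(["python", "agent_worker.py"], env=scrubbed_env(), check=True)
```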