Critical RCE Vulnerability in Hugging Face’s LeRobot Exposes AI and Robotics Systems
A severe remote code execution (RCE) vulnerability, tracked as CVE-2026-25874 (CVSS 9.8), has been discovered in Hugging Face’s LeRobot, an open-source robotics machine learning framework with over 21,500 GitHub stars. The flaw allows unauthenticated attackers to execute arbitrary system commands on vulnerable deployments, posing a significant risk to AI and research environments leveraging distributed GPU-based inference.
The vulnerability stems from LeRobot’s asynchronous inference architecture, in which policy computations are offloaded to a GPU-backed, gRPC-based PolicyServer. The server uses Python’s pickle.loads() function to deserialize incoming data across multiple RPC endpoints, including SendPolicyInstructions and SendObservations, without proper validation. Since pickle inherently permits arbitrary code execution during deserialization, malicious payloads can trigger system-level commands before any type checks are enforced.
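The core hazard is easy to demonstrate in isolation. The sketch below (illustrative only, not LeRobot's actual code) uses pickle's __reduce__ protocol: the attacker-controlled bytes tell pickle which callable to invoke during deserialization, so the code runs inside pickle.loads() itself, before the caller can inspect the result. A harmless recording function stands in for something like os.system:

```python
import pickle

calls = []

def record(msg):
    # Stand-in for a dangerous call such as os.system.
    calls.append(msg)

class Payload:
    def __reduce__(self):
        # Tells pickle: "to reconstruct this object, call record(...)".
        return (record, ("code ran during deserialization",))

blob = pickle.dumps(Payload())

# record() executes here, inside loads(), before any type check can run.
obj = pickle.loads(blob)
print(calls)
```

Because the callable fires during reconstruction, validating the returned object afterwards (as the vulnerable endpoints reportedly did) comes too late.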
Compounding the risk, the gRPC service is configured with add_insecure_port(), exposing communications without TLS or authentication. While LeRobot binds to localhost by default, production deployments often expose the service to 0.0.0.0, enabling remote exploitation. Attackers with network access can scan for exposed instances and deliver crafted payloads without authentication, making the flaw highly scalable.
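The difference between the safe default and the exposed production configuration comes down to the bind address. A minimal stdlib sketch (using raw sockets rather than LeRobot's gRPC setup) shows the two cases:

```python
import socket

# Loopback bind: the OS rejects connections arriving on any other
# interface, so only processes on the same host can reach the port.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))   # port 0 = pick any free port
loop_addr = loopback.getsockname()

# Wildcard bind: the same port is now reachable from every network
# interface, which is how exposed deployments become remotely scannable.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
wildcard.bind(("0.0.0.0", 0))
wild_addr = wildcard.getsockname()

print(loop_addr, wild_addr)
loopback.close()
wildcard.close()
```

With an unauthenticated, plaintext gRPC port behind the wildcard bind, any host that can reach the port can deliver a pickle payload.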
Security researcher chocapikk identified that the vulnerability arises from unsafe deserialization occurring before validation, allowing malicious objects to execute even if they are later rejected. Notably, the affected code sections carried #nosec comments, indicating that developers had suppressed security-linter warnings for the risky calls despite the known danger.
To mitigate CVE-2026-25874, organizations are advised to:
- Replace pickle with secure alternatives like JSON, native protobuf fields, or Hugging Face’s safetensors.
- Enable TLS encryption by switching to add_secure_port().
- Implement gRPC authentication via interceptors and token-based access controls.
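The first recommendation can be sketched as follows. Unlike pickle, json.loads() can only yield plain data (dicts, lists, strings, numbers) and cannot execute code, so validation can safely happen after parsing. The field names here are hypothetical, not LeRobot's actual wire format:

```python
import json

# Hypothetical observation schema; illustrative field names only.
ALLOWED_FIELDS = {"timestamp": float, "joint_positions": list}

def deserialize_observation(raw: bytes) -> dict:
    """Parse untrusted bytes as JSON and validate before use."""
    obj = json.loads(raw.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object")
    for field, expected_type in ALLOWED_FIELDS.items():
        if field not in obj or not isinstance(obj[field], expected_type):
            raise ValueError(f"missing or malformed field: {field}")
    return obj

msg = json.dumps({"timestamp": 1.5, "joint_positions": [0.1, 0.2]}).encode()
obs = deserialize_observation(msg)
print(obs)
```

For tensor payloads specifically, safetensors serves the same role: it stores raw tensor data plus a small header, with no mechanism for embedding executable objects.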
The incident underscores persistent security gaps in machine learning frameworks, where rapid prototyping often overrides secure coding practices. Despite Hugging Face’s development of safetensors to address serialization risks, the flaw highlights inconsistent security implementation in distributed AI systems. As ML frameworks integrate deeper into production and robotics, secure design principles must become a foundational requirement, particularly for architectures handling untrusted network input.
Source: https://cyberpress.org/hugging-face-lerobot-vulnerability/
Hugging Face TPRM report: https://www.rankiteo.com/company/huggingface
"id": "hug1777387852",
"linkid": "huggingface",
"type": "Vulnerability",
"date": "4/2026",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{
  "affected_entities": [
    {
      "customers_affected": "Users of LeRobot framework (over 21,500 GitHub stars)",
      "industry": "Artificial Intelligence, Machine Learning, Robotics",
      "name": "Hugging Face",
      "type": "Company"
    }
  ],
  "attack_vector": "Network-based exploitation via exposed gRPC service",
  "description": "A severe remote code execution (RCE) vulnerability, tracked as CVE-2026-25874 (CVSS 9.8), has been discovered in Hugging Face’s LeRobot, an open-source robotics machine learning framework. The flaw allows unauthenticated attackers to execute arbitrary system commands on vulnerable deployments, posing a significant risk to AI and research environments leveraging distributed GPU-based inference.",
  "impact": {
    "brand_reputation_impact": "Negative impact on Hugging Face’s reputation due to security flaws in open-source framework",
    "operational_impact": "Potential unauthorized system command execution, compromise of AI/robotics environments",
    "systems_affected": "AI and robotics systems using Hugging Face’s LeRobot framework"
  },
  "lessons_learned": "The incident underscores persistent security gaps in machine learning frameworks, where rapid prototyping often overrides secure coding practices. Secure design principles must become a foundational requirement for architectures handling untrusted network input.",
  "post_incident_analysis": {
    "corrective_actions": [
      "Secure deserialization practices",
      "TLS and authentication for gRPC services",
      "Network segmentation and monitoring"
    ],
    "root_causes": [
      "Unsafe deserialization via Python's `pickle.loads()` without validation",
      "Exposure of gRPC service via `add_insecure_port()` (no TLS/authentication)",
      "Production deployments binding to `0.0.0.0` instead of localhost",
      "Bypassing of security linter warnings (`#nosec` comments)"
    ]
  },
  "recommendations": [
    "Replace `pickle` with secure alternatives like JSON, native protobuf fields, or Hugging Face’s `safetensors`.",
    "Enable TLS encryption by switching to `add_secure_port()`.",
    "Implement gRPC authentication via interceptors and token-based access controls.",
    "Adopt secure coding practices in AI/ML frameworks to prevent unsafe deserialization."
  ],
  "references": [
    {"source": "Security researcher chocapikk"}
  ],
  "response": {
    "containment_measures": [
      "Replace `pickle` with secure alternatives (JSON, protobuf, `safetensors`)",
      "Enable TLS encryption via `add_secure_port()`",
      "Implement gRPC authentication via interceptors and token-based access controls"
    ],
    "enhanced_monitoring": "Recommended for exposed gRPC services",
    "network_segmentation": "Recommended for production deployments",
    "remediation_measures": [
      "Secure deserialization practices",
      "Network segmentation for gRPC services",
      "Enhanced monitoring of exposed endpoints"
    ]
  },
  "title": "Critical RCE Vulnerability in Hugging Face’s LeRobot Exposes AI and Robotics Systems",
  "type": "Remote Code Execution (RCE)",
  "vulnerability_exploited": "CVE-2026-25874 (Unsafe deserialization via Python's `pickle.loads()` in LeRobot's gRPC PolicyServer)"
}