GitHub MCP Server Vulnerable to Prompt Injection Attacks, Researchers Warn
Researchers at Zurich-based Invariant Labs have identified a prompt injection vulnerability in GitHub’s Model Context Protocol (MCP) server, which could expose sensitive code from private repositories. The issue stems from an architectural flaw rather than a coding error, allowing attackers to manipulate AI agents into leaking confidential data.
The attack scenario involves a developer working across both public and private repositories, with an AI agent granted access to the private ones. An attacker posts a malicious issue in a public repository—containing hidden prompts instructing the AI to extract and publish private repository data. When the developer tasks the AI with reviewing the public repository, the agent unknowingly executes the malicious instructions, exposing private code.
While the MCP server operates as designed, the attack is low-complexity and high-impact, with no straightforward fix. Researchers suggest mitigations, such as limiting AI agents to one repository per session and enforcing least-privilege access tokens, but these are not foolproof. Open-source developer Simon Willison described the flaw as a "lethal trifecta" for prompt injection, combining private data access, malicious instruction execution, and exfiltration capabilities.
Prompt injection—where malicious instructions are embedded in seemingly benign data—remains difficult to prevent due to the unstructured nature of AI interactions. Despite warnings dating back over two years, effective defenses are still lacking. A proposed MCP server update would filter contributions to only those from users with push access, but this could block legitimate input.
GitHub’s MCP server, currently in preview (v0.4.0), is open-source, and the vulnerability highlights broader challenges in securing AI-driven development tools. The incident underscores the need for stricter access controls and better prompt injection defenses as AI integration in software development expands.
GitHub TPRM report: https://www.rankiteo.com/company/github
Invariant Labs TPRM report: https://www.rankiteo.com/company/invariant-labs
"id": "gitinv1766037664",
"linkid": "github, invariant-labs",
"type": "Vulnerability",
"date": "5/2025",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'customers_affected': 'Developers using GitHub MCP '
'server with AI agents '
'configured for private '
'repositories',
'industry': 'Software Development',
'location': 'Global',
'name': 'GitHub',
'size': 'Large',
'type': 'Technology Platform'}],
'attack_vector': 'Malicious issue posted in a public repository containing '
'embedded prompts',
'customer_advisories': "Developers are advised to avoid 'always allow' "
'policies for AI agent actions and to restrict agent '
'access to one repository per session. Additional '
'tools like Guardrails and MCP-scan can provide extra '
'protection.',
'data_breach': {'data_exfiltration': 'Yes (via malicious prompts in public '
'repositories)',
'file_types_exposed': 'Code files, repository metadata',
'sensitivity_of_data': 'High (private repository data)',
'type_of_data_compromised': 'Source code, repository '
'information'},
'description': 'Researchers at Invariant Labs discovered a prompt injection '
'vulnerability in GitHub’s MCP (Model Context Protocol) '
'server, which could result in code leaking from private '
'repositories. The issue stems from an architectural flaw '
'where an AI agent with access to both public and private '
'repositories may follow malicious prompts in a public '
'repository to exfiltrate private data. The attack has low '
'complexity and high potential harm, with no easy '
'architectural solution currently available.',
'impact': {'brand_reputation_impact': 'Potential reputational damage to '
'GitHub and affected developers',
'data_compromised': 'Private repository code and information',
'operational_impact': 'Potential exposure of sensitive code and '
'data from private repositories',
'systems_affected': 'GitHub MCP server, AI agents configured with '
'repository access'},
'investigation_status': 'Ongoing',
'lessons_learned': 'Prompt injection vulnerabilities in AI systems are '
'difficult to mitigate due to their unstructured nature. '
'Developers must exercise caution when configuring AI '
"agents with repository access, avoiding 'always allow' "
'policies and implementing least-privilege principles. '
'Confirmation fatigue can undermine security protections.',
'post_incident_analysis': {'corrective_actions': 'GitHub could implement '
'stricter access controls, '
'such as limiting AI agents '
'to one repository per '
'session and filtering '
'contributions based on user '
'permissions. Developers '
'should adopt '
'least-privilege principles '
"and avoid 'always allow' "
'policies.',
'root_causes': 'Architectural flaw in GitHub MCP '
'server allowing AI agents to '
'access and exfiltrate data from '
'private repositories via malicious '
'prompts in public repositories. '
'Lack of strict access controls and '
'confirmation fatigue among '
'developers.'},
'recommendations': ['Restrict AI agents to one repository per session',
'Use least-privilege access tokens for AI agents',
'Implement additional auditing and control tools like '
"Invariant Labs' Guardrails and MCP-scan",
"Avoid 'always allow' confirmation policies for AI agent "
'actions',
'Carefully review and approve all AI agent tool '
'invocations',
'Consider filtering AI agent access to contributions from '
'users with push access to repositories'],
'references': [{'source': 'Invariant Labs Research'},
{'source': "Simon Willison's Analysis"},
{'source': 'GitHub MCP Server GitHub Repository'}],
'response': {'remediation_measures': 'Mitigation strategies include requiring '
'AI agents to access only one repository '
'per session, using least-privilege '
'access tokens, and implementing '
'additional controls like Invariant '
"Labs' Guardrails and MCP-scan product"},
'stakeholder_advisories': 'Developers using GitHub MCP server should review '
'their AI agent configurations and implement '
'recommended mitigations to prevent private data '
'exposure.',
'title': 'Prompt Injection Vulnerability in GitHub’s MCP Server Leading to '
'Private Repository Code Leak',
'type': 'Prompt Injection',
'vulnerability_exploited': 'Architectural flaw in GitHub MCP server allowing '
'AI agents to access and exfiltrate data from '
'private repositories'}