GitHub (Microsoft)

GitHub (Microsoft)

GitHub’s **Copilot Chat**, an AI-powered coding assistant, was found vulnerable to a critical flaw named **CamoLeak** (CVSS 9.6), allowing attackers to exfiltrate secrets, private source code, and unpublished vulnerability details from repositories. The exploit leveraged GitHub’s invisible markdown comments in pull requests or issues—content hidden from human reviewers but parsed by Copilot Chat. By embedding malicious prompts, attackers tricked the AI into searching for sensitive data (e.g., API keys, tokens, zero-day descriptions) and encoding it as sequences of 1x1 pixel images via GitHub’s **Camo image-proxy service**. The attack bypassed GitHub’s **Content Security Policy (CSP)** by mapping characters to pre-generated Camo URLs, enabling covert data reconstruction through observed image fetch patterns. Proof-of-concept demonstrations extracted **AWS keys, security tokens, and private zero-day exploit notes**—material that could be weaponized for further attacks. GitHub mitigated the issue by disabling image rendering in Copilot Chat (August 14) and blocking Camo-based exfiltration, but the incident highlights risks of AI-assisted workflows expanding attack surfaces. Unauthorized access to proprietary code and vulnerability research poses severe threats to intellectual property and supply-chain security.

Source: https://www.theregister.com/2025/10/09/github_copilot_chat_vulnerability/

TPRM report: https://www.rankiteo.com/company/github

"id": "git3492034100925",
"linkid": "github",
"type": "Vulnerability",
"date": "8/2025",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'customers_affected': 'Developers/Organizations using '
                                              'Copilot Chat with private '
                                              'repositories',
                        'industry': 'Software Development/DevOps',
                        'location': 'San Francisco, California, USA',
                        'name': 'GitHub (Microsoft)',
                        'size': 'Large (10,000+ employees)',
                        'type': 'Technology Company'}],
 'attack_vector': ['Hidden Markdown Comments in Pull Requests/Issues',
                   'AI Prompt Injection',
                   'Camo Image-Proxy Abuse'],
 'customer_advisories': ['GitHub Security Advisory (2024-08-14)'],
 'data_breach': {'data_exfiltration': True,
                 'file_types_exposed': ['Markdown Files',
                                        'Code Files',
                                        'Private Issues/Pull Requests'],
                 'sensitivity_of_data': 'High (Includes zero-day exploit '
                                        'details and authentication '
                                        'credentials)',
                 'type_of_data_compromised': ['Source Code',
                                              'Secrets (API Keys, Tokens)',
                                              'Unpublished Vulnerability '
                                              'Research']},
 'date_publicly_disclosed': '2024-08-14',
 'date_resolved': '2024-08-14',
 'description': "GitHub's Copilot Chat, an AI-powered coding assistant, was "
                'found to have a critical vulnerability (dubbed **CamoLeak**) '
                'that allowed attackers to exfiltrate secrets, private source '
                'code, and unpublished vulnerability descriptions from '
                "repositories. The flaw exploited Copilot Chat's parsing of "
                "'invisible' markdown comments in pull requests or "
                'issues—content not visible in the standard UI but accessible '
                'to the chatbot. Attackers could embed malicious prompts '
                'instructing Copilot to search for sensitive data (e.g., API '
                'keys, tokens, zero-day descriptions) and exfiltrate it via a '
                "covert channel using GitHub's Camo image-proxy service. The "
                'vulnerability was scored **9.6 on the CVSS scale** and '
                'demonstrated in a proof-of-concept that extracted AWS keys, '
                'security tokens, and unpublished exploit details.',
 'impact': {'brand_reputation_impact': 'Moderate (Trust in AI-assisted coding '
                                       'tools undermined)',
            'data_compromised': ['API Keys',
                                 'Security Tokens',
                                 'Private Source Code',
                                 'Unpublished Zero-Day Vulnerability '
                                 'Descriptions'],
            'identity_theft_risk': 'High (If stolen tokens/keys are abused)',
            'operational_impact': 'High (Potential for stolen '
                                  'credentials/exploits to enable further '
                                  'attacks)',
            'systems_affected': ['GitHub Copilot Chat',
                                 'Private/Internal Repositories']},
 'initial_access_broker': {'entry_point': 'Hidden markdown comments in GitHub '
                                          'pull requests/issues',
                           'high_value_targets': ['Private repositories',
                                                  'Unpublished vulnerability '
                                                  'research',
                                                  'Authentication secrets']},
 'investigation_status': 'Mitigated (Exfiltration vector blocked; long-term '
                         'fix pending)',
 'lessons_learned': 'AI-assisted tools like Copilot Chat expand the attack '
                    'surface by introducing new input channels (e.g., hidden '
                    'markdown) that bypass human review. Content Security '
                    'Policies (CSP) and proxy services (e.g., Camo) can be '
                    'weaponized for covert exfiltration if not properly '
                    'restricted. Developer workflows integrating AI require '
                    'stricter input validation and output monitoring to '
                    'prevent prompt injection and data leakage.',
 'motivation': ['Espionage',
                'Credential Theft',
                'Exploit Development (Zero-Day Theft)'],
 'post_incident_analysis': {'corrective_actions': ['Disabled image rendering '
                                                   'in Copilot Chat.',
                                                   'Blocked Camo-based '
                                                   'exfiltration routes.',
                                                   'Planned long-term fixes to '
                                                   'restrict AI tool access '
                                                   'and harden input '
                                                   'validation.'],
                            'root_causes': ["Copilot Chat's over-permissive "
                                            'access to repository content '
                                            '(inherited from user '
                                            'permissions).',
                                            'Lack of input sanitization for '
                                            "'invisible' markdown comments.",
                                            'Camo image-proxy service '
                                            'repurposed as a covert '
                                            'exfiltration channel.',
                                            'AI tool design assuming trust in '
                                            'contextual inputs without '
                                            'human-visible cues.']},
 'recommendations': ['Audit AI tool permissions to limit access to sensitive '
                     'data.',
                     "Sanitize all inputs (including 'invisible' content like "
                     'markdown comments) before processing by AI assistants.',
                     'Disable unnecessary features (e.g., image rendering) in '
                     'AI tools handling sensitive data.',
                     'Implement behavioral detection for anomalous AI-assisted '
                     'actions (e.g., unusual file access patterns).',
                     'Educate developers on risks of AI prompt injection and '
                     'social engineering via hidden content.'],
 'references': [{'source': 'The Register',
                 'url': 'https://www.theregister.com/2024/08/14/github_copilot_chat_vulnerability/'},
                {'source': 'Legit Security Disclosure (HackerOne)'}],
 'response': {'containment_measures': ['Disabled image rendering in Copilot '
                                       'Chat (2024-08-14)',
                                       'Blocked Camo image-proxy exfiltration '
                                       'route'],
              'incident_response_plan_activated': True,
              'remediation_measures': ['Long-term fix under development'],
              'third_party_assistance': ['Legit Security (Researcher Omer '
                                         'Mayraz)',
                                         'HackerOne (Vulnerability '
                                         'Disclosure)']},
 'title': 'CamoLeak: Critical Vulnerability in GitHub Copilot Chat Enables '
          'Code and Secret Exfiltration',
 'type': ['Data Exfiltration', 'AI-Assisted Attack', 'Supply Chain Risk'],
 'vulnerability_exploited': "CVE-Pending (CamoLeak: Copilot Chat's parsing of "
                            'invisible markdown + Camo image-proxy '
                            'exfiltration)'}
Great! Next, complete checkout for full access to Rankiteo Blog.
Welcome back! You've successfully signed in.
You've successfully subscribed to Rankiteo Blog.
Success! Your account is fully activated, you now have access to all content.
Success! Your billing info has been updated.
Your billing was not updated.