GitLab

GitLab

GitLab's coding assistant Duo was found to be vulnerable to malicious AI prompts hidden in comments, source code, merge request descriptions, and commit messages from public repositories. This allowed researchers to trick the chatbot into making malicious code suggestions, share malicious links, and inject rogue HTML code in responses that stealthily leaked code from private projects. GitLab patched the HTML injection, but the incident highlighted the importance of treating AI tools as part of an app's attack surface.

Source: https://www.csoonline.com/article/3992845/prompt-injection-flaws-in-gitlab-duo-highlights-risks-in-ai-assistants.html

TPRM report: https://scoringcyber.rankiteo.com/company/gitlab

"id": "git816052225",
"linkid": "gitlab",
"type": "Vulnerability",
"date": "5/2025",
"severity": "50",
"impact": "",
"explanation": "Attack limited on finance or reputation: Attack with no impact but news about this attack in the press"
{'affected_entities': [{'customers_affected': None,
                        'industry': 'Technology',
                        'location': None,
                        'name': 'GitLab',
                        'size': None,
                        'type': 'Software Development Platform'}],
 'attack_vector': 'Malicious AI Prompts',
 'data_breach': {'type_of_data_compromised': 'Private project code'},
 'description': 'GitLab’s coding assistant Duo can parse malicious AI prompts '
                'hidden in comments, source code, merge request descriptions '
                'and commit messages from public repositories, researchers '
                'found. This technique allowed them to trick the chatbot into '
                'making malicious code suggestions to users, share malicious '
                'links and inject rogue HTML code in responses that stealthily '
                'leaked code from private projects.',
 'impact': {'data_compromised': 'Code from private projects'},
 'initial_access_broker': {'entry_point': 'Public repositories'},
 'lessons_learned': 'AI tools are part of your app’s attack surface now. If '
                    'they read from the page, that input needs to be treated '
                    'like any other user-supplied data — untrusted, messy, and '
                    'potentially dangerous.',
 'motivation': 'Data Leakage, Malicious Code Injection',
 'references': [{'date_accessed': None,
                 'source': 'Legit Security',
                 'url': None}],
 'response': {'remediation_measures': ['Patched the HTML injection']},
 'title': 'GitLab Coding Assistant Duo Vulnerability',
 'type': 'Prompt Injection',
 'vulnerability_exploited': 'HTML Injection'}
Great! Next, complete checkout for full access to Rankiteo Blog.
Welcome back! You've successfully signed in.
You've successfully subscribed to Rankiteo Blog.
Success! Your account is fully activated, you now have access to all content.
Success! Your billing info has been updated.
Your billing was not updated.