OpenAI

OpenAI has fixed a critical vulnerability, dubbed ShadowLeak, in its Deep Research agent, a tool that integrates with services such as Gmail and GitHub to analyze user emails and documents. Researchers at Radware discovered that attackers could exploit the flaw in a zero-click attack: a malicious email containing hidden instructions (e.g., white-on-white text) tricked the AI agent into exfiltrating sensitive data (names, addresses, internal documents) to an attacker-controlled server without any user interaction. The attack bypassed safety checks by framing the exfiltration as a 'compliance validation' request, making it effectively undetectable to victims.

The vulnerability posed a severe risk of unauthorized data exposure, particularly for business customers, because it could extract highly sensitive information (contracts, customer records, PII) from integrated platforms such as Gmail, Google Drive, and SharePoint. OpenAI patched the issue after Radware's disclosure in June 2025 and confirmed there was no evidence of active exploitation. The flaw nonetheless highlights the dangers of prompt injection in autonomous AI tools connected to external data sources, where covert actions can evade traditional security guardrails.
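To illustrate the hidden-instruction technique described above, the sketch below scans an email's HTML for text styled to be invisible to a human reader but still ingested by an AI agent. This is a hypothetical pre-ingestion filter under assumed style patterns, not OpenAI's actual mitigation; the function and class names are illustrative.

```python
from html.parser import HTMLParser
import re

# Styles commonly used to hide injected instructions from human readers
# (assumed patterns for this sketch; real filters would cover more cases).
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none"
    r"|visibility\s*:\s*hidden"
    r"|font-size\s*:\s*0"
    r"|color\s*:\s*#?f{3,6}\b",  # white-on-white text
    re.IGNORECASE,
)

class HiddenTextScanner(HTMLParser):
    """Collects text nested inside elements styled to be invisible."""

    def __init__(self):
        super().__init__()
        self._depth = 0        # nesting level inside a hidden element
        self.hidden_text = []  # suspicious fragments found

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        # Once inside a hidden element, count all nested tags so the
        # matching end tags decrement correctly.
        if self._depth or HIDDEN_STYLE.search(style):
            self._depth += 1

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_instructions(email_html: str) -> list[str]:
    """Return text fragments a human would not see when viewing the email."""
    scanner = HiddenTextScanner()
    scanner.feed(email_html)
    return scanner.hidden_text
```

A filter like this would run before the agent reads the email, flagging the message for review instead of letting concealed instructions reach the model.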

Source: https://therecord.media/openai-fixes-zero-click-shadowleak-vulnerability

TPRM report: https://www.rankiteo.com/company/openai

{"id": "ope5102051091925",
 "linkid": "openai",
 "type": "Vulnerability",
 "date": "6/2025",
 "severity": "85",
 "impact": "4",
 "explanation": "Attack with significant impact involving customer data leaks"}
{'affected_entities': [{'customers_affected': 'Unknown (potential ChatGPT '
                                              'Business users)',
                        'industry': 'Artificial Intelligence',
                        'location': 'San Francisco, California, USA',
                        'name': 'OpenAI',
                        'size': 'Large (1,000+ employees)',
                        'type': 'Technology Company'}],
 'attack_vector': ['Malicious Email (Prompt Injection)',
                   'Autonomous AI Agent Exploitation'],
 'data_breach': {'data_exfiltration': True,
                 'file_types_exposed': ['Emails',
                                        'Text documents',
                                        'Structured/semi-structured data'],
                 'personally_identifiable_information': True,
                 'sensitivity_of_data': 'High',
                 'type_of_data_compromised': ['PII (names, addresses)',
                                              'Business documents (contracts, '
                                              'meeting notes)',
                                              'Emails',
                                              'Customer records']},
 'date_publicly_disclosed': '2025-09-03',
 'date_resolved': '2025-09-03',
 'description': 'OpenAI fixed a vulnerability in ChatGPT’s Deep Research '
                "agent, dubbed 'ShadowLeak' by Radware, which could allow "
                'attackers to exfiltrate sensitive data (e.g., names, '
                'addresses, internal documents) via malicious emails without '
                'user interaction. The exploit leveraged prompt injection in '
                'integrated services like Gmail, GitHub, or cloud storage '
                '(Google Drive, Dropbox, SharePoint). The attack required no '
                'user clicks, leaving no network-level evidence, and bypassed '
                'safety checks by framing requests as legitimate (e.g., '
                "'compliance validation'). Radware disclosed the bug to OpenAI "
                'on June 18, 2025, via Bugcrowd; OpenAI patched it by early '
                'August and marked it resolved on September 3, 2025. No active '
                'exploitation was observed in the wild.',
 'impact': {'brand_reputation_impact': 'Moderate (proactive disclosure '
                                       'mitigated damage)',
            'data_compromised': ['Personal Identifiable Information (PII)',
                                 'Internal Documents',
                                 'Emails',
                                 'Contracts',
                                 'Meeting Notes',
                                 'Customer Records'],
            'identity_theft_risk': 'High (PII exposure)',
            'operational_impact': 'High (covert data exfiltration via '
                                  'autonomous agents)',
            'systems_affected': ['ChatGPT Deep Research Agent',
                                 'Gmail Integration',
                                 'GitHub Integration',
                                 'Google Drive',
                                 'Dropbox',
                                 'SharePoint']},
 'initial_access_broker': {'entry_point': 'Malicious email ingested by Deep '
                                          'Research agent',
                           'high_value_targets': ['PII',
                                                  'Business documents',
                                                  'Customer records']},
 'investigation_status': 'Resolved',
 'lessons_learned': ['Autonomous AI agents introduce novel attack surfaces '
                     '(e.g., zero-click prompt injection).',
                     'Traditional guardrails (e.g., output safety checks) may '
                     'fail to detect covert tool-driven actions.',
                     'Integrations with third-party services (e.g., Gmail, '
                     'GitHub) expand exposure to prompt injection risks.',
                     "Social engineering tactics (e.g., 'compliance "
                     "validation' framing) can bypass AI safety training."],
 'motivation': ['Data Theft', 'Espionage', 'Financial Gain (potential)'],
 'post_incident_analysis': {'corrective_actions': ['Patched prompt injection '
                                                   'vulnerability in Deep '
                                                   'Research agent.',
                                                   'Enhanced safeguards '
                                                   'against autonomous agent '
                                                   'exploits.',
                                                   'Improved collaboration '
                                                   'with security researchers '
                                                   'via bug bounty program.'],
                            'root_causes': ['Insufficient input sanitization '
                                            'for autonomous agent prompts.',
                                            'Over-reliance on output-based '
                                            'safety checks (failed to detect '
                                            'covert actions).',
                                            'Lack of visibility into '
                                            'agent-driven data exfiltration '
                                            'paths.',
                                            'Social engineering '
                                            'vulnerabilities in AI safety '
                                            'training (e.g., bypass via '
                                            "'public data' claims)."]},
 'recommendations': ['Implement stricter input validation for autonomous '
                     'agents interacting with external data sources.',
                     'Enhance logging/monitoring for agent actions to detect '
                     'covert exfiltration attempts.',
                     'Restrict agent access to sensitive connectors (e.g., '
                     'email, cloud storage) by default.',
                     'Develop adversarial testing frameworks for AI agents to '
                     'proactively identify prompt injection vectors.',
                     'Educate users on risks of AI-driven data processing, '
                     "even for 'trusted' tools."],
 'references': [{'source': 'Recorded Future News'},
                {'source': 'Radware Research Report (Gabi Nakibly, Zvika Babo, '
                           'Maor Uziel)'}],
 'response': {'communication_strategy': 'Public disclosure via Recorded Future '
                                        'News; emphasis on bug bounty program',
              'containment_measures': ['Vulnerability patching',
                                       'Safety guardrail enhancements'],
              'enhanced_monitoring': "Likely (implied by 'continual safeguard "
                                     "improvements')",
              'incident_response_plan_activated': True,
              'remediation_measures': ['Prompt injection defenses',
                                       'Autonomous agent behavior '
                                       'restrictions'],
              'third_party_assistance': ['Radware (disclosure)',
                                         'BugCrowd (reporting platform)']},
 'stakeholder_advisories': 'OpenAI confirmed patch via public statement; no '
                           'formal advisory issued.',
 'title': "OpenAI ChatGPT Deep Research 'ShadowLeak' Vulnerability",
 'type': ['Data Exfiltration', 'Prompt Injection', 'Zero-Click Attack'],
 'vulnerability_exploited': 'ShadowLeak (CVE pending)'}
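One of the recommendations above, restricting agent access to external connectors, can be sketched as an egress allow-list check applied before any agent tool fetches a URL. The host list and function name are illustrative assumptions for this sketch, not part of OpenAI's deployment.

```python
from urllib.parse import urlparse

# Illustrative allow-list: hosts the agent's tools may contact.
# In practice this would be configured per deployment.
ALLOWED_HOSTS = {"api.github.com", "www.googleapis.com"}

def egress_permitted(url: str, allowed: set[str] = ALLOWED_HOSTS) -> bool:
    """Return True only if the URL targets an explicitly allowed host.

    An agent would call this before fetching any URL produced while
    processing untrusted content (e.g., an email), so a prompt-injected
    exfiltration URL is refused rather than fetched.
    """
    host = (urlparse(url).hostname or "").lower()
    return host in allowed
```

Default-deny egress of this kind directly addresses the ShadowLeak pattern, where the covert action was an outbound request to an attacker-controlled server.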