OpenAI

OpenAI

Tenable Research uncovered seven critical security flaws in OpenAI’s **ChatGPT (including GPT-4o and GPT-5)**, enabling attackers to **steal private user data** and **gain persistent control** over the AI system. The vulnerabilities leverage **prompt injection**—particularly **indirect prompt injection**—where malicious instructions are hidden in external sources (e.g., blog comments, search-indexed websites) to manipulate ChatGPT without user interaction. Techniques like **0-click attacks via search**, **safety bypasses using trusted Bing tracking links**, and **conversation/memory injection** allow attackers to **exfiltrate sensitive data**, **bypass URL protections**, and **embed persistent threats** in the AI’s memory.The flaws demonstrate how attackers can **trick the AI into executing unauthorized actions**, such as **phishing users**, **leaking private conversations**, or **maintaining long-term access** to compromised accounts. While OpenAI is patching these issues, the research underscores a **systemic risk** in LLM security, with experts warning that **prompt injection remains an unsolved challenge** for AI-driven systems. The exposure threatens **millions of users’ data integrity**, erodes trust in AI safety mechanisms, and highlights the urgency for **context-aware security solutions** to mitigate such attacks.

Source: https://hackread.com/chatgpt-vulnerabilities-hackers-hijack-memory/

OpenAI cybersecurity rating report: https://www.rankiteo.com/company/openai

"id": "ope3692336110625",
"linkid": "openai",
"type": "Vulnerability",
"date": "11/2025",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization’s existence"
{'affected_entities': [{'customers_affected': 'Millions of ChatGPT users '
                                              'globally',
                        'industry': 'Artificial Intelligence',
                        'location': 'San Francisco, California, USA',
                        'name': 'OpenAI',
                        'size': 'Large (1,000+ employees)',
                        'type': 'Technology Company'}],
 'attack_vector': ['Indirect Prompt Injection (hidden in comments/blogs)',
                   '0-Click Attack via Search (malicious indexed websites)',
                   'Safety Bypass (trusted Bing.com tracking links)',
                   'Conversation Injection (self-tricking AI via memory '
                   'manipulation)',
                   'Memory Injection (persistent control)'],
 'customer_advisories': ['Users advised to avoid interacting with untrusted '
                         'external content via ChatGPT'],
 'data_breach': {'data_exfiltration': ['Demonstrated via PoC (e.g., Bing.com '
                                       'tracking links)'],
                 'personally_identifiable_information': 'Potential (depends on '
                                                        'user inputs)',
                 'sensitivity_of_data': 'High (user interactions, potentially '
                                        'sensitive queries)',
                 'type_of_data_compromised': ['Private User Data',
                                              'Potentially PII (if '
                                              'exfiltrated)']},
 'description': 'Tenable Research uncovered seven security vulnerabilities in '
                'OpenAI’s ChatGPT (including GPT-5) that enable attackers to '
                'steal private user data and gain persistent control over the '
                'AI chatbot. The flaws leverage prompt injection techniques, '
                'including indirect prompt injection via hidden comments or '
                'indexed websites, bypassing safety features like `url_safe` '
                'and exploiting memory injection for long-term threats. '
                'Proof-of-Concept (PoC) attacks demonstrated phishing, data '
                'exfiltration, and self-tricking AI behaviors, posing risks to '
                'millions of LLM users. OpenAI is addressing the issues, but '
                'prompt injection remains a systemic challenge for AI '
                'security.',
 'impact': {'brand_reputation_impact': ['High (Erosion of trust in AI safety)',
                                        'Negative media coverage'],
            'data_compromised': ['Private User Data',
                                 'Potential PII (via exfiltration)'],
            'identity_theft_risk': ['High (if PII exfiltrated)'],
            'operational_impact': ['Compromised AI Responses',
                                   'Loss of User Trust',
                                   'Potential Misuse of AI for Malicious '
                                   'Actions'],
            'systems_affected': ['ChatGPT (GPT-4o, GPT-5)',
                                 'LLM-Powered Systems Using ChatGPT APIs']},
 'initial_access_broker': {'backdoors_established': ['Memory Injection '
                                                     '(persistent control)'],
                           'entry_point': ['Malicious comments in blogs',
                                           'Indexed websites with hidden '
                                           'prompts'],
                           'high_value_targets': ['ChatGPT user sessions',
                                                  'Sensitive user queries']},
 'investigation_status': 'Ongoing (OpenAI addressing vulnerabilities; prompt '
                         'injection remains unresolved)',
 'lessons_learned': ['Prompt injection remains a systemic risk for LLMs, '
                     'requiring context-aware security solutions.',
                     'Indirect attack vectors (e.g., hidden comments, indexed '
                     'websites) exploit trust in external sources.',
                     'Safety features like `url_safe` can be bypassed via '
                     'trusted domains (e.g., Bing.com).',
                     'Memory manipulation enables persistent threats, '
                     'necessitating runtime protections.',
                     'Collaboration with security researchers (e.g., Tenable) '
                     'is critical for proactive defense.'],
 'motivation': ['Data Theft',
                'Persistent System Control',
                'Exploitation of AI Trust Mechanisms'],
 'post_incident_analysis': {'corrective_actions': ['OpenAI patching specific '
                                                   'vulnerabilities (e.g., '
                                                   'memory injection).',
                                                   'Research into '
                                                   'context-aware defenses for '
                                                   'prompt injection.',
                                                   'Collaboration with '
                                                   'security firms (e.g., '
                                                   'Tenable) for ongoing '
                                                   'testing.',
                                                   'Potential redesign of '
                                                   'safety features to prevent '
                                                   'domain-based bypasses.'],
                            'root_causes': ['Insufficient input sanitization '
                                            'for indirect prompt injection.',
                                            'Over-reliance on trust in '
                                            'external sources (e.g., indexed '
                                            'websites).',
                                            'Weaknesses in safety features '
                                            '(e.g., `url_safe` bypass via '
                                            'Bing.com links).',
                                            'Lack of runtime protections '
                                            'against memory manipulation.',
                                            'Display bugs hiding malicious '
                                            'instructions in code blocks.']},
 'recommendations': ['Implement context-based security controls for LLMs to '
                     'detect and block prompt injection.',
                     'Enhance input validation for external sources (e.g., '
                     'websites, comments) processed by AI.',
                     'Monitor for anomalous AI behaviors (e.g., self-injected '
                     'instructions, hidden code blocks).',
                     'Adopt zero-trust principles for AI interactions, '
                     'assuming external inputs may be malicious.',
                     'Invest in AI-specific security tools that analyze both '
                     'code and environmental risks.',
                     'Educate users about risks of interacting with '
                     'AI-generated content from untrusted sources.',
                     'Regularly audit LLM safety features (e.g., `url_safe`) '
                     'for bypass vulnerabilities.'],
 'references': [{'source': 'Tenable Research Report'},
                {'source': 'Hackread.com',
                 'url': 'https://www.hackread.com/7-chatgpt-flaws-steal-data-persistent-control/'}],
 'response': {'communication_strategy': ['Public disclosure via Tenable '
                                         'Research report',
                                         'Media statements (e.g., '
                                         'Hackread.com)'],
              'containment_measures': ['Patching vulnerabilities (ongoing)',
                                       'Enhancing prompt injection defenses'],
              'enhanced_monitoring': ['Likely (for prompt injection attempts)'],
              'incident_response_plan_activated': 'Yes (OpenAI notified and '
                                                  'working on fixes)',
              'third_party_assistance': ['Tenable Research (vulnerability '
                                         'disclosure)']},
 'stakeholder_advisories': ['Companies using generative AI warned about prompt '
                            'injection risks (via DryRun Security CEO)'],
 'title': 'Seven Security Flaws in OpenAI’s ChatGPT (Including GPT-5) Expose '
          'Users to Data Theft and Persistent Control',
 'type': ['Vulnerability Exploitation',
          'Prompt Injection',
          'Data Exfiltration',
          'Persistent Threat'],
 'vulnerability_exploited': ['Prompt Injection (indirect)',
                             'Weakness in `url_safe` feature (Bing.com '
                             'tracking link evasion)',
                             'Code block display bug (hiding malicious '
                             'instructions)',
                             'Memory Injection (persistent threat mechanism)']}
Great! Next, complete checkout for full access to Rankiteo Blog.
Welcome back! You've successfully signed in.
You've successfully subscribed to Rankiteo Blog.
Success! Your account is fully activated, you now have access to all content.
Success! Your billing info has been updated.
Your billing was not updated.