300 Million Private AI Chat Messages Exposed in Major Firebase Misconfiguration
A critical security lapse in the popular AI chat app Chat & Ask AI exposed 300 million private messages from 25 million users, revealing deeply personal conversations with AI models like ChatGPT, Claude, and Gemini. The breach, discovered by independent security researcher Harry and reported to 404 Media, stemmed from a basic misconfiguration in the app’s Google Firebase database rather than a malicious hack.
The app, available on Google Play and Apple’s App Store, lets users interact with third-party AI models. Its Firebase backend, however, was left publicly accessible due to improper security rules. While Firebase databases are secure by default, developers must configure access controls themselves; in this case, the rules were set to `allow read: if true;`, effectively leaving the database unlocked. With minimal effort, anyone with a Firebase login could access the entire dataset, including timestamps, user settings, AI model preferences, and custom chatbot names.
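In Cloud Firestore's rules language, the gap between the reported open configuration and a locked-down one is only a few lines. The sketch below contrasts the open rule with one that restricts each user to their own documents; the `/users/{userId}` path is an illustrative assumption, not the app's actual schema:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Insecure (as reported): anyone on the internet can read every document.
    // match /{document=**} {
    //   allow read: if true;
    // }

    // Locked down: a signed-in user may read and write only their own data.
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```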
A sample of 60,000 users and 1 million messages verified the exposure, which affected roughly 25 million accounts, at least half of the app’s 50 million claimed users. While no passwords or financial data were exposed, the leaked conversations included highly sensitive content: suicide notes, self-harm methods, drug manufacturing instructions, and hacking techniques. Many users treated the AI as a confidant, sharing intimate details under the assumption of privacy.
The incident underscores the risks of "wrapper" apps: third-party services that resell access to major AI models (e.g., OpenAI, Google) without implementing equivalent security measures. Firebase misconfigurations are a recurring issue, with past breaches affecting apps like Fortnite trackers. Developers often prioritize speed to market over security audits, leaving databases vulnerable.
The app’s developers secured the database after being alerted, but the damage was already done. The breach serves as a stark reminder of the security gaps in the AI boom, where convenience often overshadows safeguards. For developers, best practices such as testing Firebase security rules before they reach production, validating them with the Rules simulator, and encrypting sensitive data are critical to preventing similar exposures.
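For the Realtime Database flavor of Firebase, this class of exposure is commonly detected by requesting the database's REST export endpoint: appending `.json` to a database URL returns its contents whenever the rules allow unauthenticated reads. A minimal probe sketch follows; the helper names and `project_id` are illustrative, not from the article:

```python
import urllib.error
import urllib.request


def firebase_export_url(project_id: str) -> str:
    # Realtime Database REST endpoint: appending ".json" to any path
    # returns that subtree as JSON when the rules permit the read.
    return f"https://{project_id}.firebaseio.com/.json"


def is_publicly_readable(project_id: str, timeout: float = 5.0) -> bool:
    # HTTP 200 means the rules allow unauthenticated reads;
    # a 401/403 HTTPError means access is locked down, as it should be.
    try:
        with urllib.request.urlopen(
            firebase_export_url(project_id), timeout=timeout
        ) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

Running `is_publicly_readable` against your own project before launch is a cheap check; a `True` result against production data means the rules need fixing immediately.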
Source: https://cyberpress.org/ai-chat-app-data-breach-exposes/
Google Cloud cybersecurity rating report: https://www.rankiteo.com/company/google-cloud
"id": "GOO1770717378",
"linkid": "google-cloud",
"type": "Vulnerability",
"date": "2/2026",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'customers_affected': '25 million users',
'industry': 'Artificial Intelligence / Chatbot '
'Services',
'name': 'Chat & Ask AI',
'size': '25 million users (50 million claimed users)',
'type': 'Mobile Application'}],
'attack_vector': 'Misconfiguration',
'data_breach': {'number_of_records_exposed': '300 million messages',
'personally_identifiable_information': 'None (no passwords or '
'financial data)',
'sensitivity_of_data': 'High (suicide notes, self-harm '
'methods, drug manufacturing '
'instructions, hacking techniques)',
'type_of_data_compromised': ['Private chat messages',
'User settings',
'AI model preferences',
'Custom chatbot names']},
'description': 'A critical security lapse in the popular AI chat app *Chat & '
'Ask AI* exposed 300 million private messages from 25 million '
'users, revealing deeply personal conversations with AI models '
'like ChatGPT, Claude, and Gemini. The breach stemmed from a '
'basic misconfiguration in the app’s Google Firebase database '
'rather than a malicious hack.',
'impact': {'brand_reputation_impact': 'High (privacy violation, sensitive '
'data exposure)',
'data_compromised': '300 million private messages, timestamps, '
'user settings, AI model preferences, custom '
'chatbot names',
'identity_theft_risk': 'Low (no PII like passwords or financial '
'data exposed)',
'payment_information_risk': 'None',
'systems_affected': 'Google Firebase database'},
'lessons_learned': "Risks of third-party 'wrapper' apps reselling AI model "
'access without equivalent security measures. Firebase '
'misconfigurations are a recurring issue due to developers '
'prioritizing speed over security audits.',
'post_incident_analysis': {'corrective_actions': 'Corrected Firebase security '
'rules to restrict access',
'root_causes': 'Improper Firebase security rules '
'(publicly accessible database due '
'to `allow read: if true;` '
'setting)'},
'recommendations': ['Test Firebase rules in production',
'Use security simulators',
'Encrypt sensitive data',
'Implement proper access controls'],
'references': [{'source': '404 Media'}],
'response': {'containment_measures': 'Database secured after being alerted by '
'researcher',
'remediation_measures': 'Corrected Firebase security rules'},
'title': '300 Million Private AI Chat Messages Exposed in Major Firebase '
'Misconfiguration',
'type': 'Data Breach',
'vulnerability_exploited': 'Improper Firebase security rules (publicly '
'accessible database)'}