In early 2025, researchers at Wiz uncovered a **vulnerable database operated by DeepSeek**, exposing highly sensitive corporate and user data. The breach included **chat histories, secret API keys, backend system details, and proprietary workflows** shared by employees via the platform. The leaked data originated from **shadow AI usage**—employees bypassing sanctioned tools to use DeepSeek’s consumer-grade LLM for tasks involving confidential spreadsheets, internal memos, and potentially trade secrets. While no direct financial fraud or ransomware was confirmed, the exposure of **authentication credentials and backend infrastructure details** created a severe risk of follow-on attacks, such as **spear-phishing, insider impersonation, or supply-chain compromises**. The incident highlighted the dangers of ungoverned AI adoption, where **ephemeral interactions with LLMs accumulate into high-value intelligence for threat actors**. DeepSeek’s database misconfiguration enabled attackers to harvest **years of prompt-engineered data**, including employee thought processes, financial forecasts, and operational strategies—effectively handing adversaries a **‘master key’ to internal systems**. Though DeepSeek patched the vulnerability, the breach underscored how **shadow AI expands attack surfaces silently**, with potential long-term repercussions for intellectual property theft, regulatory noncompliance (e.g., GDPR violations), and reputational damage. The exposure aligned with broader trends where **20% of organizations in an IBM study linked data breaches directly to unapproved AI tool usage**, with average costs exceeding **$670,000 per incident**.
Source: https://www.itpro.com/technology/artificial-intelligence/ai-conversations-security-blind-spot
DeepSeek AI cybersecurity rating report: https://www.rankiteo.com/company/deepseek-ai
"id": "DEE5293552111725",
"linkid": "deepseek-ai",
"type": "Breach",
"date": "6/2025",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'customers_affected': 'Millions (ChatGPT users, '
'including corporate employees)',
'industry': 'Technology',
'location': 'Global (HQ: USA)',
'name': 'OpenAI',
'size': 'Large',
'type': 'AI Developer'},
{'customers_affected': 'Corporate users of Claude',
'industry': 'Technology',
'location': 'Global (HQ: USA)',
'name': 'Anthropic',
'size': 'Medium',
'type': 'AI Developer'},
{'customers_affected': 'Users of DeepSeek’s vulnerable '
'database',
'industry': 'Technology',
'location': 'Global',
'name': 'DeepSeek',
'size': 'Unknown',
'type': 'AI Developer'},
{'customers_affected': 'Organizations using Slack AI',
'industry': 'Technology',
'location': 'Global',
'name': 'Slack (Salesforce)',
'size': 'Large',
'type': 'Enterprise Software'},
{'customers_affected': 'Internal (proprietary code leak '
'in 2023)',
'industry': 'Electronics/Technology',
'location': 'Global (HQ: South Korea)',
'name': 'Samsung',
'size': 'Large',
'type': 'Conglomerate'},
{'customers_affected': '$243,000 scam via AI voice '
'cloning (2019)',
'industry': 'Utilities',
'location': 'UK',
'name': 'Unspecified UK Energy Company',
'size': 'Unknown',
'type': 'Energy'},
{'customers_affected': '90% of companies (MIT Project '
'NANDA 2025)',
'industry': 'All',
'location': 'Global',
'name': 'General Corporate Sector',
'size': 'All',
'type': 'Cross-Industry'}],
'attack_vector': ['Unauthorized AI Tool Usage (Shadow AI)',
'Prompt Engineering Attacks (e.g., Slack AI exploitation)',
'Misconfigured AI Databases (e.g., DeepSeek)',
'Legal Data Retention Orders (e.g., OpenAI’s 2025 lawsuit)',
'Social Engineering via AI-Generated Content (e.g., voice '
'cloning, phishing)'],
'customer_advisories': ['Corporate Clients: Demand transparency from AI '
'vendors on data handling/retention.',
'End Users: Avoid sharing sensitive data with '
'consumer AI tools; use enterprise-approved '
'alternatives.',
'Partners: Include AI data protection clauses in '
'contracts (e.g., right to audit LLM interactions).'],
'data_breach': {'data_encryption': 'Partial (e.g., OpenAI encrypts data at '
'rest, but retention policies create '
'risks)',
'data_exfiltration': 'Confirmed (e.g., DeepSeek, Slack AI, '
'Shadow AI leaks)',
'file_types_exposed': ['Text (prompts/outputs)',
'Spreadsheets (e.g., confidential '
'financial data)',
'Code Repositories',
'Audio (e.g., voice cloning samples)',
'Internal Memos'],
'number_of_records_exposed': 'Unknown (potentially millions '
'across affected platforms)',
'personally_identifiable_information': 'Yes (employee/client '
'records, health data)',
'sensitivity_of_data': 'High (includes PII, financial, '
'proprietary, and health data)',
'type_of_data_compromised': ['Chat Histories',
'Proprietary Code',
'Financial Data',
'Internal Documents',
'Secret Keys',
'Backend System Details',
'Employee/Patient Health Records',
'Trade Secrets']},
'date_publicly_disclosed': '2024-10-01',
'description': "The incident highlights the systemic risks of 'Shadow "
"AI'—unauthorized use of consumer-grade AI tools (e.g., "
'ChatGPT, Claude, DeepSeek) by employees in corporate '
'environments. Sensitive corporate data, including proprietary '
'code, financial records, internal memos, and employee health '
'records, is routinely shared with these tools, expanding '
'attack surfaces. Legal orders (e.g., OpenAI’s 2025 court case '
'with the New York Times) and vulnerabilities (e.g., '
'DeepSeek’s exposed database, Slack AI’s prompt engineering '
'attack) demonstrate how AI interactions can be weaponized by '
'cybercriminals to mimic employees, exfiltrate data, or craft '
'targeted phishing attacks. The lack of governance frameworks '
'(63% of organizations per IBM 2025) exacerbates risks, with '
'breach costs reaching up to $670,000 for high-shadow-AI '
'firms. Regulatory noncompliance (e.g., GDPR) and employee '
'nonadherence to policies (58% admit sharing sensitive data) '
'further compound the threat.',
'impact': {'brand_reputation_impact': 'High (publicized breaches, regulatory '
'actions)',
'customer_complaints': 'Likely (due to privacy violations)',
'data_compromised': ['Proprietary Code (e.g., Samsung 2023 '
'incident)',
'Financial Records (22% of UK employees use '
'shadow AI for financial tasks)',
'Internal Memos/Trade Secrets',
'Employee Health Records',
'Client Data (58% of employees admit sharing '
'sensitive data)',
'Chat Histories (e.g., DeepSeek’s exposed '
'database)',
'Secret Keys/Backend Details'],
'financial_loss': 'Up to $670,000 per breach (IBM 2025); Potential '
'GDPR fines up to €20M or 4% global revenue',
'identity_theft_risk': 'High (AI-generated impersonation attacks)',
'legal_liabilities': ['GDPR Noncompliance (Fines up to €20M)',
'Lawsuits (e.g., New York Times vs. OpenAI '
'2025)',
'Contractual Violations with Clients'],
'operational_impact': ['Loss of Intellectual Property',
'Erosion of Competitive Advantage',
'Disruption of Internal Communications '
'(e.g., AI-drafted memos leaking secrets)',
'Increased Scrutiny from Regulators'],
'payment_information_risk': 'Moderate (22% use shadow AI for '
'financial tasks)',
'revenue_loss': 'Potential 4% global revenue (GDPR fines) + breach '
'costs',
'systems_affected': ['Corporate AI Tools (e.g., Slack AI)',
'Third-Party LLMs (ChatGPT, Claude, DeepSeek)',
'Enterprise Workflows Integrating '
'Unsanctioned AI',
'Legal/Compliance Systems (Data retention '
'conflicts)']},
'initial_access_broker': {'backdoors_established': 'Potential (e.g., '
'AI-trained datasets sold '
'on dark web)',
'data_sold_on_dark_web': 'Likely (e.g., chat '
'histories, proprietary '
'data)',
'entry_point': ['Employee Use of Unsanctioned AI '
'Tools',
'Misconfigured AI Databases (e.g., '
'DeepSeek)',
'Prompt Injection Attacks (e.g., '
'Slack AI)',
'Legal Data Retention Orders (e.g., '
'OpenAI 2025)'],
'high_value_targets': ['Financial Forecasts',
'Product Roadmaps',
'Legal Strategies',
'M&A Plans',
'Employee Health Records'],
'reconnaissance_period': 'Ongoing (years of '
'accumulated prompts in '
'some cases)'},
'investigation_status': 'Ongoing (industry-wide; no single investigation)',
'lessons_learned': ['Shadow AI is pervasive (90% of companies affected, per '
'MIT 2025) and often invisible to IT teams.',
'Employee convenience trumps compliance (58% admit '
'sharing sensitive data; 40% would violate policies for '
'efficiency).',
'AI governance lags behind adoption (63% of organizations '
'lack frameworks, per IBM 2025).',
'Legal risks extend beyond breaches: data retention '
'policies can conflict with lawsuits (e.g., OpenAI 2025).',
'AI platforms’ default settings (e.g., 30-day deletion '
'lags) create unintended compliance gaps.',
'Prompt engineering attacks can bypass traditional '
'security controls (e.g., Slack AI leak).',
'Silent breaches are more damaging: firms may not realize '
'data is compromised until exploited (e.g., AI-generated '
'phishing).'],
'motivation': ['Financial Gain (e.g., $243,000 scam via AI voice cloning in '
'2019)',
'Corporate Espionage',
'Data Harvesting for Dark Web Sales',
'Disruption of Business Operations',
'Exploitation of AI Training Data'],
'post_incident_analysis': {'corrective_actions': ['Mandate AI Lifecycle '
'Governance (IBM’s 4-pillar '
'framework).',
'Deploy AI Firewalls to '
'Block Unauthorized Tools.',
'Enforce ‘Zero Trust’ for '
'AI: Verify All '
'Prompts/Outputs.',
'Conduct Red-Team Exercises '
'for Prompt Injection '
'Attacks.',
'Partner with AI Vendors '
'for Enterprise-Grade '
'Controls (e.g., private '
'LLMs).',
'Establish Cross-Functional '
'AI Risk Committees (IT, '
'Legal, HR).'],
'root_causes': ['Lack of AI-Specific Governance '
'(63% of orgs per IBM 2025).',
'Over-Reliance on Employee '
'Compliance (58% admit policy '
'violations).',
'Default Data Retention in LLMs '
'(e.g., OpenAI’s 30-day deletion '
'lag).',
'Inadequate Vendor Risk Management '
'for AI Tools.',
'Cultural Prioritization of '
'Convenience Over Security (71% UK '
'employees use shadow AI).',
'Technical Gaps: No Runtime '
'Controls for AI Interactions.']},
'recommendations': [{'technical': ['Implement AI runtime controls and network '
'monitoring for unauthorized tool usage.',
'Deploy centralized inventories to track '
'AI models/data flows (IBM’s lifecycle '
'governance).',
'Enforce strict data retention policies '
'(e.g., immediate deletion of temporary '
'chats).',
'Conduct penetration testing for AI '
'systems and prompt injection '
'vulnerabilities.',
'Use adaptive behavioral analysis to '
'detect anomalous AI interactions.']},
{'policy': ['Develop clear AI usage policies with tiered '
'access controls (e.g., ban high-risk tools '
'like DeepSeek).',
'Mandate regular training on shadow AI risks '
'(e.g., Anagram’s compliance programs).',
'Align AI governance with GDPR/CCPA '
'requirements (e.g., data minimization by '
'design).',
'Establish incident response playbooks '
'specifically for AI-related breaches.']},
{'cultural': ['Foster innovation while setting guardrails '
'(Gartner’s 2025 approach: ‘harness shadow '
'AI’).',
'Encourage reporting of unauthorized AI use '
'without punishment (to reduce hiding '
'behavior).',
'Involve employees in vetting AI tools for '
'enterprise adoption (Leigh McMullen’s '
'suggestion).',
'Highlight real-world consequences (e.g., '
'$243K voice-cloning scam) in awareness '
'campaigns.']},
{'strategic': ['Treat AI as a critical third-party risk '
'(e.g., vendor assessments for LLM '
'providers).',
'Budget for AI-specific cyber insurance to '
'cover shadow AI breaches.',
'Collaborate with regulators to shape AI '
'data protection standards.',
'Monitor dark web for leaked AI-trained '
'datasets (e.g., employee prompts sold by '
'initial access brokers).']}],
'references': [{'date_accessed': '2024-10-01',
'source': 'ITPro',
'url': 'https://www.itpro.com'},
{'date_accessed': '2025-01-01',
'source': 'MIT Project NANDA: State of AI in Business 2025'},
{'date_accessed': '2025-06-01',
'source': 'IBM Cost of Data Breach Report 2025',
'url': 'https://www.ibm.com/reports/data-breach'},
{'date_accessed': '2025-05-01',
'source': 'Gartner Security and Risk Management Summit 2025',
'url': 'https://www.gartner.com/en/conferences'},
{'date_accessed': '2025-03-01',
'source': 'Anagram: Employee Compliance Report 2025'},
{'date_accessed': '2025-01-01',
'source': 'Wiz Research: DeepSeek Vulnerability Disclosure',
'url': 'https://www.wiz.io'},
{'date_accessed': '2024-09-01',
'source': 'PromptArmor: Slack AI Exploitation Study',
'url': 'https://www.promptarmor.com'},
{'date_accessed': '2025-06-01',
'source': 'New York Times vs. OpenAI (2025 Court Documents)'}],
'regulatory_compliance': {'fines_imposed': 'Potential: Up to €20M or 4% '
'global revenue (GDPR)',
'legal_actions': ['New York Times vs. OpenAI (2025, '
'data retention lawsuit)',
'Unspecified lawsuits from '
'affected corporations'],
'regulations_violated': ['GDPR (Article 5: Data '
'Minimization)',
'CCPA (California Consumer '
'Privacy Act)',
'Sector-Specific '
'Regulations (e.g., HIPAA '
'for health data)'],
'regulatory_notifications': ['Likely required under '
'GDPR/CCPA for '
'breaches',
'OpenAI’s '
'court-mandated data '
'retention (2025, '
'later reversed)']},
'response': {'communication_strategy': ['Public Disclosures (e.g., OpenAI’s '
'transparency reports)',
'Employee Advisories (e.g., '
'Microsoft’s UK survey findings)',
'Stakeholder Reports (e.g., IBM’s '
'Cost of Data Breach 2025)'],
'containment_measures': ['Blanket AI Bans (e.g., Samsung 2023)',
'Employee Training (e.g., Anagram’s '
'compliance programs)',
'AI Runtime Controls (Gartner 2025 '
'recommendation)'],
'enhanced_monitoring': 'Recommended (e.g., tracking unauthorized '
'AI tool usage)',
'incident_response_plan_activated': 'Partial (e.g., Samsung’s '
'2023 ChatGPT ban)',
'network_segmentation': 'Recommended (IBM/Gartner)',
'recovery_measures': ['AI Policy Overhauls',
'Ethical AI Usage Guidelines',
'Incident Response Playbooks for Shadow '
'AI'],
'remediation_measures': ['Centralized AI Inventory (IBM’s '
'lifecycle governance)',
'Penetration Testing for AI Systems',
'Network Monitoring for Unauthorized AI '
'Usage',
'30-Day Data Deletion Policies '
'(OpenAI’s post-lawsuit commitment)'],
'third_party_assistance': ['Wiz (DeepSeek vulnerability '
'disclosure)',
'PromptArmor (Slack AI attack '
'research)',
'IBM/Gartner (governance '
'frameworks)']},
'stakeholder_advisories': ['CISOs: Prioritize AI governance frameworks and '
'employee training.',
'Legal Teams: Audit AI data retention policies for '
'compliance conflicts.',
'HR: Integrate AI usage into acceptable use '
'policies and disciplinary codes.',
'Board Members: Treat shadow AI as a top-tier '
'enterprise risk.'],
'threat_actor': ['Opportunistic Cybercriminals',
'State-Sponsored Actors (Potential)',
'Insider Threats (Unintentional)',
'Competitors (Industrial Espionage Risk)',
'AI Platform Misconfigurations (e.g., DeepSeek)'],
'title': 'Shadow AI Data Leakage and Privacy Risks in Corporate Environments '
'(2024-2025)',
'type': ['Data Leakage',
'Privacy Violation',
'Shadow IT Risk',
'AI Supply Chain Vulnerability',
'Insider Threat (Unintentional)'],
'vulnerability_exploited': ['Lack of AI Governance Frameworks',
'Default Data Retention Policies in LLMs (e.g., '
'OpenAI’s 30-day deletion lag)',
'Employee Bypass of Sanctioned Tools',
'Weak Authentication in AI Platforms',
'Unmonitored Data Exfiltration via AI Prompts']}