In 2023, Samsung banned generative AI tools in a key division after employees inadvertently exposed sensitive corporate data, including proprietary source code and confidential meeting notes, by entering it into unauthorized public LLMs such as ChatGPT. The incident highlighted the risks of shadow AI, where employees use unsanctioned AI tools without IT oversight, creating potential for data leaks, regulatory non-compliance, and loss of intellectual property. The breach underscored how unmonitored AI interactions can compromise critical business assets, as these models may retain, reuse, or expose submitted data during training. While Samsung's ban aimed to mitigate risk, experts warned that such measures often backfire by driving usage underground and exacerbating security blind spots. The case exemplifies how inadvertent data exposure via generative AI, even without malicious intent, can create severe operational, legal, and reputational consequences for global enterprises reliant on proprietary innovation.
Source: https://www.infosecurity-magazine.com/news-features/shadow-ai-governance-cisos/
TPRM report: https://www.rankiteo.com/company/samsung-sds-america
"id": "sam0232302091825",
"linkid": "samsung-sds-america",
"type": "Breach",
"date": "6/2023",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{
  "affected_entities": [
    {
      "industry": "Technology/Electronics",
      "location": "Global (Headquartered in South Korea)",
      "name": "Samsung (Example Case)",
      "size": "Large Enterprise",
      "type": "Corporation"
    },
    {
      "industry": "Multiple (Cross-Industry)",
      "location": "United Kingdom",
      "name": "Unspecified UK Companies (RiverSafe Report)",
      "type": ["Corporations", "SMEs"]
    },
    {
      "industry": "Multiple (Cross-Industry)",
      "location": "Global",
      "name": "Global Organizations (IBM Report)",
      "type": ["Corporations", "Enterprises"]
    }
  ],
  "attack_vector": ["Unauthorized AI Tool Usage", "Employee Negligence", "Lack of IT Oversight"],
  "customer_advisories": ["Transparency Reports on AI Usage Policies", "Data Protection Assurances"],
  "data_breach": {
    "data_exfiltration": ["Unintentional (via LLM Training Data Retention)", "Potential Dark Web Exposure"],
    "file_types_exposed": ["Text (Notes, Code)", "Documents"],
    "personally_identifiable_information": "Potential (Context-Dependent)",
    "sensitivity_of_data": "High (Confidential/Proprietary)",
    "type_of_data_compromised": ["Proprietary Business Data", "Source Code", "Meeting Notes", "Personal Data (Potential)"]
  },
  "date_publicly_disclosed": "2024-2025",
  "description": "The incident highlights the widespread and unmanaged use of generative AI tools (e.g., Google Gemini, OpenAI ChatGPT) by employees without IT oversight, leading to inadvertent data leakage, compliance violations, and security risks. IBM's 2025 report found 20% of organizations had staff using unsanctioned AI tools, while RiverSafe's 2024 report noted 1 in 5 UK companies experienced sensitive data exposure via generative AI. Shadow AI differs from shadow IT due to its seamless integration into workflows, lack of traceable activity, and risks like AI hallucinations, poor decision-making, and regulatory non-compliance. Bans on AI tools (e.g., Samsung's 2023 restriction) are deemed ineffective, as they drive usage underground and fail to address the root cause: lack of governance and visibility (see the proxy-log sketch after this record).",
  "impact": {
    "brand_reputation_impact": ["Loss of Trust", "Regulatory Scrutiny"],
    "data_compromised": ["Sensitive Corporate Data", "Meeting Notes", "Source Code", "Proprietary Information"],
    "legal_liabilities": ["Data Protection Violations", "Contractual Breaches"],
    "operational_impact": ["Poor Business Decisions", "Regulatory Non-Compliance", "Security Incident Proliferation"]
  },
  "initial_access_broker": {
    "data_sold_on_dark_web": "Potential (Secondary Risk)",
    "entry_point": ["Employee Devices (Personal/Corporate)", "Web Browsers", "Unauthorized SaaS AI Tools"],
    "high_value_targets": ["Proprietary Data", "Source Code", "Strategic Meeting Notes"],
    "reconnaissance_period": "N/A (Opportunistic)"
  },
  "investigation_status": "Ongoing (Industry-Wide Issue)",
  "lessons_learned": [
    "Shadow AI poses unique risks compared to shadow IT due to its stealthy, browser-based nature and integration into workflows.",
    "Bans on AI tools are counterproductive, driving usage underground and reducing visibility.",
    "Organizations must balance AI adoption with governance, including sanctioned toolsets, training, and monitoring.",
    "Traditional security tools (e.g., CASB) are insufficient for detecting AI-specific risks like data leakage via LLMs."
  ],
  "motivation": ["Operational Efficiency", "Lack of Awareness", "Convenience"],
  "post_incident_analysis": {
    "corrective_actions": [
      "Develop AI Acceptable Use Policies (AUPs) with clear sanctions for violations.",
      "Pilot AI monitoring solutions (e.g., browser-based DLP, AI activity logs; see the DLP sketch after this record).",
      "Establish cross-functional AI governance committees (IT, Legal, Compliance).",
      "Conduct regular audits of AI tool usage and data flows.",
      "Partner with AI vendors to enforce enterprise-grade security controls (e.g., data residency, access logs)."
    ],
    "root_causes": [
      "Lack of AI-Specific Security Policies",
      "Inadequate Employee Training on AI Risks",
      "Absence of Tools to Monitor AI Interactions",
      "Over-Reliance on Traditional Security Controls (e.g., CASB)",
      "Rapid AI Adoption Outpacing Governance"
    ]
  },
  "recommendations": [
    "Implement AI governance policies defining approved tools, use cases, and data handling rules.",
    "Deploy specialized monitoring for AI interactions (e.g., browser plugins, DLP for AI inputs).",
    "Educate employees on risks of unsanctioned AI tools and provide approved alternatives.",
    "Integrate AI risk assessments into third-party vendor evaluations and compliance audits.",
    "Adopt 'AI Security by Design' principles, including data minimization and anonymization for LLM inputs (see the anonymization sketch after this record)."
  ],
  "references": [
    {"source": "McKinsey & Company", "date_accessed": "2025"},
    {"source": "IBM Cost of a Data Breach Report 2025", "date_accessed": "2025"},
    {"source": "RiverSafe Report on Shadow AI (2024)", "date_accessed": "2024"},
    {"source": "Infosecurity Magazine (Interview with Anton Chuvakin, Google Cloud)", "date_accessed": "2024"},
    {"source": "Presidio (Dan Lohrmann, Field CISO)", "date_accessed": "2024"},
    {"source": "Noma Security (Diana Kelley, CISO)", "date_accessed": "2024"}
  ],
  "regulatory_compliance": {
    "regulations_violated": ["GDPR (Potential)", "Industry-Specific Data Protection Laws", "Contractual Obligations"]
  },
  "response": {
    "communication_strategy": ["Internal Awareness Campaigns", "Stakeholder Reporting on Risks"],
    "containment_measures": ["Partial/Full Bans on AI Tools (Ineffective)", "Employee Training Initiatives"],
    "enhanced_monitoring": ["Proposals for AI Activity Tracking (e.g., Browser Extensions, DLP for AI; see the activity-log sketch after this record)"],
    "remediation_measures": ["Development of AI Governance Frameworks", "Implementation of AI-Specific Visibility Tools", "Policy Updates for Sanctioned AI Usage"]
  },
  "stakeholder_advisories": ["IT Leaders", "CISOs", "Compliance Officers", "Legal Teams"],
  "threat_actor": "Internal (Employees)",
  "title": "Shadow AI Security and Privacy Risks in Enterprise Environments (2024-2025)",
  "type": ["Data Leakage", "Shadow AI", "Compliance Violation", "AI Hallucination Risk"],
  "vulnerability_exploited": ["Lack of AI Governance Policies", "Absence of Visibility/Monitoring Tools", "Unsecured Public LLM Interactions"]
}
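
The record's description identifies visibility as the root gap: shadow AI traffic rides ordinary browser sessions. Below is a minimal sketch of the kind of proxy-log check that can surface it. It assumes a hypothetical space-delimited log format (timestamp, user, URL) and an illustrative domain list; neither reflects any particular vendor's format.

    # Minimal sketch: surface shadow AI usage in web proxy logs.
    # Assumptions (hypothetical): each log line is space-delimited as
    # "<timestamp> <user> <url>", and AI_DOMAINS approximates the
    # generative AI services in scope; a real deployment would use a
    # maintained URL-category feed instead of a hardcoded set.
    from collections import Counter
    from urllib.parse import urlparse

    AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

    def shadow_ai_hits(log_lines):
        """Return per-user counts of requests to known AI domains."""
        hits = Counter()
        for line in log_lines:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            user, url = parts[1], parts[2]
            if urlparse(url).netloc.lower() in AI_DOMAINS:
                hits[user] += 1
        return hits

    sample = [
        "2024-05-01T09:13:02Z alice https://chat.openai.com/backend/conversation",
        "2024-05-01T09:14:40Z bob https://intranet.example.com/wiki",
    ]
    print(shadow_ai_hits(sample))  # Counter({'alice': 1})

Counting hits per user, rather than blocking outright, fits the report's caution that bans drive usage underground: visibility comes before enforcement.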
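The corrective actions include piloting browser-based DLP for AI inputs. The following is a minimal sketch of such a pre-submission check; the rule names and regex patterns (api_key, private_key, source_code, internal_marker) are illustrative assumptions, not a production ruleset.

    # Minimal DLP-style sketch: scan an outbound LLM prompt for
    # sensitive content before it leaves the browser or gateway.
    # Patterns are illustrative only; real DLP engines combine
    # many detectors and context-aware classification.
    import re

    RULES = {
        "api_key": re.compile(r"\b(?:sk-[A-Za-z0-9]{20,}|AKIA[A-Z0-9]{16})\b"),
        "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
        "source_code": re.compile(r"(?:\bdef |\bclass |#include\s*<)"),
        "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return the names of the rules the prompt matches."""
        return [name for name, pat in RULES.items() if pat.search(prompt)]

    prompt = "Please review: def decrypt(key): ...  # INTERNAL ONLY"
    violations = check_prompt(prompt)
    if violations:
        print("Blocked; matched rules:", violations)  # ['source_code', 'internal_marker']

Running such a check client-side (browser extension) or at a forward proxy keeps the decision before data leaves the perimeter, the property that after-the-fact CASB discovery, per the lessons learned above, lacks.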
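The recommendations close with data minimization and anonymization for LLM inputs. A minimal redaction sketch under that principle follows; the regexes are illustrative assumptions, and production systems typically add named-entity recognition and reversible tokenization so redactions can be mapped back.

    # Minimal anonymization sketch for LLM inputs: redact obvious
    # identifiers before a prompt reaches a public model. Patterns
    # are illustrative assumptions, not a complete PII taxonomy.
    import re

    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\+?\d[\d\s().-]{7,}\d\b"), "[PHONE]"),
    ]

    def minimize(prompt: str) -> str:
        """Replace likely personal identifiers with placeholders."""
        for pattern, placeholder in REDACTIONS:
            prompt = pattern.sub(placeholder, prompt)
        return prompt

    print(minimize("Contact jane.doe@example.com or +1 555 010 7788 about the merger."))
    # -> "Contact [EMAIL] or [PHONE] about the merger."

Note the ordering: the more specific SSN pattern runs before the broader phone pattern so that one redaction does not swallow another.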
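Finally, the response section proposes AI activity tracking. Below is a hypothetical per-interaction log record, emitted as JSON lines for SIEM ingestion; the schema, field names, and file path are assumptions for illustration, not a standard.

    # Hypothetical AI activity log record, one JSON object per line
    # (JSONL), suitable for forwarding to a SIEM. The schema is an
    # illustrative assumption, not an established format.
    import datetime
    import hashlib
    import json

    def log_ai_interaction(user: str, tool: str, prompt: str, action: str,
                           path: str = "ai_activity.jsonl") -> None:
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            # Store a hash rather than the prompt itself, so the log
            # does not become a second copy of sensitive input.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "prompt_chars": len(prompt),
            "action": action,  # e.g. "allowed", "blocked", "redacted"
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_ai_interaction("alice", "chatgpt", "summarize this memo", "allowed")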