AI Agent Exploits McKinsey’s Internal Chatbot in Under Two Hours
Researchers at security startup CodeWall demonstrated how an autonomous AI agent hacked McKinsey’s internal generative AI platform, Lilli, gaining full read-and-write access to its production database within two hours. The attack, conducted in late February, exposed 46.5 million chat messages, 728,000 confidential client files, 57,000 user accounts, and 95 writable system prompts, all stored in plaintext.
The agent exploited an unauthenticated SQL injection vulnerability in Lilli’s API, which was publicly exposed through 22 unsecured endpoints. By manipulating JSON keys in user search queries, the AI bypassed standard security tools and eventually extracted live production data. The flaw also allowed attackers to rewrite Lilli’s system prompts, potentially poisoning responses for McKinsey’s 40,000+ users without requiring any code changes; a single HTTP request sufficed.
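The injection pattern described above, JSON keys spliced directly into SQL, can be sketched in miniature. This is an illustrative reconstruction of the vulnerability class, not CodeWall's actual payload; the table name, columns, and queries are assumptions made for the example:

```python
import sqlite3

def search_unsafe(conn, filters):
    # VULNERABLE: JSON keys from the request are interpolated straight
    # into the SQL text. A key containing SQL turns the filter into
    # whatever the attacker wants.
    where = " AND ".join(f"{k} = '{v}'" for k, v in filters.items())
    return conn.execute(f"SELECT id, name FROM users WHERE {where}").fetchall()

def search_safe(conn, filters):
    # Mitigation: allowlist the identifiers and bind values as parameters,
    # so neither keys nor values can alter the query's structure.
    allowed = {"id", "name"}
    for key in filters:
        if key not in allowed:
            raise ValueError(f"unknown column: {key!r}")
    where = " AND ".join(f"{k} = ?" for k in filters)
    return conn.execute(
        f"SELECT id, name FROM users WHERE {where}",
        tuple(filters.values()),
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

# A malicious JSON key: the unsafe path splices it into the WHERE clause,
# where "--" comments out the rest of the query.
evil = {"name = '' OR '1'='1' --": "x"}
print(len(search_unsafe(conn, evil)))  # 2 -- every row leaks
```

The unsafe variant returns the entire table for the crafted key, while the safe variant rejects it outright, which is the core of the fix regardless of the database or framework involved.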
McKinsey patched the vulnerabilities within hours of disclosure on March 1, taking the development environment offline and securing API documentation. A company spokesperson confirmed no evidence of unauthorized client data access, though the incident underscores the growing threat of AI-driven cyberattacks. CodeWall’s CEO noted that the attack was fully autonomous, from target selection to exploitation, signaling a shift toward machine-speed intrusions by malicious actors. The firm’s findings highlight the risks of AI systems interacting with insecure databases and the potential for large-scale data manipulation.
Source: https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/
McKinsey & Company cybersecurity rating report: https://www.rankiteo.com/company/mckinsey
"id": "MCK1773109656",
"linkid": "mckinsey",
"type": "Cyber Attack",
"date": "3/2026",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{
  "affected_entities": [
    {
      "industry": "Management consulting",
      "name": "McKinsey & Company",
      "size": "40,000+ users",
      "type": "Consulting firm"
    }
  ],
  "attack_vector": "Unauthenticated SQL injection via API",
  "data_breach": {
    "data_encryption": "No (plaintext)",
    "number_of_records_exposed": "46.5 million chat messages, 728,000 files, 57,000 accounts, 95 prompts",
    "sensitivity_of_data": "High (confidential client files, plaintext data)",
    "type_of_data_compromised": [
      "Chat messages",
      "Confidential client files",
      "User accounts",
      "System prompts"
    ]
  },
  "date_detected": "2026-02",
  "date_resolved": "2026-03-01",
  "description": "Researchers at security startup CodeWall demonstrated how an autonomous AI agent hacked McKinsey’s internal generative AI platform, Lilli, gaining full read-and-write access to its production database within two hours. The attack exposed 46.5 million chat messages, 728,000 confidential client files, 57,000 user accounts, and 95 writable system prompts in plaintext. The agent exploited an unauthenticated SQL injection vulnerability in Lilli’s API, bypassing standard security tools and potentially poisoning responses for McKinsey’s 40,000+ users.",
  "impact": {
    "brand_reputation_impact": "Undermined trust in AI security",
    "data_compromised": "46.5 million chat messages, 728,000 confidential client files, 57,000 user accounts, 95 writable system prompts",
    "operational_impact": "Potential poisoning of AI responses for 40,000+ users",
    "systems_affected": "McKinsey’s internal generative AI platform (Lilli)"
  },
  "initial_access_broker": {
    "entry_point": "Unauthenticated SQL injection in Lilli’s API"
  },
  "investigation_status": "Resolved",
  "lessons_learned": "Risks of AI systems interacting with insecure databases, potential for large-scale data manipulation via AI-driven attacks",
  "motivation": "Demonstration of AI-driven exploitation risks",
  "post_incident_analysis": {
    "corrective_actions": "Patched vulnerability, secured API documentation, took development environment offline",
    "root_causes": "Unauthenticated SQL injection vulnerability, publicly exposed API endpoints, insecure database interactions"
  },
  "recommendations": "Secure API endpoints, implement authentication for database access, monitor AI system interactions for anomalies",
  "references": [{"source": "CodeWall research"}],
  "response": {
    "communication_strategy": "Company spokesperson confirmed no unauthorized client data access",
    "containment_measures": "Took development environment offline, secured API documentation",
    "remediation_measures": "Patched SQL injection vulnerability"
  },
  "threat_actor": "CodeWall (security researchers)",
  "title": "AI Agent Exploits McKinsey’s Internal Chatbot in Under Two Hours",
  "type": "AI-driven cyberattack",
  "vulnerability_exploited": "Unauthenticated SQL injection in Lilli’s API, publicly exposed endpoints"
}
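The recommendation to require authentication before any database-facing endpoint is reachable can be sketched as a simple request guard. This is a minimal illustration, not McKinsey's or CodeWall's implementation; the token source, header shape, and handler are all assumed for the example:

```python
import hmac
import os

# Hypothetical token source; in production this would come from a secrets
# manager, not a hard-coded default.
API_TOKEN = os.environ.get("LILLI_API_TOKEN", "change-me")

def require_token(handler):
    # Reject any request whose bearer token fails a constant-time
    # comparison, so unauthenticated callers never reach the handler
    # (and thus never reach the database).
    def wrapped(request):
        supplied = request.get("headers", {}).get("Authorization", "")
        expected = f"Bearer {API_TOKEN}"
        if not hmac.compare_digest(supplied, expected):
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return wrapped

@require_token
def search_endpoint(request):
    # Stand-in for a database-backed search handler.
    return {"status": 200, "body": "results"}

print(search_endpoint({"headers": {}}))
# {'status': 401, 'body': 'unauthorized'}
```

Using `hmac.compare_digest` rather than `==` avoids timing side channels in the token check; the broader point is that the check runs before any query is built, closing the unauthenticated path the attack relied on.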