AI Infrastructure Security Crisis: Exposed Systems, Hardcoded Flaws, and Rampant Misconfigurations
A recent investigation by the Intruder team reveals an alarming trend in AI infrastructure security, as rapid adoption outpaces safeguards. Scanning over 2 million hosts with 1 million exposed services, researchers found AI deployments riddled with vulnerabilities more severe than any other software category they’ve analyzed.
No Authentication by Default
A core issue: many self-hosted AI projects ship without authentication enabled, leaving sensitive data and tools exposed. Real-world examples included chatbots with unrestricted access to user conversation histories, multimodal LLMs vulnerable to jailbreaking, and even NSFW chatbots leaking API keys in plaintext. One OpenUI-based instance exposed full LLM conversation logs, while others let malicious users bypass safety guardrails, abusing corporate infrastructure to generate illegal content or solicit criminal advice.
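As a quick self-check for the missing-authentication problem, a defender can probe their own deployment without credentials and classify the response. A minimal sketch using only Python's standard library (the endpoint in the usage comment is a placeholder; probe only systems you operate):

```python
import urllib.error
import urllib.request

def classify(status: int) -> str:
    """Map an HTTP status code to a rough authentication posture."""
    if 200 <= status < 300:
        return "open"         # answered without credentials
    if status in (401, 403):
        return "protected"    # challenged or refused access
    return "unknown"

def auth_status(url: str, timeout: float = 5.0) -> str:
    """Fetch a URL with no credentials attached and classify the result."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as exc:
        return classify(exc.code)
    except OSError:
        return "unknown"

# Placeholder endpoint -- check only infrastructure you own:
# print(auth_status("http://localhost:3000/"))
```

A 2xx answer to an anonymous request is exactly the condition the investigation kept finding in the wild.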
Exposed Agent Platforms and Business Logic
Agent management platforms like n8n and Flowise were frequently found misconfigured, with some instances mistakenly exposed to the internet. One Flowise deployment revealed an entire LLM chatbot’s business logic, including credential lists (though stored values remained protected). Another exposed parsing tools and local functions capable of server-side code execution. Across sectors including government, finance, and marketing, over 90 exposed instances were identified, enabling attackers to modify workflows, redirect traffic, or poison responses.
Unsecured Ollama APIs: A Gateway to Frontier Models
Researchers discovered 5,200+ exposed Ollama APIs with connected models, 31% of which responded to unauthenticated queries. While Ollama doesn’t store conversation data, many instances wrapped paid models from Anthropic, Google, Deepseek, Moonshot, and OpenAI (518 in total). Responses ranged from health-focused assistants to cloud management integrations, highlighting the risks of unauthorized access to enterprise systems.
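An unauthenticated Ollama instance gives itself away because its HTTP API answers `GET /api/tags` with the locally available models. A minimal detection sketch (default port and response shape per Ollama's API documentation; the host in the usage comment is a placeholder):

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default listening port

def parse_tags(payload: dict) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m.get("name", "") for m in payload.get("models", [])]

def exposed_models(host: str, timeout: float = 5.0) -> list[str]:
    """Return model names if the host serves Ollama's /api/tags without
    authentication; an empty list means nothing answered."""
    url = f"http://{host}:{OLLAMA_PORT}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return parse_tags(json.load(resp))
    except (OSError, ValueError):
        return []

# Placeholder host -- scan only systems you are authorized to test:
# print(exposed_models("localhost"))
```

A non-empty result means anyone on the network can also reach `/api/generate` and run inference on your hardware, including any paid upstream models the instance wraps.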
Insecure by Design
Lab analysis uncovered systemic flaws:
- Poor deployment practices: Misconfigured Docker setups, hardcoded credentials, and applications running as root.
- No authentication on fresh installs: Users granted high-privilege access by default.
- Static credentials: Embedded in setup examples and docker-compose files.
- New vulnerabilities: Arbitrary code execution found in a popular AI project within days.
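Most of the flaws above are fixable at deployment time. A hedged docker-compose sketch showing the corresponding mitigations (service name, image, and port are placeholders, not taken from any specific project):

```yaml
services:
  ai-app:                       # placeholder service name
    image: example/ai-app:1.0   # placeholder image
    user: "1000:1000"           # run unprivileged instead of as root
    read_only: true             # keep the container filesystem immutable
    environment:
      # fail fast if the secret is missing, instead of shipping a hardcoded one
      - API_KEY=${API_KEY:?API_KEY must be set in the environment}
    ports:
      - "127.0.0.1:8080:8080"   # bind to loopback, not 0.0.0.0
```

Binding to loopback and fronting the service with an authenticating reverse proxy addresses the default-exposure pattern the researchers observed.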
Root Cause: Speed Over Security
The findings underscore a broader industry shift: vendors and adopters prioritizing rapid deployment over decades of security best practices. While some projects abandon safeguards entirely, the pressure to outpace competitors exacerbates the problem. The result: AI infrastructure averaging 2.6 CVEs per day (as seen in the ClawdBot incident), where misconfigurations and weak sandboxing amplify risks.
The investigation serves as a stark reminder of the security debt accumulating in the AI gold rush.
Source: https://thehackernews.com/2026/05/we-scanned-1-million-exposed-ai.html
FlowiseAI cybersecurity rating report: https://www.rankiteo.com/company/flowiseai
DeepSeek AI cybersecurity rating report: https://www.rankiteo.com/company/deepseek-ai
Anthropic cybersecurity rating report: https://www.rankiteo.com/company/anthropicresearch
OpenAI cybersecurity rating report: https://www.rankiteo.com/company/openai
n8n cybersecurity rating report: https://www.rankiteo.com/company/n8n
{
  "id": "FLODEEANTOPEN8N1777984637",
  "linkid": "flowiseai, deepseek-ai, anthropicresearch, openai, n8n",
  "type": "Vulnerability",
  "date": "5/2025",
  "severity": "85",
  "impact": "4",
  "explanation": "Attack with significant impact with customers data leaks"
}
{'affected_entities': [{'industry': ['AI',
'Technology',
'Government',
'Finance',
'Marketing'],
'type': ['Government',
'Finance',
'Marketing',
'Enterprise']}],
'attack_vector': ['Exposed APIs',
'Unauthenticated Access',
'Hardcoded Credentials',
'Poor Deployment Practices'],
'data_breach': {'personally_identifiable_information': 'Yes',
'sensitivity_of_data': 'High',
'type_of_data_compromised': ['LLM conversation logs',
'API keys',
'Business logic',
'Credential lists',
'User conversation histories']},
'description': 'A recent investigation by the Intruder team reveals an '
'alarming trend in AI infrastructure security, as rapid '
'adoption outpaces safeguards. Scanning over 2 million hosts '
'with 1 million exposed services, researchers found AI '
'deployments riddled with vulnerabilities more severe than any '
'other software category analyzed. Key issues include no '
'authentication by default, exposed agent platforms, unsecured '
'Ollama APIs, and systemic flaws like poor deployment '
'practices, hardcoded credentials, and static credentials in '
'setup examples.',
'impact': {'brand_reputation_impact': 'High',
'data_compromised': ['LLM conversation logs',
'API keys',
'Business logic',
'Credential lists',
'User conversation histories',
'Personally identifiable information'],
'identity_theft_risk': 'High',
'operational_impact': ['Unauthorized modification of workflows',
'Traffic redirection',
'Response poisoning',
'Server-side code execution'],
'systems_affected': ['Self-hosted AI projects',
'Agent management platforms (n8n, Flowise)',
'Ollama APIs',
'Multimodal LLMs',
'Chatbots',
'NSFW chatbots']},
'investigation_status': 'Completed',
'lessons_learned': 'The investigation highlights the security debt '
'accumulating in the AI gold rush, where rapid deployment '
'is prioritized over security best practices. Key lessons '
'include the need for default authentication, secure '
'deployment practices, and avoiding hardcoded credentials.',
'motivation': ['Opportunistic Exploitation',
'Data Theft',
'Unauthorized Access'],
'post_incident_analysis': {'corrective_actions': ['Implement default '
'authentication in AI '
'projects',
'Secure deployment '
'practices (avoid root '
'access, remove hardcoded '
'credentials)',
'Regular security audits '
'and vulnerability '
'assessments',
'Enhanced monitoring and '
'sandboxing for exposed '
'APIs',
'Adopt secure coding and '
'deployment standards'],
'root_causes': ['Rapid adoption of AI outpacing '
'security safeguards',
'No authentication by default in '
'self-hosted AI projects',
'Poor deployment practices '
'(misconfigured Docker, hardcoded '
'credentials)',
'Static credentials embedded in '
'setup examples',
'Pressure to outpace competitors '
'leading to security shortcuts']},
'recommendations': ['Enable authentication by default in AI projects',
'Secure Docker setups and avoid running applications as '
'root',
'Remove hardcoded credentials from setup examples',
'Implement static and dynamic security testing for AI '
'projects',
'Enhance sandboxing and monitoring for exposed APIs',
'Adopt secure deployment practices and regular '
'vulnerability assessments'],
'references': [{'source': 'Intruder Team Investigation'}],
'title': 'AI Infrastructure Security Crisis: Exposed Systems, Hardcoded '
'Flaws, and Rampant Misconfigurations',
'type': ['Misconfiguration',
'Authentication Bypass',
'Data Exposure',
'Code Execution'],
'vulnerability_exploited': ['No Authentication by Default',
'Misconfigured Docker Setups',
'Hardcoded Credentials',
'Static Credentials in Setup Files',
'Arbitrary Code Execution']}