AI Security Gaps Expose Enterprises to Rising Risks in 2025-2026, Report Finds
A new briefing from the AIUC-1 Consortium, developed with input from Stanford’s Trustworthy AI Research Lab and over 40 security executives, highlights critical vulnerabilities in enterprise AI deployments as systems shift from pilot programs to production environments handling sensitive data and business transactions. The report, which includes insights from CISOs at Confluent, Elastic, UiPath, Deutsche Börse, and researchers from MIT Sloan, Scale AI, and Databricks, projects escalating risks for organizations in 2026 amid rapid AI adoption.
A 2025 EY survey cited in the briefing reveals that 64% of companies with annual revenue over $1 billion have lost more than $1 million to AI failures, while one in five reported breaches linked to shadow AI unauthorized or unmonitored AI use by employees.
Three Dominant AI Security Challenges
The briefing identifies three primary risk categories:
-
The Agent Challenge
AI systems have evolved from simple assistants to autonomous agents capable of executing multi-step tasks without human approval. These agents often operate with overprivileged access, leading to unintended consequences 80% of surveyed organizations reported risky behaviors, including unauthorized system access and data exposure. Yet, only 21% of executives have full visibility into agent permissions, tool usage, or data access patterns.
Omar Khawaja (Databricks) noted that AI components frequently change across supply chains, while existing security controls assume static assets, creating blind spots. -
The Visibility Challenge
63% of employees using AI tools in 2025 pasted sensitive data including source code and customer records into personal chatbot accounts. Enterprises now average 1,200 unofficial AI applications, with 86% lacking visibility into AI data flows. Shadow AI breaches cost $670,000 more on average than standard incidents due to delayed detection and unclear exposure scope. -
The Trust Challenge
Prompt injection, once an academic concern, has become a recurring production issue, ranking #1 on OWASP’s 2025 LLM Top 10. The vulnerability stems from LLMs’ inability to reliably separate instructions from data input. 53% of companies now use retrieval-augmented generation (RAG) or agentic pipelines, introducing new attack surfaces.
Existing Frameworks Fall Short
Current AI governance frameworks, such as NIST AI RMF and ISO 42001, provide high-level risk management structures but lack technical controls for agent-specific threats, including tool call validation, prompt injection logging, and containment testing.
Sanmi Koyejo (Stanford Trustworthy AI Lab) found that model-level guardrails alone are insufficient fine-tuning attacks bypassed Claude Haiku (72%) and GPT-4o (57%). Early adopters of technically grounded AI security standards report faster procurement, clearer audits, and reduced friction in regulated environments.
Mitigation Strategies
The briefing recommends continuous adversarial testing integrated into agent operations. Nancy Wang (1Password) advocates for platform-built guardrails, including sandboxed tool execution, scoped credentials, and runtime policy enforcement, to reduce reliance on custom engineering. She suggests tiering agents by risk level, with high-stakes deployments undergoing continuous testing and lower-risk agents relying on standardized controls.
Koyejo’s lab demonstrated that automated red-teaming (AutoRedTeamer) can cut computational costs by 42-58% while improving vulnerability coverage. For resource-constrained organizations, he recommends automated testing tied to deployment pipelines, runtime guardrails for sensitive agents, and selective human red-teaming for critical systems.
Wang emphasized that least-privilege access, short-lived credentials, and scoped tokens proven in cloud security can similarly limit AI agent risks by restricting unauthorized access.
Source: https://www.helpnetsecurity.com/2026/03/03/enterprise-ai-agent-security-2026/
Elastic TPRM report: https://www.rankiteo.com/company/elastic
Deutsche Börse TPRM report: https://www.rankiteo.com/company/deutsche-borse
Confluent TPRM report: https://www.rankiteo.com/company/confluent
UiPath TPRM report: https://www.rankiteo.com/company/uipath
"id": "conuipdeuela1772541735",
"linkid": "confluent, uipath, deutsche-borse, elastic",
"type": "Vulnerability",
"date": "1/2025",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'industry': 'Technology',
'name': 'Confluent',
'type': 'Enterprise'},
{'industry': 'Technology',
'name': 'Elastic',
'type': 'Enterprise'},
{'industry': 'Technology',
'name': 'UiPath',
'type': 'Enterprise'},
{'industry': 'Finance',
'name': 'Deutsche Börse',
'type': 'Enterprise'},
{'industry': 'Cybersecurity',
'name': '1Password',
'type': 'Enterprise'},
{'name': 'Companies with annual revenue over $1 billion',
'size': 'Large',
'type': 'Enterprise'}],
'attack_vector': ['Unauthorized AI Tool Usage',
'Prompt Injection',
'Overprivileged AI Agents'],
'data_breach': {'personally_identifiable_information': 'Yes',
'sensitivity_of_data': 'High',
'type_of_data_compromised': ['Source Code',
'Customer Records',
'Personally Identifiable '
'Information']},
'date_publicly_disclosed': '2025',
'description': 'A new briefing from the AIUC-1 Consortium highlights critical '
'vulnerabilities in enterprise AI deployments as systems shift '
'from pilot programs to production environments handling '
'sensitive data and business transactions. The report projects '
'escalating risks for organizations in 2026 amid rapid AI '
'adoption, including shadow AI breaches, overprivileged AI '
'agents, and prompt injection attacks.',
'impact': {'data_compromised': ['Sensitive Data (source code, customer '
'records)',
'Personally Identifiable Information'],
'financial_loss': '> $1 million (64% of companies with annual '
'revenue over $1 billion)',
'operational_impact': ['Delayed Detection of Breaches',
'Unclear Exposure Scope'],
'systems_affected': ['AI Agents', 'LLMs', 'RAG Pipelines']},
'lessons_learned': ['Model-level guardrails alone are insufficient against '
'fine-tuning attacks.',
'Existing AI governance frameworks lack technical '
'controls for agent-specific threats.',
'Shadow AI breaches result in higher financial losses due '
'to delayed detection.',
'Automated red-teaming can significantly reduce '
'computational costs while improving vulnerability '
'coverage.'],
'post_incident_analysis': {'corrective_actions': ['Integrate automated '
'red-teaming into '
'deployment pipelines.',
'Implement runtime '
'guardrails for sensitive '
'AI agents.',
'Enhance visibility into AI '
'tool usage and data access '
'patterns.',
'Adopt least-privilege '
'access and scoped '
'credentials for AI '
'agents.'],
'root_causes': ['Rapid AI adoption without '
'adequate security controls.',
'Lack of visibility into shadow AI '
'and AI data flows.',
'Overprivileged AI agents with '
'insufficient permission controls.',
'Insufficient technical controls '
'in existing AI governance '
'frameworks.']},
'recommendations': ['Implement continuous adversarial testing integrated into '
'agent operations.',
'Adopt platform-built guardrails (e.g., sandboxed tool '
'execution, scoped credentials).',
'Tier agents by risk level with high-stakes deployments '
'undergoing continuous testing.',
'Use automated testing tied to deployment pipelines for '
'resource-constrained organizations.',
'Apply least-privilege access, short-lived credentials, '
'and scoped tokens to limit AI agent risks.',
'Enhance visibility into AI data flows and agent '
'permissions.'],
'references': [{'source': 'AIUC-1 Consortium Briefing'},
{'source': 'EY Survey 2025'},
{'source': 'OWASP’s 2025 LLM Top 10'},
{'source': 'Stanford Trustworthy AI Research Lab'}],
'regulatory_compliance': {'regulations_violated': ['NIST AI RMF',
'ISO 42001']},
'response': {'containment_measures': ['Sandboxed Tool Execution',
'Scoped Credentials',
'Runtime Policy Enforcement'],
'enhanced_monitoring': ['Visibility into AI Data Flows',
'Agent Permission Tracking'],
'remediation_measures': ['Automated Red-Teaming (AutoRedTeamer)',
'Continuous Adversarial Testing',
'Tiered Agent Risk Management']},
'title': 'AI Security Gaps Expose Enterprises to Rising Risks in 2025-2026',
'type': ['AI Security Vulnerabilities', 'Data Breach', 'Shadow AI'],
'vulnerability_exploited': ['Lack of Visibility into AI Data Flows',
'Insufficient Agent Permission Controls',
'Prompt Injection Vulnerabilities']}