New AI Threat "Promptware" Turns Assistants Into Silent Spy Tools
Researchers from Ben-Gurion University, Tel Aviv University, and Harvard including cybersecurity expert Bruce Schneier have uncovered a dangerous evolution in AI attacks dubbed "Promptware." Unlike traditional prompt injection, this technique hijacks large language models (LLMs) to execute malicious actions without user interaction, effectively turning AI assistants into stealthy surveillance tools.
The attack, detailed in the paper "The Promptware Kill Chain," exploits AI integrations with everyday apps. In one demonstrated scenario, attackers send a malicious Google Calendar invite containing hidden instructions. The AI, with access to the victim’s calendar and email, automatically processes the prompt, mistaking it for a legitimate Zoom meeting request. The assistant then launches Zoom, activates the camera, and streams video to the attacker’s server all without alerts or user input. Since the AI operates within its granted permissions, the attack bypasses traditional security checks.
The researchers mapped a seven-stage kill chain based on 36 real-world attacks, mirroring advanced cyberwarfare tactics:
- Initial Access – Malicious prompts embedded in emails or calendar invites.
- Privilege Escalation – "Jailbreaking" AI to bypass safety filters.
- Reconnaissance – AI scans files or emails for sensitive data.
- Persistence – Prompts self-replicate to survive system restarts.
- Command & Control – AI connects to attacker-controlled servers.
- Lateral Movement – Spreads via automated emails to contacts.
- Actions on Objective – Exfiltrates data, steals cryptocurrency, or conducts surveillance.
Unlike static prompt injections, Promptware mutates, spreads, and executes code autonomously, posing risks beyond data theft including silent espionage or fraud. The threat escalates as AI assistants gain deeper integration with devices, potentially granting access to cameras, microphones, and system controls with a single malicious prompt.
To counter the threat, the researchers propose a defense-in-depth approach:
- Input sanitization to strip hidden prompts from emails and calendars.
- Permission limits requiring explicit user approval for sensitive actions (e.g., camera access).
- AI activity monitoring to flag anomalous behavior, such as unexpected meetings.
- Isolation by running AI in sandboxes without direct tool access.
The findings highlight a critical shift in cybersecurity: AI systems must be treated as potential malware vectors, not just tools vulnerable to manipulation. As LLMs like Siri and Cortana evolve, layered security measures will be essential to prevent exploitation.
Google TPRM report: https://www.rankiteo.com/company/googleresearch
Zoom TPRM report: https://www.rankiteo.com/company/zoom
"id": "goozoo1770908381",
"linkid": "googleresearch, zoom",
"type": "Cyber Attack",
"date": "2/2026",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{'affected_entities': [{'type': 'AI Assistant Users'}],
'attack_vector': ['Malicious prompts in emails', 'Malicious calendar invites'],
'data_breach': {'data_exfiltration': 'Yes (streamed to attacker-controlled '
'servers)',
'sensitivity_of_data': 'High (e.g., personally identifiable '
'information, surveillance data)',
'type_of_data_compromised': ['Sensitive files',
'Emails',
'Video/audio streams']},
'description': 'Researchers from Ben-Gurion University, Tel Aviv University, '
'and Harvard uncovered a dangerous evolution in AI attacks '
"dubbed 'Promptware.' This technique hijacks large language "
'models (LLMs) to execute malicious actions without user '
'interaction, turning AI assistants into stealthy surveillance '
'tools. The attack exploits AI integrations with everyday '
'apps, such as malicious Google Calendar invites containing '
'hidden instructions that trigger unauthorized actions like '
'activating cameras and streaming video to attacker-controlled '
'servers.',
'impact': {'data_compromised': 'Sensitive data (e.g., files, emails)',
'operational_impact': 'Unauthorized access to cameras, '
'microphones, and system controls',
'systems_affected': ['AI assistants (e.g., LLMs)',
'Integrated applications (e.g., Zoom, email, '
'calendar)']},
'initial_access_broker': {'entry_point': ['Malicious emails',
'Malicious calendar invites']},
'lessons_learned': 'AI systems must be treated as potential malware vectors, '
'not just tools vulnerable to manipulation. Layered '
'security measures are essential to prevent exploitation '
'as AI assistants gain deeper integration with devices.',
'motivation': ['Espionage', 'Data exfiltration', 'Fraud'],
'post_incident_analysis': {'corrective_actions': ['Implement input '
'sanitization',
'Enforce permission limits '
'for sensitive actions',
'Deploy AI activity '
'monitoring',
'Isolate AI in sandboxes'],
'root_causes': 'Exploitation of AI integrations '
'with applications, lack of input '
'sanitization, and over-permissive '
'AI access to system controls'},
'recommendations': ['Input sanitization to strip hidden prompts from emails '
'and calendars',
'Permission limits requiring explicit user approval for '
'sensitive actions (e.g., camera access)',
'AI activity monitoring to flag anomalous behavior',
'Isolation by running AI in sandboxes without direct tool '
'access'],
'references': [{'source': "Research paper: 'The Promptware Kill Chain'"}],
'response': {'enhanced_monitoring': 'Proposed AI activity monitoring to flag '
'anomalous behavior'},
'title': "New AI Threat 'Promptware' Turns Assistants Into Silent Spy Tools",
'type': 'AI Exploitation',
'vulnerability_exploited': 'AI integrations with applications (e.g., Google '
'Calendar, Zoom)'}