The National Cyber Security Centre has warned that a growing misunderstanding about a new type of artificial intelligence vulnerability could lead to major data breaches affecting UK organisations.
The security agency said many developers and cyber professionals were drawing the wrong parallels between so‑called prompt injection attacks in generative AI systems and the long‑established problem of SQL injection in traditional web applications.
Prompt injection involves malicious instructions that influence how a large language model behaves. SQL injection involves malicious database queries that exploit flaws in how applications handle user input.
The NCSC said these two attack types differ in important ways. It said those differences affect how organisations should manage the risk.
In new guidance, the centre said prompt injection attacks against systems built on large language models may not be fully preventable. It contrasted this with SQL injection, which software engineers can often block through strict separation of data and instructions and careful query handling.
The NCSC said that large language models do not reliably separate instructions from data. It said attackers can exploit this behaviour by embedding instructions inside content that looks like ordinary text.
The organisation warned that a belief that prompt injection can be solved through a single technical fix could leave systems exposed. It said this view could repeat earlier periods when firms underest
Source: https://itbrief.co.uk/story/ncsc-warns-ai-prompt-injection-could-drive-huge-uk-data-breaches
TPRM report: https://www.rankiteo.com/company/ncsc
"id": "ncs1765202704",
"linkid": "ncsc",
"type": "Vulnerability",
"date": "12/2025",
"severity": "85",
"impact": "4",
"explanation": "Attack with significant impact with customers data leaks"
{'affected_entities': [{'location': 'United Kingdom',
'name': 'UK organisations',
'type': 'Organisations'}],
'attack_vector': 'Prompt Injection',
'description': 'The National Cyber Security Centre (NCSC) has warned that a '
'growing misunderstanding about prompt injection attacks in '
'generative AI systems could lead to major data breaches '
'affecting UK organisations. The NCSC highlighted that '
'developers and cyber professionals are incorrectly comparing '
'prompt injection to SQL injection, which may result in '
'inadequate risk management. Prompt injection involves '
'malicious instructions influencing large language models, '
'unlike SQL injection, which exploits database query flaws. '
'The NCSC noted that prompt injection may not be fully '
'preventable due to the inability of large language models to '
'reliably separate instructions from data.',
'impact': {'data_compromised': 'Potential major data breaches',
'operational_impact': 'Inadequate risk management leading to '
'system exposure',
'systems_affected': 'Generative AI systems, large language models'},
'lessons_learned': 'Prompt injection attacks differ from SQL injection and '
'may not be fully preventable. Organisations must '
'understand these differences to manage risks effectively.',
'post_incident_analysis': {'corrective_actions': 'Educate developers and '
'cyber professionals on the '
'differences between prompt '
'injection and SQL '
'injection. Develop robust '
'risk management strategies '
'for generative AI systems.',
'root_causes': 'Misunderstanding of prompt '
'injection vulnerabilities and '
'incorrect parallels drawn with SQL '
'injection.'},
'recommendations': 'Avoid relying on a single technical fix for prompt '
'injection. Implement comprehensive risk management '
'strategies for generative AI systems.',
'references': [{'source': 'National Cyber Security Centre (NCSC)'}],
'response': {'communication_strategy': 'NCSC guidance on prompt injection '
'risks'},
'stakeholder_advisories': 'NCSC guidance on prompt injection risks and '
'differences from SQL injection.',
'title': 'Misunderstanding of Prompt Injection Vulnerabilities Leading to '
'Potential Data Breaches',
'type': 'AI Vulnerability Misunderstanding',
'vulnerability_exploited': 'Lack of separation between instructions and data '
'in large language models'}