Jaguar Land Rover (JLR) suffered a devastating cyberattack that halted production for **five weeks**, crippling its global operations and just-in-time supply chain. The attack disrupted manufacturing at JLR and forced around **5,000 supplier companies** to pause operations, leading to an estimated financial loss of **£1.9 billion ($2.5 billion)**, potentially the costliest hack in British history. Annual production dropped by **25%** due to the prolonged outage, with production only resuming in early October after what the company described as a 'challenging quarter.' The cascading impact on suppliers amplified the economic damage, demonstrating the attack's severe operational and financial consequences.
Source: https://www.wired.com/story/amazon-explains-how-its-aws-outage-took-down-the-web/
TPRM report: https://www.rankiteo.com/company/jaguar-land-rover_1
"id": "jag3762037102625",
"linkid": "jaguar-land-rover_1",
"type": "Cyber Attack",
"date": "10/2025",
"severity": "100",
"impact": "5",
"explanation": "Attack threatening the organization's existence"
{'affected_entities': [{'customers_affected': 'Widespread (exact number '
'unspecified)',
'industry': 'Technology / Cloud Computing',
'location': 'Global (Headquartered in Seattle, WA, '
'USA)',
'name': 'Amazon Web Services (AWS)',
'size': 'Hyperscale',
'type': 'Cloud Service Provider'},
{'industry': 'Multiple',
'location': 'Global',
'name': 'AWS Customers (Various)',
'type': ['Businesses',
'Government Agencies',
'Individuals']}],
 'customer_advisories': 'AWS recommended that customers monitor service '
                        'health dashboards, subscribe to notifications, and '
                        'review best practices for building resilient '
                        'architectures on AWS.',
 'date_detected': '2025-10-20T00:00:00Z',
 'date_publicly_disclosed': '2025-10-23T00:00:00Z',
 'date_resolved': '2025-10-20T00:00:00Z',
 'description': 'Amazon Web Services (AWS) experienced a major outage on '
                'Monday caused by Domain Name System (DNS) resolution '
                'failures affecting its DynamoDB service. The incident led '
                'to cascading issues, including disruptions in the Network '
                'Load Balancer service and the inability to launch new EC2 '
                'instances. The outage lasted approximately 15 hours, '
                'significantly impacting customers and illustrating the '
                'global reliance on hyperscalers like AWS. AWS confirmed the '
                'root causes in a post-event summary and committed to '
                'improving availability based on lessons learned.',
'impact': {'brand_reputation_impact': 'Highlighted global reliance on AWS and '
'potential vulnerabilities in '
'hyperscale cloud infrastructure',
'downtime': '15 hours',
'operational_impact': 'Widespread service disruptions for AWS '
'customers, cascading outages across '
'dependent services, backlog of requests due '
'to inability to launch new EC2 instances',
'systems_affected': ['DynamoDB',
'Network Load Balancer',
'EC2 Instances']},
'investigation_status': 'Completed (Post-event summary published)',
'lessons_learned': 'The incident highlighted the critical dependency on core '
'AWS services like DynamoDB and Network Load Balancer. AWS '
'acknowledged the need to improve redundancy, failover '
'mechanisms, and the ability to dynamically scale '
'resources during high-stress scenarios. The outage also '
'underscored the cascading risks in cloud infrastructure '
'and the importance of rapid incident response to mitigate '
'widespread impact.',
'post_incident_analysis': {'corrective_actions': ['Improvements to DynamoDB '
'redundancy and failover '
'mechanisms',
'Enhanced monitoring and '
'automated remediation for '
'Network Load Balancer',
'Optimization of EC2 '
'Instance launch processes '
'under high load',
'Stress testing to identify '
'and mitigate potential '
'choke points',
'Strengthened incident '
'response protocols for '
'faster recovery'],
                            'root_causes': ['Domain Name System (DNS) '
                                            'resolution failures in the '
                                            'DynamoDB service',
'Disruptions in Network Load '
'Balancer, critical for managing '
'data flow',
'Inability to launch new EC2 '
'Instances, leading to request '
'backlogs',
'Cascading failures due to '
'interdependencies between AWS '
'services']},
'recommendations': ['Enhance redundancy and failover mechanisms for critical '
'services like DynamoDB.',
'Improve real-time monitoring and automated remediation '
'for Network Load Balancer disruptions.',
'Optimize EC2 Instance launch processes to prevent '
'backlog buildup during outages.',
'Conduct regular stress tests to simulate high-load '
'scenarios and identify potential choke points.',
'Strengthen communication protocols with customers during '
'major incidents to provide timely updates and guidance.'],
 'references': [{'date_accessed': '2025-10-25',
'source': 'WIRED - AWS Post-Event Summary',
'url': 'https://www.wired.com/story/aws-outage-dynamodb-dns-failure/'},
                {'date_accessed': '2025-10-23',
'source': 'AWS Official Post-Mortem',
'url': 'https://health.aws.amazon.com/health/status'}],
'response': {'communication_strategy': ['Post-event summary published on AWS '
'website',
'Public acknowledgment of impact on '
'customers'],
'containment_measures': ['Isolation of affected DynamoDB '
'components',
'Mitigation of Network Load Balancer '
'disruptions'],
'enhanced_monitoring': 'Planned improvements to availability and '
'resilience',
'incident_response_plan_activated': True,
'recovery_measures': ['Post-event analysis',
'System stability improvements'],
'remediation_measures': ['Restoration of EC2 Instance launch '
'capabilities',
'Clearing backlog of requests']},
'stakeholder_advisories': 'AWS published a detailed post-event summary '
'outlining the root causes, impact, and remediation '
'steps. Customers were advised to review their '
'dependency on AWS services and implement backup or '
'failover strategies where possible.',
'title': 'AWS Major Outage Due to DNS Resolution Issues and DynamoDB Failures',
'type': ['Service Disruption', 'DNS Outage', 'Cloud Infrastructure Failure']}
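
The 'customer_advisories' field above recommends monitoring service health dashboards and subscribing to notifications. A minimal sketch of one way to do that programmatically is shown below, using boto3 and the AWS Health API; this is an illustration rather than AWS's prescribed approach, the service codes in the filter are assumptions, and the call only succeeds on accounts whose support plan includes the AWS Health API.

```python
# Minimal sketch: query the AWS Health API for open or upcoming events that
# affect services involved in this incident. Assumes boto3 credentials are
# configured and the account's support plan includes the AWS Health API
# (otherwise the call raises a SubscriptionRequiredException).
import boto3

# The AWS Health API is served from us-east-1.
health = boto3.client("health", region_name="us-east-1")

def open_service_events(services=("DYNAMODB", "EC2", "ELASTICLOADBALANCING")):
    """Return open or upcoming AWS Health events for the given service codes."""
    # Pagination via nextToken is omitted for brevity.
    response = health.describe_events(
        filter={
            "services": list(services),
            "eventStatusCodes": ["open", "upcoming"],
        }
    )
    return response["events"]

if __name__ == "__main__":
    for event in open_service_events():
        print(event["service"], event["eventTypeCode"], event["statusCode"])
```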
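The recommendations and stakeholder advisory above both point customers toward backup or failover strategies for their own workloads. Below is a minimal, hypothetical sketch of a client-side failover pattern for DynamoDB reads, assuming the table is replicated to a second region (for example via DynamoDB Global Tables); the table name, key schema, and regions are placeholders, not details from the incident record.

```python
# Minimal, hypothetical sketch of client-side regional failover for DynamoDB
# reads. Assumes the table is replicated to the fallback region (e.g. as a
# DynamoDB Global Table); table name, key, and regions are placeholders.
import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

# Adaptive retry mode adds client-side backoff during throttling or transient errors.
RETRY_CONFIG = Config(retries={"max_attempts": 5, "mode": "adaptive"})

PRIMARY = boto3.client("dynamodb", region_name="us-east-1", config=RETRY_CONFIG)
FALLBACK = boto3.client("dynamodb", region_name="us-west-2", config=RETRY_CONFIG)

def get_item_with_failover(table_name, key):
    """Read from the primary region first; fall back to the replica on failure."""
    last_error = None
    for client in (PRIMARY, FALLBACK):
        try:
            response = client.get_item(TableName=table_name, Key=key)
            return response.get("Item")  # None if the item does not exist
        except (ClientError, BotoCoreError) as exc:
            last_error = exc  # remember the failure and try the next region
    raise RuntimeError("All configured regions failed") from last_error

# Example usage with a placeholder table and key:
# item = get_item_with_failover("orders", {"order_id": {"S": "12345"}})
```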
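One of the recommendations calls for better real-time monitoring of Network Load Balancer disruptions. As an illustrative sketch only, the following boto3 call creates a CloudWatch alarm on an NLB target group's UnHealthyHostCount metric; the load balancer and target group dimension values and the SNS topic ARN are placeholders.

```python
# Illustrative sketch: CloudWatch alarm on an NLB target group's
# UnHealthyHostCount metric. Dimension values and the SNS topic ARN are
# placeholders, not values from the incident record.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="nlb-unhealthy-targets",
    Namespace="AWS/NetworkELB",
    MetricName="UnHealthyHostCount",
    Dimensions=[
        {"Name": "LoadBalancer", "Value": "net/my-nlb/0123456789abcdef"},
        {"Name": "TargetGroup", "Value": "targetgroup/my-targets/0123456789abcdef"},
    ],
    Statistic="Maximum",
    Period=60,                      # one-minute evaluation windows
    EvaluationPeriods=3,            # three consecutive breaches before alarming
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="breaching",   # treat missing data as unhealthy
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```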