Anthropic and Google: AI vendors' response to security flaws: It wasn't me

AI Security Flaws: Vendors Shift Blame While Risks Persist

AI vendors have increasingly positioned their tools as essential for cybersecurity defense, yet when vulnerabilities emerge in their own systems, they often dismiss them as "expected behavior" or "by-design risks." Recent incidents highlight this pattern, raising concerns about accountability and the broader security implications of AI adoption.

In one case, researchers demonstrated how three widely used AI agents (Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilot) could be exploited to steal API keys and access tokens. All three vendors acknowledged the findings through bug bounty payouts: Anthropic awarded $100 (and raised the severity score from 9.3 to 9.4) and updated its documentation, Google paid $1,337, and GitHub, after initially dismissing the issue as unreproducible, later awarded $500. None issued CVEs or public advisories.

A separate disclosure revealed a critical flaw in Anthropic's Model Context Protocol (MCP), which researchers warned could expose up to 200,000 servers to complete takeover. Despite 10 high- and critical-severity CVEs tied to MCP-dependent tools (collectively downloaded over 150 million times), Anthropic declined to patch the root issue, calling it "an explicit part of how MCP stdio servers work" rather than an insecure default. The burden of mitigation falls on developers and organizations using the protocol.
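To see why "how MCP stdio servers work" is itself the risk, consider the transport model: a stdio server answers whatever process feeds its standard input, with no authentication step. The toy server below is an illustrative sketch, not the real MCP SDK; the method name `read_file` and the line-delimited JSON framing are assumptions for demonstration only.

```python
import json
import pathlib
import sys

# Toy JSON-RPC-over-stdio "tool server" (illustrative only, not the MCP SDK).
# Note what is missing: there is no authentication or caller verification
# anywhere. Any local process that spawns this server, or can write to its
# stdin, gets full use of every tool it exposes.

def handle_request(req: dict) -> dict:
    """Dispatch one request; the caller's identity is never checked."""
    if req.get("method") == "read_file":
        # Hands back any file the server process can read.
        path = pathlib.Path(req["params"]["path"])
        return {"id": req.get("id"), "result": path.read_text()}
    return {"id": req.get("id"), "error": "unknown method"}

def serve(stdin=sys.stdin, stdout=sys.stdout):
    # Classic stdio transport: one JSON request per line, one response per line.
    for line in stdin:
        resp = handle_request(json.loads(line))
        stdout.write(json.dumps(resp) + "\n")
        stdout.flush()
```

Under this model, "trust the parent process" is the only boundary, which is why researchers argue the mitigation burden lands on whoever embeds such a server rather than on the protocol itself.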

The lack of federal AI regulations in the U.S. further complicates the issue. Anthropic itself recently cautioned that its latest model is too dangerous to release publicly because of its ability to identify security flaws, yet the company faces no regulatory consequences for deploying high-risk systems. Meanwhile, the industry's refusal to address fundamental vulnerabilities shifts responsibility to end users, leaving downstream applications and enterprises exposed.

These incidents underscore a broader trend: AI vendors promote their tools as security solutions while distancing themselves from the risks they introduce. Without stronger accountability, the gap between AI’s promised protections and its real-world vulnerabilities will only widen.

Source: https://www.theregister.com/2026/04/19/ai_vendors_response_to_security/

Anthropic cybersecurity rating report: https://www.rankiteo.com/company/anthropicresearch

Google cybersecurity rating report: https://www.rankiteo.com/company/google

{
  "id": "ANTGOO1776608825",
  "linkid": "anthropicresearch, google",
  "type": "Vulnerability",
  "date": "4/2026",
  "severity": "85",
  "impact": "4",
  "explanation": "Attack with significant impact with customers data leaks"
}
{'affected_entities': [{'industry': 'Technology/AI',
                        'name': 'Anthropic',
                        'type': 'AI Vendor'},
                       {'industry': 'Technology/AI',
                        'name': 'Google',
                        'type': 'AI Vendor'},
                       {'industry': 'Technology/AI',
                        'name': 'Microsoft (GitHub)',
                        'type': 'AI Vendor'}],
 'attack_vector': ['Exploitation of AI agent vulnerabilities',
                   'Protocol design flaws'],
 'data_breach': {'sensitivity_of_data': 'High',
                 'type_of_data_compromised': ['API keys', 'Access tokens']},
 'description': 'AI vendors have increasingly positioned their tools as '
                'essential for cybersecurity defense yet when vulnerabilities '
                'emerge in their own systems, they often dismiss them as '
                "'expected behavior' or 'by-design risks.' Recent incidents "
                'highlight this pattern, raising concerns about accountability '
                'and the broader security implications of AI adoption. '
                'Researchers demonstrated how three widely used AI agents '
                '(Anthropic’s *Claude Code Security Review*, Google’s *Gemini '
                'CLI Action*, and Microsoft’s *GitHub Copilot*) could be '
                'exploited to steal API keys and access tokens. A separate '
                'disclosure revealed a critical flaw in Anthropic’s *Model '
                'Context Protocol (MCP)*, which could expose up to 200,000 '
                'servers to complete takeover. The lack of federal AI '
                'regulations in the U.S. further complicates the issue, with '
                'vendors shifting responsibility to end users.',
 'impact': {'brand_reputation_impact': 'Negative impact due to lack of '
                                       'accountability',
            'data_compromised': ['API keys',
                                 'Access tokens',
                                 'Sensitive server data'],
            'operational_impact': 'Potential complete server takeover',
            'systems_affected': ['Anthropic’s *Claude Code Security Review*',
                                 'Google’s *Gemini CLI Action*',
                                 'Microsoft’s *GitHub Copilot*',
                                 'MCP-dependent tools']},
 'lessons_learned': "AI vendors often dismiss vulnerabilities as 'expected "
                    "behavior' or 'by-design risks,' shifting accountability "
                    'to end users. The lack of federal AI regulations '
                    'exacerbates security risks.',
 'post_incident_analysis': {'corrective_actions': ['Bug bounty programs',
                                                   'Documentation updates',
                                                   'Developer mitigation '
                                                   'guidance'],
                            'root_causes': ['Design flaws in AI protocols',
                                            'Lack of secure defaults',
                                            'Vendor accountability gaps']},
 'recommendations': 'Stronger accountability for AI vendors, issuance of CVEs '
                    'for critical vulnerabilities, and regulatory oversight to '
                    'address AI security risks.',
 'references': [{'source': 'Researcher disclosures'}],
 'response': {'communication_strategy': ['Limited public advisories',
                                         'No CVEs issued'],
              'remediation_measures': ['Bug bounty payouts',
                                       'Documentation updates']},
 'title': 'AI Security Flaws: Vendors Shift Blame While Risks Persist',
 'type': ['Vulnerability Exploitation', 'Data Exposure'],
 'vulnerability_exploited': ['API key and access token theft',
                             'Model Context Protocol (MCP) flaws']}