Educational Research

When Your AI Assistant Becomes the Attack Vector

A comprehensive analysis of security vulnerabilities in AI coding assistants from October 2025 through January 2026.

  • 7 critical CVEs
  • 560+ exposed MCP servers
  • 800+ malicious npm packages
  • $500K+ confirmed stolen

The Silent Siege

How AI coding tools became the new frontline for cyber warfare

Supply Chain Attacks

The Nx supply-chain attack was the first npm compromise to weaponize AI tooling: its payload abused locally installed Claude Code, Gemini CLI, and Amazon Q to hunt for credentials on infected machines.

Trust Boundary Violations

CVE-2025-49596 showed how an MCP server bound to 0.0.0.0 turns a browser visit into browser-to-backdoor RCE, crossing the trust boundary between the web and the local machine.
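A loopback-only service should answer on 127.0.0.1 and nowhere else. A minimal reachability check can be sketched as follows (the port numbers in the comment are illustrative; verify your tool's actual defaults locally):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A service that answers on the machine's LAN address as well as on
# 127.0.0.1 is reachable from the whole network segment -- the
# 0.0.0.0-binding condition behind CVE-2025-49596.
```

Probing the same port on both the loopback address and the host's LAN address distinguishes a safe 127.0.0.1 binding from an exposed 0.0.0.0 one.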

Prompt Injection

When file content is treated as trusted input, indirect prompt injection follows: attackers steer the assistant's behavior through instructions hidden in seemingly benign files.

CVE Database

7 critical vulnerabilities documented

CVE-2025-49596 (CVSS 9.4)

MCP Inspector RCE

Browser-to-backdoor remote code execution via 0.0.0.0 binding. 560+ servers exposed publicly.

CVE-2025-59828 (CVSS 9.8)

Yarn C2 Plugin

A malicious Yarn plugin executes code before the user ever sees the trust prompt, enabling full system compromise.
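Because the plugin loads before any trust prompt, the only safe moment to look is before running Yarn at all. A hypothetical pre-flight check (the file layout and heuristics are our assumptions, not this repo's tooling):

```python
from pathlib import Path

def yarn_plugin_lines(repo: Path) -> list[str]:
    """List lines in a cloned repo's .yarnrc.yml that mention plugins.

    Yarn loads plugins declared in .yarnrc.yml on the next invocation,
    before any interactive trust prompt -- review these lines by hand
    before running any `yarn` command in an untrusted checkout.
    """
    rc = repo / ".yarnrc.yml"
    if not rc.is_file():
        return []
    return [ln.strip() for ln in rc.read_text().splitlines()
            if "plugin" in ln.lower()]
```

A simple string match is deliberately noisy here: in an untrusted checkout, any plugin declaration at all deserves a manual look.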

CVE-2025-52882 (CVSS 8.8)

VS Code Extension WebSocket Hijacking

Cross-site WebSocket hijacking in Claude's VS Code extension lets an attacker's webpage connect to the extension's local WebSocket server and drive the agent.

CVE-2025-54795 (CVSS 8.7)

Command Injection via Prompt

Crafted prompt content achieves command injection in Claude Code, disclosed under the "InversePrompt" research name.
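Injection of this class generally rides on strings being interpolated into a shell. One general mitigation, shown here as a sketch rather than the patched vendor fix, is to execute argv lists without a shell so crafted tokens stay literal:

```python
import subprocess

def safe_run(argv: list[str]) -> str:
    """Run a command as an argv list (no shell), returning stdout.

    With a list argument (shell=False, the default), metacharacters such
    as $(...), ;, and | are passed to the program as literal arguments
    instead of being interpreted by a shell.
    """
    return subprocess.run(argv, capture_output=True, text=True,
                          check=False).stdout

# Example: the command substitution is NOT executed --
# safe_run(["echo", "$(id)"]) returns the literal text "$(id)\n".
```

Building commands as argument lists, never as concatenated strings, removes the interpolation step that prompt-derived input needs in order to break out.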

CVE-2025-54136 (CVSS 7.2)

Cursor MCPoison

Cursor validates an MCP server configuration only at first approval; the file can then be silently modified to execute arbitrary commands without any re-prompt, yielding persistent code execution.

CVE-2025-53109/53110 (CVSS 7.3)

Filesystem MCP Escape

Path-validation flaws in the filesystem MCP server allow escape from the directory sandbox, granting read/write access outside the permitted roots.

Attack Timeline

October 2025 - January 2026

Security Toolkit

Educational tools for understanding and defending against AI vulnerabilities

Scanners

  • mcp_inspector.py — MCP vulnerability scanner
  • npm_audit.py — Supply chain analyzer
  • cli_detector.py — AI CLI discovery
  • credential_scanner.py — Credential exposure finder

Enumeration

  • filesystem_enum.py — File reconnaissance
  • env_harvester.py — Environment auditor
  • trust_boundary_mapper.py — Trust analysis
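The kind of check an environment auditor like env_harvester.py performs can be sketched as follows (a hypothetical stand-in, not the repo's actual script): report which variable names look credential-bearing, without ever printing their values.

```python
import os
import re

# Name heuristic only -- values are never read or logged.
CREDENTIAL_NAME = re.compile(r"(?i)(key|token|secret|passw|credential)")

def risky_env_names() -> list[str]:
    """Names of environment variables that look credential-bearing.

    Only the names are returned: the point is to learn what an
    attacker-controlled process on this machine could read, not to
    copy the secrets anywhere.
    """
    return sorted(n for n in os.environ if CREDENTIAL_NAME.search(n))
```

This is exactly the inventory the Nx payload automated with AI CLIs, which is why auditing it yourself first matters.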

Defense

  • mitre_attack_mapper.py — ATT&CK mapping
  • sbom_generator.py — CycloneDX/SPDX SBOM
  • mcp_hardening.py — MCP security config
  • siem_integration.py — SIEM event generator
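As a baseline for what MCP hardening enforces, a hedged sketch (the key names are illustrative, not a documented MCP schema): bind to loopback only, require a per-session token, and pin allowed origins, much as recent MCP Inspector releases do.

```python
import secrets

def hardened_inspector_settings() -> dict:
    """Illustrative hardening baseline for a local MCP debugging service.

    Key names here are hypothetical; map them onto your tool's actual
    configuration options.
    """
    return {
        "host": "127.0.0.1",                            # never 0.0.0.0
        "auth_token": secrets.token_urlsafe(32),        # required per request
        "allowed_origins": ["http://127.0.0.1:6274"],   # blocks DNS rebinding
    }
```

The three settings line up with the CVE-2025-49596 attack chain: loopback binding removes network exposure, the token defeats unauthenticated requests, and origin pinning stops browser-based rebinding.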

The Silent Siege: How AI Coding Tools Became the New Frontline for Cyber Warfare

This research repository accompanies the full article published on Substack and stefanwiest.de. Explore the complete analysis of how AI coding assistants became attack vectors, the supply chain vulnerabilities exposed, and what organizations can do to defend themselves.

Read the full article on stefanwiest.de or on Substack.

Educational Purpose Only

This repository is for educational and research purposes only. The exploit demonstrations are safe, sandboxed examples designed to help security researchers, developers, and organizations understand AI coding assistant vulnerabilities. Test only on systems you own or have explicit permission to test.