Cisco Breached After Trivy Supply Chain Attack Hits AI Product Source
AI Sec News Weekly #2 – 194 sources scanned
Supply chain attacks used to be a patience game — compromise one library, wait months, hope someone important pulls it in. That model is dead. What we're watching now is more like a speed run: one initial compromise cascading through multiple organizations and ecosystems in days, not months. The blast radius isn't theoretical anymore.
Most teams can tell you their direct dependencies; far fewer can trace what happens when a transitive dependency three layers deep gets owned. This week has some sharp examples of what that looks like in practice – and a few teams shipping tools that might actually help.
This Week's Stories
Cisco Dev Environment Breached via Trivy Supply Chain Attack, AI Product Source Stolen
Threat actors leveraged credentials stolen in the recent Trivy supply chain compromise (CVE-2026-33634) to breach Cisco's internal development environment through a malicious GitHub Action plugin. The attackers cloned over 300 GitHub repositories – including source code for Cisco AI Assistants, AI Defense, and unreleased AI products – and stole AWS keys that were used for unauthorized access across multiple Cisco AWS accounts. BleepingComputer reports the stolen repos include code belonging to banks, BPOs, and US government agencies.
Why it matters: This is the first confirmed major-enterprise breach from the Trivy campaign, and the fact that AI product source was specifically targeted suggests the stolen code has value beyond typical IP theft – model architectures, inference pipelines, and defense logic are all now in adversary hands.
BleepingComputer by Lawrence Abrams
Anthropic Accidentally Ships Claude Code Source Maps in npm Release
Anthropic confirmed that version 2.1.88 of the Claude Code npm package shipped with a source map file exposing the tool's full TypeScript codebase – nearly 2,000 files and 512,000+ lines of code. The leaked repo (now 84k+ GitHub stars) revealed internal architecture including a self-healing memory system, multi-agent orchestration, a background autonomy feature called KAIROS, and an Undercover Mode with system prompts instructing Claude to hide Anthropic affiliation when contributing to open-source repos.
Why it matters: Attackers now have a detailed map of Claude Code's tool-use system, agent spawning logic, and persistence mechanisms – the exact components you'd probe for jailbreaks or unauthorized autonomous execution.
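Source maps are dangerous in a way that's easy to underestimate: a `.map` file with a populated `sourcesContent` array embeds the original source verbatim, so recovering a codebase from an accidentally shipped map takes a few lines of code. A minimal sketch of the extraction (generic source-map handling, nothing specific to the Claude Code package):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> int:
    """Recover original source files embedded in a source map's sourcesContent."""
    smap = json.loads(Path(map_path).read_text())
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    recovered = 0
    for name, content in zip(sources, contents):
        if content is None:
            continue  # the map references this file but doesn't embed it
        # strip path-escape sequences before writing under out_dir
        dest = Path(out_dir) / name.replace("../", "").lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        recovered += 1
    return recovered
```

The same script doubles as a pre-publish check: run it against your own build output before `npm publish` and fail the release if anything comes back.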
Axios npm Package Compromised: Supply Chain Attack Delivers Cross-Platform RAT
On March 31, 2026, two malicious versions of axios, the enormously popular JavaScript HTTP client with over 100 million weekly downloads, were briefly published to npm via a compromised maintainer account. The packages contained a hidden dependency that deployed a cross-platform remote access trojan (RAT) to any machine that ran npm install (or equivalent in other package managers like Bun) during a two-hour window.
Why it matters: The attack proved that npm account credentials alone can override every CI/CD safeguard a project has – registry auth is the weak link, and GitHub-side protections are irrelevant once the publishing account is compromised.
Snyk by Liran Tal
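A concrete guardrail against this class of incident: pair `npm ci --ignore-scripts` with a lockfile audit that flags exact known-bad versions before anything executes. A sketch (the `BAD_VERSIONS` entries are placeholders, not the actual compromised axios versions – substitute the versions from the npm advisory):

```python
import json
from pathlib import Path

# Placeholder advisory data – fill in the real compromised
# versions from the npm security advisory for this incident.
BAD_VERSIONS = {"axios": {"0.0.0-example-bad"}}

def audit_lockfile(lock_path: str) -> list[str]:
    """Return 'name@version' for any locked package pinned to a known-bad version."""
    lock = json.loads(Path(lock_path).read_text())
    hits = []
    # npm lockfile v2/v3 keeps a flat "packages" map keyed by install path
    for path, meta in lock.get("packages", {}).items():
        name = meta.get("name") or path.rsplit("node_modules/", 1)[-1]
        if meta.get("version") in BAD_VERSIONS.get(name, set()):
            hits.append(f"{name}@{meta['version']}")
    return hits
```

Run it in CI before the install step; a non-empty result should hard-fail the build rather than warn.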
Tool Spotlight
New repos and releases worth trying.
GitHub Agentic Workflows Ships Integrity-Isolated Cache and Secret Stripping
Six releases hit github/gh-aw in one week, and the security content is unusually dense. The headline feature is integrity-aware cache storage: cached data now lives on dedicated git branches scoped to integrity levels (merged, approved, unapproved, none), so an unapproved-integrity run can't read artifacts written by a merged-integrity run. Legacy cache entries with no provenance are automatically invalidated on upgrade.
Why it matters: Agentic CI is one of the few places where prompt injection meets real IAM credentials and production artifacts – these are the kinds of architectural fixes that turn "don't run agents on PRs from forks" from policy into enforcement.
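To make the scoping concrete, here's a toy model of integrity-scoped caching. The level names come from the release notes; the branch naming, read policy, and provenance check are our illustrative simplification, not gh-aw's actual implementation:

```python
from enum import IntEnum

class Integrity(IntEnum):
    NONE = 0
    UNAPPROVED = 1
    APPROVED = 2
    MERGED = 3

def cache_branch(level: Integrity) -> str:
    """Each integrity level gets its own dedicated cache branch."""
    return f"cache/{level.name.lower()}"

def can_read(run_level: Integrity, artifact_level: Integrity) -> bool:
    """A run only touches the branch for its own integrity scope, so an
    unapproved PR run can never read merged-integrity artifacts."""
    return run_level == artifact_level

def valid_entry(entry: dict) -> bool:
    # Legacy entries carrying no provenance metadata are invalidated on upgrade
    return entry.get("integrity") is not None
```

The point of deriving the branch name from the level is that isolation holds by construction – there's no per-read authorization check to get wrong.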
Trail of Bits Releases MuTON and mewt for Agent-Driven Mutation Testing
Trail of Bits open-sourced MuTON and mewt, two mutation-testing tools built on tree-sitter instead of regex. mewt is the language-agnostic core supporting Solidity, Rust, Go, and more; MuTON adds first-class support for TON blockchain languages (FunC, Tolk, Tact). Both are designed for agentic use – structured output instead of stdout dumps, mutant prioritization to skip redundant mutations, and a configuration optimization skill that lets agents set up campaigns without hand-tuning.
Why it matters: Mutation testing has always been too slow and too noisy for CI – agent-friendly output formats and prioritized mutant generation are what it needed to become a realistic automated gate rather than a quarterly audit exercise.
Trail of Bits by Bo Henderson
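The core mutation-testing loop is worth seeing in miniature. This toy version (a hypothetical `clamp` function, naive string-level mutation rather than MuTON's tree-sitter rewriting) mutates the program, re-runs the tests, and scores the suite by how many mutants it kills:

```python
def run_tests(src: str) -> bool:
    """Return True if the (possibly mutated) implementation passes the suite."""
    ns: dict = {}
    exec(src, ns)
    clamp = ns["clamp"]
    try:
        assert clamp(5, 0, 10) == 5
        assert clamp(-3, 0, 10) == 0
        assert clamp(99, 0, 10) == 10
        return True
    except AssertionError:
        return False

ORIGINAL = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""

# Each mutant swaps one operator; a surviving mutant means the
# test suite can't distinguish the mutated program from the original.
MUTATIONS = [("<", ">"), (">", "<"), ("<", "<=")]

def mutation_score() -> float:
    killed = 0
    for old, new in MUTATIONS:
        mutant = ORIGINAL.replace(old, new, 1)
        if not run_tests(mutant):
            killed += 1
    return killed / len(MUTATIONS)
```

Two mutants die, but the `<=` mutant survives – and it's actually *equivalent* to the original (at `x == lo` both return the same value), so no test can ever kill it. Equivalent mutants are precisely the noise that makes MuTON-style mutant prioritization valuable.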
datasette-llm-usage 0.2a0 Logs Full Prompts, Responses, and Tool Calls to SQLite
Simon Willison's latest Datasette plugin (0.2a0) logs complete prompts, responses, and tool calls to an llm_usage_prompt_log table in Datasette's internal SQLite database – toggled via a single config flag. Pricing and allowance features were carved out into a separate datasette-llm-accountant plugin, keeping this one focused on raw observability. Access to the prompt page now requires an explicit llm-usage-simple-prompt permission.
Why it matters: A zero-dependency forensic trail for LLM interactions that doesn't require a third-party platform – though full prompts and responses in a SQLite file is also a PII incident waiting to happen if the database isn't treated as sensitive.
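Treating that SQLite file as sensitive is easy to operationalize. A quick triage sketch (only the table name comes from the plugin; the permission check and column listing are generic SQLite introspection, and the schema is not assumed):

```python
import sqlite3
from pathlib import Path

def audit_prompt_log(db_path: str) -> dict:
    """PII triage for the prompt log: file permissions plus row count."""
    mode = Path(db_path).stat().st_mode & 0o777
    # open read-only so forensics never mutates the evidence
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT COUNT(*) FROM llm_usage_prompt_log"
        ).fetchone()[0]
        cols = [r[1] for r in conn.execute(
            "PRAGMA table_info(llm_usage_prompt_log)"
        )]
    finally:
        conn.close()
    return {
        "world_readable": bool(mode & 0o004),  # other users can read raw prompts
        "rows": rows,
        "columns": cols,
    }
```

A `world_readable: True` result on a multi-user host is the "PII incident waiting to happen" in one line.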
Community Chatter
What practitioners are debating.
Microsoft's Deputy CISO Argues AI Security Is Just Security, Applied Differently
Yonatan Zunger, Microsoft's Corporate VP and Deputy CISO for AI, published a post framing AI security as an extension of existing fundamentals – identity, data governance, Zero Trust, risk management, and SecOps – rather than a novel discipline. The piece targets CISOs and argues that organizations with mature security programs already have most of what they need; the delta is understanding where AI's specific properties (non-determinism, emergent behavior, data dependency) change the threat model within those existing pillars.
Why it matters: The "AI security is just security" framing is comforting but dangerous – it risks underinvesting in the exact places where non-determinism and emergent tool use break assumptions that traditional controls were built on.
Microsoft Security Blog by Yonatan Zunger
Quick Hits
- ChatGPT DNS Side Channel Allowed Data Exfiltration (The Register) – Check Point demonstrated DNS-based data smuggling from ChatGPT's code-execution sandbox; OpenAI patched the flaw in February.
- OpenAI Also Patches Codex GitHub Token Exposure (The Hacker News) – Alongside the DNS exfil fix, OpenAI patched a Codex vulnerability that could leak GitHub tokens from the runtime environment.
- LLMs Hallucinate and Botch Dependency Upgrade Advice (Dark Reading) – Sonatype tested ~258k AI-generated dependency recommendations and found models hallucinate packages and suggest versions with known vulns.
- Claude TypeScript SDK Gets Path Traversal CVE (VulDB) – CVE-2026-34451 lets model-supplied paths escape the Claude SDK's filesystem sandbox; fixed in anthropic-sdk-typescript 0.81.0.
- Four CrewAI CVEs Enable Sandbox Escape and RCE (SecurityWeek) – Four chainable flaws in CrewAI's Code Interpreter allow sandbox escape, RCE, SSRF, and local file reads – no full patch yet.
- AIRTBench Benchmarks Autonomous AI Red Teaming Capabilities (GitHub) – Dreadnode released AIRTBench, an open-source benchmark measuring how well LLMs can autonomously red-team other systems.