Shai-Hulud worm rode @bitwarden/cli 2026.4.0, abusing GitHub Actions secrets
AI Sec News Weekly #6 — 204 sources scanned
When did 'npm install' become a remote procedure call into your org? Package managers look like catalogs, but their true job is executing other people’s code at the most connected points in our workflow—dev laptops and CI. Worms don’t need zero-days when lifecycle hooks and build tokens line up; they just follow the permission gradient downhill.
One team learned that the hard way this week: a preinstall isn’t just pre—it’s privileged. The small mental shift that helps: dependency resolution is a control plane, not a convenience API. Once we see it that way, the blast radius maps itself, and the surprises stop feeling surprising. The details below are worth your coffee.
This Week's Stories
Trojanized @bitwarden/cli@2026.4.0 Drops ‘Shai‑Hulud’ Self‑Spreading Worm
Version 2026.4.0 of the @bitwarden/cli npm package (≈78k weekly downloads) was malicious, shipping a preinstall hook (bw_setup.js) that grabbed the Bun runtime from GitHub to execute a 10 MB obfuscated payload (bw1.js) calling itself “Shai‑Hulud: The Third Coming.” The worm harvests SSH keys, cloud creds, npm tokens, and AI dev artifacts (Claude tokens ~/.claude.json, MCP configs), plus secrets from AWS SSM/Secrets Manager, Azure Key Vault, and GCP Secret Manager, then exfiltrates to Dune‑themed public GitHub repos it creates.
Why it matters: A verified package name can now mask a worm that ransacks cloud and AI developer secrets across an org.
Security Week by Ionut Arghire
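The worm's foothold is npm's install-time lifecycle hooks, which run automatically on `npm install`. One defensive habit is setting `ignore-scripts true` in npm config and auditing which dependencies actually declare such hooks. A minimal sketch of that audit, assuming a standard node_modules layout (the function name is ours, not a real tool):

```python
import json
from pathlib import Path

# Lifecycle hooks npm runs automatically at install time.
HOOKS = ("preinstall", "install", "postinstall")

def packages_with_install_hooks(node_modules: Path) -> dict:
    """Map package name -> install-time scripts declared under node_modules."""
    hits = {}
    # Cover both unscoped (pkg/) and scoped (@scope/pkg/) package layouts.
    manifests = list(node_modules.glob("*/package.json"))
    manifests += list(node_modules.glob("@*/*/package.json"))
    for manifest in manifests:
        try:
            pkg = json.loads(manifest.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        declared = [h for h in HOOKS if h in pkg.get("scripts", {})]
        if declared:
            hits[pkg.get("name", manifest.parent.name)] = declared
    return hits
```

Paired with `npm config set ignore-scripts true`, this turns lifecycle hooks from an ambient default into something you opt into per package.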
LMDeploy VLM SSRF (CVE-2026-33626) Exploited Hours After Disclosure
A server-side request forgery in LMDeploy’s vision-language image loader (load_image in lmdeploy/vl/utils.py) lets arbitrary URL fetches hit internal services; all builds ≤0.12.0 with VLM support are affected (CVE-2026-33626, CVSS 7.5, Igor Stepansky/Orca). Sysdig honeypots saw the first exploit 12h31m after disclosure from 103.116.72.119: an eight‑minute run probing AWS IMDS, Redis, MySQL, and a secondary HTTP admin interface, plus OOB DNS to requestrepo.com and 127.0.0.1 scans, while swapping between internlm‑xcomposer2 and OpenGVLab/InternVL2‑8B.
Why it matters: Inference servers quietly inherit SSRF risk, exposing IMDS and internal DBs through seemingly harmless media loaders.
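The general mitigation pattern for loaders like this is to resolve the URL's host and refuse anything landing in private, loopback, or link-local space (which covers the 169.254.169.254 metadata endpoint) before fetching. A hedged sketch of that check, not LMDeploy's actual patch:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Reject URLs whose host resolves to private, loopback, link-local,
    or reserved space; the hostname may resolve to several addresses,
    so every one must be safe."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved):
            return False
    return True
```

Note this check alone does not stop redirects or DNS rebinding; a hardened fetcher also pins the connection to the address it validated.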
Tool Spotlight
New repos and releases worth trying.
Trailmark turns polyglot codebases into queryable graphs for Claude
Trail of Bits open‑sourced Trailmark, a Python library that parses code into a call graph and exposes it to Claude via a skills API. It uses tree‑sitter and rustworkx to index functions, classes, and call edges for fast queries—callers/callees, all paths, and reachability from untrusted inputs. It supports 17 languages (C, Rust, Go, Python, Solidity, Circom, more) and ships eight Claude Code skills for mutation triage, test‑vector generation, protocol diagrams, and attack‑surface mapping.
Why it matters: Graph‑native code reasoning enables automated prioritization of security‑critical paths across sprawling, mixed‑language repos—work that used to take humans days.
Trail of Bits by Scott Arciszewski
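Queries like "reachability from untrusted inputs" reduce to graph traversal once the call graph exists. A toy sketch of that query with a hand-written edge map and plain BFS, standing in for Trailmark's tree-sitter/rustworkx pipeline (the function names in the graph are invented):

```python
from collections import deque

# Toy call graph: caller -> callees. Trailmark derives this by parsing
# source with tree-sitter; here it is hand-written for illustration.
CALLS = {
    "http_handler": ["parse_body", "render"],
    "parse_body": ["deserialize"],
    "deserialize": ["exec_template"],
    "render": [],
    "exec_template": [],
}

def reachable_from(entry: str, calls: dict) -> set:
    """Every function transitively callable from `entry` (BFS)."""
    seen, queue = {entry}, deque([entry])
    while queue:
        fn = queue.popleft()
        for callee in calls.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen
```

Intersecting the set reachable from untrusted entry points with a list of dangerous sinks yields exactly the prioritized triage list the post describes.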
Glyph claims sub‑millisecond prompt‑injection detection for low‑latency agents
EnkryptAI released Glyph, an open‑source prompt‑injection detector that advertises sub‑millisecond latency. The pitch targets inline screening for model inputs in agents and other tight loops. The repo is light on method and benchmarking details, so the headline claim is doing most of the work.
Why it matters: If the latency claim holds, prompt‑injection checks stop being the thing teams rip out to hit P99.
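The sub-millisecond claim is at least cheap to verify yourself with a timing harness. Below, the regex detector is a deliberately naive stand-in (Glyph's actual method is undocumented); only the harness shape is the point:

```python
import re
import time

# Stand-in detector: a precompiled pattern over common injection phrasings.
# This is NOT Glyph's method, just something to time.
INJECTION = re.compile(
    r"ignore (all )?(previous|prior) instructions|disregard your system prompt",
    re.IGNORECASE,
)

def screen(text: str) -> bool:
    """Return True if the input looks like a prompt injection."""
    return INJECTION.search(text) is not None

def p99_latency_us(samples: list, runs: int = 1000) -> float:
    """Rough p99 per-call screening latency in microseconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        for s in samples:
            screen(s)
        timings.append((time.perf_counter() - start) / len(samples))
    timings.sort()
    return timings[int(runs * 0.99)] * 1e6
```

Swapping `screen` for a call into Glyph would tell you whether the headline number survives your own input distribution.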
GPT‑5.5 via Codex: Willison’s plugin rides the subscription lane
Simon Willison published llm‑openai‑via‑codex, a plugin for his llm CLI that sends prompts to OpenAI’s /backend-api/codex/responses endpoint using your existing ChatGPT/Codex subscription. It let him run his pelican benchmark against GPT‑5.5 without ChatGPT’s UI prompts in the way. OpenAI leaders have said this subscription path is supported, even as the official API rollout lags.
Why it matters: Accessing flagship models through a subscription endpoint shifts authentication, logging, and safety policy to a different control plane than your standard API stack.
Community Chatter
What practitioners are debating.
Cal.com shuts its code; veterans say AI won’t sink OSS
After years on AGPL, Cal.com closed its code, with CEO Bailey Pumfleet calling openness a “bank-vault blueprint” that AI-armed attackers now study 100× faster. The Register’s Steven J. Vaughan-Nichols pushes back, citing decades of OSS in commercial stacks and Greg KH’s skepticism. OSSRA 2026 flags a 107% vuln surge, while Simon Willison counters that open source can pool “token” audit budgets that proprietary shops must fund alone.
Why it matters: The argument is shifting from ideology to economics: whoever can burn more tokens on auditing wins the next bug wave.
The Register Security by Steven J. Vaughan-Nichols
Ex-OpenAI researcher: open-source stacks can match Anthropic’s Mythos
On Bluesky, a recap of Ari Herbert-Voss’s Black Hat Asia talk claims an open-source toolchain, properly orchestrated, can match Anthropic’s limited-access Mythos for bug discovery. That challenges the premise that capability is locked behind closed weights. The post has little engagement so far but links to the full talk.
Why it matters: Exclusivity as a safety control loses credibility when the recipe is public and the ingredients are cheap.
Bluesky (@johonotodai.bsky.social)
Quick Hits
- SGLang RCE via Malicious GGUF Chat Templates (The Hacker News) — SGLang bug (CVE-2026-5760, CVSS 9.8) lets malicious GGUF chat_templates trigger RCE when loaded via /v1/rerank.
- Malicious Packages Install LLM Proxy Backdoor on Servers (Aikido Security) — Malicious npm/PyPI packages deploy a gpt-proxy backdoor that turns Kubernetes hosts into LLM relays via Chisel tunnels to Chinese infrastructure.
- US Targets Chinese Firms Exploiting US AI Models (SecurityWeek) — White House vows crackdown on China-based firms 'exploiting' US-made AI models, with sanctions and a bipartisan bill in motion.
- Pipelock Releases Open-Source Firewall for AI Agents (GitHub) — New open-source pipelock adds an agent firewall to block risky tool calls and prompt hijacks in MCP setups.