Thursday, May 14, 2026

Mini Shai-Hulud Breach: Practical Incident Assessment & Recovery Guide for Engineering Teams

PyPI & npm Breach: Mini Shai-Hulud

    Verified by Cyber Chief - Source list: NHS England Digital CC-4781 | TanStack postmortem | CISA advisory | Microsoft Security Blog

    Key Takeaways

    Mini Shai-Hulud is a credential-stealing npm and PyPI supply-chain worm active since late April 2026. If your developer machines or CI runners installed a compromised package version from TanStack, Mistral AI, SAP CAP, UiPath, OpenSearch, or Guardrails AI — treat them as compromised and rotate every GitHub, npm, and cloud token immediately.

    • What it is: A worm that runs during npm install, uses a two-stage Bun-bootstrap architecture to execute an obfuscated credential-stealing JS payload, and harvests your credentials.

    • Who’s affected: Teams using SAP CAP, TanStack, Mistral AI, UiPath, OpenSearch, Guardrails AI npm packages — or the PyPI lightning 2.6.2/2.6.3 package — between April 29 and mid-May 2026.

    • Affected packages: On April 29, 2026, four core npm packages used in the SAP Cloud Application Programming Model (CAP), critical for building and deploying applications on SAP BTP and totalling approximately 570,000 weekly downloads, were published in malicious versions. This marked the first time the Shai-Hulud worm family has directly targeted the SAP supply chain.

    • First 24-hour action: Check your lockfiles now. If you find a match, rotate all GitHub tokens, npm tokens, and cloud credentials before anything else. If a compromised package version ran on a developer machine or CI runner, all secrets from that environment should be rotated.

    • How it spreads: Stolen GitHub tokens let it publish malicious versions of your own packages and inject GitHub Actions workflows into repos you own. The malicious payload was designed to run on the Bun JavaScript runtime, allowing it to bypass traditional Node.js monitoring tools during the npm install process.

    • Forward-looking: Raider Container Security by Cyber Chief detects this worm’s signature in containerised CI runners.

    This incident marks a significant shift in the threat landscape for SAP customers: the attack surface for SAP systems now extends beyond traditional components to npm packages that can directly expose production credentials and environments, putting developer machines, CI/CD pipelines, and ultimately enterprise SAP deployments at risk.

    This is a practical triage and decision guide for engineering leads, not a malware reverse-engineering report.

    What Mini Shai-Hulud Is (And Why You Should Care)

    You opened LinkedIn this morning and saw "Mini Shai-Hulud" trending. Your team uses TanStack. Or Mistral AI. Or one of your SAP build pipelines. Or you have no idea what you use. Either way, the question in your head right now is: do I need to drop everything and deal with this, or can it wait until Monday? Let me give you a straight answer — and then a triage playbook to find out for sure.

    Mini Shai-Hulud is the third major wave of the Shai-Hulud npm worm family. The original worm hit npm in September 2025; Shai-Hulud 2.0 followed in December 2025. Mini Shai-Hulud launched on April 29 2026 — targeting the SAP supply chain for the first time, new territory for this worm family.

    It expanded on May 11–12 2026 to TanStack, Mistral AI, UiPath, OpenSearch, Guardrails AI, and a long tail of smaller ecosystems. Total: 172 packages, 403 malicious versions, roughly 518 million cumulative weekly downloads.

    The core behaviour matters because it changes what you do next. When a developer or CI runner runs npm install on an affected package, a preinstall hook executes before the install completes. That script pulls the Bun JavaScript runtime (Bun isn't monitored by most Node.js security tools) and runs an 11.6–11.7MB obfuscated JavaScript payload.

    The payload harvests credentials, then uses your stolen GitHub tokens to spread, publishing new compromised versions and injecting GitHub Actions workflows across every repo your token can reach.

    I'm not telling you this is impossible to deal with. I'm telling you that it is doable, if you act systematically and in the right order.

    Think of this post as an ER triage guide for your software dev team, particularly if you don't have in-house security expertise:

    • Act now: builds ran with affected packages.

    • Act this week: affected versions are in your lockfiles but nothing ran.

    • Harden & have a coffee: no matches anywhere.


    Quick “Am I Affected?” Checklist

    Run these checks now:

    1. Search your lockfiles (package-lock.json, yarn.lock, pnpm-lock.yaml, requirements.txt, poetry.lock) for these exact strings to identify any compromised package version:

    • mbt@1.2.48

    • @cap-js/sqlite@2.2.2

    • @cap-js/postgres@2.2.2

    • @cap-js/db-service@2.10.1

    • @tanstack/* versions dated April–May 2026

    • @squawk/* any version

    • @mistralai/mistralai@2.2.x

    • mistralai==2.4.6 (PyPI)

    • guardrails-ai==0.10.1 (PyPI)

    • lightning==2.6.2 or lightning==2.6.3 (PyPI — note: this is the lightning umbrella package, not pytorch-lightning, which was not affected)

    2. File system grep under node_modules for large single-line JS files (~11.6–11.7MB) named execution.js, router_runtime.js, router_init.js, or setup.mjs. These obfuscated JavaScript files may be found at the package root and within package tarballs, as these are the files the payload drops.
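    If you want a one-liner for this check, the following sketch (GNU/BSD find; adjust the size threshold to taste) flags the known payload filenames over 10MB:

    find . -path '*/node_modules/*' -type f \( -name execution.js -o -name router_runtime.js -o -name router_init.js -o -name setup.mjs \) -size +10M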

    3. GitHub repo and branch audit: search your GitHub organisation for public repos whose description matches “A Mini Shai-Hulud has Appeared” or Dune-themed names. Look for branches named dependabout/github_actions/format/setup-formatter or similar — these are injected workflow branches the worm creates using stolen GitHub tokens.

    4. Inspect configuration files such as .vscode/tasks.json and .claude/settings.json in any repo you cloned or opened in VS Code or Claude Code during this period. Look for auto-run tasks, SessionStart hooks, or any malicious modifications that download Bun or execute remote scripts.

    5. Run a lockfile grep one-liner:

    grep -rE "mbt@1\.2\.48|cap-js/sqlite@2\.2\.2|cap-js/postgres@2\.2\.2|cap-js/db-service@2\.10\.1" .
    

    And for PyPI:

    pip show lightning | grep -E "^Version: 2\.6\.[23]$"
    
    6. Check GitHub Actions run logs from April 29 to now for any workflow that ran npm install on the affected packages. If you find one, assume that runner’s environment variables, repository secrets, and github_token were harvested.
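    A quick way to pull the run history for triage, assuming you use the GitHub CLI (repo name is a placeholder; check the date-filter syntax against your gh version):

    gh run list --repo your-org/your-repo --created ">=2026-04-29" --limit 200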

    7. Check your npm token list for tokens you don’t recognise — the worm mints new npm publish tokens using OIDC abuse.

    8. Check package caches (such as npm, yarn, or pip caches) for remnants of compromised packages, as these may persist even after removal from lockfiles or node_modules.

    Note: If a compromised package version ran on a developer machine or CI runner, immediately rotate all secrets from that environment, including npm tokens, GitHub tokens, cloud credentials, and Kubernetes service account tokens.

    Decision rule

    Any affected package installed on developer machines, build servers, or GitHub Actions runners between late April 2026 and now means those environments are compromised. Continue to "Immediate Response: 0–24 Hours" now — don't skip ahead to containment.

    What Happened: Campaign Overview and Timeline

    April 29 2026 - The SAP Wave

    The campaign opened with four SAP ecosystem npm packages: mbt@1.2.48 (the Cloud MTA Build Tool), @cap-js/sqlite@2.2.2, @cap-js/postgres@2.2.2, and @cap-js/db-service@2.10.1. This was the first time the Shai-Hulud worm family directly targeted the SAP supply chain — a significant escalation, because SAP build tooling runs in environments that hold production credentials.

    For SAP customers this is a direct operational risk: threats that reach SAP build environments can reach the business systems they deploy to, so security and risk management for the SAP ecosystem must now cover npm dependencies.

    SAP customers should monitor for additional compromised packages within the SAP ecosystem to mitigate further risk. The four SAP CAP packages alone had approximately 570,000 weekly downloads at the time of compromise.

    Think about SAP penetration testing if that isn't already part of your SAP security structure.

    May 11–12 2026 - The Broad Expansion

    Two weeks later, TeamPCP dramatically widened the blast radius. The second wave hit:

    • TanStack (React Query, Router, Table, Form, and related packages)

    • Mistral AI's JavaScript SDK

    • UiPath's npm libraries

    • OpenSearch JavaScript clients

    • Guardrails AI

    • Plus a long tail of smaller ecosystems: @squawk, @tallyui, @beproduct, @dirigible-ai, @draftauth/@draftlab, @mesadev, @supersurkhet, @taskflow-corp, @tolka, and @ml-toolkit-ts.

    Cross-Ecosystem: PyPI Packages Hit Too

    Simultaneously, the campaign expanded to PyPI: mistralai==2.4.6, guardrails-ai==0.10.1, and lightning==2.6.2 / lightning==2.6.3. That last one is worth calling out explicitly: lightning (the PyPI umbrella package) was compromised, not pytorch-lightning.

    The pytorch-lightning package is a separate legacy project and was not affected. If your team pins pytorch-lightning, you are not in scope for the PyPI arm of this campaign.

    Scale of breach

    Across all three waves: 172 packages, 403 malicious versions, attributed to TeamPCP. Cumulative weekly downloads reached approximately 518 million — not because 518 million developers ran a compromised install this week, but because these packages underpin major ecosystems installed as deep transitive dependencies worldwide. Supply chain attacks at this scale propagate through packages nobody consciously chose to trust.

    Affected Packages and Where They Run

    Defining "Affected"

    “Affected” means a legitimate library was trojanized for a specific version window (a compromised package version). The package itself isn’t malicious — TeamPCP compromised publishing credentials or misconfigured OIDC trusted-publishing rules, then published malicious versions that look identical to legitimate releases. Track which package versions run on developer machines and in CI environments so you can detect a compromised version when it slips in. Short-lived version windows don’t protect you if your lockfile pinned the malicious version or your build server cached it.

    Typical installation locations

    Developer laptops running VS Code or Claude Code, shared CI runners (GitHub Actions, GitLab CI, CircleCI, Jenkins), internal Docker images built from npm install steps, and SAP-specific build environments using the Cloud MTA Build Tool.

    The SAP npm packages are a particularly sensitive case — the Cloud MTA Build Tool is often run in environments that hold deployment credentials for SAP BTP, which connects directly to production data.

    Environment              Risk Level   Examples
    Developer machines       High         VS Code workstations, Claude Code users, local Node.js development
    CI/CD runners            Critical     GitHub Actions, GitLab CI, Jenkins with Node.js
    Docker build images      Critical     Node.js base images, SAP build containers
    SAP build environments   Critical     Cloud MTA Build Tool images, BTP pipelines

    Your Own Private Registries Are at Risk Too

    Here's the part most teams miss. The worm spreads via stolen npm tokens. If any developer machine or CI runner ran one of the compromised packages, the attacker may now hold npm publish rights to packages your team owns. Audit your own private registries and internal packages for suspicious version bumps and added router_init.js or setup.mjs files. Mini Shai-Hulud spreading to your internal packages — which your customers install — is a much bigger problem than the upstream incident.

    Build Your Blast Radius Inventory

    Before you move to the response phase, spend 30 minutes mapping your exposure. Which services and repos depend on each affected package? Where do those dependencies run — dev laptops only, or shared CI/CD too? Which CI environments hold high-privilege secrets (org-wide GitHub tokens, production deploy keys)? This inventory isn't optional — it determines your rotation priority order in the next section.

    If your team finds compromised packages and needs help scoping the blast radius, talk to my team: we've been helping our penetration testing as a service clients map their exposure and recover quickly from these incidents.

    How Mini Shai-Hulud Works (In Plain English)

    You don't need to understand malware internals to protect your team. But you do need to understand the kill chain, because each stage tells you what you're rotating and why.

    Stage 1 — The Preinstall Hook

    The worm lives in the package.json of the compromised package, specifically within lifecycle scripts such as the preinstall hook—or in the prepare or optionalDependencies fields pointing to a malicious GitHub repository. When your pipeline or developer machine runs npm install, npm evaluates and executes the preinstall lifecycle script automatically, before processing the rest of the dependency graph and applying security checks. This means the malicious code runs as a side effect of a dependency install that appears completely routine, without requiring you to type anything suspicious.
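    For illustration only, a trojanized release might carry a lifecycle hook like this hypothetical package.json fragment (the real filenames and commands vary per compromised package):

    {
      "name": "some-compromised-package",
      "version": "2.2.2",
      "scripts": {
        "preinstall": "node setup.mjs"
      }
    }

    Anything in that preinstall line runs with your user's permissions the moment npm install resolves the package: no prompt, no confirmation.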

    Stage 2 — The Bun Bootstrap

    The hook pulls the Bun JavaScript runtime — a deliberate choice. Bun isn't Node.js, so most process monitors and SIEM rules don't flag it. The payload (router_runtime.js, execution.js, router_init.js) runs outside the environment your security tooling was built to watch — 11.6–11.7MB of obfuscated JavaScript that downloads quietly.

    Stage 3 — Credential Harvest

    The payload scans for credentials from various sources, including configuration files (such as AWS/Azure/GCP CLI config files), SSH keys, browser session cookies, Claude Code access tokens (.claude/settings.json), GitHub CLI tokens, and environment variables. An embedded Python script scrapes secrets from GitHub Actions runner memory — capturing things that never touch disk. The payload is specifically designed to harvest credentials like GitHub tokens, npm tokens, and cloud service credentials, and exfiltrate them to a public GitHub repository created by the attacker.

    Stage 4 — Persistence and Spreading

    This is what makes Mini Shai-Hulud a worm rather than just a stealer. Using stolen GitHub tokens, it commits malicious .vscode/tasks.json and injects a claude code session hook into .claude/settings.json in every repo it can reach, ensuring the payload is reactivated each time a new coding session starts. The next developer to open one of those projects re-infects themselves automatically. Stolen npm tokens publish new compromised versions; workflow injection dumps secrets from CI environments it hasn’t directly touched, with exfiltrated data sent to a public GitHub repository created by the attacker.

    Stage 5 — Exfiltration via GitHub

    Stolen data is encrypted with attacker-controlled keys and exfiltrated to attacker-controlled public GitHub repositories — dead-drop repos with descriptions like “A Mini Shai-Hulud has Appeared”. Attackers holding a validated GitHub token, especially one with workflow scope, can dynamically instantiate collectors that enumerate repository secrets via the GitHub API. This is clever operational security: the traffic looks like normal GitHub API calls, and your corporate firewall won’t flag api.github.com as suspicious developer activity. The same visibility cuts both ways, though: public GitHub search surfaces these dead-drop repos in near real time, giving defenders a way to track compromised repositories.

    Think of a SLSA-signed malicious package like a forged passport that passes customs — the signature was real, the contents were poisoned. Provenance tells you where a package was built; it doesn’t tell you the build environment’s credentials were clean. Valid provenance does not guarantee a clean package.

    TeamPCP used two compromise methods: stolen static npm tokens and misconfigured OIDC trusted-publishing rules — letting them publish without direct account access.


    What the Payload Tries to Steal

    The scope of what Mini Shai-Hulud harvests is wider than most teams realise. Here’s what it targets by environment, from configuration files that hold credentials and API keys to secrets that exist only in CI runner memory.

    On Developer Machines

    On developer workstations, the payload goes after: GitHub tokens, SSH keys, browser session cookies, AWS/Azure/GCP CLI config files, Claude Code access tokens (.claude/settings.json), .env files in open project directories, and npm tokens in .npmrc. It may also inject a Claude Code session hook into .claude/settings.json so the payload re-executes at the start of each coding session. Developer machines have broad, long-lived access: personal GitHub accounts, multiple org memberships, VPN credentials, local database connections. A single infected workstation can expose secrets across a dozen services.

    In CI/CD Pipelines

    In GitHub Actions runners and similar CI environments, the worm targets the default github_token, repository secrets injected as environment variables, cloud credentials used for deployment steps, npm publish tokens, and Kubernetes service account tokens. Attackers may also embed a "formatter" workflow in the CI/CD pipeline that automates the extraction and transmission of secrets. The embedded Python script scraping runner memory is particularly effective: it captures secrets that were never written to disk and won't show up in a log audit. Workflow logs, however, can still help you trace the sequence of automated actions during a malicious run and reconstruct which security controls were bypassed.

    In Cloud Infrastructure

    Cloud targets: AWS STS credentials, Secrets Manager and Parameter Store values, Azure Key Vault tokens, GCP Secret Manager values, Kubernetes service account tokens, and HashiCorp Vault credentials. The worm also targets SAP Cloud MTA Build Tool credentials — the April 29 SAP npm packages were not a random choice.

    All exfiltrated data is encrypted with attacker-controlled keys before it leaves your environment. Assume everything the payload could access was successfully stolen.

    Immediate Response: 0–24 Hours

    How fast do you have to act? Fast. A compromised GitHub token is live and usable the moment it’s stolen. If you found affected packages in your environment, start here — not with “let’s assess the situation over the next few days.”

    Attackers may disguise malicious commits as routine attempts to update dependencies, using commit messages like "update dependencies" to conceal their intent. Be aware that compromised packages added to your environment may include obfuscated files, altered dependencies, or lifecycle scripts that execute malicious payloads during installation.

    Immediately uninstall any compromised packages and reinstall the last known good versions, skipping lifecycle scripts so no hook can execute during reinstallation:

    npm uninstall <package> && npm install <package>@<version> --ignore-scripts

    This removes the malicious code and prevents preinstall hooks from running during the reinstall.

    Step 1 - Freeze Releases and Pause Workflows

    Stop new releases that depend on affected packages. Pause GitHub Actions workflows that run npm install. Restrict creation of new npm tokens org-wide while triage is in progress.

    Step 2 - Uninstall and Downgrade the Compromised Packages

    Uninstall malicious versions and reinstall clean pinned versions with --ignore-scripts to prevent any further hook execution:

    npm uninstall mbt && npm install mbt@1.2.47 --ignore-scripts
    npm uninstall @cap-js/sqlite && npm install @cap-js/sqlite@2.2.1 --ignore-scripts
    npm uninstall @cap-js/postgres && npm install @cap-js/postgres@2.2.1 --ignore-scripts
    npm uninstall @cap-js/db-service && npm install @cap-js/db-service@2.10.0 --ignore-scripts
    

    For PyPI: pip install lightning==2.6.1 --ignore-installed. Use --no-deps if you need to avoid re-pulling transitive dependencies.

    Step 3 - Rotate Every Credential on Every Affected Host

    Rotate all of the following on every host that ran a compromised package:

    • GitHub personal access tokens (PATs) and gh auth token

    • npm tokens (both classic and granular access tokens)

    • Cloud credentials: AWS IAM access keys, Azure service principal secrets, GCP service account keys

    • Kubernetes service account tokens if the affected runner had cluster access

    • SSH keys stored on the machine

    • Any API keys or service tokens in .env files

    • SAP cloud credentials if you ran the cloud MTA build tool

    Don't try to assess which credentials "might have been" accessed. The payload scanned broadly — assume it found everything.

    Step 4 - Clean Up the GitHub Footprint

    The worm left traces in your GitHub organisation. Remove them now:

    • Delete branches matching dependabout/* or dependabout/github_actions/format/setup-formatter

    • Delete unknown GitHub Actions workflows added in the last 2–4 weeks

    • Find and delete any public GitHub repository created by the attacker, especially those with the description “A Mini Shai-Hulud has Appeared” or Dune-themed names, as these may have been used as destinations for exfiltrated data

    • Audit for and remove any unauthorized GitHub repositories to maintain codebase integrity and prevent malicious activity

    • Revoke unfamiliar OAuth apps, PATs, and GitHub Apps in your organisation settings
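    The GitHub CLI can speed up this audit. A hedged sketch (org, repo, and branch names are placeholders; the gh search and gh api commands behave as documented for recent gh versions):

    # Find suspicious public repos in your org
    gh search repos "A Mini Shai-Hulud has Appeared" --owner your-org
    # List branches in a repo and flag injected ones
    gh api repos/your-org/your-repo/branches --paginate --jq '.[].name' | grep '^dependabout'
    # Delete an injected branch once confirmed malicious
    gh api -X DELETE repos/your-org/your-repo/git/refs/heads/dependabout/github_actions/format/setup-formatter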

    Step 5 - Check .vscode/tasks.json and .claude/settings.json

    Inspect every repo your team has cloned for malicious auto-run entries. In .vscode/tasks.json, look for tasks that download Bun or run remote scripts with runOn: “folderOpen”. In configuration files such as .claude/settings.json, check for a claude code session hook or SessionStart hooks that curl or execute remote content, as these can ensure persistent reactivation of the payload each time a coding session starts. A developer re-infects themselves just by opening a poisoned project directory.
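    To sweep every local clone at once, a minimal sketch (assuming your checkouts live under ~/code; the patterns are heuristics, so review matches by hand):

    # Auto-run VS Code tasks that trigger on folder open
    grep -rn --include=tasks.json '"runOn"[[:space:]]*:[[:space:]]*"folderOpen"' ~/code
    # Claude Code session hooks and anything fetching remote content
    grep -rn --include=settings.json -E '"SessionStart"|curl |wget |bun ' ~/code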

    Step 6 - Isolate Affected Machines

    Pull affected developer machines and CI runners off sensitive networks. For shared build runners: stop them, snapshot for forensics if needed, and plan to rebuild from a known-good baseline rather than trying to clean in place.

    Step 7 - Notify and Document

    Keep it factual, brief, no speculation. Internal comms template: "We identified packages affected by the Mini Shai-Hulud supply chain incident in our [environment]. We are rotating credentials and rebuilding affected environments. No customer data has been confirmed exfiltrated at this time. We expect to restore normal operations by [time]."

    Simultaneously, document: packages and versions found, machines that ran them, credentials rotated, and unauthorised GitHub or cloud activity observed. This is your postmortem record and regulatory evidence.

    Short-Term Containment and Eradication: Days 1–7

    Org-Wide Code Search

    Once the immediate bleeding is stopped, run a thorough org-wide search. Look for .claude/settings.json changes, .vscode/tasks.json changes, and new or modified GitHub Actions workflow files across all repos — not just the ones you know used affected packages. Also, use public GitHub search to identify compromised repositories and monitor real-time activity related to the attack. Search your GitHub organisation for public repos with descriptions matching “A Mini Shai-Hulud has Appeared” or Dune-themed names. This npm worm spreads laterally via injected workflow branches; you may find traces in repos with no direct dependency on the compromised packages.

    Search for router_init.js, setup.mjs, router_runtime.js, and execution.js in your container image build contexts, CI runner caches, and artifact stores. Also grep your GitHub Actions workflow files for curl or wget commands fetching Bun or suspiciously large JavaScript files.
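    A starting point for that sweep, run from each repo or build-context root (the patterns are heuristics; expect some false positives):

    # Payload filenames in build contexts, caches, and artifact stores
    grep -rln -E 'router_init\.js|router_runtime\.js|execution\.js|setup\.mjs' .
    # Workflow steps that fetch Bun or pull remote scripts
    grep -rn -E 'curl|wget' .github/workflows/ | grep -iE 'bun|\.js|\.sh'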

    Rebuild Don't Clean

    Rebuild CI runners and container base images from known-good baselines. Don't try to clean in place — the worm's persistence mechanisms are subtle and you may miss one. Especially important for shared runners.

    Clear npm, yarn, and pnpm caches on all build hosts and developer workstations. Reinstall with pinned versions and --ignore-scripts for security-sensitive builds. Update base Docker images to pull clean dependency versions.
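    The cache-clearing step, for reference (all standard commands for the respective package managers):

    npm cache clean --force
    yarn cache clean
    pnpm store prune
    pip cache purge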

    Tighten GitHub Governance Now

    This step will give you the biggest bang for your buck that will keep paying back in the future. Implement these governance controls in parallel:

    • Require pull request reviews before merging changes to .github/workflows/*.yml

    • Enable branch protection on your main branch

    • Restrict which identities can publish to your npm organisation or internal package registries

    • Review and tighten the OIDC trusted-publishing rules for your npm packages — avoid trusting the entire repository, and instead narrow trust to specific repos and specific workflow file paths to reduce the risk of compromise through misconfigured trust policies

    Document the Scope

    Before you close out the incident, write down the breached packages and versions found, every affected machine and runner, credentials rotated, and unauthorised GitHub or cloud activity observed. Unexpected use of credentials that should now be invalid is a signal that rotation was incomplete.

    Assessing Impact on Developer Machines vs CI/CD

    Why the Distinction Matters

    Not all infections are equal in blast radius. You probably can't rebuild everything simultaneously — so here's how to prioritise.

    Developer laptops and workstations hold broad, long-lived access: secrets and tokens for multiple organisations, personal cloud accounts, VPN credentials, SSH keys for production bastions, years of cached credential files. A single infected workstation exposes secrets across dozens of services.

    The persistence mechanism (.vscode/tasks.json + .claude/settings.json) means the machine keeps re-infecting itself until you rebuild it.

    CI/CD environments are more access-constrained than a developer's personal machine, but hold high-sensitivity secrets: github_token with workflow-modification rights, org-wide deploy keys, production cloud credentials, and Kubernetes service account tokens with cluster-admin scope. The embedded Python script scraping runner memory captures these before they expire.

    Priority Order for Limited Capacity

    If your team can't rebuild everything in parallel, here's the order:

    1. Shared CI/build runners first: these hold org-wide secrets and the most sensitive GitHub Actions permissions. Rebuild and rotate secrets.

    2. High-privilege developer workstations: engineering leads, release engineers, DevOps/platform engineers, anyone with admin-level GitHub or cloud access.

    3. Standard developer workstations: rotate credentials first, rebuild as bandwidth allows.

    4. Any machine where the Python memory-scraper ran: full rotation of every repository secret in any workflow that ran on those machines.

    Hardening GitHub, Tokens, and GitHub Actions Workflows

    Minimise Long-Lived Tokens

    Long-lived GitHub tokens are the fuel this worm runs on. Move toward short-lived OIDC tokens for everything that supports them. For GitHub Actions, configure OIDC for AWS, Azure, and GCP deployments instead of storing long-lived cloud secrets or credentials as repository secrets. Never use bypass_2fa=true. Set token expiry to 30 days or less.
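    As a sketch of what that looks like for AWS (the role ARN and region are placeholders; the aws-actions/configure-aws-credentials action exchanges the job's OIDC token for short-lived credentials, so no static AWS keys live in repository secrets):

    permissions:
      id-token: write   # allow the job to request an OIDC token
      contents: read

    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: aws-actions/configure-aws-credentials@v4
            with:
              role-to-assume: arn:aws:iam::123456789012:role/deploy-role
              aws-region: us-east-1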

    Control Who Can Modify Workflows

    The most impactful single control you can implement today: require pull request reviews before merging changes to any file in .github/workflows/. Also apply this to package.json scripts - a malicious preinstall hook slipped in via an unreviewed PR is exactly how this npm worm seeds supply chain attacks downstream.

    Your branch protection rules on main should also include a requirement that at least one reviewer approves workflow file changes.

    Tighten npm Trusted-Publisher OIDC Rules

    If your npm packages use trusted publishing, narrow the OIDC trust rules to specific repo names (not wildcards), specific workflow file paths, and specific branch refs (refs/heads/main only). A rule like “any workflow in any branch of this account can publish” is exactly the misconfiguration this worm exploited.

    Watch Workflow Logs for These Signals

    Set up alerts — or at minimum, periodically grep your GitHub Actions workflow logs — for:

    • Unexpected steps or unauthorised actions in workflow logs, which record the sequence of automated actions and can reveal security bypasses

    • Sudden creation of public repositories in your organisation with Dune-themed names or descriptions matching “A Mini Shai-Hulud has Appeared” — the worm’s dead-drop exfiltration signature

    • Workflow runs that download the Bun JS runtime or fetch large JS files via curl

    • format-results artifacts containing unexpectedly large JSON blobs

    • New branches matching dependabout/* patterns

    • Workflow runs that create new npm tokens or push to unfamiliar npm packages

    Audit npm Tokens After Rotation

    Don't stop at rotation. This npm worm mints new npm publish tokens during execution — tokens that persist even after you rotate the originals. Check your npm account settings for unrecognised granular access tokens and revoke them.

    Improving Dependency and Package Hygiene

    Reduce future supply chain breaches through improved dependency management. Cyber Chief's Raider Container Security module will do this for you on auto-pilot so you can concentrate on keeping your CEO happy by shipping new features on time.

    Enforce Lockfiles as Code

    Treat your lockfile (package-lock.json, yarn.lock, pnpm-lock.yaml) as a first-class code artefact. Every lockfile change should go through pull request review — not just the package.json update. A lockfile change that you didn't expect is a red flag. Make this a required check in your CI configuration: if package-lock.json changes in a PR but package.json doesn't, that PR needs explanation.
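    One way to wire that check into a PR pipeline, as a hedged sketch (assumes a GitHub Actions pull-request job where GITHUB_BASE_REF is set and the base branch has been fetched):

    base="origin/${GITHUB_BASE_REF}"
    changed=$(git diff --name-only "${base}...HEAD")
    if echo "$changed" | grep -q '^package-lock\.json$' && ! echo "$changed" | grep -q '^package\.json$'; then
      echo "Lockfile changed without a matching package.json change - flag for review" >&2
      exit 1
    fi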

    Pin Exact Versions for Critical Build Tools

    Broad semver ranges (^2.2.0, ~2.2.0) are convenient until they're not. For critical build tooling (the Cloud MTA Build Tool, SAP core packages, GitHub Actions runner images, anything that touches your credentials or your deployment pipeline), pin exact versions.
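    In practice that means installing with exact pins, or making exact pinning the project default:

    npm install --save-exact mbt@1.2.47
    npm config set save-exact true --location=project   # writes to the project .npmrc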

    The 24–48 hour dependency cooldown window is a realistic policy for most engineering teams: don't install a new minor or patch release of a critical package until 24–48 hours after it's published. That window catches most of the incident reporting for a compromised package release.

    Build an Internal Allowlist for High-Impact Packages

    Maintain an internal allowlist of approved versions for your highest-risk packages — the ones with deep access to secrets or build credentials. Cross-reference it against the known compromised versions from this incident: mbt@1.2.48, @cap-js/sqlite@2.2.2, @cap-js/postgres@2.2.2, @cap-js/db-service@2.10.1, and the others in the IOC list. Any npm install that tries to pull a blocked version should fail loud.
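    A minimal CI gate in the same grep style used earlier in this guide (matching syntax varies by lockfile format, so treat the patterns as a starting point, not an exhaustive IOC list):

    BLOCKLIST='mbt@1\.2\.48|@cap-js/sqlite@2\.2\.2|@cap-js/postgres@2\.2\.2|@cap-js/db-service@2\.10\.1'
    if grep -rEq "$BLOCKLIST" package-lock.json yarn.lock pnpm-lock.yaml 2>/dev/null; then
      echo "Known-compromised package version found in lockfile" >&2
      exit 1
    fi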

    SBOM-Based Automated Blocking

    The manual allowlist works for your direct dependencies. For your full transitive dependency graph — the hundreds of packages that get pulled in because something you use depends on them — you need automation. Cyber Chief's Dependency module generates a live SBOM and automatically flags or blocks known-malicious versions as new incidents are registered. See how Cyber Chief's dependency security module works.

    Securing Containers and Build Environments with Raider Container Security

    Where This npm/PyPI Breach Actually Executes

    Many of the most impactful infections happened inside ephemeral containers. GitHub Actions runners are containers. Dockerised SAP build agents are containers. Any workflow that runs npm install and holds a github_token, cloud deploy credential, or npm publish token is a high-value target.

    The worm runs, harvests, and exfiltrates within the lifetime of a single CI job — the container is gone before most post-hoc log analysis would catch it.


    What Raider Detects

    Raider Container Security monitors process behaviour inside those containers in real time. It detects:

    • Unexpected downloads of the Bun JavaScript runtime from within a build step

    • Execution of large obfuscated JavaScript files (router_init.js, execution.js, router_runtime.js)

    • The embedded Python memory-scraping behaviour targeting GitHub Actions runner process memory

    • Unusual outbound connections from build containers to GitHub dead-drop repositories

    • npm token minting operations triggered from within a build step

    When Raider flags one of these behaviours, it terminates the process and alerts your team before exfiltration completes. The difference between "caught mid-execution" and "found out three weeks later when GitHub notified us about unexpected publishes."

    Original Cyber Chief Data Point

    We ran Raider Container Security across customer CI environments scanning for Mini Shai-Hulud signatures across the 172 known compromised packages. [NEEDS REAL DATA] — insert: environments scanned, % where an IOC was present but undetected, most common missed signal (expected: Python memory-scraper inside ephemeral runners).

    Why This Matters for Teams Without a Security Hire

    I'm not telling you that container security is easy. I'm telling you the hard parts — network behaviour baselining, process allow-listing, real-time memory-scrape detection — don't have to be manual. Raider operationalises those checks so your team ships, not threat-hunts. You don't need a security team. You need one tool watching the containers.

    Supply chain attacks that execute inside ephemeral CI containers are the hardest class of threat to catch after the fact. When Raider detects "A Mini Shai-Hulud has Appeared" dead-drop traffic from a build container, it terminates the process before exfiltration completes. Setup is a one-time CI integration. Start a free Raider Container Security scan — no credit card, 5 minutes.

    Your Long-Term Strategy: Building Supply Chain Resilience

    The short-term actions get you through this incident. Long-term resilience means not starting from zero the next time. And there will be a next time — Mini Shai-Hulud is the third wave in eight months.

    A Lightweight 3–6 Month Roadmap

    A lightweight supply-chain policy doesn't require a dedicated security team. It requires four things: dependency pinning for critical build tools, npm script review in your PR process, GitHub Actions governance (workflow change reviews, scoped OIDC, branch protection), and a credential rotation playbook you've actually practiced. Add a persistent monitor for public repos in your GitHub organisation with descriptions matching "A Mini Shai-Hulud has Appeared" — the dead-drop naming convention this campaign used will appear in future waves.

    Set a quarterly review cadence for critical build tools — the cloud MTA build tool, SAP CAP core packages, TanStack router, and any package between your CI pipeline and production. Test updates in pre-production monitored by Raider before rolling out. 30 minutes per quarter per package family.

    Maintain SBOM-driven inventory of which services, containers, and developer machines use which versions of your highest-risk packages. When the next supply chain attacks land, you want to be answering "are we affected?" in 10 minutes, not 10 hours.

    Run at least one IR drill per year for npm + GitHub Actions supply chain attacks. Walk through: someone reports a compromised package your team uses. How long to identify affected environments? How long to rotate credentials? How long to rebuild CI runners? The drill reveals gaps before an attacker does. And make Raider Container Security the default for every new pipeline.

    Mini Shai-Hulud won't be the last one. There have been three Shai-Hulud waves in eight months. The teams that came out of this in good shape weren't the ones with the biggest security budgets — they were the ones with lockfiles pinned, GitHub Actions hardened, and containerised builds being watched. None of that requires a security hire. All of it requires you to start this week.

    Run a free Raider Container Security scan against your CI runners and build containers to see if Mini Shai-Hulud signatures are present. No credit card required, it takes less than 7 minutes to see results. → Start your free Cyber Chief Express Scan



    FAQ

    Do I need to wipe every developer machine that ran npm install on an affected package?

    You're not completely wrong to ask this — the persistence mechanism (.vscode/tasks.json + .claude/settings.json modifications) means cleaning in place is unreliable. For machines with broad access (admins, release engineers, senior devs), rebuild from a known-good baseline. For lower-privilege machines, a full credential rotation plus manual removal of the malicious config files is the minimum viable response.

    Are my GitHub Actions workflows safe if I only used the default github_token?

    No. The default github_token has read/write access to the triggering repository and can be used to modify GitHub Actions workflows. Mini Shai-Hulud specifically uses stolen GitHub tokens to inject new workflow files and add dependabout/ branches. If a runner that held your default github_token ran a compromised package, audit your workflows and branches for injected changes and tighten the token's default permissions.

    We found no affected versions in our lockfiles — are we in the clear?

    Not entirely. Check three additional things: your npm and Docker build caches (malicious versions can be cached even if your lockfile was later updated), any base container images built during the April 29 – May 14 window that included an npm install step, and your own internal packages for any suspicious version bumps or added setup.mjs / router_init.js files. If all three are clean, you're in good shape — but harden your GitHub Actions workflows anyway.

    Is Mini Shai-Hulud only a JavaScript/npm problem?

    No. The PyPI side of the campaign — mistralai==2.4.6, guardrails-ai==0.10.1, lightning==2.6.2, and lightning==2.6.3 — demonstrates the worm family's cross-ecosystem reach. The lightning package (the PyPI umbrella package, not the older pytorch-lightning legacy package) was specifically compromised. Any Python environment that ran pip install on those versions in the affected window is also in scope.

    How does Raider Container Security fit with other security tools we already use?

    Raider focuses on runtime container behaviour — it watches what processes actually execute inside your build containers and CI runners. It complements rather than replaces SCA tools (which scan static dependency trees) and SAST tools (which analyse source code). The gap Raider fills is ephemeral runtime behaviour: the Bun download that happens mid-build, the memory-scraping Python process that runs and exits before your log-based SIEM would catch it. For supply chain attacks that execute inside CI containers, runtime monitoring is the layer most teams are missing.

    How is Mini Shai-Hulud different from the original Shai-Hulud worm?

    The original Shai-Hulud (September 2025) targeted Node.js ecosystems via npm lifecycle hooks and spread via stolen npm tokens. Shai-Hulud 2.0 (December 2025) added GitHub Actions workflow injection and cross-repo persistence via .vscode/tasks.json. Mini Shai-Hulud adds three new capabilities: SAP supply chain targeting (first time for this family), PyPI cross-ecosystem expansion, and the .claude/settings.json persistence vector targeting Claude Code users. Scale is also larger: 172 packages vs the original worm's ~20.

    Did the malicious packages pass npm provenance and SLSA checks?

    Yes — and this is the part that catches most teams off guard. SLSA provenance and npm trusted publishing tell you that a package was built in a specific, logged environment. They do not tell you that the environment's publishing credentials were clean at build time. In cases where TeamPCP exploited misconfigured npm OIDC trusted-publishing rules, the malicious packages were published through GitHub Actions — so they carry valid provenance attestations. Treat valid provenance as a positive signal, not a guarantee. The correct check is: does the package content match what the maintainer intended to ship?