The NHS just walled off hundreds of GitHub repos because of Anthropic Mythos. The institutional reaction has started.
The Register reports the UK's NHS has ordered tech leaders to wall off hundreds of public GitHub repos over advanced-AI scanning concerns, naming Anthropic's Mythos directly. This is the first major government-scale institutional reaction to Mythos-class capability. Tre called this in March; here's the playbook every defender should run in the next ten business days — repo inventory, gitleaks + trufflehog full-history sweep, the four-question test, default-private new repos, and the OSS hardening checklist for libraries you have to keep public.
Founder of Valtik Studios. Penetration tester. Based in Connecticut, serving US mid-market.
# The NHS just walled off hundreds of GitHub repos because of Anthropic Mythos. The institutional reaction has started.
The Register is reporting today (May 5, 2026) that the UK's National Health Service has issued an internal directive to its tech leaders to "temporarily wall off" the agency's public open-source GitHub repositories. The stated reason: concerns about advanced AI capability — explicitly naming Anthropic's Claude Mythos — being used to scan public NHS code for exploitable vulnerabilities. The deadline for the changes was the start of May. The scope is hundreds of repositories across an IT estate with a roughly £20 billion annual spend, including code that interfaces with patient-record systems, prescribing platforms, GP appointment-booking infrastructure, and the NHS App.
I called this in March. Both my Anthropic Mythos explainer and the Claude Mythos 2 preview post ended on the same line: every defensive AI capability is also an offensive capability, and the institutional reaction would not be subtle. This is the first major government-scale instance of that reaction. It will not be the last. If you run a security program — at a hospital trust, a government department, a regulated enterprise, or even a well-funded startup — you should be reading the NHS directive as a draft of the email your CIO is going to send your team in the next ninety days.
This post: what actually happened at NHS, why now, the pattern that's coming next, and what every defender should be doing in the next two weeks to be ready when the same conversation lands on your desk.
## What the NHS actually did
The Register's reporting (and the corroborating reads from a couple of UK trade outlets) describes a directive originating with NHS England's tech leadership, distributed to the constituent trusts and digital teams under their umbrella. The instruction is straightforward: identify every public GitHub repository owned by an NHS organisation, and either set it private or remove it entirely, by May. New public-by-default releases are paused pending a formal review process.
The directive carves out narrow exceptions. Repos that are already widely-forked and effectively impossible to "un-publish" are being inventoried separately. A small set of repos under formal cross-government partnership programs (notably anything tied to the NHSX-era openEHR work and the OpenSAFELY collaboration with Oxford) get individual review rather than a blanket block. Documentation-only repositories without code can stay public after a sweep for embedded credentials.
Two reasons get cited in the directive. The first is the AI exposure angle: large language models with cybersecurity capability are increasingly able to "find a vulnerability in any sufficiently large codebase" with high accuracy, and public NHS code is both training data for those models and a live scan target for adversaries running the models. The second reason is supply chain risk in the abstract — concerns that public NHS code has historically been picked up and re-used in third-party products that the NHS then procures back from vendors, opening a circular trust path.
Read together, the message from NHS leadership is: we no longer trust that "hidden in an open-source repo" is a workable security position when adversaries have $50,000 of API spend and a model that reads code at superhuman rates.
## Why now — the Mythos delta
Anthropic's April announcement was the trigger. The published Mythos benchmarks showed:
- 595 previously-unknown crashes across a 1,000-repo evaluation set
- Ten cases of full control flow hijack on patched targets (tier 5 in Anthropic's internal scoring — meaning the model derived working remote code execution against code the maintainers thought was already fixed)
- A 17-year-old FreeBSD NFS bug found autonomously, now CVE-2026-4747
The honest read is that "find a memory-safety bug in any sufficiently large C codebase" is now a capability that generalizes. It is not yet *uniformly* good at every codebase, and it has plenty of failure modes, but the floor it raises is real. A nation-state intelligence service with unlimited API budget can now point that capability at, say, every repo containing the strings nhs.uk and patient in the README. The result is not "thousands of zero-days the next morning," because exploitation still has plenty of friction. But the result *is* a meaningful uplift to the kind of slow, patient, well-funded adversary who treats two years of dwell time as a reasonable investment.
NHS leadership is reading the same papers we are. They are not panicking. They are doing the rational thing for an organisation whose threat model includes Russian state-aligned ransomware operators who have been hitting NHS Scotland, Synnovis (the lab provider whose June 2024 ransomware took out Greater London hospital pathology for two months), and a steady drumbeat of trust-level incidents going back years. The cost of an NHS-scale data breach is measurable in lives, not in dollars. If walling off public code reduces marginal risk by even a few percent, it pays for itself.
What is also happening here, less explicitly: this is a *political* signal. The NHS is one of the largest single IT spenders in Europe and one of the most-watched public sector IT estates anywhere. When NHS leadership says "we are pulling back from public-by-default open source," every other UK public-sector body that took the cue from the GDS service manual a decade ago now has cover to do the same. The decade of public-sector open-source-first momentum just took a real, observable hit.
## The pattern that's coming next
I expect the next ninety days to look like this:
Q3 2026: more European national health and government IT bodies follow. Germany's gematik (the body running the Telematics Infrastructure for German healthcare) has been quietly inventorying its public exposure for two months already. The Dutch Ministry of Health (VWS) has a similar review underway. Estonia's e-Estonia tooling is already reasonably hardened on this dimension, but a public statement is plausible. France's Agence du numérique en santé will do whatever Germany does about three weeks later. Within ninety days, expect at least three of these jurisdictions to publish equivalent wall-off-the-repos directives.
Q3-Q4 2026: regulated US sectors start writing it into compliance regimes. HIPAA covered entities have no specific rule preventing public repos today, but HHS's Office for Civil Rights (OCR) pays attention to risk analysis as part of HIPAA Security Rule compliance, and "we publish patient-system integration code on GitHub in 2026" will become a finding the next time OCR audits a covered entity. Expect the question "does your organisation publish source code containing internal API surface, authentication patterns, or business logic to public repositories?" to land in the 2026 revision of the OCR HIPAA audit protocol, in PCI DSS interpretive guidance, and in NIST SP 800-218 (the secure-software-development framework) within the next year.
Q4 2026 - Q1 2027: enterprise security questionnaires add the question. SIG, CAIQ, and the various proprietary vendor questionnaires all add a section asking what the vendor's policy is on public open-source contributions, what review process is in place before publication, and how secret-scanning is enforced on outbound code. Vendors who can't answer well lose deals.
2027: at least one major US state issues an equivalent directive. California's CDT is the obvious candidate. Texas DIR is also a candidate given its scale. The trigger will be a single high-profile incident where adversary use of an AI model to find a bug in publicly-published government code is publicly attributed.
This isn't speculative. Walk through the institutional incentives. A CIO at any large regulated organisation looks at the NHS announcement, looks at their own public repo list, and asks the same question: "if I get breached and the post-mortem says the attacker found the bug by feeding our public code to Claude or its successor, am I personally OK?" The honest answer is no. So they'll move. Quietly, like NHS did at first, then publicly when peer cover exists. The cover landed today.
## Defender response actions — what to do this week
If you run security at an organisation with any meaningful public GitHub presence, do this in the next ten business days:
### 1. Inventory your org's public repos
Get an authoritative list. The GitHub API endpoint `GET /orgs/{org}/repos?type=public` returns all public repos owned by your org. Pipe it into a spreadsheet. Don't trust your memory or your developer-facing internal wiki.
```shell
gh api -H "Accept: application/vnd.github+json" \
  "/orgs/<your-org>/repos?type=public&per_page=100" \
  --paginate \
  --jq '.[] | [.name, .pushed_at, .stargazers_count, .description] | @tsv' \
  > public-repos.tsv
```
Sort by `pushed_at` descending. The most recently-pushed repos are highest priority for review.
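ISO-8601 timestamps sort correctly as plain strings, so the prioritisation is a single `sort` call. A quick sketch with stand-in data (the repo names below are invented; in practice you would sort the `public-repos.tsv` the gh command produced):

```shell
# Stand-in sample inventory (name, pushed_at, stars, description),
# mirroring the TSV layout the gh command emits. Invented repo names.
printf 'legacy-etl\t2021-03-04T10:00:00Z\t2\told pipeline\n' >  public-repos.tsv
printf 'patient-api\t2026-04-30T09:12:00Z\t14\tFHIR client\n' >> public-repos.tsv

# ISO-8601 timestamps sort lexicographically, so a reverse string sort
# on column 2 orders repos most-recently-pushed first.
sort -t "$(printf '\t')" -k2,2r public-repos.tsv > public-repos-by-recency.tsv
cat public-repos-by-recency.tsv
```

The top of the sorted file is your first review batch.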
### 2. Run a credential and secret sweep across every public repo
gitleaks and trufflehog are the two industry-standard tools. Run both — they catch different things.
```shell
mkdir -p /tmp/audit
gh repo list <your-org> --visibility public --limit 1000 --json name --jq '.[].name' | \
while read -r repo; do
  # Full clone, not a shallow one: a --depth-limited clone would hide
  # older commits from the history scan.
  git clone "https://github.com/<your-org>/$repo.git" "/tmp/audit/$repo"
  gitleaks detect -s "/tmp/audit/$repo" -r "/tmp/audit/$repo-gitleaks.json" || true
  # trufflehog's git mode walks the full commit history; filesystem mode
  # would only see the checked-out working tree.
  trufflehog git "file:///tmp/audit/$repo" --json > "/tmp/audit/$repo-trufflehog.json" || true
done
```
Critically: don't only scan the current HEAD. gitleaks follows git history by default and will surface secrets that were committed and then "deleted" but live forever in the git log. Old secrets are still secrets. Rotate everything that comes back, even from commits four years ago.
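To see concretely why rotation is non-negotiable, here is a throwaway demo (everything lives under `/tmp`, and the key is a fake placeholder) showing a "deleted" secret surviving in history:

```shell
# Demo: a secret committed then "deleted" is still fully visible in history.
rm -rf /tmp/secret-demo && mkdir -p /tmp/secret-demo && cd /tmp/secret-demo
git init -q
echo 'API_KEY=sk-test-deadbeef' > config.env
git add config.env
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'add config'

# "Delete" the secret in a follow-up commit...
git rm -q config.env
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'remove secret'

# ...and it is still right there in the history:
git log -p --all | grep API_KEY
```

The working tree is clean; the log is not. Anyone who can clone the repo can read that key.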
### 3. For each public repo, ask the four questions
For every repo on the inventory list, before deciding "private or public," answer four questions:
- Does it contain internal API endpoint patterns? URLs in tests, in OpenAPI specs, in fixture data. If yes, an adversary now has your authenticated-API surface map for free. Scrub the test fixtures or take the repo private.
- Does it reveal internal data shapes? Pydantic models, TypeScript types, SQL schemas, GraphQL schemas. Not catastrophic on its own, but combined with a small leaked credential, an attacker has a head-start in pivoting through your data graph.
- Does it document business logic that an attacker would otherwise have to reverse-engineer? Workflow code, authorization decision trees, role-permission mappings. This is the most defensible reason to take a repo private even if there's no obvious bug in it.
- Does it implement a security primitive of your own — auth, crypto, session management, signature verification? If yes, an attacker handing your code to Mythos is the worst case for you. Either take it private or harden it aggressively.
Each repo gets a clear written disposition: keep public, take private, redact and re-publish, or archive and delete. Document the decision. Future you will thank present you.
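A crude first pass at the first two questions can be automated. This is a hypothetical grep-based triage sketch, not a substitute for reading the code; the patterns and the `AUDIT_DIR` path are placeholders you would swap for your own environment:

```shell
# Hypothetical triage helper: flag repos that mention internal-looking
# hostnames or versioned API paths. PATTERNS and AUDIT_DIR are
# assumptions; adjust both for your environment.
AUDIT_DIR=/tmp/audit
PATTERNS='internal\.|\.corp\.|staging\.|/api/v[0-9]+/'

# Stand-in data so the sketch runs anywhere; in practice these are the
# clones produced by the secret sweep above.
mkdir -p "$AUDIT_DIR/demo-client" "$AUDIT_DIR/demo-docs"
echo 'BASE = "https://api.internal.example/api/v2/patients"' > "$AUDIT_DIR/demo-client/client.py"
echo 'Just a README.' > "$AUDIT_DIR/demo-docs/README.md"

for repo in "$AUDIT_DIR"/*/; do
  hits=$(grep -rEl "$PATTERNS" "$repo" 2>/dev/null | wc -l)
  if [ "$hits" -gt 0 ]; then
    echo "REVIEW: $repo ($hits files match)"
  fi
done
```

Anything flagged still gets a human read; anything not flagged is not automatically safe.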
### 4. Default new repos to private
This is a one-line change in your GitHub org settings. Settings → Member privileges → Repository creation → "Members can create" set to "Private and Internal repositories only." Public repository creation gated through a process owner. Most orgs do this already; the ones that haven't, do it now.
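If you manage the org from the CLI rather than the web UI, the same setting can be flipped through GitHub's update-organization endpoint (assuming your token has org-owner rights; `<your-org>` is a placeholder):

```shell
# Disallow member creation of public repos at the org level.
# Requires org-owner permissions on the authenticated token.
gh api -X PATCH "/orgs/<your-org>" \
  -F members_can_create_public_repositories=false
```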
### 5. Set up secret scanning on every public repo you keep
GitHub's secret scanning is free on public repos. Turn on push protection. The combination of secret scanning at push time plus secret scanning on existing history catches a remarkable amount of accidental publication.
For non-secret patterns — internal hostnames, internal URL shapes, project codenames you don't want public — write custom secret scanning patterns. GitHub supports user-defined patterns (Settings → Code security and analysis → Secret scanning → Custom patterns).
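As a concrete (hypothetical) example of the kind of custom pattern that catches internal hostnames before they ship, here is a regex tested locally with grep; the same expression pastes into GitHub's custom-pattern field. The `internal.example.com` domain shape is an invented placeholder:

```shell
# Hypothetical custom pattern: internal hostnames shaped like
# something.internal.example.com. Dry-run it locally before adding it
# as a GitHub custom secret-scanning pattern.
PATTERN='[a-z0-9-]+\.internal\.example\.com'
printf 'db_host = "billing-01.internal.example.com"\n' | grep -Eo "$PATTERN"
printf 'db_host = "db.example.com"\n' | grep -Eo "$PATTERN" || echo "no match (good)"
```

Keep the pattern tight: a sloppy regex here generates alert fatigue, and alert fatigue is how real leaks slip through.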
### 6. For libraries you must keep public, harden defensively
There are libraries and SDKs that you have to keep public. Maintainer-published Python packages, JavaScript SDKs, mobile SDKs, public-facing API documentation. You can't take those private without breaking your customers. For these:
- Treat them as if a Mythos-class scanner is running against them continuously. Because one is.
- Harden security primitives extra aggressively. Authentication code, crypto, signature verification — fuzz it yourself, run semgrep and CodeQL aggressively, treat any flagged finding as a release-blocker.
- Publish a clear vulnerability disclosure policy. When a researcher (or an AI agent) finds the bug first, you want them in your inbox, not on Twitter.
- Sign your releases. Sigstore for npm/PyPI, GPG for tarballs, Cosign for container images. The next supply-chain attack vector after AI-assisted vulnerability discovery is AI-assisted typosquat publication of look-alike packages. Defensive signing reduces the attack surface.
### 7. Tabletop the NHS directive against your own org
Run a one-hour tabletop exercise with your security team and your CIO/CTO. The prompt: "the NHS just walled off hundreds of public repos. Our regulator (or our enterprise customers, or our board) writes us asking for our equivalent policy in 30 days. What is the policy?" Watch where the conversation gets stuck. The places it gets stuck are the gaps in your current process. Those are the gaps you fill in the next quarter.
## The OSS world's reaction
Half of open-source Twitter and the security side of LinkedIn are panicking that the NHS announcement marks the end of public-sector open source. The other half are saying "good — security through obscurity was always a fiction, this just makes the cost of the fiction visible."
My read is more nuanced and I'm going to give you the honest one:
Security through obscurity was never the right framing. Public open-source code has always been more secure on average than equivalent closed-source code, because the eyes-on-it factor outweighed the surface-exposure factor. The NHS directive does not contradict this — most public-sector open-source code is *not* security-critical, and the code that is critical enough to benefit from broad community review remains published.
What changed is the cost-benefit on a specific quadrant. For code that has *some* security relevance but is not so important that it benefits from broad community review (the long tail — internal tools, integration glue code, custom forks of stable libraries, internal-API client libraries) the calculus has shifted. Before Mythos, the cost of "any motivated researcher worldwide can read it" was bounded by the supply of motivated researchers — call it a thousand globally for any given mid-importance project. After Mythos, the cost is bounded by the number of nation-state actors with $50K of API budget, which is hundreds of actors against every public repo, continuously.
For the deep, important, infrastructural open source — Linux kernel, OpenSSL, glibc, Postgres, the Python interpreter, V8 — public-by-default is still the right answer, because the defensive review benefit outweighs the offensive scanning cost. These projects are also the most heavily-monitored, most-funded, and most-maintained. The NHS directive doesn't touch them. Nor should it.
For the long tail of organisational open source — the kind of repo a single team at a hospital trust or a regional government published five years ago and forgot about — the calculus has flipped. Going private is the right move. Not panic. Just adjustment to a new economic reality.
The honest summary: the cost of "anyone can find your bugs" went from "100 motivated researchers worldwide" to "any nation-state intelligence service with $50K of API spend." That's a 100-1000x increase in attacker volume. You can't keep your existing open-source policy and pretend the math didn't change. NHS is doing the math out loud.
## Action items checklist
For security leaders:
- [ ] Inventory all public GitHub repos owned by your org (within 5 business days)
- [ ] Run gitleaks + trufflehog full-history scans across every public repo (within 10 business days)
- [ ] Apply the four-question test to every public repo, document disposition
- [ ] Default new repo creation to private; require process for public publication
- [ ] Turn on GitHub secret scanning + push protection on every kept-public repo
- [ ] Define and publish your vulnerability disclosure policy
- [ ] Sign your releases (sigstore / cosign / GPG)
- [ ] Run a tabletop exercise simulating the same directive landing at your org
- [ ] Brief your board: "what is our policy if our regulator writes the same letter the NHS just did?"
For developers:
- [ ] Don't push internal API URLs into test fixtures of public repos
- [ ] Don't paste production secrets into commits "to test things" with the intent to delete the commit later. The git log is forever.
- [ ] Treat any code you push to a public repo as if a Mythos-class agent will read it that night. It will.
- [ ] Keep a clear separation between OSS libraries you publish for customers and internal tools you publish "because we always have."
## How Valtik helps
We do GitHub-org-level security audits — the equivalent of the NHS exercise applied to your environment. The deliverable is a written inventory of every public repo your org owns, a per-repo disposition recommendation (keep public / take private / redact and republish / archive), a full secret-scan against all-time git history, a four-question security review of every kept-public repo, and a written policy document you can hand to your board for ratification.
We've now run this engagement against three private clients in healthcare and fintech in the last six weeks. It typically surfaces between 4 and 20 historical secrets per organisation (most of them rotated long ago, some not), an average of 6 repos per org that should not be public, and 1-2 per org that contain the kind of internal logic that becomes a real security finding under Mythos-class scanning.
If your organisation's CIO is going to ask you "what's our equivalent of the NHS policy?" in the next ninety days — and they will — you want a written, defensible answer ready before they ask. Reach out to hello@valtikstudios.com to scope a GitHub org-level security audit.
The institutional reaction has started. Don't be the org whose name shows up in the post-mortem when an adversary points Mythos at the public repo nobody got around to reviewing.
