Valtik Studios
Container Security · 2026-02-16 · 12 min

Docker Registry Security: Anonymous Pulls, Image Tampering, and the Default Nobody Should Use

Docker Registry is where your container images live. Every production Docker deployment pulls from a registry on every deploy. The default Docker Registry deployment is exposed, unauthenticated, and allows image tampering. A practical walkthrough of the attack surfaces, metadata leakage, and the hardening every self-hosted Docker Registry needs. Plus when to stop self-hosting and use a managed alternative.

Tre Trebucchi · Founder, Valtik Studios. Penetration tester based in Connecticut, serving the US mid-market.

The registry problem

Before we go further: a lot of what gets published on this topic is wrong or oversimplified. The real picture is messier.

Docker's default approach to containers is elegant: define images, push them to a registry, pull them for deployment. The registry is the single source of truth for what runs in production.

Docker Registry (the official open-source implementation, sometimes called Distribution) is simple to deploy: docker run -d -p 5000:5000 registry:2. Many organizations did exactly that and called it done. The result is tens of thousands of Docker Registry deployments on the internet with no authentication, readable by anyone who connects on port 5000.

This isn't a theoretical concern. Security researchers, penetration testers, and attackers routinely find exposed Docker Registries during reconnaissance. The registry exposes:

  • Every image your organization builds
  • Image tags and history
  • Image layers (often containing source code, secrets, configuration)
  • Push capabilities (tampering with images)
  • Authentication-layer misconfigurations

This post walks through the specific attack patterns we find on Docker Registry deployments, the hardening approach for self-hosted registries, and when it makes sense to migrate to managed alternatives.

Attack pattern 1: Anonymous registry exposure

Docker Registry by default doesn't enforce authentication. Running docker run registry:2 starts a server that accepts any client's pulls and pushes.

Enumeration:

# List repositories in the registry
curl http://target.example.com:5000/v2/_catalog

# List tags for a specific repository
curl http://target.example.com:5000/v2/my-company/webapp/tags/list

# Pull any image
docker pull target.example.com:5000/my-company/webapp:latest

If the registry responds without authentication requirements, any of these work.
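The HTTP status of the /v2/ endpoint alone tells you the registry's posture. A minimal sketch of a probe helper (the target host is a placeholder; the classification logic is separated out so the commented curl line is the only network-dependent part):

```shell
# Map the HTTP status of GET /v2/ to an auth posture.
# 200 means the API answered with no credentials; 401 means auth is enforced.
classify_registry() {
  case "$1" in
    200) echo "OPEN: anonymous API access" ;;
    401) echo "auth required" ;;
    *)   echo "inconclusive (status $1)" ;;
  esac
}

# Probe (TARGET is a placeholder for the host under test):
# status=$(curl -s -o /dev/null -w '%{http_code}' "http://$TARGET:5000/v2/")
# classify_registry "$status"
```

A 200 here is the finding itself: no credentials were presented, and the API answered anyway.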

What the attacker gets:

  • Full image inventory. Every container you've ever pushed, sometimes years of history
  • Image contents. Source code (if baked into images), configuration files, secrets
  • Attack planning intelligence. Your technology stack, internal service names, framework versions
  • Push capability (in some configurations). Ability to tamper with images

Shodan searches for port 5000 responding with Docker Registry API signatures find tens of thousands of exposed registries. A substantial fraction contain production images.

The fix:

Docker Registry's own authentication support is limited. Recommended pattern:

Internet → Reverse proxy (nginx, Caddy, Traefik) with auth → Docker Registry on localhost

Reverse proxy handles:

  • TLS termination
  • HTTP Basic Auth, OAuth, or similar authentication
  • Per-repository access control (more advanced)

Or use Docker Registry with htpasswd authentication configured in config.yml:

auth:
  htpasswd:
    realm: basic-realm
    path: /auth/htpasswd

Plus:

  • Bind Docker Registry to 127.0.0.1 only
  • Firewall port 5000 off the public internet
  • Require TLS on the reverse proxy
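A minimal nginx front end for the reverse-proxy pattern might look like the following. This is a sketch, not a drop-in config: the server name, certificate paths, and htpasswd location are assumptions. The client_max_body_size override matters because image layer uploads easily exceed nginx's 1 MB default.

```nginx
server {
    listen 443 ssl;
    server_name registry.example.com;          # assumption: your hostname

    ssl_certificate     /etc/nginx/tls/registry.crt;
    ssl_certificate_key /etc/nginx/tls/registry.key;

    # Layer uploads are large; the 1m default would break pushes
    client_max_body_size 0;

    location /v2/ {
        auth_basic           "registry";
        auth_basic_user_file /etc/nginx/htpasswd;

        proxy_pass http://127.0.0.1:5000;      # registry bound to localhost only
        proxy_set_header Host              $http_host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this in place, the registry process itself never sees the public internet; nginx owns TLS and authentication.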

Attack pattern 2: Image layer secret extraction

Docker images are layered filesystems. Each RUN, COPY, or ADD instruction in a Dockerfile creates a new layer. Layers are stored separately and are typically cached for performance.

Critically: a secret added in one layer and removed in a later layer is still present in the earlier layer. RUN rm /tmp/secret.key doesn't remove the file from history. It adds a layer that doesn't contain the file.

Common developer mistake:

COPY ./secrets/api-key.txt /tmp/
RUN do-something-with-key --file=/tmp/api-key.txt \
 && rm /tmp/api-key.txt # "removes" the key

# api-key.txt is still in the image history, downloadable

Attacker extraction:

# Pull the image
docker pull target.example.com:5000/my-company/webapp:latest

# Inspect layers
docker history target.example.com:5000/my-company/webapp:latest

# Save to tarball
docker save target.example.com:5000/my-company/webapp:latest -o webapp.tar

# Extract and examine each layer
mkdir webapp && tar -xf webapp.tar -C webapp
for layer in webapp/*/layer.tar; do
  mkdir -p "${layer%.tar}"
  tar -xf "$layer" -C "${layer%.tar}"
done

# Search for secrets across all layers
grep -r "api_key\|password\|secret\|token" webapp/

Tools like Dive (https://github.com/wagoodman/dive) provide interactive exploration of image layers.

What we routinely find in layer histories:

  • AWS / GCP / Azure credentials
  • Database connection strings
  • API keys (Stripe, Twilio, SendGrid, GitHub PATs)
  • SSH private keys
  • SSL/TLS private keys
  • Internal service tokens
  • Git credentials for private repos
  • Legacy commented-out configs

The fix:

  • Use multi-stage builds. Final image is built FROM a clean base, copying only artifacts:
# build stage; these layers stay in local build cache and are not pushed
  FROM base-image AS build
  COPY ./secrets/api-key.txt /tmp/
  RUN build-with-key --file=/tmp/api-key.txt
  
  # final image: only the built artifact is copied forward, not the secrets
  FROM alpine:latest
  COPY --from=build /app /app

  • Use BuildKit secret mounts. Secrets available during build without landing in any layer:
# syntax=docker/dockerfile:1.4
  RUN --mount=type=secret,id=api_key do-something-with-key

  • Never COPY secrets into images. Inject at runtime via environment variables, secret managers, or orchestrator secrets.
  • Scan images for secrets before publishing. trivy, grype, snyk container, docker scout
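On the build-command side, the BuildKit secret mount above is supplied with docker build's --secret flag. A sketch of assembling that flag (image name and key path are placeholders; the id must match the Dockerfile's --mount=type=secret,id=... line):

```shell
# Build the --secret flag; the id here must match the id in the
# Dockerfile's RUN --mount=type=secret,id=... instruction.
secret_flag() {
  echo "--secret id=$1,src=$2"
}

# Intended invocation (requires BuildKit):
# DOCKER_BUILDKIT=1 docker build $(secret_flag api_key ./api-key.txt) -t webapp .
```

The secret is exposed to that one RUN step under /run/secrets/ and never lands in any layer.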

Attack pattern 3: Image tampering via push access

If the registry accepts anonymous pushes (default configuration), or if authentication can be bypassed, an attacker can:

  • Push a backdoored version of a legitimate image with the same tag
  • Your next deployment pulls the tampered image
  • The tampered image contains whatever the attacker wanted. Crypto miner, data exfiltration, persistent backdoor

Attack scenario:

  1. Attacker finds exposed registry at target.example.com:5000
  2. Enumerates images: finds target.example.com:5000/company/webapp:production
  3. Pulls the legitimate image
  4. Modifies it to add a backdoor
  5. Pushes tampered version with same tag
  6. Organization's next deployment pulls and runs the tampered image

The fix:

  • Authentication required for push (and pull, but push is more critical)
  • Image signing via Docker Content Trust, Cosign, or Notary
  • Image digest pinning in deployment manifests. Deploy by SHA256 digest, not by tag:
image: target.example.com:5000/company/webapp@sha256:abc123...

Digest pins prevent tag replacement attacks.

  • Scan images on push. CI/CD can reject compromised images before deployment
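Digest pinning can be scripted when generating deployment manifests. A small helper, as a sketch: it assumes a registry-qualified reference (one containing a "/"), and splits on the last slash so that a registry port like host:5000 is not mistaken for a tag separator.

```shell
# Rewrite repo[:tag] into repo@sha256:... so deploys are immune to
# tag replacement. Splits on the final '/' so a registry port
# (host:5000) is not confused with a tag.
pin_image() {
  ref=$1; digest=$2
  name=${ref%/*}        # everything before the final path component
  last=${ref##*/}       # final component, possibly name:tag
  echo "${name}/${last%%:*}@${digest}"
}
```

For example, pin_image target.example.com:5000/company/webapp:production sha256:abc123 yields target.example.com:5000/company/webapp@sha256:abc123.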

Attack pattern 4: Registry API path traversal and auth bypass

Older Docker Registry versions have had path traversal vulnerabilities and authentication bypass bugs. Running outdated versions in production is common.

Specific CVEs to know:

• CVE-2019-13509 (Docker Engine). Secret leakage into debug logs in some configurations
  • CVE-2020-15157 (Moby). Credential theft in specific scenarios
  • Various Harbor (enterprise registry) CVEs since 2019

Check your version:

# run the registry binary directly to print its version
docker run --rm --entrypoint registry registry:2 --version
# compare against the current stable release

The fix:

  • Update Registry to current stable
  • Audit for CVEs affecting your version
  • Subscribe to Docker Distribution security announcements

Attack pattern 5: Self-hosted registry alternatives with security gaps

Beyond the official Docker Registry, common self-hosted options:

Harbor

CNCF project, commonly deployed in Kubernetes environments. Adds vulnerability scanning, RBAC, replication. Has had its own security issues over the years. CVE-2019-16097 allowed admin creation, various others.

Hardening:

  • Keep updated (quarterly minimum)
  • Use RBAC, not shared admin accounts
  • TLS everywhere
  • Database encryption at rest
  • Regular audit of users and projects

Nexus Repository Manager

Sonatype's tool, general-purpose artifact repository. Also serves Docker images. Has had authentication issues historically.

JFrog Artifactory

Enterprise-focused, supports Docker plus many other artifact types. Commercial product with good track record but requires proper configuration.

GitLab Container Registry

Integrated with GitLab's broader platform. Inherits GitLab's security model.

Distribution running standalone

The official reference implementation we've been discussing. Bare minimum functionality. Auth, storage, API. Everything else is DIY.

Attack pattern 6: Registry credentials in CI/CD

Even with a private registry, the credentials to access it become a target. Common leakage:

  • CI/CD variables committed to repository
  • .dockerconfigjson stored in public Git history
  • Credentials in build scripts
  • Credentials in Dockerfiles
  • Credentials shared across projects and teams

Real finding: a company's build scripts committed docker login commands with embedded credentials. The private GitHub repo had been accessed by a former employee who retained access. They could push to the company's private registry for months before the compromise was detected.

The fix:

  • CI/CD secret management. Use platform-native secret stores (GitHub Secrets, GitLab CI/CD Variables, Jenkins Credentials)
  • Short-lived tokens. OIDC-based authentication preferred over long-lived tokens
  • Per-project credentials. Don't share registry credentials across unrelated projects
  • Credential rotation on staff changes
  • Monitoring for anomalous pushes

Attack pattern 7: Cached credentials on build machines

Docker clients cache registry credentials in ~/.docker/config.json. Anyone with access to a machine that's done docker login has those credentials.

Developer laptops, CI/CD runners, and Kubernetes nodes (in older configurations) all have cached credentials.

Attack:

  • Compromise a developer laptop
  • Read ~/.docker/config.json
  • Credentials to private registry now usable from attacker infrastructure
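The "protection" on those cached credentials is base64, not encryption. A self-contained demonstration (the auth value below encodes a made-up alice:hunter2 pair; an attacker would lift the real value from ~/.docker/config.json's .auths entries):

```shell
# A config.json .auths entry stores base64("user:password") -- an
# encoding, not encryption. This sample value is fabricated.
auth="YWxpY2U6aHVudGVyMg=="
echo "$auth" | base64 -d
```

Anything that can read the file, from malware to a backup snapshot, has working registry credentials.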

The fix:

  • Use credential helpers (Docker credential store) that use OS-level secret stores
  • OIDC / short-lived credentials reduce value of cached credentials
  • Don't log in on shared machines
  • Monitor for docker login / unusual pulls from unexpected IPs

Attack pattern 8: SBOM and provenance gaps

Modern supply chain security emphasizes the Software Bill of Materials (SBOM), a manifest of everything in your image, and provenance: cryptographic attestation of how the image was built.

The current state:

  • Docker Hub and most registries don't enforce SBOM
  • Most images don't have signed provenance
  • Even images with SBOM may have stale SBOMs that don't reflect current contents
  • Verification of SBOMs and provenance on pull is uncommon

Real finding: an organization used a base image claimed to be "Ubuntu 22.04 official." The image had been fork-updated by a third party to inject backdoors. Without provenance verification, the organization couldn't tell the difference until the compromise was detected via runtime monitoring.

The fix:

  • Generate SBOMs on build. Syft, CycloneDX CLI, docker sbom
  • Sign SBOMs with Cosign
  • Attest provenance using SLSA framework
  • Verify SBOMs and provenance on pull in CI/CD
  • Use images with verified provenance (Docker Official Images have this. Many third-party images don't)

Attack pattern 9: Public / accidental public image exposure

Organizations frequently push production images to Docker Hub (the public registry). Sometimes it's a mistake: they meant to push to a private registry but pointed at docker.io instead. Sometimes it's misconfiguration: repositories intended to be private are public.

Detection:

  • Search Docker Hub for your organization's name
  • Verify every repository's visibility (public vs private)
  • Review push history for accidental public pushes

The fix:

  • Default to private for all new Docker Hub repositories
  • Audit visibility of existing repositories
  • Organizational Docker Hub with admin control over visibility settings
  • CI/CD target verification. Build scripts should validate the target registry
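Target verification in a build script can be a few lines. A sketch, where the allowed registry hostname is an assumption you would replace with your own:

```shell
# Refuse any image reference that does not live under the approved
# private registry; guards against accidental docker.io pushes.
ALLOWED_REGISTRY="registry.internal.example.com"

check_push_target() {
  case "$1" in
    "$ALLOWED_REGISTRY"/*) return 0 ;;
    *) echo "refusing push: $1 is not under $ALLOWED_REGISTRY" >&2; return 1 ;;
  esac
}

# check_push_target "$IMAGE" || exit 1   # gate before docker push
```

Run it immediately before docker push so a mistyped registry fails the build instead of publishing the image.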

Attack pattern 10: Image history metadata leakage

docker history and similar commands expose the Dockerfile commands that built the image. This leaks:
  • Internal build infrastructure (ARG BUILDKITE_BUILD_NUMBER=1234)
  • Team email addresses (from MAINTAINER or LABEL)
  • Build timestamps
  • Source directory structures (COPY src/specific/path/...)
  • Comments in Dockerfiles

The fix:

  • Minimize ARG and LABEL exposure
  • Use multi-stage builds that don't leak intermediate state
  • Consider stripping metadata with tools like docker-slim

The hardening checklist

For self-hosted Docker Registry:

Network and access

  • [ ] Registry not directly exposed to public internet
  • [ ] Reverse proxy with authentication in front
  • [ ] TLS with valid certificates
  • [ ] Firewall rules restricting registry network access
  • [ ] IP allowlisting where appropriate

Authentication

  • [ ] Authentication required for all operations
  • [ ] Push/pull permissions separated
  • [ ] Per-user or per-service-account credentials (not shared)
  • [ ] MFA on administrative accounts (via reverse proxy or managed registry)
  • [ ] Credential rotation schedule

Image security

  • [ ] Image signing configured (Cosign, Notary, Docker Content Trust)
  • [ ] Image digest pinning in deployments
  • [ ] Vulnerability scanning on every push (trivy, grype, Clair)
  • [ ] SBOM generation on build
  • [ ] Provenance attestation (SLSA)
  • [ ] Secret scanning on images before publication

Dockerfile hygiene

  • [ ] No secrets in Dockerfiles or build context
  • [ ] Multi-stage builds for all production images
  • [ ] BuildKit secret mounts where secrets are needed during build
  • [ ] Minimize image layers and metadata leakage

Operations

  • [ ] Registry updates regular (quarterly minimum)
  • [ ] CVE monitoring for registry software
  • [ ] Backup and disaster recovery tested
  • [ ] Audit logging of pushes and pulls
  • [ ] Anomaly detection (unusual push sources, unusual pull patterns)

CI/CD integration

  • [ ] Credentials stored in platform secret managers, not code
  • [ ] OIDC / short-lived tokens preferred over long-lived
  • [ ] Scan results block deployments for critical vulnerabilities
  • [ ] Signed images required for production deployments

When to abandon self-hosting

Self-hosted Docker Registry makes sense for specific reasons:

  • Regulatory requirements (air-gapped environments, data residency)
  • Extreme image volume where commercial pricing is prohibitive
  • Deep integration with internal infrastructure

For most organizations, managed alternatives offer better security postures with less operational overhead:

Managed registry options

  • AWS ECR. Integrated with AWS, automatic vulnerability scanning, easy IAM integration
  • Google Artifact Registry. Similar for GCP
  • Azure Container Registry. Similar for Azure
  • GitHub Container Registry. Integrated with GitHub Actions, free for public, reasonable pricing for private
  • GitLab Container Registry. Integrated with GitLab
  • Docker Hub (paid plans). Original commercial offering
  • Harbor as a service (multiple providers). Managed Harbor

Managed registries provide:

  • Automatic updates and patching
  • Better default security (authentication, TLS, scanning)
  • Integration with cloud IAM
  • SLA commitments
  • Compliance certifications

For most organizations, managed registries should be the default. Self-hosting is the exception that requires justification, not the starting point.

Migration path

If you're migrating from self-hosted to managed:

Step 1: Set up new registry

Configure ECR, ACR, GCR, or chosen managed service. Set up authentication, IAM, scanning policies.

Step 2: Mirror images

# Pull from old registry, push to new
for repo in $(curl -s old-registry.com/v2/_catalog | jq -r '.repositories[]'); do
  for tag in $(curl -s old-registry.com/v2/$repo/tags/list | jq -r '.tags[]'); do
    docker pull old-registry.com/$repo:$tag
    docker tag old-registry.com/$repo:$tag new-registry.com/$repo:$tag
    docker push new-registry.com/$repo:$tag
  done
done

Or use dedicated migration tools: skopeo, crane, registry-specific tools.
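skopeo copies images registry-to-registry without a local daemon, avoiding the pull/tag/push round trip above. A sketch that emits one copy command per reference (hostnames are placeholders; --all preserves multi-arch manifest lists rather than copying a single platform's image):

```shell
# Emit one skopeo copy command per repo:tag reference.
# --all carries over multi-arch manifest lists intact.
mirror_cmds() {
  old=$1; new=$2; shift 2
  for ref in "$@"; do
    echo "skopeo copy --all docker://$old/$ref docker://$new/$ref"
  done
}

# mirror_cmds old-registry.com new-registry.com company/webapp:production company/api:latest
```

Pipe the output through sh once you've reviewed it, or run the commands directly in CI.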

Step 3: Update CI/CD

Point build pipelines at new registry. Update deployment manifests. Test thoroughly.

Step 4: Dual-push period

Push to both registries for a transition period (weeks). This allows rollback if issues arise.

Step 5: Deprecate old

Once new is fully operational, deprecate old registry. Document the change. Wind down old infrastructure.

Step 6: Decommission

After appropriate retention period, decommission old registry entirely.

For specific scenarios

Early-stage startups

  • Use managed registry from day one (GitHub Container Registry is free for public, cheap for private)
  • Scan images in CI (trivy is free, runs in GitHub Actions)
  • Sign images with Cosign (free, integrated with GitHub)
  • Focus engineering effort on product, not registry operations

Mid-size companies

  • Managed registry with IAM integration (ECR, ACR, GCR tied to cloud IAM)
  • Vulnerability scanning policies enforcing critical blockers
  • SBOM generation and retention
  • Image signing and provenance

Large enterprises

  • Potentially self-hosted Harbor or Artifactory for specific requirements
  • Formal SBOM and provenance program
  • Integration with SIEM for supply chain monitoring
  • Dedicated team for container / registry security
  • Annual audit of image inventory and access patterns

Regulated environments (healthcare, finance, defense)

  • Often required to self-host for data residency / air-gapping
  • Harbor or Artifactory with strict access controls
  • Formal change management for all pushes
  • Integration with compliance frameworks
  • Regular penetration testing

For Valtik clients

Valtik's container security audits include Docker Registry review:

  • Registry configuration and exposure review
  • Authentication and authorization audit
  • Image scanning for secrets and vulnerabilities
  • Dockerfile hygiene review
  • CI/CD integration security
  • SBOM and provenance assessment
  • Migration planning from self-hosted to managed

For organizations with container-heavy infrastructure that haven't had container security audits, reach out via https://valtikstudios.com.

The honest summary

Docker Registry is infrastructure that often gets "working" status and then stops receiving attention. The default deployment is insecure. Common configurations leak secrets via image layers. Self-hosted deployments frequently lag on updates.

For most organizations, managed container registries (ECR, GCR, ACR, GitHub Container Registry) are a better security posture than self-hosted with less operational overhead. If you're self-hosting, discipline is required: authentication, updates, scanning, signing, monitoring.

Audit your registry. Your images are running in your production. Know what's in them, who can change them, and who can pull them.

Sources

  1. Docker Registry Documentation
  2. Docker Registry on GitHub
  3. Docker Content Trust
  4. Cosign (Sigstore)
  5. Harbor Project
  6. SLSA Supply Chain Framework
  7. AWS ECR Documentation
  8. GitHub Container Registry
  9. Trivy Container Scanning
  10. NIST SP 800-190, Application Container Security Guide
Tags: docker · container security · docker registry · supply chain · platform security · penetration testing · application security · research

Want us to check your Container Security setup?

Our scanner detects this exact misconfiguration, plus dozens more across 38 platforms. A free website check is available, no commitment required.
