Kubernetes Admission Controllers: The Policy Layer Most Clusters Forget
Most Kubernetes clusters we audit have RBAC sort-of configured and NetworkPolicies mostly working. And wide-open admission policy. A compromised service account that can create pods can create privileged pods, mount the host filesystem, and escape containers. Here is the admission controller configuration that stops this.
Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.
Walk into a production Kubernetes cluster audit. RBAC roles exist. NetworkPolicies are in place for pod-to-pod segmentation. EDR running on nodes. Good.
Then check what happens when someone with create-pods RBAC submits a pod that requests hostPath: /, privileged: true, and hostNetwork: true, and mounts /var/run/docker.sock. If admission controllers aren't configured, the pod gets created, and the attacker immediately has root on the node.
Kubernetes admission controllers are the gatekeepers that enforce policy on objects AFTER authentication and authorization but BEFORE the API server writes to etcd. They're what stops legitimate-RBAC users from creating unsafe resources. Most clusters we audit have them misconfigured or missing.
This post covers the admission controller configuration that matters in 2026: the enforcement options (Pod Security Admission, Gatekeeper, Kyverno) and the specific policies every cluster should enforce.
What admission controllers do
You can tell how much experience someone has with this by whether they treat the control as binary. It isn't.
When a request arrives at the Kubernetes API server:
- Authentication. Is the requester who they claim to be?
- Authorization (RBAC). Is the requester allowed to do this?
- Admission. Does this request comply with cluster policies?
- Validation + schema check
- Persistence to etcd
Admission runs between authorization and persistence. Two types:
- Mutating admission controllers modify the request (e.g., inject sidecars, add labels, rewrite container images)
- Validating admission controllers reject requests that violate policy
Kubernetes has built-in admission controllers for some things (NamespaceLifecycle, LimitRanger, ResourceQuota). For security policy, you almost always need additional controllers via webhook admission.
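Webhook admission is wired up by registering the policy service with the API server. A minimal sketch of a ValidatingWebhookConfiguration, assuming a hypothetical policy-webhook service in a policy-system namespace (engines like Gatekeeper and Kyverno generate this object for you):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy
webhooks:
  - name: pods.policy.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail            # fail closed if the webhook is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: policy-webhook       # hypothetical service name
        namespace: policy-system
        path: /validate
      caBundle: "<base64-encoded CA certificate>"
```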
The three main options in 2026
Pod Security Admission (PSA). Built in, stable since 1.25
PSA replaced Pod Security Policies (PSP), which were deprecated in 1.21 and removed in 1.25. PSA enforces the Kubernetes Pod Security Standards:
- Privileged. No restrictions, full container breakout possible
- Baseline. Minimally restrictive, prevents known privilege escalations (e.g., no privileged pods, no hostPID, no hostNetwork)
- Restricted. Hardened best practices (runAsNonRoot, read-only root filesystem, seccomp profiles, minimal capabilities)
Deployed via namespace labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Simple, built-in, zero external dependencies. But limited in policy expressiveness. You get three tiers, not arbitrary rules.
Open Policy Agent (OPA) Gatekeeper
OPA Gatekeeper provides full policy-as-code via the Rego language. Highly expressive: you can enforce any policy you can describe declaratively.
Policies are Kubernetes custom resources. Constraints and constraint templates define rules. The community maintains a library at github.com/open-policy-agent/gatekeeper-library.
Strengths: extremely flexible, community policies, full audit trail.
Weaknesses: Rego is a learning curve, performance overhead on high-throughput clusters.
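To give a feel for the Rego model, here is a minimal sketch of a constraint template plus matching constraint that rejects privileged containers. The K8sDenyPrivileged name is illustrative; the gatekeeper-library above ships hardened versions of this and many other policies.

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8sdenyprivileged
spec:
  crd:
    spec:
      names:
        kind: K8sDenyPrivileged
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyprivileged
        violation[{"msg": msg}] {
          c := input.review.object.spec.containers[_]
          c.securityContext.privileged
          msg := sprintf("privileged container not allowed: %v", [c.name])
        }
---
# The constraint instantiates the template and scopes it to Pods
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyPrivileged
metadata:
  name: deny-privileged
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```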
Kyverno
Kyverno is a Kubernetes-native policy engine that uses YAML for policy definitions. No Rego to learn. Policies look like familiar K8s resources.
Strengths: lower learning curve, K8s-native feel, strong community support.
Weaknesses: less expressive than OPA for complex conditions, newer than OPA.
Both Gatekeeper and Kyverno are mature production choices. Kyverno has gained momentum in 2024-2026 for its simpler UX.
Policies every cluster should enforce
Whether you use PSA, Gatekeeper, or Kyverno, these are the policies that block common attacks.
Block privileged containers
```yaml
# Kyverno example
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: deny-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-privileged
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Privileged containers aren't allowed"
        pattern:
          spec:
            =(initContainers):
              - =(securityContext):
                  =(privileged): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
```
Rationale: privileged containers share the host's kernel capabilities and can trivially escape to the node.
Block hostPath mounts
```yaml
# Block pods from mounting host filesystem
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-path
spec:
  validationFailureAction: Enforce
  rules:
    - name: host-path
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "hostPath volumes aren't allowed"
        pattern:
          spec:
            =(volumes):
              - X(hostPath): "null"
```
Rationale: hostPath mounts let a pod access files on the host node. hostPath: / is root filesystem access.
Block hostNetwork, hostPID, hostIPC
Pods with hostNetwork can sniff all node network traffic. hostPID lets them see host processes. hostIPC shares IPC namespace with host.
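In Kyverno, blocking all three looks like the following sketch, modeled on the community disallow-host-namespaces policy:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces
spec:
  validationFailureAction: Enforce
  rules:
    - name: host-namespaces
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "hostNetwork, hostPID, and hostIPC aren't allowed"
        pattern:
          spec:
            # =(field) means: if the field is present, it must match
            =(hostNetwork): "false"
            =(hostPID): "false"
            =(hostIPC): "false"
```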
Require runAsNonRoot
Containers must specify a non-root user:
```yaml
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
```
Rationale: a process running as root inside the container retains significant capability on the host, even with user namespace remapping.
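The enforcement side in Kyverno, simplified from the upstream require-run-as-nonroot policy (anyPattern accepts the setting at either the pod or the container level; the full upstream policy also covers initContainers):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-nonroot
spec:
  validationFailureAction: Enforce
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Containers must set runAsNonRoot: true"
        anyPattern:
          # pod-level default applies to all containers
          - spec:
              securityContext:
                runAsNonRoot: true
          # or every container sets it explicitly
          - spec:
              containers:
                - securityContext:
                    runAsNonRoot: true
```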
Block :latest image tags
Require specific image tags or digests:
```yaml
# Reject any image whose tag is :latest
validate:
  message: "Image must have a specific version tag"
  pattern:
    spec:
      containers:
        - image: "!*:latest"
```
Rationale: :latest is unpredictable. A future pod restart pulls a potentially different image. For production, pin to a digest (image: nginx@sha256:abc...).
Require resource limits
```yaml
validate:
  message: "CPU and memory limits required"
  pattern:
    spec:
      containers:
        - resources:
            limits:
              memory: "?*"
              cpu: "?*"
```
Rationale: pods without limits can consume all node resources, leading to node-wide denial of service.
Block insecure capabilities
```yaml
validate:
  message: "Capabilities NET_ADMIN, SYS_ADMIN, SYS_PTRACE not allowed"
  pattern:
    spec:
      containers:
        - securityContext:
            capabilities:
              add:
                - "!NET_ADMIN"
                - "!SYS_ADMIN"
                - "!SYS_PTRACE"
```
Rationale: SYS_ADMIN alone allows container escape via many paths. NET_ADMIN enables packet sniffing and iptables modification.
Enforce seccomp profiles
```yaml
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
```
Rationale: seccomp filters syscalls available to a container. RuntimeDefault is the minimum sane filter. Localhost profiles offer tighter control.
Block service account token auto-mount for pods that don't need it
```yaml
validate:
  message: "Explicitly set automountServiceAccountToken"
  pattern:
    spec:
      automountServiceAccountToken: "?*"
```
Rationale: auto-mounted service account tokens let compromised pods call the Kubernetes API. Disable where not needed.
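On the workload side, opting out is a one-line spec field. A sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-api-access
spec:
  # No token mounted at /var/run/secrets/kubernetes.io/serviceaccount
  automountServiceAccountToken: false
  containers:
    - name: app
      image: registry.company.com/app:1.4.2
```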
Restrict image registries
```yaml
validate:
  message: "Images must come from approved registries"
  pattern:
    spec:
      containers:
        - image: "registry.company.com/* | gcr.io/approved-project/*"
```
Rationale: prevents pulling from arbitrary registries. Controls supply chain risk.
Require NetworkPolicy in every namespace
Can be enforced via a Gatekeeper constraint that audits namespaces without NetworkPolicy objects. Not a pod-level policy but a namespace-level one.
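The policy being required is typically a default-deny baseline per namespace; a minimal example for a namespace named production:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```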
Cluster-level hardening
Beyond pod policies:
Disable anonymous API access
Ensure --anonymous-auth=false on the API server. Default in most managed K8s, confirm on self-hosted.
Enable audit logging
```yaml
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
  - level: Request
    verbs: ["create", "update", "patch", "delete"]
```
Audit logs ship to SIEM. Essential for incident investigation.
Restrict etcd access
Etcd contains all cluster secrets unencrypted by default. Enable encryption at rest. Restrict etcd network access to only the API server.
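Encryption at rest is configured by pointing the API server's --encryption-provider-config flag at an EncryptionConfiguration file. A minimal sketch (the key is a placeholder; generate 32 random bytes and base64-encode them):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: "<base64-encoded 32-byte key>"
      - identity: {}     # fallback for reading not-yet-migrated data
```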
Node-level hardening
- Kernel parameters (net.ipv4.ip_forward, kernel.unprivileged_userns_clone)
- Restrict kubelet port (only the API server should reach the kubelet)
- Node-level seccomp and AppArmor profiles
- EDR agent on every node (even managed cluster nodes where supported)
Operational notes
Deploy admission controllers in audit mode first
Deploy in "warn" or "audit" mode for 1-2 weeks. Collect violations. Fix legitimate applications that would be blocked. Then flip to "enforce".
PSA supports this via labels:
```yaml
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/warn: restricted
pod-security.kubernetes.io/audit: restricted
```
Warn logs violations. Audit writes to audit log. Enforce blocks.
Version-pin your admission policies
Admission policies themselves are objects managed via GitOps. Treat them like code. Changes go through code review.
Monitor admission controller availability
If your admission controller webhook is down and failurePolicy: Fail is set, the entire cluster rejects pod creation. Good. Fails closed. But you need to know when it's down.
If failurePolicy: Ignore is set, the webhook failing becomes a security bypass. Audit this. Prefer Fail with monitoring.
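The setting lives on each webhook registration; the relevant fragment (webhook name illustrative):

```yaml
webhooks:
  - name: pods.policy.example.com
    failurePolicy: Fail      # requests are rejected while the webhook is down
    timeoutSeconds: 5        # bound how long the API server waits for a verdict
```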
Namespace exceptions
System namespaces (kube-system, kube-public, kube-node-lease) often need privileged access. Exempt them from restricted policies.
Infrastructure namespaces (monitoring, logging, ingress) often need some privilege. Apply baseline policies, not restricted.
Application namespaces get full restricted.
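With PSA, exemptions can also be declared cluster-wide in the API server's admission configuration instead of per-namespace labels; a sketch:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "restricted"
        enforce-version: "latest"
      exemptions:
        namespaces: ["kube-system"]   # exempted entirely from enforcement
```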
What we test in an admission controller engagement
Our Kubernetes engagements include:
- Admission controller inventory. What's deployed, what's enforced, what's audit-mode
- Policy coverage analysis vs CIS Kubernetes Benchmark
- Attack simulation. We attempt container breakout techniques from a standard-RBAC service account
- Namespace-by-namespace policy application review
- Audit log configuration and SIEM integration
- Cluster-level hardening review (API server, kubelet, etcd, node)
- CNI security (NetworkPolicy coverage, Cilium policies if applicable)
- Runtime security (Falco, Tetragon deployment)
- Supply chain controls (image signing with Cosign, admission verification)
Typical K8s engagement: 2-4 weeks depending on cluster count and complexity.
Compliance mapping
- PCI DSS 4.0. Requirement 2 (secure configurations), Requirement 6 (secure systems)
- HIPAA Security Rule. 164.312(a) access controls, (c) integrity, (e) transmission security
- SOC 2. CC6 (Access), CC7 (Operations)
- CIS Kubernetes Benchmark. Direct mapping to most CIS controls
- CNCF K8s security baseline
Our reports map findings to each applicable framework.
Resources
- Kubernetes Pod Security Standards: https://kubernetes.io/docs/concepts/security/pod-security-standards/
- OPA Gatekeeper: https://open-policy-agent.github.io/gatekeeper/
- Kyverno: https://kyverno.io/
- CIS Kubernetes Benchmark (v1.10 or later for K8s 1.27+)
- CNCF Security Whitepaper
- NIST SP 800-190 (Application Container Security Guide)
- KEP-2579 (Pod Security Admission)
Hire Valtik Studios
Kubernetes audits are a specialty area. If your cluster is running production workloads and the admission controllers haven't been explicitly configured and tested, there are almost certainly policy gaps. Our engagements produce specific policy configurations, not generic recommendations. Typical finding rate: 5-8 material gaps per cluster audited.
Reach us at valtikstudios.com.
Want us to check your Kubernetes setup?
Our scanner detects this exact misconfiguration, plus dozens more across 38 platforms. Free website check available, no commitment required.
