A comprehensive study guide and practical resource for the Certified Kubernetes Security Specialist (CKS) exam. This repository covers essential K8s security concepts, best practices, and hands-on labs to help you master Kubernetes security fundamentals and pass the CKS certification.
- Security combines many different complex processes
- Environments change, i.e., security cannot stay in a fixed state
- Attackers have the advantage
- They decide the time of attack
- They pick what to attack, e.g., the weakest link
- Defense in Depth
- Least Privilege
- Limiting the Attack Surface
Note
Redundancy is good... in security. In application development, follow DRY ("Don't Repeat Yourself"), but in security favor layered defense and redundancy.
- Host Operating System Security 🐧
- Kubernetes Cluster Security ☸️
- Already handled in managed K8s services, e.g., by AWS or Google
- Application Security 🐳
- Kubernetes nodes should only do one thing: run Kubernetes
- Reduce Attack Surface
- Remove unnecessary applications
- Keep up to date
- Runtime Security Tools
- Find and identify malicious processes
- Restrict IAM / SSH access
- Ensure Kubernetes components run securely and are up to date:
- API Server
- kubelet
- ETCD
- Restrict (external) access
- Use Authentication → Authorization
- AdmissionControllers
- NodeRestriction
- Custom Policies (OPA)
- Enable Audit Logging
- Security Benchmarking
- Encrypt Traffic to ETCD
- Use Secrets / no hardcoded credentials
- RBAC
- Container Sandboxing
- Container Hardening
- Attack Surface
- Run as user
- Readonly filesystem
- Vulnerability Scanning
- mTLS / ServiceMeshes
Note
You must be familiar with Kubernetes architecture and basic components. For a basic introduction, check out Foundation.md
podman run --name c1 -d ubuntu sh -c 'sleep 1d'
podman exec c1 ps aux
podman run --name c2 -d ubuntu sh -c 'sleep 999d'
podman exec c2 ps aux
ps aux | grep sleep
Then run the two containers in the same PID namespace:
podman rm c2 --force
# Run c2 in the same PID namespace as c1
podman run --name c2 --pid=container:c1 -d ubuntu sh -c 'sleep 999d'
podman exec c2 ps aux
podman exec c1 ps aux
- Firewall Rules in Kubernetes
- Implemented by the CNI network plugin (e.g., Calico, Weave)
- Namespace level
- Restrict the ingress and/or egress for a group of pods based on certain rules and conditions
- By default every pod can access every pod
- Pods are NOT isolated
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend
  policyTypes:
  - Egress
The above example is a valid NetworkPolicy which:
- denies all outgoing traffic
- from pods with label id=frontend
- in namespace default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend # will be applied to these pods
  policyTypes:
  - Egress # will be about outgoing traffic
  egress:
  - to:
    # allow outgoing traffic to the namespace with label id=ns1 AND port 80
    - namespaceSelector:
        matchLabels:
          id: ns1
    ports:
    - protocol: TCP
      port: 80
  - to:
    # allow outgoing traffic to pods with label id=backend in the same namespace (default)
    - podSelector:
        matchLabels:
          id: backend
In the above example, the two egress rules are combined with "OR".
- Possible to have multiple NPs selecting the same pods
- If a pod has more than one NP
- then the union of all NPs is applied
- order doesn't affect policy result
You can check out example.yaml, which is the merged policy of example2a.yaml and example2b.yaml.
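To illustrate the union rule, here is a hedged sketch of two small policies in the spirit of example2a.yaml and example2b.yaml (names and contents are assumptions; the actual repo files may differ):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example2a   # assumed name: allow egress on TCP 80
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example2b   # assumed name: allow egress on UDP 53
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend
  policyTypes:
  - Egress
  egress:
  - ports:
    - protocol: UDP
      port: 53
With both applied, pods labeled id=frontend may egress on TCP 80 and UDP 53; applying the policies in either order gives the same result.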
We'll create a very simple scenario with one frontend pod and one backend pod, and we'll check the connectivity between the pods before and after creating our NetworkPolicy.
kubectl run frontend --image=nginx:alpine
kubectl run backend --image=nginx:alpine
kubectl expose pod frontend --port 80
kubectl expose pod backend --port 80
kubectl get pod,svc
# Now check connectivity from frontend to backend, & vice versa
kubectl exec frontend -- curl backend
kubectl exec backend -- curl frontend
# Now we'll create our network policy
vim default-deny.yaml
kubectl -f default-deny.yaml create
kubectl exec frontend -- curl backend
kubectl exec backend -- curl frontend
Use the example default-deny.yaml for practice.
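The content of default-deny.yaml is not shown inline; a minimal sketch of such a deny-all policy (assuming it targets every pod in the default namespace) could look like:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny   # assumed name
  namespace: default
spec:
  podSelector: {}      # empty selector selects all pods in the namespace
  policyTypes:         # listing both types with no allow rules denies all traffic
  - Ingress
  - Egress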
-- based on podSelectors
We will specifically allow the frontend pod to connect to the backend pod, i.e., we'll create one NetworkPolicy to allow outgoing traffic from frontend and one to allow incoming traffic from frontend to backend, as sketched below.
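A hedged sketch of what frontend.yaml and backend.yaml could contain (policy names are assumptions; kubectl run labels pods with run=<name>):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend       # assumed name: allow egress from frontend to backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend        # assumed name: allow ingress to backend from frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend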
vim frontend.yaml
kubectl -f frontend.yaml create
kubectl exec frontend -- curl backend
vim backend.yaml
kubectl -f backend.yaml create
kubectl exec frontend -- curl backend
# It will still not work
Important
Our default-deny policy also denies DNS traffic (port 53), but we need DNS resolution for frontend to reach backend by name.
kubectl get pod --show-labels -owide
kubectl exec frontend -- curl <backend-IP>
Note
If you would like to allow DNS resolution, extend your default-deny policy to allow egress to port 53.
You can check out allow-dns-resolution.yaml
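A sketch of that extension (assuming allow-dns-resolution.yaml modifies the default-deny policy itself):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - ports:             # a ports-only rule allows these ports to any destination
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53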
-- based on namespaceSelectors
kubectl create ns cassandra
# Add "ns: cassandra" labels
kubectl edit ns cassandra
# OR
kubectl label namespace cassandra ns=cassandra
kubectl -n cassandra run cassandra --image=nginx:alpine
kubectl -n cassandra get pod -owide
kubectl exec backend -- curl <cassandra-IP>
Now we'll allow backend pods egress traffic to the namespace cassandra, where our database pod is running, as sketched below.
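The edited backend.yaml might look like this (a sketch; it keys on the ns=cassandra namespace label we just set, and keeps the earlier ingress rule):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: cassandra   # the namespace label added above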
vim backend.yaml # edit to add Egress policy
kubectl -f backend.yaml apply
vim cassandra-deny.yaml
kubectl -f cassandra-deny.yaml create
vim cassandra.yaml
kubectl -f cassandra.yaml create
kubectl exec backend -- curl <cassandra-IP>
# FAILS! Try to debug yourself :)
Note
In the previous step, we didn't modify the default namespace to add the label "ns: default"
kubectl edit ns default
# Now try
kubectl exec backend -- curl <cassandra-IP>
# You can also allow DNS traffic in cassandra namespace
vim cassandra-deny.yaml
kubectl -f cassandra-deny.yaml apply # Add egress to port 53 (TCP & UDP)
# Expose cassandra as a service
kubectl -n cassandra expose pod cassandra --port 80
kubectl exec backend -- curl cassandra.cassandra
- based on additional pod label
- and additional port
You can check extended cassandra.yaml
- only expose services externally if needed
- cluster internal services/dashboards can also be accessed using kubectl port-forward
- Kubernetes Dashboard had too many privileges on the cluster
- without RBAC or too broad roles
- Kubernetes Dashboard was exposed to the internet
- which it isn't by default
- Creates a proxy server between localhost and the Kubernetes API Server
- uses connection as configured in the kubeconfig
- allows accessing the API locally over plain HTTP and without authentication, for example:
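A quick sketch (the port number is an arbitrary choice):
kubectl proxy --port=8001 &
# the API is now reachable locally over plain HTTP, no extra authentication needed
curl http://localhost:8001/api/v1/namespaces/default/pods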
- forwards connections from a localhost-port to a pod-port
- more generic than kubectl proxy
- can be used for all TCP traffic, not just HTTP, for example:
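For example, the dashboard service that appears later in this section could be reached locally like this (a sketch, assuming that service exists in your cluster):
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# in a second terminal: the dashboard is now reachable on localhost
curl -k https://localhost:8443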
Note
If you have the dashboard and you want to expose it externally without using kubectl, take some precautions. You could do it, for example, with an Ingress (Nginx Ingress) and a URL on your custom domain. Then you have to implement some authentication in front of it.
Refer to the Official Kubernetes Dashboard GitHub Repo
Important
You have to install Helm first to install the latest kubernetes-dashboard (v7+). Refer to the Official Helm Docs for installation.
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
kubectl get ns
kubectl -n kubernetes-dashboard get pod,svc
Important
Don't do this in production!
Check out Dashboard Arguments docs in official kubernetes/dashboard repo
kubectl -n kubernetes-dashboard get pod,deploy,svc
kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard-api
# add --insecure-port=8000 to `spec.template.spec.containers.args`
# also add --disable-csrf-protection
kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard-kong-proxy
# Change type to "NodePort"
kubectl -n kubernetes-dashboard get svc
Try to access the Kubernetes Dashboard on: https://<worker-node_External-IP>:<kong-proxy-NodePort>
Generate the token with:
kubectl -n kubernetes-dashboard create token kubernetes-dashboard-kong
Important
Most probably you will only be able to access it over insecure HTTPS, no longer over plain HTTP. It is still unsafe.
kubectl -n kubernetes-dashboard get sa
kubectl get clusterroles | grep view
# Access only within the kubernetes-dashboard namespace
k -n kubernetes-dashboard create rolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard-kong --clusterrole view -oyaml --dry-run=client
# Cluster-wide access
k -n kubernetes-dashboard create clusterrolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard-kong --clusterrole view -oyaml --dry-run=client
Explore more about it, and check out the updated Kubernetes Dashboard Docs
There are three main Service types in Kubernetes:
- ClusterIP
- NodePort
- LoadBalancer
Ingress is a collection of rules that define how external users can access services within a cluster, acting as a single entry point for traffic. An Ingress Controller is a component that watches for these Ingress resources and implements them, typically by managing a reverse proxy and load balancer within the cluster.
Important
DELETE ALL your existing NetworkPolicies from previous sections.
Refer to the Official Installation Guide for the latest version of the ingress controller, or run the command below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.0/deploy/static/provider/cloud/deploy.yaml
k get pod,svc,serviceaccounts -n ingress-nginx
Here, service/ingress-nginx-controller will probably be of type LoadBalancer, but it still has NodePorts like this: 80:30277/TCP,443:30421/TCP
Note
You can change the service type of ingress-nginx-controller to NodePort specifically by doing:
kubectl -n ingress-nginx edit svc ingress-nginx-controller
# From your local terminal
# Change the NodePort accordingly
curl http://<worker-node_ExternalIP>:30277
It should return a page with the title "404 Not Found".
vim ingress.yaml
kubectl -f ingress.yaml create
kubectl get ing
Check out the example minimal-ingress.yaml file.
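If you don't have the file at hand, a minimal sketch matching the services created next could look like this (the rewrite-target annotation and ingressClassName are assumptions based on a standard ingress-nginx setup):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # assumed: strip the path prefix
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80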
kubectl run pod1 --image=nginx:alpine
kubectl run pod2 --image=nginx:alpine
kubectl expose pod pod1 --port 80 --name service1
kubectl expose pod pod2 --port 80 --name service2
kubectl get pod,svc
Try to hit the endpoints:
curl http://34.28.6.202:30277/service1
curl http://34.28.6.202:30277/service2
Try to access via the HTTPS NodePort of ingress-nginx-controller:
curl https://34.28.6.202:30421/service1
# Failed due to curl: (60) SSL certificate problem: self-signed certificate
# But it works with:
curl https://34.28.6.202:30421/service1 -k
curl https://34.28.6.202:30421/service1 -kv
Because of the Kubernetes Ingress Controller Fake Certificate.
Note
We are following the Official Kubernetes Ingress Docs
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
# Common Name: secure-ingress.com
ls # Check if key.pem & cert.pem are added
kubectl create secret tls -h
kubectl create secret tls <secret-name> --cert=path/to/cert.pem --key=path/to/key.pem
kubectl get ing,secret
Check out the secure-ingress.yaml file for reference.
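A hedged sketch of what secure-ingress.yaml could contain (use the name you chose for the TLS secret; everything else mirrors the minimal ingress):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - secure-ingress.com
    secretName: secure-ingress   # assumed: the TLS secret created above
  rules:
  - host: secure-ingress.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80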
vim ingress.yaml
kubectl apply -f ingress.yaml
# OR
k -n <NAMESPACE> edit ing <Ingress-Name>
# Will not work with:
curl https://34.28.6.202:30421/service1 -k
# Actually works using 'secure-ingress.com', as it is set as the host
curl https://secure-ingress.com:30421/service1 --resolve secure-ingress.com:30421:34.28.6.202 -k
curl https://secure-ingress.com:30421/service1 --resolve secure-ingress.com:30421:34.28.6.202 -kv
When you run virtual machines on a cloud platform (like AWS, Azure, or Google Cloud), each virtual machine (or node) has access to something called a metadata service.
This metadata service is basically an internal endpoint that only the virtual machine can reach. It provides important information about the VM itself, like its IP, its hostname, and more.
- Metadata service API by default reachable from VMs
- Can contain cloud credentials for VMs / Nodes
- Can contain provisioning data like kubelet credentials
- Ensure that cloud-instance-account has only the necessary permissions
- Each cloud provider has a set of recommendations to follow
- Not in the hands of Kubernetes
By default, pods and their containers can contact the metadata service. So it's possible for an attacker to contact the metadata service from a container (which is running as a process on the node) and query sensitive information.
You can refer to the Official Docs for the specific cloud provider's metadata. In this case we are referring to Google Cloud VM metadata.
# Access the metadata from any node
curl "http://metadata.google.internal/computeMetadata/v1/instance/image" -H "Metadata-Flavor: Google"
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0/" -H "Metadata-Flavor: Google"# Create a pod
kubectl run nginx --image=nginx:alpine
# Go inside the pod
kubectl exec nginx -it -- sh
# Then run the curl commands to access the metadata
# and you can from a pod too!
We can allow certain pods (by label) to access this metadata service and deny access for all other pods.
Note
Check out np_cloud_metadata_deny.yaml and np_cloud_metadata_allow.yaml for reference.
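A hedged sketch of the two policies (policy names are assumptions; 169.254.169.254 is the link-local IP that metadata.google.internal resolves to):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-deny    # assumed name: deny metadata egress for all pods
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # block only the metadata endpoint
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cloud-metadata-allow   # assumed name: re-allow it for labeled pods
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32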
vim deny-metadata.yaml
kubectl create -f deny-metadata.yaml
kubectl exec nginx -it -- sh
# Then curl metadata.google.internal, it will not reach
# Now create label specific allow for metadata service
vim allow-metadata.yaml
kubectl create -f allow-metadata.yaml
# Add 'role=metadata-accessor' to the specific pod(s)
kubectl label pod nginx role=metadata-accessor
# OR
kubectl edit pod nginx
kubectl exec nginx -it -- sh
# curl metadata.google.internal, it will reach because the label matches
Conclusion: only allow pods with a certain label to access the metadata endpoint.
Use the CIS Benchmark to review your security configuration
- Best practices and guidelines for the secure configuration of a target system, application, or service
- Covering more than 18 technology groups
- Developed through a unique consensus-based process by cybersecurity professionals and subject matter experts around the world
The CIS Kubernetes Benchmark specifically focuses on security recommendations for Kubernetes clusters. It covers areas like how to secure the Kubernetes control plane, worker nodes, and overall cluster configurations.
We can apply these CIS Benchmark recommendations on top of the default K8s security rules, or use them to customize a cloud-managed Kubernetes solution.
Download the latest CIS Benchmark PDF from the official cisecurity.org
Note
We are going to follow: CIS_Kubernetes_Benchmark_v1.9.0 PDF (or you can check any latest version of it)
- Check that the document matches your k8s version
- Control Plane: page 16 rule 1.1.1
- Worker Node: page 204 rule 4.2.10
Refer to the Official kube-bench GitHub Repo
podman run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t docker.io/aquasec/kube-bench:latest --version 1.33
You can now check which security configurations PASS, FAIL, or WARN according to your current config. You also get remediations to make the system more secure.
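Alternatively, kube-bench can run inside the cluster as a Job; the manifest path below follows the kube-bench repo layout and may change over time:
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
kubectl logs job/kube-bench   # results appear once the Job completes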