
Kubernetes Security Specialist (CKS) – Study Guide

A comprehensive study guide and practical resource for the Certified Kubernetes Security Specialist (CKS) exam. This repository covers essential K8s security concepts, best practices, and hands-on labs to help you master Kubernetes security fundamentals and pass the CKS certification.

K8s Security Best Practices

  • Security combines many different complex processes
  • Environments change, i.e., security can never stay in one fixed state
  • Attackers have the advantage
    • They decide the time
    • They pick what to attack, e.g., the weakest link

Security Principles

  1. Defense in Depth
  2. Least Privilege
  3. Limiting the Attack Surface

Note

Redundancy is good... in security. In application development, follow DRY ("Don't Repeat Yourself"), but in security favor layered defense and redundancy.

Kubernetes Security Categories

  1. Host Operating System Security 🐧
  2. Kubernetes Cluster Security ☸️
    • Largely handled for you in managed K8s services, e.g., by AWS or Google
  3. Application Security 🐳

Host OS Security

  • Kubernetes nodes should only do one thing: Kubernetes
  • Reduce the attack surface
    • Remove unnecessary applications
    • Keep everything up to date
  • Use runtime security tools
    • Find and identify malicious processes
  • Restrict IAM / SSH access

Kubernetes Cluster Security

  • Keep Kubernetes components secure and up to date:
    • API Server
    • kubelet
    • ETCD
  • Restrict (external) access
  • Use Authentication → Authorization
  • AdmissionControllers
    • NodeRestriction
    • Custom Policies (OPA)
  • Enable Audit Logging
  • Security Benchmarking
  • Encrypt Traffic to ETCD

Application Security

  • Use Secrets / no hardcoded credentials
  • RBAC
  • Container Sandboxing
  • Container Hardening (see the securityContext sketch after this list)
    • Reduce the attack surface
    • Run as a non-root user
    • Read-only filesystem
  • Vulnerability Scanning
  • mTLS / ServiceMeshes
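
These hardening points map directly onto a Pod's securityContext. A minimal sketch, assuming a simple busybox workload (names and values are illustrative, not from the repo):

apiVersion: v1
kind: Pod
metadata:
  name: hardened
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "1d"]
    securityContext:
      runAsNonRoot: true              # refuse to start if the image would run as root
      runAsUser: 10001                # fixed non-root UID
      readOnlyRootFilesystem: true    # the container cannot write to its own filesystem
      allowPrivilegeEscalation: false # no privilege gain via setuid binaries
      capabilities:
        drop: ["ALL"]                 # shrink the kernel attack surface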

Container Isolation in action

Note

You should be familiar with Kubernetes architecture and basic components. For a basic introduction, check out Foundation.md

Create two containers and check if they can see each other

podman run --name c1 -d ubuntu sh -c 'sleep 1d'
podman exec c1 ps aux

podman run --name c2 -d ubuntu sh -c 'sleep 999d'
podman exec c2 ps aux

ps aux | grep sleep

Then create two containers sharing the same PID namespace

podman rm c2 --force

# Run c2 in the same PID namespace as c1
podman run --name c2 --pid=container:c1 -d ubuntu sh -c 'sleep 999d'

podman exec c2 ps aux
podman exec c1 ps aux
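
Expected result: without a shared namespace, each container only sees its own processes; after --pid=container:c1, both ps aux outputs list both sleep processes, because c1 and c2 now share a single PID namespace.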

Network Policies

  • Firewall Rules in Kubernetes
  • Implemented by the CNI network plugin (e.g., Calico or Weave)
  • Namespace level
  • Restrict Ingress and/or Egress for a group of pods based on certain rules and conditions

Without NetworkPolicies

  • By default, every pod can reach every other pod
  • Pods are NOT isolated

Example Visualization of NetworkPolicies

(Diagrams: Egress, Ingress, Namespace, and IpBlock NetworkPolicies)

Example of a declarative YAML NetworkPolicy configuration

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend
  policyTypes:
  - Egress

The above example is a valid NetworkPolicy which:

  • denies all outgoing traffic
  • from pods with label id=frontend
  • in namespace default

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example
  namespace: default
spec:
  podSelector:
    matchLabels:
      id: frontend  # will be applied to these pods
  policyTypes:
  - Egress  # will be about outgoing traffic
  egress:
  - to:
  # allow outgoing traffic to namespace with label id=ns1 AND port 80
    - namespaceSelector:
        matchLabels:
          id: ns1
    ports:
    - protocol: TCP
      port: 80
  - to:
  # allow outgoing traffic to pods with label id=backend in same namespace (default)
    - podSelector:
        matchLabels:
          id: backend

In the above example, the two egress rules are combined with a logical OR.

Multiple NetworkPolicies

  • Possible to have multiple NPs selecting the same pods
  • If a pod has more than one NP
    • then the union of all NPs is applied
    • order doesn't affect policy result

You can check out example.yaml, which is the merged policy of example2a.yaml and example2b.yaml.

Default Deny NetworkPolicy

We'll create a very simple scenario with one frontend pod and one backend pod, and we'll check the connectivity between the pods before and after creating our NetworkPolicy.

kubectl run frontend --image=nginx:alpine
kubectl run backend --image=nginx:alpine

kubectl expose pod frontend --port 80
kubectl expose pod backend --port 80

kubectl get pod,svc

# Now check connectivity from frontend to backend, & vice versa
kubectl exec frontend -- curl backend
kubectl exec backend -- curl frontend

# Now we'll create our network policy
vim default-deny.yaml
kubectl -f default-deny.yaml create

kubectl exec frontend -- curl backend
kubectl exec backend -- curl frontend

Use the example default-deny.yaml for practice.
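
A minimal sketch of such a default-deny policy (the repo's default-deny.yaml may differ in details): an empty podSelector selects every pod in the namespace, and declaring both policy types without any allow rules denies all traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}   # empty selector = all pods in this namespace
  policyTypes:
  - Ingress
  - Egress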

Allow frontend pods to talk to backend pods

-- based on podSelectors

We'll specifically allow the frontend pod to connect to the backend pod, i.e., we'll create one NetworkPolicy that allows outgoing traffic from frontend and one that allows incoming traffic from frontend to backend.

vim frontend.yaml
kubectl -f frontend.yaml create

kubectl exec frontend -- curl backend

vim backend.yaml
kubectl -f backend.yaml create

kubectl exec frontend -- curl backend
# It will still not work
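
For reference, minimal sketches of what the two policies could look like (the repo's frontend.yaml and backend.yaml may differ; kubectl run labels the pods run=frontend and run=backend):

# frontend.yaml -- allow egress from frontend pods to backend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: frontend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          run: backend
---
# backend.yaml -- allow ingress to backend pods from frontend pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend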

Important

Our default-deny policy also denies DNS traffic (port 53), and frontend needs DNS to resolve the backend service name. As a workaround, curl the backend pod's IP directly:

kubectl get pod --show-labels -owide
kubectl exec frontend -- curl <backend-IP>

Note

If you would like to allow DNS resolution, extend your default-deny policy to allow Egress to port 53.

You can check out allow-dns-resolution.yaml
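
A sketch of that extension (compare with the repo's allow-dns-resolution.yaml): an egress rule with only a ports section and no to: restriction allows DNS to any destination.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - ports:            # allow DNS resolution to anywhere
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53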

Allow backend pods to talk to database pods

-- based on namespaceSelectors

pod-frontend → pod-backend → pod-cassandra

kubectl create ns cassandra

# Add "ns: cassandra" labels
kubectl edit ns cassandra
# OR
kubectl label namespace cassandra ns=cassandra

kubectl -n cassandra run cassandra --image=nginx:alpine
kubectl -n cassandra get pod -owide

kubectl exec backend -- curl <cassandra-IP>

Now we'll allow the backend pods egress traffic to the namespace cassandra, where our database pod is running. A sketch of the updated policy follows.
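
A sketch of the updated backend.yaml (assuming the ns=cassandra namespace label from above; the repo's file may differ):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: frontend
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          ns: cassandra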

vim backend.yaml # edit to add Egress policy
kubectl -f backend.yaml apply

vim cassandra-deny.yaml
kubectl -f cassandra-deny.yaml create

vim cassandra.yaml
kubectl -f cassandra.yaml create

kubectl exec backend -- curl <cassandra-IP>
# FAILS! Try to debug yourself :)

Note

In the previous step, we didn't modify the default namespace to add the label "ns: default".

kubectl edit ns default

# Now try
kubectl exec backend -- curl <cassandra-IP>

# You can also allow DNS traffic in cassandra namespace
vim cassandra-deny.yaml
kubectl -f cassandra-deny.yaml apply # Add egress to port 53 (TCP & UDP)

# Expose cassandra as a service
kubectl -n cassandra expose pod cassandra --port 80
kubectl exec backend -- curl cassandra.cassandra

Extend restriction between backend & cassandra

  • based on additional pod label
  • and additional port

You can check extended cassandra.yaml


GUI Elements and the Dashboard

  • only expose services externally if needed
  • cluster-internal services/dashboards can also be accessed using kubectl port-forward

Tesla Hack 2018

  • The Kubernetes Dashboard had too many privileges on the cluster
    • no RBAC, or overly broad roles
  • The Kubernetes Dashboard was exposed to the internet
    • which it isn't by default

Kubectl proxy

  • Creates a proxy server between localhost and the Kubernetes API server
  • uses the connection as configured in the kubeconfig
  • allows you to access the API locally over plain HTTP, without authentication

kubectl-proxy
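
A quick usage example (the resource path is just one of many you can query):

kubectl proxy --port=8001 &
# The API is now reachable locally, without any extra authentication:
curl http://localhost:8001/api/v1/namespaces/default/pods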

Kubectl port-forward

  • forwards connections from a localhost port to a pod port
  • more generic than kubectl proxy
  • can be used for all TCP traffic, not just HTTP

kubectl port-forward
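
A quick usage example, assuming the backend service from the earlier exercises still exists:

# Forward local port 8080 to port 80 of the backend service
kubectl port-forward service/backend 8080:80 &
curl http://localhost:8080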

Note

If you have the dashboard and want to expose it externally without using kubectl, take precautions. You could, for example, expose it with an Ingress (e.g., Nginx Ingress) under your own custom domain, and then implement some form of authentication in front of it.

Install & Access the Kubernetes Dashboard

Refer to the Official Kubernetes Dashboard GitHub Repo

Important

You have to install Helm first to install latest kubernetes-dashboard (v7+). Refer to Official Helm Docs for installation.

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

kubectl get ns
kubectl -n kubernetes-dashboard get pod,svc

Make Dashboard Available Externally on HTTP (or insecure HTTPS)

Important

Don't do this in production!

Check out Dashboard Arguments docs in official kubernetes/dashboard repo

kubectl -n kubernetes-dashboard get pod,deploy,svc

kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard-api
# add --insecure-port=8000 to the container args (spec.template.spec.containers[].args)
# also add --disable-csrf-protection

kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard-kong-proxy
# Change type to "NodePort"

kubectl -n kubernetes-dashboard get svc

Try to access the Kubernetes Dashboard on: https://<worker-node_External-IP>:<kong-proxy-NodePort>

Generate the token with:

kubectl -n kubernetes-dashboard create token kubernetes-dashboard-kong

Important

Most probably you will only be able to access it over (insecure) HTTPS, no longer over plain HTTP. Either way, it is unsafe.

Give more permissions to the kubernetes-dashboard ServiceAccount

kubectl -n kubernetes-dashboard get sa
kubectl get clusterroles | grep view

# Access scoped to the kubernetes-dashboard namespace only
k -n kubernetes-dashboard create rolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard-kong --clusterrole view -oyaml --dry-run=client

# Cluster-wide access
k -n kubernetes-dashboard create clusterrolebinding insecure --serviceaccount kubernetes-dashboard:kubernetes-dashboard-kong --clusterrole view -oyaml --dry-run=client

Explore more about it, and check out the updated Kubernetes Dashboard Docs


Ingress objects with Security Control

What is Ingress

There are three main Service types in Kubernetes:

  • ClusterIP
  • NodePort
  • LoadBalancer

Ingress is a collection of rules that define how external users can access services within a cluster, acting as a single entry point for traffic. An Ingress Controller is a component that watches for these Ingress resources and implements them, typically by managing a reverse proxy and load balancer within the cluster.

Setup an Ingress with Services

Important

DELETE ALL your existing NetworkPolicies from previous sections.

Example Ingress

Install Kubernetes Nginx Controller

Refer to their Official Installation Guide for latest version of ingress controller or run the below command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.0/deploy/static/provider/cloud/deploy.yaml

k get pod,svc,serviceaccounts -n ingress-nginx

Here, service/ingress-nginx-controller will probably be of type LoadBalancer, but it still exposes NodePorts, e.g., 80:30277/TCP,443:30421/TCP

Note

You can change the service type of ingress-nginx-controller to NodePort specifically by doing:
kubectl -n ingress-nginx edit svc ingress-nginx-controller

Try to check the HTTP connection locally using the NodePort

# From your local terminal
# Change the NodePort accordingly
curl http://<worker-node_ExternalIP>:30277

It should return a "404 Not Found" page (served by the ingress controller's default backend)

Create the Ingress resource

vim ingress.yaml
kubectl -f ingress.yaml create
kubectl get ing

Check out the example minimal-ingress.yaml file.
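
A sketch of what such a minimal Ingress could look like, routing /service1 and /service2 to the services created in the next step (the repo's minimal-ingress.yaml may differ; the rewrite-target annotation and ingressClassName are assumptions for a standard ingress-nginx setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80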

Create Pod and Services for Ingress

kubectl run pod1 --image=nginx:alpine
kubectl run pod2 --image=nginx:alpine
kubectl expose pod pod1 --port 80 --name service1
kubectl expose pod pod2 --port 80 --name service2

kubectl get pod,svc

Try to hit the endpoints:

curl http://34.28.6.202:30277/service1
curl http://34.28.6.202:30277/service2

Secure an Ingress with TLS

Try to access with HTTPS NodePort of ingress-nginx-controller

curl https://34.28.6.202:30421/service1
# Failed due to curl: (60) SSL certificate problem: self-signed certificate

# But it works with:
curl https://34.28.6.202:30421/service1 -k
curl https://34.28.6.202:30421/service1 -kv

Because the controller serves the default "Kubernetes Ingress Controller Fake Certificate"

Note

We are following the Official Kubernetes Ingress Docs

Step 1: Create our TLS Certificate & Key

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
# Common Name: secure-ingress.com

ls # Check that key.pem & cert.pem were created

Step 2: Create a new TLS Secret for Ingress

kubectl create secret tls -h
kubectl create secret tls <secret-name> --cert=path/to/cert.pem --key=path/to/key.pem

kubectl get ing,secret

Step 3: Edit your ingress.yaml to add tls configuration

Check out the secure-ingress.yaml file for reference

vim ingress.yaml
kubectl apply -f ingress.yaml

# OR
k -n <NAMESPACE> edit ing <Ingress-Name>
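
The relevant addition is a tls section referencing the Secret from Step 2, roughly like this (use whatever secret name you chose above):

spec:
  tls:
  - hosts:
    - secure-ingress.com
    secretName: <secret-name>
  # rules: ... (unchanged)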

Step 4: Check the new endpoint locally using curl

# Will not work with:
curl https://34.28.6.202:30421/service1 -k

# It actually works using 'secure-ingress.com', since that is set as the host
curl https://secure-ingress.com:30421/service1 --resolve secure-ingress.com:30421:34.28.6.202 -k
curl https://secure-ingress.com:30421/service1 --resolve secure-ingress.com:30421:34.28.6.202 -kv

Diagram of our TLS Secured Ingress

Secured Ingress


Protect Node Metadata and Endpoint

When you run virtual machines on a cloud platform such as AWS, Azure, or Google Cloud, each virtual machine (or node) has access to something called a metadata service.

This metadata service is an internal endpoint that only the virtual machine can reach. It provides important information about the VM itself, like its IP, its hostname, and more.

Cloud Platform Node Metadata

  • The metadata service API is reachable from VMs by default
  • Can contain cloud credentials for VMs / Nodes
  • Can contain provisioning data like kubelet credentials

Limit permissions for instance credentials

  • Ensure that cloud-instance-account has only the necessary permissions
  • Each cloud provider has a set of recommendations to follow
  • Not in the hands of Kubernetes

Access Sensitive Node Metadata

By default, pods and their containers can contact the metadata service. An attacker who gains code execution in a container can therefore contact the metadata service from there and query sensitive information.

You can refer to the official docs of your specific cloud provider for its metadata service. In this case we are referring to Google Cloud VM metadata.

Access GCP metadata from Instance

# Access the metadata from any node
curl "http://metadata.google.internal/computeMetadata/v1/instance/image" -H "Metadata-Flavor: Google"
curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/0/" -H "Metadata-Flavor: Google"

Access GCP metadata from a Pod

# Create a pod
kubectl run nginx --image=nginx:alpine

# Go inside the pod
kubectl exec nginx -it -- sh
# Then run the curl commands to access the metadata
# They work from inside a pod too!

Restrict Access using NetworkPolicies

Restrict Access to the Metadata Service

We can allow certain pods (selected by label) to access the metadata service and deny access for all others.
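
A sketch of such a deny policy (compare with the repo's deny-metadata.yaml; on GCP the metadata service sits at 169.254.169.254): it allows egress anywhere except to the metadata IP.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata
  namespace: default
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32   # block the metadata endpoint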

vim deny-metadata.yaml
kubectl create -f deny-metadata.yaml

kubectl exec nginx -it -- sh
# Then curl metadata.google.internal; it is no longer reachable

# Now create label specific allow for metadata service
vim allow-metadata.yaml
kubectl create -f allow-metadata.yaml

# Add 'role=metadata-accessor' to the specific pod(s)
kubectl label pod nginx role=metadata-accessor
# OR
kubectl edit pod nginx

kubectl exec nginx -it -- sh
# curl metadata.google.internal; it is reachable now because the label matches

Conclusion: only allow pods with a certain label to access the metadata endpoint.
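
And a matching allow policy sketch (compare with the repo's allow-metadata.yaml), which grants metadata access only to pods carrying the role=metadata-accessor label:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-metadata
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor   # only labelled pods get access
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 169.254.169.254/32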


CIS Benchmarks

Use CIS benchmark to review security configuration

Overview of CIS (Center for Internet Security)

  • Best practices and guidelines for the secure configuration of a target system, application, or service
  • Covering more than 18 technology groups
  • Developed through a unique consensus-based process involving cybersecurity professionals and subject matter experts around the world

CIS Kubernetes Benchmark

The CIS Kubernetes Benchmark specifically focuses on security recommendations for Kubernetes clusters. It covers areas like how to secure the Kubernetes control plane, worker nodes, and overall cluster configurations.

We can apply this CIS Benchmark to the default K8s security rules, or customize it for a cloud-managed Kubernetes solution.

CIS Benchmarks in action

Download the latest CIS Benchmark PDF from the official cisecurity.org

Note

We are going to follow: CIS_Kubernetes_Benchmark_v1.9.0 PDF (or you can check any latest version of it)

  • Check which K8s version your benchmark document covers
  • Control Plane: page 16, rule 1.1.1
  • Worker Node: page 204, rule 4.2.10

kube-bench

Refer to the Official kube-bench GitHub Repo

podman run --pid=host -v /etc:/etc:ro -v /var:/var:ro -t docker.io/aquasec/kube-bench:latest --version 1.33

You can now see which security configurations PASS, FAIL, or WARN in your current setup. You also get remediations to make the system more secure.

