2 changes: 1 addition & 1 deletion Event-Notes/CCDC2024/CCDC-Qualifier-2024/README.md
Original file line number Diff line number Diff line change
@@ -1 +1 @@
Big Empty
No README currently written.
2 changes: 1 addition & 1 deletion Event-Notes/Service-First-15/DNS/Linux/README.md
@@ -16,6 +16,6 @@ The only service that this may rely on is a proxy if we are exposing a DNS s
## First 30
* Audit the DNS Server each machine is configured to use (/etc/resolv.conf, nmtui)
* Can Wazuh do this? What about Zabbix?
* Is DNSSec something that is good
* Question (need to look into): is DNSSEC worth enabling here?
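The audit in the first bullet can be sketched as a one-liner (the path may differ on hosts running systemd-resolved, where `/etc/resolv.conf` points at a local stub):

```sh
# Print the nameservers this machine is configured to use
awk '/^nameserver/ {print $2}' /etc/resolv.conf
```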
## Stretch Goals
Enable DNSSEC.
114 changes: 114 additions & 0 deletions Projects/Cyber-Range-K8/K8_cluster_creation.md
@@ -0,0 +1,114 @@

# Kubernetes Cluster Creation Notes

## TOC
- [Kubernetes Cluster Creation Notes](#kubernetes-cluster-creation-notes)
- [TOC](#toc)
- [Prerequisites](#prerequisites)
- [Steps](#steps)
- [Install Docker on Each Machine](#install-docker-on-each-machine)
- [Install Kubernetes Components on Each Machine](#install-kubernetes-components-on-each-machine)
- [Step 1: Initialize the Control Plane](#step-1-initialize-the-control-plane)
- [Step 1.1: Set Up kubeconfig for kubectl on Control Plane](#step-11-set-up-kubeconfig-for-kubectl-on-control-plane)
- [Step 1.2: Deploy a Pod Network(In This Case, Calico)](#step-12-deploy-a-pod-networkin-this-case-calico)
- [Why?](#why)
- [Step 2: Join Worker Nodes to the Cluster](#step-2-join-worker-nodes-to-the-cluster)
- [Step 3: Verify the Cluster](#step-3-verify-the-cluster)
- [Troubleshooting](#troubleshooting)
- [Notes](#notes)



## Prerequisites
1. Ensure all machines (control plane and worker nodes) are running a compatible Linux OS.

## Steps
1. Install Docker or another container runtime on each machine.
2. Install `kubeadm`, `kubelet`, and `kubectl` on each machine.

### Install Docker on Each Machine
```sh
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
sudo systemctl enable docker
sudo systemctl start docker
```
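As an optional smoke test (run on each machine), the stock `hello-world` image confirms the runtime is working before moving on:

```sh
# Should print a greeting from the container if the Docker daemon is running
sudo docker run --rm hello-world
```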

### Install Kubernetes Components on Each Machine
```sh
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl
sudo curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
```
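Optionally, holding the packages keeps an unattended `apt upgrade` from moving the cluster to a new Kubernetes version unexpectedly; this is a common recommendation in kubeadm install guides:

```sh
# Prevent apt from upgrading the Kubernetes components out from under the cluster
sudo apt-mark hold kubelet kubeadm kubectl
```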

## Step 1: Initialize the Control Plane
On the control plane (master) node:

```sh
sudo kubeadm init --apiserver-advertise-address=<control-plane-ip> --pod-network-cidr=192.168.0.0/16
```
The pod-network CIDR here is Calico's default (`192.168.0.0/16`); it must not overlap with the subnet the nodes themselves are on.

### Step 1.1: Set Up kubeconfig for kubectl on Control Plane
Copies the admin kubeconfig into place so the current user (`blueteam`) can use `kubectl` against the cluster.
```sh
sudo mkdir -p /home/blueteam/.kube
sudo cp /etc/kubernetes/admin.conf /home/blueteam/.kube/config
sudo chown blueteam:blueteam /home/blueteam/.kube/config
```

### Step 1.2: Deploy a Pod Network(In This Case, Calico)

This is a required step; do not make the same mistake I made and skip it.
```sh
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```

#### Why?
**Pod-to-Pod Communication**: Kubernetes requires a pod network add-on so that pods on different nodes can communicate.

**Network Policies**: Add-ons like Calico also enable you to implement network policies that can restrict how pods communicate with each other, enhancing the security of your cluster.

**Cluster Scalability**: Efficient networking is crucial for cluster scalability. It ensures that as your cluster grows, network performance remains stable and reliable.
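To illustrate the network-policy point, here is a minimal sketch of a policy that only admits traffic to database pods from frontend pods (the labels `app: db` / `app: frontend` and the port are hypothetical, not part of this cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db              # the policy applies to the database pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 5432         # hypothetical database port
```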

## Step 2: Join Worker Nodes to the Cluster
*Remember: each node needs `kubeadm` installed.*
On each worker node:

1. **Obtain the `kubeadm join` command** from the control plane:
```sh
kubeadm token create --print-join-command
```

2. **Run the `kubeadm join` command** on each worker node:
```sh
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

## Step 3: Verify the Cluster
On the control plane node, verify that all nodes have joined the cluster:

```sh
kubectl get nodes
```

### Troubleshooting
- Check the kubelet logs if nodes are not joining or not in `Ready` state:
```sh
journalctl -u kubelet -f
```

- Ensure the necessary ports are open and that the worker nodes can access the required container images.
- `netstat -tulpn | grep <port#>` (or `ss -tulpn | grep <port#>` on systems without net-tools)
- If you are redeploying an entirely new cluster on a machine that previously had a cluster, make sure you tear the old cluster down first. I spent way too much time troubleshooting when I could have simply reset and reinstalled.
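A sketch of that teardown, run on each node that was part of the old cluster (assumes the cluster was built with kubeadm; the CNI config path may differ per add-on):

```sh
sudo kubeadm reset -f                    # undo the changes made by kubeadm init/join
sudo rm -rf /etc/cni/net.d "$HOME/.kube" # clear leftover CNI config and kubeconfig
```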
## Notes
- The `--pod-network-cidr=192.168.0.0/16` flag in the `kubeadm init` command is necessary for network add-ons like Calico.
- Ensure that the control plane IP is correct and accessible from the worker nodes.
*This is taken from my notes and added to the repo, so there are extra explanations/suggestions that I originally added for myself.*
24 changes: 24 additions & 0 deletions Projects/Cyber-Range-K8/README.md
@@ -0,0 +1,24 @@
# Project Overview

This project aims to migrate services to a Kubernetes (K8s) cluster using Ansible scripts, ensuring minimal downtime, secure deployment, and seamless data transfer. The migration will involve deploying services on both K8s clusters and LXC-contained clusters, and the strategy will hopefully be adapted to work with existing clusters.

## PLAN
### Goal
1. Automate the deployment of services using Ansible on a K8s cluster.
2. Migrate cluster information to an entirely new cluster.
3. Generalize the script to work for different machines, in preparation for competition.
### Steps:
1. Set up a test environment manually.
2. Set up a test environment automatically using Ansible.
3. Migrate cluster information manually.
4. Migrate cluster information automatically using Ansible.
5. Consolidate and generalize the steps and information for use in the competition.



#### Current issues
- Cluster is currently down.
> Set up VMs on a personal machine.

- Migration is proving difficult.
93 changes: 93 additions & 0 deletions Projects/Cyber-Range-K8/nginx_modify_content.md
@@ -0,0 +1,93 @@
# Updating the Contents of an Nginx Webpage in Kubernetes

## TOC
- [Updating the Contents of an Nginx Webpage in Kubernetes](#updating-the-contents-of-an-nginx-webpage-in-kubernetes)
- [TOC](#toc)
- [Step 1: Create a ConfigMap with the New HTML Content](#step-1-create-a-configmap-with-the-new-html-content)
- [Step 2: Mount the ConfigMap as a Volume in the Nginx Pod](#step-2-mount-the-configmap-as-a-volume-in-the-nginx-pod)
- [Step 3: Verify the Updated Webpage](#step-3-verify-the-updated-webpage)
- [Notes](#notes)


## Step 1: Create a ConfigMap with the New HTML Content
A ConfigMap lets you hand Kubernetes the HTML as configuration data that can be mounted into the pod, so you do not have to build a custom Nginx image just to change the page.

1. Create a new HTML file on your local machine, for example, `index.html`:

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Welcome to My Custom Nginx Page</title>
</head>
<body>
<h1>Hello, Mr. Chris</h1>
<p>Custom Nginx page through Kubernetes.</p>
</body>
</html>
```

2. Create a ConfigMap in Kubernetes to hold your custom HTML content:

```sh
kubectl create configmap nginx-html --from-file=index.html
```

This command creates a ConfigMap named `nginx-html` that contains the `index.html` file.

## Step 2: Mount the ConfigMap as a Volume in the Nginx Pod

1. Update your Nginx Deployment to mount the ConfigMap as a volume. Modify your `nginx-deployment.yaml` file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-html
              # Mount the ConfigMap's index.html over the default page
              # (note the full file path; mounting the bare directory with a
              # subPath would replace the directory with a single file)
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: nginx-html
          configMap:
            name: nginx-html
```

2. Apply the updated Deployment:

```sh
kubectl apply -f nginx-deployment.yaml
```

This step will update your Nginx pods to serve the custom `index.html` from the ConfigMap.
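One caveat worth knowing: files mounted via `subPath` are not refreshed automatically when the ConfigMap changes, so after editing the ConfigMap you may need to recreate the pods:

```sh
# Recreate the pods so they pick up the updated ConfigMap contents
kubectl rollout restart deployment/nginx-deployment
```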

## Step 3: Verify the Updated Webpage

1. Access the Nginx webpage as before using your node's IP and NodePort or the Ingress setup.

2. You should see the updated content on your webpage, displaying the custom HTML you provided.

## Notes

- **ConfigMap Limitations**: ConfigMaps are intended for small configuration data. If your HTML content is extensive, consider using a PersistentVolume to store your web content instead.
- **PersistentVolume Approach**: If you want to serve more complex web content, consider creating a PersistentVolume and PersistentVolumeClaim, and then mount that volume in your Nginx pods.
- **ConfigMap**: A way to manage and share configuration data with your applications in Kubernetes without hardcoding the information inside the application itself. Akin to a settings page for your deployment; useful for keeping different settings for different environments and being able to swap them out quickly.
- *This doc was taken from my notes, so it keeps some small choices I made to help myself understand.*
93 changes: 93 additions & 0 deletions Projects/Cyber-Range-K8/nginx_website_accessible_outside.md
@@ -0,0 +1,93 @@

# Deploying an Nginx Website on Kubernetes

## TOC
- [Deploying an Nginx Website on Kubernetes](#deploying-an-nginx-website-on-kubernetes)
- [TOC](#toc)
- [Step 1: Create a Deployment for Nginx](#step-1-create-a-deployment-for-nginx)
- [Step 2: Expose Nginx via a Service](#step-2-expose-nginx-via-a-service)
- [Step 3: Access the Nginx Website Externally](#step-3-access-the-nginx-website-externally)
- [Step 5: Verify External Access](#step-5-verify-external-access)


## Step 1: Create a Deployment for Nginx
Create a Deployment resource to run an Nginx container. This will ensure that an Nginx pod is always running.

1. Create a YAML file named `nginx-deployment.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

2. Apply the Deployment using `kubectl`:

```sh
kubectl apply -f nginx-deployment.yaml
```

## Step 2: Expose Nginx via a Service
To make the Nginx deployment reachable, create a Service. A Service gives the set of Nginx pods a single stable address and port; the `NodePort` type used below also exposes that port on every node, so the site can be reached from outside the cluster.

1. Create a YAML file named `nginx-service.yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
```

2. Apply the Service using `kubectl`:

```sh
kubectl apply -f nginx-service.yaml
```

3. Verify that the Service has been created:

```sh
kubectl get services
```

Note the `NodePort` assigned by Kubernetes.
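If you want just the assigned port (for scripting), a jsonpath query can pull it out of the Service object:

```sh
# Prints only the NodePort number assigned to the first port of the Service
kubectl get service nginx-service -o jsonpath='{.spec.ports[0].nodePort}'
```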

## Step 3: Access the Nginx Website Externally
To access the Nginx website from outside the Kubernetes cluster, you need to access the `NodePort` on one of your cluster's nodes.

1. Determine the external IP address of one of your nodes (you can use `kubectl get nodes -o wide` to see the IPs).

2. Access the Nginx website in your browser or using `curl` by visiting:

```
http://<node-ip>:<node-port>
```
## Step 5: Verify External Access
Visit `http://<node-ip>:<node-port>` from a machine outside the cluster. You should see the default Nginx welcome page. (No Ingress is configured in this guide; access is via the NodePort.)

*This doc was taken from my notes, so it keeps some small choices I made to help myself understand.*
1 change: 1 addition & 0 deletions Projects/Cyber-Range-K8/yaml_configs/README.md
@@ -0,0 +1 @@
This folder contains pre-configured YAML files for different possible requests during CCDC and related competitions.
28 changes: 28 additions & 0 deletions Projects/Cyber-Range-K8/yaml_configs/nginx_daemonset.yml
@@ -0,0 +1,28 @@
# By using a DaemonSet instead of a Deployment, you ensure that an Nginx pod
# runs on every worker node in the cluster. This is ideal when each node should
# serve a copy of the application, such as distributed logging, monitoring
# agents, or, in this case, Nginx web servers.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-html
              # Mount the ConfigMap's index.html over the default page
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
      volumes:
        - name: nginx-html
          configMap:
            name: nginx-html