This guide covers deploying the DeathStarBench Social Network application to Google Kubernetes Engine (GKE) and profiling its performance. It walks through project setup, cluster creation, deploying the application with Helm, simulating workloads, and analyzing the results.
- Prerequisites
- Setting up GKE Cluster
- Deploying the Application Using Helm
- Profiling CPU, Memory, and Latency
- Scaling and Load Balancing
- Troubleshooting
## Prerequisites

- **Google Cloud Project**
  - Set up a Google Cloud Project and enable Kubernetes Engine.
  - Ensure the `gcloud` CLI is installed and authenticated with your Google Cloud account.
- **Google Kubernetes Engine (GKE)**
  - Install the `kubectl` and `gcloud` CLI tools.
  - Enable the Kubernetes Engine API in your project.
  - Set your project ID and configure the `gcloud` CLI.
- **Helm**
  - Install Helm for managing Kubernetes applications.
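As a quick sanity check, the tools above can be verified from a terminal before proceeding (replace the project ID placeholder with your own):

```bash
# Confirm the CLIs are installed.
gcloud version
kubectl version --client
helm version

# Confirm gcloud is authenticated and pointed at the right project.
gcloud auth list
gcloud config set project <YOUR_PROJECT_ID>
```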
## Setting up GKE Cluster

- Create a Kubernetes cluster in GKE:

  ```bash
  gcloud container clusters create deathstar-cluster \
    --zone=asia-south1-a \
    --cluster-version=latest \
    --num-nodes=1
  ```

- Configure `kubectl` to use the new cluster:

  ```bash
  gcloud container clusters get-credentials deathstar-cluster --zone asia-south1-a
  ```

- Verify the cluster status:

  ```bash
  kubectl get nodes
  ```
## Deploying the Application Using Helm

- Clone the DeathStarBench repository and navigate to the `socialNetwork` Helm chart folder:

  ```bash
  git clone https://github.com/DeathStarBench/DeathStarBench.git
  cd DeathStarBench/socialNetwork/helm-chart/socialnetwork
  ```

- Create a namespace for the application:

  ```bash
  kubectl create namespace deathstarbench
  ```

- Install the application using Helm (`--create-namespace` makes the previous step optional):

  ```bash
  helm install social-network . -n deathstarbench --create-namespace
  ```

- Check the status of the pods:

  ```bash
  kubectl get pods -n deathstarbench
  ```
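Once the pods are running, the frontend can be exposed locally for testing. The service name and port below (`nginx-thrift` on 8080) are assumptions based on the chart's defaults; confirm them with `kubectl get svc -n deathstarbench`:

```bash
# Wait until every pod in the namespace reports Ready (up to 5 minutes).
kubectl wait --for=condition=ready pod --all -n deathstarbench --timeout=300s

# Forward the frontend service to localhost:8080.
kubectl port-forward svc/nginx-thrift 8080:8080 -n deathstarbench
```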
## Profiling CPU, Memory, and Latency

To profile the application's performance:

- **Simulate Workload:**
  - Write a script that generates a mix of read and write traffic against the social network.
- **Use Prometheus and Grafana:**
  - Install Prometheus for monitoring (the old `stable` chart repository is deprecated, so use the `prometheus-community` repository):

    ```bash
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm install prometheus prometheus-community/prometheus --namespace deathstarbench
    ```

  - Set up Grafana for visualizing performance metrics.
  - Configure Prometheus as a data source in Grafana.
- **Run the Workload and Monitor:**
  - Start the workload script and watch CPU usage, memory usage, and latency in Grafana.
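A minimal write workload can be sketched in shell. The endpoint path and form fields below mirror the wrk2 Lua scripts bundled with DeathStarBench, and `BASE_URL` assumes the frontend is reachable on `localhost:8080`; verify both against your deployment before relying on the numbers:

```bash
#!/bin/sh
# Assumes the frontend is reachable here (e.g. via kubectl port-forward).
BASE_URL="${BASE_URL:-http://localhost:8080}"

# Build the form body for a compose-post request. The field names mirror
# the wrk2 Lua scripts shipped with DeathStarBench; verify them against
# your chart version.
compose_post_payload() {
  uid="$1"; text="$2"
  printf 'username=username_%s&user_id=%s&post_type=0&text=%s' "$uid" "$uid" "$text"
}

# Fire n write requests at the compose-post endpoint.
run_write_workload() {
  n="${1:-100}"
  i=1
  while [ "$i" -le "$n" ]; do
    uid=$(( i % 961 + 1 ))   # keep user ids in a small fixed range
    curl -s -o /dev/null -X POST "$BASE_URL/wrk2-api/post/compose" \
      --data "$(compose_post_payload "$uid" "hello_$i")"
    i=$(( i + 1 ))
  done
}

# Usage, once the application is reachable:
#   run_write_workload 500
```

For serious latency measurements, prefer the `wrk2` load generator shipped in the DeathStarBench repository, which controls request rate and reports tail latencies.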
## Scaling and Load Balancing

To scale the deployment and improve load balancing:

- **Horizontal Pod Autoscaling:**
  - Configure a Horizontal Pod Autoscaler (HPA) to automatically scale the number of replicas based on CPU or memory usage.
- **Cluster Autoscaler:**
  - Enable Google Cloud's Cluster Autoscaler to add or remove nodes based on workload demand.
- **Test Different Configurations:**
  - Experiment with the number of nodes and with multiple node pools to compare performance.
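The HPA step can be sketched with `kubectl autoscale`. The target deployment name (`compose-post-service`) is an assumption; list the chart's deployments with `kubectl get deploy -n deathstarbench` and pick a hot service:

```bash
# Scale between 1 and 5 replicas, targeting 70% average CPU utilization.
# The deployment name below is an assumption -- substitute your own.
kubectl autoscale deployment compose-post-service \
  -n deathstarbench --cpu-percent=70 --min=1 --max=5

# Watch the autoscaler react while the workload runs.
kubectl get hpa -n deathstarbench -w
```

Note that a CPU-based HPA only acts if the target deployment declares CPU resource requests; without them, the utilization metric is undefined.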
## Troubleshooting

If you encounter issues such as pod scheduling errors due to insufficient resources, consider the following:

- Check the current node's resources and allocations:

  ```bash
  kubectl describe node <node-name>
  ```
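When pods stay in `Pending`, the following commands (a sketch; adjust the namespace if yours differs) usually surface the cause:

```bash
# List pods stuck in Pending.
kubectl get pods -n deathstarbench --field-selector=status.phase=Pending

# Recent events often name the scheduling failure (e.g. "Insufficient cpu").
kubectl get events -n deathstarbench --sort-by=.lastTimestamp | tail -n 20

# Describe a stuck pod to see the scheduler's message.
kubectl describe pod <pod-name> -n deathstarbench
```

With a single `--num-nodes=1` cluster, the usual fixes are resizing the node pool or choosing a larger machine type.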