A Kubernetes operator that deploys and manages the OCP Support Web application on OpenShift clusters. Built with controller-runtime and distributed via OLM (Operator Lifecycle Manager).
The operator manages the full lifecycle of the OCP Support Web application:
- Creates a ServiceAccount with OAuth redirect annotations
- Creates a scoped ClusterRole for the app ServiceAccount with least-privilege access
- Generates and stores an OAuth cookie secret
- Deploys the application with an OpenShift OAuth proxy sidecar
- Creates a Route with TLS re-encryption
- Sets up a ServiceMonitor for Prometheus metrics scraping
- Auto-detects the cluster apps domain from the `Ingress/cluster` resource
- Creates a `gather-common` ConfigMap for customizing gather definitions
- Cleans up cluster-scoped resources (ClusterRole, ClusterRoleBinding) on CR deletion via a finalizer
- OpenShift 4.12+
- OLM installed (included by default on OpenShift)
- User Workload Monitoring enabled (for metrics)
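If User Workload Monitoring is not yet enabled, it can be turned on with the standard OpenShift `cluster-monitoring-config` ConfigMap (this is the stock OpenShift procedure, not specific to this operator):

```yaml
# Enables User Workload Monitoring cluster-wide (standard OpenShift setting)
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```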
Copy `deploy/gitops-install.yaml` into your GitOps repository. It contains everything needed to deploy the operator and application:
- Namespace
- CatalogSource (pulls the operator catalog from quay.io)
- OperatorGroup (scoped to a single namespace)
- Subscription (installs the operator via OLM)
- OCPSupportWeb CR (deploys the application)
```
oc apply -f deploy/gitops-install.yaml
```

```
make bundle-build bundle-push
oc apply -f bundle/
```

```
make image-build image-push
make deploy OPERATOR_IMG=quay.io/youruser/ocp-support-web-operator:v0.1.0 APP_IMG=quay.io/youruser/ocp-support-web:v0.1.0
```

Create an OCPSupportWeb custom resource:
```yaml
apiVersion: support.openshift.io/v1alpha1
kind: OCPSupportWeb
metadata:
  name: ocpsupportweb
  namespace: ocp-support-web
spec:
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      memory: 512Mi
```

Check status:

```
oc get ocpsupportweb
```

The URL column shows the route where the application is accessible.
| Field | Description | Default |
|---|---|---|
| `spec.image` | Application container image (web UI, ACM agents, must-gather) | `RELATED_IMAGE_APP` env var |
| `spec.oauthProxyImage` | OAuth proxy sidecar image | `registry.redhat.io/openshift4/ose-oauth-proxy-rhel9:latest` |
| `spec.clusterDomain` | Cluster apps domain | Auto-detected from `Ingress/cluster` |
| `spec.route.host` | Custom route hostname | Auto-generated |
| `spec.resources` | App container resource requirements | 50m CPU / 128Mi-512Mi memory |
| `spec.oauthProxyResources` | OAuth proxy resource requirements | 10m CPU / 32Mi-64Mi memory |
| `spec.allowedGroups` | OpenShift groups allowed to access the app | `["cluster-admins"]` |
Container images are configurable in the CR spec:

- `spec.image` — the application image (used for the web UI, ACM remote agents, and standalone must-gather)
- `spec.oauthProxyImage` — the OAuth proxy sidecar
The operator supports the OLM `RELATED_IMAGE_*` convention for automatic image mirroring via ImageContentSourcePolicy:

- `RELATED_IMAGE_APP` — application image
- `RELATED_IMAGE_OAUTH_PROXY` — OAuth proxy
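As a sketch of how mirroring ties in, an ImageContentSourcePolicy redirecting digest pulls of the application image to an internal registry could look like this (the mirror registry is hypothetical; adjust the source repository to match your bundle):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: ocp-support-web-mirrors
spec:
  repositoryDigestMirrors:
    - source: quay.io/redhat-consulting-services/ocp-support-web
      mirrors:
        - registry.internal.example.com/ocp-support-web   # hypothetical mirror
```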
For disconnected environments, set `spec.image` to the mirrored location of the application image in your internal registry.
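For example, the relevant portion of the CR spec might be (internal registry hostname is hypothetical):

```yaml
spec:
  image: registry.internal.example.com/ocp-support-web:v3.0.0              # hypothetical mirrored app image
  oauthProxyImage: registry.internal.example.com/ose-oauth-proxy-rhel9:latest  # hypothetical mirrored proxy image
```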
The application image can also be used directly as a must-gather image with `oc adm must-gather`:

```
oc adm must-gather --image=quay.io/redhat-consulting-services/ocp-support-web:v3.0.0
```

This auto-detects installed operators and collects diagnostics for all of them using native Go API calls — no operator-specific must-gather images required.
The operator creates a `gather-common` ConfigMap in the application namespace. Users can edit this ConfigMap to add custom resources, pod exec commands, node commands, and log specifications. Changes take effect on the next gather request without restarting the application.
See the application README for the full ConfigMap format and examples.
```
make build        # Build operator binary
make test         # Run tests
make run          # Run operator locally (requires kubeconfig)
make image-build  # Build container image
make image-push   # Push container image
make deploy       # Deploy to cluster without OLM
make undeploy     # Remove from cluster
```

```
cmd/main.go            Entry point
api/v1alpha1/          CRD types (OCPSupportWeb)
internal/controller/   Reconciliation logic
config/crd/bases/      CRD YAML
config/rbac/           RBAC for the operator itself
config/manager/        Operator Deployment manifest
bundle/                OLM bundle (CSV, CRD, metadata)
deploy/                GitOps-ready install manifests
```
The operator exposes controller-runtime metrics on port 8080 and the application exposes custom metrics on port 8081. Both are scraped via ServiceMonitors using OpenShift User Workload Monitoring.
Operator metrics include reconciliation counts and duration. Application metrics include HTTP request counts/duration, active must-gather jobs, and etcd diagnostic job counts.
Assisted by: Claude