WIP: add OLMv1 controlplane hoists #8470
Conversation
Signed-off-by: grokspawn <jordan@nimblewidget.com>
📝 Walkthrough

This pull request registers OLMv1 components in the HostedControlPlane reconciler and implements four new OLM v1 control-plane components.

🚥 Pre-merge checks: ✅ 10 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
## main #8470 +/- ##
==========================================
+ Coverage 37.53% 37.68% +0.15%
==========================================
Files 751 399 -352
Lines 92026 63717 -28309
==========================================
- Hits 34544 24014 -10530
+ Misses 54841 37838 -17003
+ Partials 2641 1865 -776

see 352 files with indirect coverage changes
Flags with carried forward coverage won't be shown.
Actionable comments posted: 4
🧹 Nitpick comments (2)
control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/deployment.go (1)
23-31: ⚡ Quick win — Guard against duplicate `--external-address`/`--system-namespace` args

`container.Args = append(...)` is unconditional. If the base deployment manifest already includes `--external-address` or `--system-namespace`, the container will receive duplicate flags. Most cobra/pflag-based binaries treat duplicate flags as a fatal error, causing the pod to crash-loop.

♻️ Proposed fix

```diff
- if container.Name == ComponentName {
-     container.Args = append(container.Args,
-         fmt.Sprintf("--external-address=%s", externalAddress),
-         fmt.Sprintf("--system-namespace=%s", cpContext.HCP.Namespace),
-     )
- }
+ if container.Name == ComponentName {
+     upsertArg := func(args []string, flag, value string) []string {
+         prefix := flag + "="
+         for _, a := range args {
+             if strings.HasPrefix(a, prefix) {
+                 return args // already set, skip
+             }
+         }
+         return append(args, fmt.Sprintf("%s=%s", flag, value))
+     }
+     container.Args = upsertArg(container.Args, "--external-address", externalAddress)
+     container.Args = upsertArg(container.Args, "--system-namespace", cpContext.HCP.Namespace)
+ }
```

(Add `"strings"` to the import block.)

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/deployment.go` around lines 23 - 31, The code unconditionally appends flags to container.Args causing duplicate --external-address or --system-namespace flags; update the loop that finds the container (container := &deployment.Spec.Template.Spec.Containers[i]) to first scan container.Args (using strings.HasPrefix) for existing "--external-address=" and "--system-namespace=" entries and only append the corresponding fmt.Sprintf(...) args when they are not already present; add "strings" to the import block to use strings.HasPrefix and reference ComponentName, externalAddress and cpContext.HCP.Namespace when constructing the flags.

control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go (1)
33-51: ⚡ Quick win — Prefer `podspec.UpsertEnvVars` for env var injection and add volume dedup guards

Three raw `append` calls here (Volumes, VolumeMounts, Env) have no duplicate checks. If the base deployment manifest already declares a `"kubeconfig"` volume or a `"KUBECONFIG"` env var, calling this function will:
- Create a duplicate volume → Kubernetes admission will reject the pod.
- Create duplicate env vars → resolution relies on the last-wins convention rather than expressing intent.
The env var injection is inconsistent with the `podspec.UpsertEnvVars` pattern used by the other OLMv1 adapters (clusterolmoperator, consoleoperator). Apply the same approach here and guard the volume/mount injection:

♻️ Proposed fix
```diff
-    deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, hostedKubeconfigVolume)
+    // Only add the volume if it is not already present (idempotency)
+    volumeExists := false
+    for _, v := range deployment.Spec.Template.Spec.Volumes {
+        if v.Name == "kubeconfig" {
+            volumeExists = true
+            break
+        }
+    }
+    if !volumeExists {
+        deployment.Spec.Template.Spec.Volumes = append(deployment.Spec.Template.Spec.Volumes, hostedKubeconfigVolume)
+    }
     for i := range deployment.Spec.Template.Spec.Containers {
         container := &deployment.Spec.Template.Spec.Containers[i]
-        container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
-            Name:      "kubeconfig",
-            MountPath: "/etc/openshift/kubeconfig",
-            ReadOnly:  true,
-        })
-
-        container.Env = append(container.Env, corev1.EnvVar{
-            Name:  "KUBECONFIG",
-            Value: kubeconfigPath,
-        })
+        // Only add the volume mount if not already present
+        mountExists := false
+        for _, vm := range container.VolumeMounts {
+            if vm.Name == "kubeconfig" {
+                mountExists = true
+                break
+            }
+        }
+        if !mountExists {
+            container.VolumeMounts = append(container.VolumeMounts, corev1.VolumeMount{
+                Name:      "kubeconfig",
+                MountPath: "/etc/openshift/kubeconfig",
+                ReadOnly:  true,
+            })
+        }
+        // Use UpsertEnvVars for idempotent env var injection
+        podspec.UpsertEnvVars(container, []corev1.EnvVar{
+            {Name: "KUBECONFIG", Value: kubeconfigPath},
+        })
     }
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go` around lines 33 - 51, The code currently appends a "kubeconfig" Volume, per-container VolumeMount and a raw EnvVar "KUBECONFIG" unconditionally (see deployment.Spec.Template.Spec.Volumes, corev1.VolumeMount{Name: "kubeconfig"}, and EnvVar{Name: "KUBECONFIG"}), which can create duplicates; update the logic to first check for an existing volume named "kubeconfig" and only append if missing, check each container.VolumeMounts for an existing mount named "kubeconfig" before appending, and replace the raw container.Env append with the shared podspec.UpsertEnvVars utility to upsert the "KUBECONFIG" env var (use UpsertEnvVars on each container) so envs follow the established pattern and avoid duplicate env entries.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/component.go`:
- Line 40: The call to InjectAvailabilityProberContainer is passing the wrong
type namespace: replace component.AvailabilityProberOpts with
podspec.AvailabilityProberOpts where InjectAvailabilityProberContainer(...) is
invoked (e.g., in the function(s) that call InjectAvailabilityProberContainer in
component.go and the sibling operatorcontroller/component.go), so the passed
struct type matches the method signature and the rest of the codebase.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/deployment.go`:
- Around line 31-36: Update the stale comment that references IN_CLUSTER_MODE to
correctly reference the actual environment variable HYPERSHIFT_MODE used in the
deployment (the env entry {Name: "HYPERSHIFT_MODE", Value: "true"}); edit the
comment text above that env var so it describes HYPERSHIFT_MODE and its
semantics (use HOSTED_KUBECONFIG for ClusterCatalog/ClusterExtension and
in-cluster config for ClusterOperator status reporting) to avoid confusion when
cross-referencing operator env-var docs.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/HYPERSHIFT_DUAL_API.md`:
- Around line 190-192: Update the three local filesystem references in
HYPERSHIFT_DUAL_API.md (the entries for the Enhancement Proposal, HyperShift
Component, and Shared Kubeconfig Utility) to use repository-relative paths or
canonical GitHub URLs instead of `~/devel/...`; locate the three lines listing
`~/devel/openshift/enhancements/enhancements/olm/olmv1-hypershift-support.md`,
`~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`,
and
`~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`
and replace each with the appropriate repo-relative path (e.g., relative path
from repo root) or full GitHub URL to the target file/directory. Ensure links
are valid and use the same markdown list format.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/component.go`:
- Around line 39-46: The availability prober is configured with
KubeconfigVolumeName: "kubeconfig" but adaptDeployment never injects that
volume; update adaptDeployment to call util.InjectHostedClusterKubeconfig so the
"kubeconfig" volume is created (matching other OLMv1 components) and keep
InjectAvailabilityProberContainer(...) as-is, or alternatively change
InjectAvailabilityProberContainer call to set KubeconfigVolumeName to "" if you
intend to use in-cluster config; also reconcile the NeedsManagementKASAccess()
return/comment to reflect the chosen approach.
---
Nitpick comments:
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/deployment.go`:
- Around line 23-31: The code unconditionally appends flags to container.Args
causing duplicate --external-address or --system-namespace flags; update the
loop that finds the container (container :=
&deployment.Spec.Template.Spec.Containers[i]) to first scan container.Args
(using strings.HasPrefix) for existing "--external-address=" and
"--system-namespace=" entries and only append the corresponding fmt.Sprintf(...)
args when they are not already present; add "strings" to the import block to use
strings.HasPrefix and reference ComponentName, externalAddress and
cpContext.HCP.Namespace when constructing the flags.
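The duplicate-flag guard described above can be sketched as a standalone helper. This is an illustrative sketch, not the actual hypershift code; `upsertArg` and the sample flag values are made up for the example.

```go
package main

import (
	"fmt"
	"strings"
)

// upsertArg appends "--flag=value" only when no existing argument already
// sets that flag, which is the idempotency guard the review proposes.
func upsertArg(args []string, flag, value string) []string {
	prefix := flag + "="
	for _, a := range args {
		if strings.HasPrefix(a, prefix) {
			return args // flag already present; keep the existing value
		}
	}
	return append(args, fmt.Sprintf("%s=%s", flag, value))
}

func main() {
	args := []string{"--external-address=existing.example.com"}
	args = upsertArg(args, "--external-address", "new.example.com") // skipped: already set
	args = upsertArg(args, "--system-namespace", "clusters-test")   // appended
	fmt.Println(strings.Join(args, " "))
	// → --external-address=existing.example.com --system-namespace=clusters-test
}
```

Because the helper keeps the first occurrence, a flag pinned in the base manifest always wins over the adapt function's value.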
In `@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`:
- Around line 33-51: The code currently appends a "kubeconfig" Volume,
per-container VolumeMount and a raw EnvVar "KUBECONFIG" unconditionally (see
deployment.Spec.Template.Spec.Volumes, corev1.VolumeMount{Name: "kubeconfig"},
and EnvVar{Name: "KUBECONFIG"}), which can create duplicates; update the logic
to first check for an existing volume named "kubeconfig" and only append if
missing, check each container.VolumeMounts for an existing mount named
"kubeconfig" before appending, and replace the raw container.Env append with the
shared podspec.UpsertEnvVars utility to upsert the "KUBECONFIG" env var (use
UpsertEnvVars on each container) so envs follow the established pattern and
avoid duplicate env entries.
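The volume-dedup guard can be illustrated in isolation. The sketch below uses a simplified stand-in struct instead of `corev1.Volume` so it runs without the Kubernetes API packages; the real util.go operates on corev1 types.

```go
package main

import "fmt"

// volume is a simplified stand-in for corev1.Volume.
type volume struct{ Name string }

// ensureVolume appends v only when no volume with the same name is already
// declared, mirroring the idempotency guard proposed for util.go. Appending
// a duplicate name would make Kubernetes admission reject the pod.
func ensureVolume(vols []volume, v volume) []volume {
	for _, existing := range vols {
		if existing.Name == v.Name {
			return vols // already declared; skip
		}
	}
	return append(vols, v)
}

func main() {
	vols := []volume{{Name: "kubeconfig"}}
	vols = ensureVolume(vols, volume{Name: "kubeconfig"}) // no-op: duplicate
	vols = ensureVolume(vols, volume{Name: "certs"})      // appended
	fmt.Println(len(vols)) // → 2
}
```

The same name-scan pattern applies to the per-container `VolumeMounts` check, while env vars go through `podspec.UpsertEnvVars` as the review suggests.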
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 4056123a-540f-481f-94c8-e63e7f8f5c91
📒 Files selected for processing (18)
- control-plane-operator/controllers/hostedcontrolplane/hostedcontrolplane_controller.go
- control-plane-operator/controllers/hostedcontrolplane/v2/assets/catalogd/service.yaml
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/component.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/deployment.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/service.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/HYPERSHIFT_DUAL_API.md
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/component.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/component_test.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/deployment.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/deployment_test.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/component.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/component.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/component_test.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/deployment.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/deployment_test.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/operatorcontroller/component.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/operatorcontroller/deployment.go
- control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go
```go
		"service.yaml",
		component.WithAdaptFunction(adaptService),
	).
	InjectAvailabilityProberContainer(component.AvailabilityProberOpts{
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== InjectAvailabilityProberContainer signatures =="
rg -n --type=go 'InjectAvailabilityProberContainer\s*\('
echo

echo "== AvailabilityProberOpts type declarations/aliases =="
rg -n --type=go 'type\s+AvailabilityProberOpts\b'
echo

echo "== Call sites in new OLMv1 components =="
rg -n --type=go 'InjectAvailabilityProberContainer\(' control-plane-operator/controllers/hostedcontrolplane/v2/olmv1
```

Repository: openshift/hypershift
Length of output: 5203
Use podspec.AvailabilityProberOpts instead of component.AvailabilityProberOpts to match the method signature.
Line 40 uses the non-existent component.AvailabilityProberOpts type. The InjectAvailabilityProberContainer method expects podspec.AvailabilityProberOpts. Additionally, the sibling operatorcontroller/component.go:36 in the same olmv1 group has the same issue. All other components in the codebase correctly use podspec.AvailabilityProberOpts.
Fix
import (
component "github.com/openshift/hypershift/support/controlplane-component"
+ "github.com/openshift/hypershift/support/podspec"
"k8s.io/apimachinery/pkg/runtime/schema"
)
@@
- InjectAvailabilityProberContainer(component.AvailabilityProberOpts{
+ InjectAvailabilityProberContainer(podspec.AvailabilityProberOpts{
KubeconfigVolumeName: "kubeconfig",
RequiredAPIs: []schema.GroupVersionKind{
{Group: "olm.operatorframework.io", Version: "v1", Kind: "ClusterCatalog"},
},
}).

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
import (
	component "github.com/openshift/hypershift/support/controlplane-component"
	"github.com/openshift/hypershift/support/podspec"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// ... other code ...

	InjectAvailabilityProberContainer(podspec.AvailabilityProberOpts{
		KubeconfigVolumeName: "kubeconfig",
		RequiredAPIs: []schema.GroupVersionKind{
			{Group: "olm.operatorframework.io", Version: "v1", Kind: "ClusterCatalog"},
		},
	}).
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/component.go`
at line 40, The call to InjectAvailabilityProberContainer is passing the wrong
type namespace: replace component.AvailabilityProberOpts with
podspec.AvailabilityProberOpts where InjectAvailabilityProberContainer(...) is
invoked (e.g., in the function(s) that call InjectAvailabilityProberContainer in
component.go and the sibling operatorcontroller/component.go), so the passed
struct type matches the method signature and the rest of the codebase.
```go
// IN_CLUSTER_MODE tells cluster-olm-operator it's running in HyperShift mode
// In this mode, it should:
// - Use HOSTED_KUBECONFIG client for ClusterCatalog/ClusterExtension
// - Use in-cluster config for ClusterOperator status reporting to management cluster
{Name: "HYPERSHIFT_MODE", Value: "true"},
```
Stale comment — says IN_CLUSTER_MODE but the env var is HYPERSHIFT_MODE
The comment on line 34 describes IN_CLUSTER_MODE while the actual env var injected is HYPERSHIFT_MODE. This will confuse anyone cross-referencing the operator's env-var documentation.
📝 Proposed fix
- // IN_CLUSTER_MODE tells cluster-olm-operator it's running in HyperShift mode
- // In this mode, it should:
- // - Use HOSTED_KUBECONFIG client for ClusterCatalog/ClusterExtension
- // - Use in-cluster config for ClusterOperator status reporting to management cluster
+ // HYPERSHIFT_MODE tells cluster-olm-operator it's running in HyperShift mode.
+ // In this mode, it should:
+ // - Use HOSTED_KUBECONFIG client for ClusterCatalog/ClusterExtension
+ // - Use in-cluster config for ClusterOperator status reporting to management cluster
  {Name: "HYPERSHIFT_MODE", Value: "true"},

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/deployment.go`
around lines 31 - 36, Update the stale comment that references IN_CLUSTER_MODE
to correctly reference the actual environment variable HYPERSHIFT_MODE used in
the deployment (the env entry {Name: "HYPERSHIFT_MODE", Value: "true"}); edit
the comment text above that env var so it describes HYPERSHIFT_MODE and its
semantics (use HOSTED_KUBECONFIG for ClusterCatalog/ClusterExtension and
in-cluster config for ClusterOperator status reporting) to avoid confusion when
cross-referencing operator env-var docs.
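The dual-client behavior the comment describes (hosted kubeconfig for ClusterCatalog/ClusterExtension, in-cluster config for ClusterOperator status) can be sketched as a routing function. Function and parameter names here are illustrative only; the real operator wires actual `rest.Config` clients.

```go
package main

import "fmt"

// configFor returns the kubeconfig path to use for a given resource kind.
// In HyperShift mode, hosted-cluster resources use the hosted kubeconfig;
// everything else (e.g. ClusterOperator status) returns "" meaning
// "use in-cluster config".
func configFor(resource string, hypershiftMode bool, hostedKubeconfig string) string {
	if hypershiftMode && (resource == "ClusterCatalog" || resource == "ClusterExtension") {
		return hostedKubeconfig
	}
	return "" // empty path => in-cluster config
}

func main() {
	kc := "/etc/openshift/kubeconfig/kubeconfig"
	fmt.Println(configFor("ClusterCatalog", true, kc))  // hosted kubeconfig path
	fmt.Println(configFor("ClusterOperator", true, kc)) // "" (in-cluster)
}
```

Keeping the comment's name in sync with the actual `HYPERSHIFT_MODE` env var matters precisely because readers will map this routing behavior back to the operator's env-var docs.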
- Enhancement Proposal: `~/devel/openshift/enhancements/enhancements/olm/olmv1-hypershift-support.md`
- HyperShift Component: `~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`
- Shared Kubeconfig Utility: `~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`
Replace local filesystem references with repo/URL links
Line 190–192 use ~/devel/... paths, which are not usable for other contributors. Please switch to repository-relative paths or canonical GitHub URLs.
Suggested patch
-- Enhancement Proposal: `~/devel/openshift/enhancements/enhancements/olm/olmv1-hypershift-support.md`
-- HyperShift Component: `~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`
-- Shared Kubeconfig Utility: `~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`
+- Enhancement Proposal: `https://github.com/openshift/enhancements/blob/master/enhancements/olm/olmv1-hypershift-support.md`
+- HyperShift Component: `control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`
+- Shared Kubeconfig Utility: `control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- Enhancement Proposal: `https://github.com/openshift/enhancements/blob/master/enhancements/olm/olmv1-hypershift-support.md`
- HyperShift Component: `control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`
- Shared Kubeconfig Utility: `control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/HYPERSHIFT_DUAL_API.md`
around lines 190 - 192, Update the three local filesystem references in
HYPERSHIFT_DUAL_API.md (the entries for the Enhancement Proposal, HyperShift
Component, and Shared Kubeconfig Utility) to use repository-relative paths or
canonical GitHub URLs instead of `~/devel/...`; locate the three lines listing
`~/devel/openshift/enhancements/enhancements/olm/olmv1-hypershift-support.md`,
`~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/clusterolmoperator/`,
and
`~/devel/openshift/hypershift/control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go`
and replace each with the appropriate repo-relative path (e.g., relative path
from repo root) or full GitHub URL to the target file/directory. Ensure links
are valid and use the same markdown list format.
```go
	InjectAvailabilityProberContainer(podspec.AvailabilityProberOpts{
		KubeconfigVolumeName: "kubeconfig",
		RequiredAPIs: []schema.GroupVersionKind{
			{Group: "", Version: "v1", Kind: "Service"},
			{Group: "", Version: "v1", Kind: "ConfigMap"},
		},
	}).
	Build()
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if the console-operator base deployment manifest provides a "kubeconfig" volume
# and compare with other OLMv1 component manifests
fd -t f -e yaml . | xargs grep -l "console-operator" 2>/dev/null | head -20

# Check the embedded asset directories for OLMv1 manifests
rg -n "kubeconfig" --type=yaml --iglob "*console-operator*" 2>/dev/null

# Check for assets/console-operator directory
fd -t d "console-operator" | head -10

# Check how InjectAvailabilityProberContainer uses KubeconfigVolumeName
ast-grep --pattern 'func InjectAvailabilityProberContainer($$$) {
  $$$
}'

# Also check how other OLMv1 components reference kubeconfig volumes
rg -n "KubeconfigVolumeName" --type=go -A2 -B2
```

Repository: openshift/hypershift
Length of output: 20480
🏁 Script executed:

```shell
#!/bin/bash
# Find the adaptDeployment function in consoleoperator
echo "=== consoleoperator adaptDeployment ==="
cat control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/component.go | head -50

echo -e "\n=== catalogd adaptDeployment ==="
fd component.go control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/catalogd/ | xargs cat | head -50

echo -e "\n=== operatorcontroller adaptDeployment ==="
fd component.go control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/operatorcontroller/ | xargs cat | head -50

# Search for InjectHostedClusterKubeconfig usage
echo -e "\n=== InjectHostedClusterKubeconfig usage ==="
rg "InjectHostedClusterKubeconfig" -A2 -B2 control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/

# Search for util package that might have this function
echo -e "\n=== Search for util.InjectHostedClusterKubeconfig or similar ==="
rg "InjectHostedClusterKubeconfig" --type=go -A2 -B2
```

Repository: openshift/hypershift
Length of output: 12276
🏁 Script executed:

```shell
#!/bin/bash
# Find and read the consoleoperator deployment.go file
echo "=== consoleoperator deployment.go (adaptDeployment) ==="
find control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator -name "*.go" -type f
echo ""
cat control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/deployment.go

echo -e "\n=== util.InjectHostedClusterKubeconfig implementation ==="
cat control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/util/util.go
```

Repository: openshift/hypershift
Length of output: 3961
KubeconfigVolumeName: "kubeconfig" references a volume that is never injected into the console-operator pod
The console-operator's adaptDeployment only sets proxy environment variables and RELEASE_VERSION—it never calls util.InjectHostedClusterKubeconfig, which is required to inject the "kubeconfig" volume. Every other OLMv1 component (catalogd, operatorcontroller, clusterolmoperator) calls this function. The availability prober's volumeMount will reference a non-existent volume, causing Kubernetes to reject the pod at admission.
Additionally, the design is inconsistent: NeedsManagementKASAccess() returns false with the comment "Only needs hosted cluster API access via in-cluster config", yet the availability prober is configured to use a kubeconfig file. Either:

- Call `util.InjectHostedClusterKubeconfig` in `adaptDeployment` (to match other OLMv1 components and provide the kubeconfig volume), or
- Set `KubeconfigVolumeName` to an empty string if in-cluster config is sufficient.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In
`@control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/consoleoperator/component.go`
around lines 39 - 46, The availability prober is configured with
KubeconfigVolumeName: "kubeconfig" but adaptDeployment never injects that
volume; update adaptDeployment to call util.InjectHostedClusterKubeconfig so the
"kubeconfig" volume is created (matching other OLMv1 components) and keep
InjectAvailabilityProberContainer(...) as-is, or alternatively change
InjectAvailabilityProberContainer call to set KubeconfigVolumeName to "" if you
intend to use in-cluster config; also reconcile the NeedsManagementKASAccess()
return/comment to reflect the chosen approach.
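The admission failure mode described above — a container volumeMount naming a volume the pod never declares — can be checked mechanically. The sketch below uses minimal stand-in types rather than `corev1.PodSpec`; `danglingMounts` is a hypothetical helper for illustration, not part of the hypershift codebase.

```go
package main

import "fmt"

// podSpec is a minimal stand-in for the fields involved.
type podSpec struct {
	Volumes []string // declared volume names
	Mounts  []string // volume names referenced by container volumeMounts
}

// danglingMounts returns mount names with no matching declared volume —
// exactly the situation the prober's "kubeconfig" mount would hit when
// adaptDeployment never injects that volume.
func danglingMounts(p podSpec) []string {
	declared := map[string]bool{}
	for _, v := range p.Volumes {
		declared[v] = true
	}
	var missing []string
	for _, m := range p.Mounts {
		if !declared[m] {
			missing = append(missing, m)
		}
	}
	return missing
}

func main() {
	spec := podSpec{Volumes: []string{"tmp"}, Mounts: []string{"tmp", "kubeconfig"}}
	fmt.Println(danglingMounts(spec)) // → [kubeconfig]
}
```

A non-empty result means Kubernetes would reject the pod at admission, which is why the fix must either inject the volume or drop the mount reference.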
CI Failure Analysis: PR #8470 — "WIP: add OLMv1 controlplane hoists"

Repository: openshift/hypershift

Shared Root Causes

All 6 failing jobs trace back to 3 compilation errors in newly-added OLMv1 component files and 1 commit-message lint error. The compilation errors are in 4 files under `control-plane-operator/controllers/hostedcontrolplane/v2/olmv1/`.

Job Failures

| # | Job | Root Cause | Fix |
|---|-----|------------|-----|
| 1 | verify | Commit title violates ConventionalCommits (CT1) | Rename to `feat: add OLMv1 controlplane hoists` |
| 2 | unit-cpo-other | 3 compilation errors in 4 new OLMv1 files | See Errors A/B/C fixes |
| 3 | unit-cpo-hypershift | Same compilation errors (direct dependency) | Same fixes |
| 4 | unit-hypershift-operator | Same compilation errors (transitive dependency) | Same fixes |
| 5 | unit-util | Same compilation errors (transitive dependency) | Same fixes |
| 6 | build-images | Same compilation errors (`make` → `go build`) | Same fixes |
Total unique bugs: 4 (3 compilation + 1 lint)
Files to fix: catalogd/component.go, operatorcontroller/component.go, clusterolmoperator/deployment_test.go, consoleoperator/deployment_test.go, plus the commit message.
What this PR does / why we need it:
Which issue(s) this PR fixes:
Fixes
Special notes for your reviewer:
Checklist:
Summary by CodeRabbit
Release Notes
New Features
Tests
Documentation