
RFE-8777 Hypershift operator should provide DeploymentConditions#8228

Closed
cwilkers wants to merge 1 commit into openshift:main from cwilkers:CNF-20619

Conversation

@cwilkers

@cwilkers cwilkers commented Apr 14, 2026

What this PR does / why we need it:

To enable integration with the Siteconfig Operator, Hypershift clusters should present their deployment status in a way that is compatible with ClusterDeployments, namely hivev1 ClusterDeploymentConditions (see the field sketch after the list below).

  • Pulls in hivev1 for the original condition definitions
  • Adds a ClusterDeploymentConditions field to status
  • Adds functions to compute the conditions
  • Generates deepcopy methods and API manifests
  • Updates the hypershift vendor directory
  • Adds tests
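
As a rough sketch, the new status field might look like the following, based on the field name and the list markers discussed in the review below; the generated API may differ in detail:

type HostedClusterStatus struct {
	// ...existing status fields...

	// clusterDeploymentConditions mirrors Hive's ClusterDeployment conditions
	// (Provisioned, ProvisionFailed, ProvisionStopped, RequirementsMet) so that
	// consumers such as the Siteconfig Operator can read provisioning state in a
	// familiar shape.
	// +optional
	// +listType=map
	// +listMapKey=type
	ClusterDeploymentConditions []ClusterDeploymentCondition `json:"clusterDeploymentConditions,omitempty"`
}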

Special notes for your reviewer:

Checklist:

  • Subject and description added to both commit and PR.
  • [x] Relevant issues have been referenced.
  • [x] This change includes docs.
  • [x] This change includes unit tests.

Co-authored-by: Cursor AI <noreply@cursor.com>

Summary by CodeRabbit

  • New Features

    • HostedCluster status now exposes ClusterDeployment conditions (Provisioned, ProvisionFailed, ProvisionStopped, RequirementsMet) for clearer provisioning state and improved status accuracy during reconciliation and deletion.
  • Tests

    • Added unit tests validating computation, preservation, and lookup of ClusterDeployment conditions across lifecycle scenarios.
  • Chores

    • Updated linting config and module metadata to accommodate the new status fields and validations.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci Bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) Apr 14, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 14, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci Bot added the do-not-merge/needs-area and needs-rebase (PR cannot be merged because it has merge conflicts with HEAD) labels Apr 14, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Apr 14, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Added a new optional Status.ClusterDeploymentConditions ([]hivev1.ClusterDeploymentCondition) field to the HostedCluster API, along with a dependency on the Hive API. The controller now computes four Hive cluster-deployment conditions (Provisioned, ProvisionFailed, ProvisionStopped, RequirementsMet) during reconcile (including the deletion path), preserves the existing LastTransitionTime when appropriate, and persists status via a new updateHostedClusterStatus helper with conflict-aware requeueing. Unit tests were added to validate condition computation and lookup behavior.
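
A minimal sketch of the conflict-aware status write described above, assuming the helper name updateHostedClusterStatus from this walkthrough and standard controller-runtime/apimachinery APIs; the implementation on the branch may differ:

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	ctrl "sigs.k8s.io/controller-runtime"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

// updateHostedClusterStatus persists status and requeues on write conflicts
// instead of surfacing them as errors (hypothetical sketch; the reconciler's
// client field name is assumed).
func (r *HostedClusterReconciler) updateHostedClusterStatus(ctx context.Context, hcluster *hyperv1.HostedCluster) (ctrl.Result, error) {
	if err := r.Client.Status().Update(ctx, hcluster); err != nil {
		if apierrors.IsConflict(err) {
			// Another writer changed the object; requeue and recompute from a fresh read.
			return ctrl.Result{Requeue: true}, nil
		}
		return ctrl.Result{}, fmt.Errorf("failed to update HostedCluster status: %w", err)
	}
	return ctrl.Result{}, nil
}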

Sequence Diagram(s)

sequenceDiagram
  participant KubeAPI as Kube API
  participant Controller as HostedCluster Controller
  participant HCluster as HostedCluster (status)
  participant HCP as HostedControlPlane

  KubeAPI->>Controller: Event (create/update/delete)
  Controller->>KubeAPI: Get HostedCluster
  alt HostedControlPlane exists
    Controller->>HCP: Get HostedControlPlane
  end
  Controller->>Controller: computeClusterDeploymentConditions(hcluster[, hcp])
  Controller->>HCluster: set Status.ClusterDeploymentConditions
  Controller->>KubeAPI: Status().Update(HostedCluster)
  KubeAPI-->>Controller: Persisted / Conflict / Error
  alt Conflict
    Controller->>Controller: return ctrl.Result{Requeue:true}
  else Success
    Controller->>KubeAPI: continue (finalizer removal or normal flow)
  end
🚥 Pre-merge checks | ✅ 8 | ❌ 2

❌ Failed checks (2 warnings)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 61.54%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Test Structure And Quality (⚠️ Warning): Three new test functions lack meaningful failure messages in Gomega assertions, making test failures difficult to diagnose. Resolution: add descriptive failure messages as second arguments to all Expect() assertions in the three test functions.
✅ Passed checks (8 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title clearly identifies the main feature: exposing ClusterDeploymentConditions for Siteconfig Operator integration, which is the primary objective of the changeset.
  • Stable And Deterministic Test Names (✅ Passed): Test function names (TestComputeClusterDeploymentConditions, TestComputeProvisionedCondition, TestFindClusterDeploymentCondition) are static, descriptive, and contain no dynamic information that would vary between runs.
  • Microshift Test Compatibility (✅ Passed): The PR adds only standard Go unit tests using the testing.T package, not Ginkgo e2e tests. The e2e file change is minor formatting only.
  • Single Node Openshift (Sno) Test Compatibility (✅ Passed): This PR does not add any Ginkgo e2e tests. The new test file uses standard Go subtests with t.Run() and testing.T, falling outside this check's scope.
  • Topology-Aware Scheduling Compatibility (✅ Passed): The PR adds a status field for cluster deployment conditions; no scheduling constraints, workload deployments, or topology-aware logic are introduced.
  • Ote Binary Stdout Contract (✅ Passed): The PR modifies only controlled locations with no stdout-writing operations found in the modified files; unit tests are properly contained within test functions.
  • Ipv6 And Disconnected Network Test Compatibility (✅ Passed): The PR does not add new Ginkgo e2e tests; the karpenter_test.go change is formatting only, and the hostedcluster_controller_test.go additions are standard Go unit tests.


@openshift-ci Bot added the area/api label (PR includes changes for the API) Apr 14, 2026
@openshift-ci
Contributor

openshift-ci Bot commented Apr 14, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: cwilkers
Once this PR has been reviewed and has the lgtm label, please assign csrwng for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci Bot added the area/cli, area/documentation, and area/hypershift-operator labels and removed the do-not-merge/needs-area label Apr 14, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)

6976-6979: Please add focused unit tests for the remaining helper functions.

At Line 6976, the TODO leaves core helper logic without direct assertions (especially reason/message and transition-time behavior), which weakens regression localization.

If you want, I can draft those three table-driven tests and open a follow-up issue template.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`
around lines 6976 - 6979, Add three focused table-driven unit tests named
TestComputeProvisionFailedCondition, TestComputeProvisionStoppedCondition, and
TestComputeRequirementsMetCondition that call the helper functions
ComputeProvisionFailedCondition, ComputeProvisionStoppedCondition, and
ComputeRequirementsMetCondition with a variety of input cases and assert the
full Condition fields (Type, Status, Reason, Message) and transition-time
behavior; for transition times, use a deterministic test clock or fixed time
values so you can assert that LastTransitionTime is set on status changes and
remains unchanged when only message/reason change, and include cases that
validate both True/False status branches and expected reason/message strings for
each helper.
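
For illustration, one of the suggested tests might look like the following sketch; it assumes the computeProvisionStoppedCondition helper quoted later in this review and uses a fixed probe time for determinism:

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func TestComputeProvisionStoppedCondition(t *testing.T) {
	probeTime := metav1.Now()
	deletedAt := metav1.Now()
	tests := []struct {
		name       string
		hcluster   *hyperv1.HostedCluster
		wantStatus corev1.ConditionStatus
		wantReason string
	}{
		{
			name:       "deleting cluster reports stopped",
			hcluster:   &hyperv1.HostedCluster{ObjectMeta: metav1.ObjectMeta{DeletionTimestamp: &deletedAt}},
			wantStatus: corev1.ConditionTrue,
			wantReason: "DeletionRequested",
		},
		{
			name:       "live cluster reports not stopped",
			hcluster:   &hyperv1.HostedCluster{},
			wantStatus: corev1.ConditionFalse,
			wantReason: "NotStopped",
		},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			got := computeProvisionStoppedCondition(tc.hcluster, probeTime)
			if got.Status != tc.wantStatus || got.Reason != tc.wantReason {
				t.Fatalf("got (%v, %q), want (%v, %q)", got.Status, got.Reason, tc.wantStatus, tc.wantReason)
			}
		})
	}
}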
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@go.mod`:
- Line 130: Update the vendored module entry for
sigs.k8s.io/structured-merge-diff/v6 in hack/tools/vendor/modules.txt to match
go.mod by changing the version from v6.3.0 to v6.3.1 (the entry referenced
around the existing line for sigs.k8s.io/structured-merge-diff/v6). Ensure the
modules.txt record for that module now reads v6.3.1 so it is consistent with
go.mod, api/go.mod and other vendor directories.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 1269-1270: The ClusterDeploymentConditions are only computed on
the normal reconcile path so deletion returns skip updating them; call
computeClusterDeploymentConditions(hcluster) and assign to
hcluster.Status.ClusterDeploymentConditions on the deletion path before
returning (i.e., in the branch where the HostedCluster has DeletionTimestamp /
the early return in the Reconcile handler), ensuring ClusterProvisionStoppedType
flips to True for external consumers during teardown; update the same status
write path used later so the status gets persisted.
- Around line 5458-5462: The current requirementsMet only checks
ValidHostedClusterConfiguration (validConfigCondition) but must also include the
SupportedHostedCluster and ValidReleaseImage conditions so
ClusterRequirementsMetType isn't set True when provisioning is still blocked;
update the logic that computes requirementsMet (and any code that sets
ClusterRequirementsMetType) to fetch meta.FindStatusCondition for
SupportedHostedCluster and ValidReleaseImage from hcluster.Status.Conditions and
require all three conditions to have Status == metav1.ConditionTrue before
treating requirementsMet as true.

---

Nitpick comments:
In
`@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go`:
- Around line 6976-6979: Add three focused table-driven unit tests named
TestComputeProvisionFailedCondition, TestComputeProvisionStoppedCondition, and
TestComputeRequirementsMetCondition that call the helper functions
ComputeProvisionFailedCondition, ComputeProvisionStoppedCondition, and
ComputeRequirementsMetCondition with a variety of input cases and assert the
full Condition fields (Type, Status, Reason, Message) and transition-time
behavior; for transition times, use a deterministic test clock or fixed time
values so you can assert that LastTransitionTime is set on status changes and
remains unchanged when only message/reason change, and include cases that
validate both True/False status branches and expected reason/message strings for
each helper.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: ef58f345-ec5d-4b79-99e9-509b953b8d6d

📥 Commits

Reviewing files that changed from the base of the PR and between dd2dae7 and 27779a4.

⛔ Files ignored due to path filters (32)
  • api/go.sum is excluded by !**/*.sum
  • api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*.go, !**/zz_generated*
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/vendor/modules.txt is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !**/vendor/**
  • client/applyconfiguration/hypershift/v1beta1/clusterdeploymentcondition.go is excluded by !client/**
  • client/applyconfiguration/hypershift/v1beta1/hostedclusterstatus.go is excluded by !client/**
  • client/applyconfiguration/utils.go is excluded by !client/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • go.sum is excluded by !**/*.sum
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !vendor/**, !**/vendor/**, !**/zz_generated*.go, !**/zz_generated*
  • vendor/modules.txt is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • api/go.mod
  • api/hypershift/v1beta1/hostedcluster_types.go
  • go.mod
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

Comment thread go.mod
Comment thread hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go (2)

5450-5489: ⚠️ Potential issue | 🟠 Major

Gate RequirementsMet on the same prerequisites that actually gate reconciliation.

requirementsMet only checks ValidHostedClusterConfiguration, but Lines 1325-1341 also block on SupportedHostedCluster and ValidReleaseImage. That can publish ClusterRequirementsMetType=True for clusters the controller still refuses to provision. The failure reason/message should come from the first blocking condition too.

Suggested change
 	// Check for validation conditions
 	validConfigCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ValidHostedClusterConfiguration))
+	supportedHostedClusterCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.SupportedHostedCluster))
+	validReleaseImageCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ValidReleaseImage))
 
-	requirementsMet := validConfigCondition != nil && validConfigCondition.Status == metav1.ConditionTrue
+	blockers := []*metav1.Condition{
+		validConfigCondition,
+		supportedHostedClusterCondition,
+		validReleaseImageCondition,
+	}
+	requirementsMet := true
+	var firstFailure *metav1.Condition
+	for _, blocker := range blockers {
+		if blocker == nil || blocker.Status != metav1.ConditionTrue {
+			requirementsMet = false
+			firstFailure = blocker
+			break
+		}
+	}
@@
 	} else {
@@
-		condition.Reason = "RequirementsNotMet"
-		if validConfigCondition != nil && validConfigCondition.Message != "" {
-			condition.Message = validConfigCondition.Message
-		} else {
-			condition.Message = "Cluster requirements have not been met"
-		}
+		condition.Reason = "RequirementsNotMet"
+		if firstFailure != nil {
+			if firstFailure.Reason != "" {
+				condition.Reason = firstFailure.Reason
+			}
+			if firstFailure.Message != "" {
+				condition.Message = firstFailure.Message
+			}
+		}
+		if condition.Message == "" {
+			condition.Message = "Cluster requirements have not been met"
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 5450 - 5489, computeRequirementsMetCondition currently only
inspects ValidHostedClusterConfiguration; update it to require all three
prerequisite conditions (SupportedHostedCluster, ValidReleaseImage, and
ValidHostedClusterConfiguration) be metav1.ConditionTrue before setting
ClusterRequirementsMetType True (use meta.FindStatusCondition to read each
condition from hcluster.Status.Conditions), and when false select the first
blocking condition in the order SupportedHostedCluster, ValidReleaseImage, then
ValidHostedClusterConfiguration to populate condition.Reason and
condition.Message (preserve LastTransitionTime logic using existingCondition
from findClusterDeploymentCondition). Ensure you reference
computeRequirementsMetCondition, ClusterRequirementsMetType,
SupportedHostedCluster, ValidReleaseImage, ValidHostedClusterConfiguration,
meta.FindStatusCondition, and findClusterDeploymentCondition when making the
changes.

1269-1270: ⚠️ Potential issue | 🟠 Major

Compute these conditions on the deletion path too.

ClusterDeploymentConditions is still only synthesized on the normal reconcile path. A deleting HostedCluster exits at Lines 491-608, so ClusterProvisionStoppedType never flips to True during teardown and external consumers keep seeing the old deployment state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 1269 - 1270, The deletion path exits early before synthesizing
ClusterDeploymentConditions, so ensure
hcluster.Status.ClusterDeploymentConditions is set there too by calling
computeClusterDeploymentConditions(hcluster) on the delete/teardown path (i.e.,
just before any early return in the deletion handling block inside the reconcile
for the HostedCluster) so ClusterProvisionStoppedType flips to True for external
consumers; update the deletion/teardown branch to assign
hcluster.Status.ClusterDeploymentConditions =
computeClusterDeploymentConditions(hcluster) and persist the status update
before returning.
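
A sketch of the suggested fix, assuming the helper name computeClusterDeploymentConditions from the walkthrough and an early-exit deletion branch inside Reconcile; exact placement on the branch would differ:

// Inside Reconcile, in the deletion branch, before any early return:
if !hcluster.DeletionTimestamp.IsZero() {
	// Recompute deployment conditions so ClusterProvisionStoppedType flips to
	// True for external consumers during teardown.
	hcluster.Status.ClusterDeploymentConditions = computeClusterDeploymentConditions(hcluster)
	if err := r.Client.Status().Update(ctx, hcluster); err != nil {
		return ctrl.Result{}, fmt.Errorf("failed to update status during deletion: %w", err)
	}
	// ...continue with teardown and finalizer handling...
}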
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 5415-5448: The computeProvisionStoppedCondition currently only
marks ClusterProvisionStoppedType based on DeletionTimestamp but the comment
says paused clusters should also be considered stopped; update
computeProvisionStoppedCondition to treat a cluster as stopped when either
hcluster.DeletionTimestamp is set OR hcluster.Spec.Paused is true, preserve the
existing LastTransitionTime logic by comparing to existingCondition as done now,
and set distinct Reason/Message for the paused case (e.g., Reason "Paused" and
Message "Cluster reconciliation is paused") while keeping "DeletionRequested"
for deletion; ensure you still set Status to True for both deletion and paused
cases and leave the False branch unchanged.

---

Duplicate comments:
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 5450-5489: computeRequirementsMetCondition currently only inspects
ValidHostedClusterConfiguration; update it to require all three prerequisite
conditions (SupportedHostedCluster, ValidReleaseImage, and
ValidHostedClusterConfiguration) be metav1.ConditionTrue before setting
ClusterRequirementsMetType True (use meta.FindStatusCondition to read each
condition from hcluster.Status.Conditions), and when false select the first
blocking condition in the order SupportedHostedCluster, ValidReleaseImage, then
ValidHostedClusterConfiguration to populate condition.Reason and
condition.Message (preserve LastTransitionTime logic using existingCondition
from findClusterDeploymentCondition). Ensure you reference
computeRequirementsMetCondition, ClusterRequirementsMetType,
SupportedHostedCluster, ValidReleaseImage, ValidHostedClusterConfiguration,
meta.FindStatusCondition, and findClusterDeploymentCondition when making the
changes.
- Around line 1269-1270: The deletion path exits early before synthesizing
ClusterDeploymentConditions, so ensure
hcluster.Status.ClusterDeploymentConditions is set there too by calling
computeClusterDeploymentConditions(hcluster) on the delete/teardown path (i.e.,
just before any early return in the deletion handling block inside the reconcile
for the HostedCluster) so ClusterProvisionStoppedType flips to True for external
consumers; update the deletion/teardown branch to assign
hcluster.Status.ClusterDeploymentConditions =
computeClusterDeploymentConditions(hcluster) and persist the status update
before returning.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 11e5c93a-c6c9-4087-8afe-e16b9d65646d

📥 Commits

Reviewing files that changed from the base of the PR and between 27779a4 and 7c70254.

⛔ Files ignored due to path filters (32)
  • api/go.sum is excluded by !**/*.sum
  • api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*.go, !**/zz_generated*
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/vendor/modules.txt is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !**/vendor/**
  • client/applyconfiguration/hypershift/v1beta1/clusterdeploymentcondition.go is excluded by !client/**
  • client/applyconfiguration/hypershift/v1beta1/hostedclusterstatus.go is excluded by !client/**
  • client/applyconfiguration/utils.go is excluded by !client/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • go.sum is excluded by !**/*.sum
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !vendor/**, !**/vendor/**, !**/zz_generated*.go, !**/zz_generated*
  • vendor/modules.txt is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • api/go.mod
  • api/hypershift/v1beta1/hostedcluster_types.go
  • go.mod
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
✅ Files skipped from review due to trivial changes (3)
  • go.mod
  • api/go.mod
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

Comment on lines +5415 to +5448
// computeProvisionStoppedCondition determines if provision has been stopped
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hyperv1.ClusterDeploymentCondition {
	// ProvisionStopped is True when the cluster is being deleted or has been paused
	condition := hyperv1.ClusterDeploymentCondition{
		Type:          hyperv1.ClusterProvisionStoppedType,
		LastProbeTime: probeTime,
	}

	// Check if cluster is being deleted (has deletion timestamp)
	isStopped := !hcluster.DeletionTimestamp.IsZero()

	existingCondition := findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, hyperv1.ClusterProvisionStoppedType)
	if isStopped {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionTrue
		condition.Reason = "DeletionRequested"
		condition.Message = "Cluster deletion has been requested"
	} else {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionFalse
		condition.Reason = "NotStopped"
		condition.Message = "Provision has not been stopped"
	}

	return condition
}
Contributor


⚠️ Potential issue | 🟡 Minor

The stopped predicate and its comment disagree.

The comment says paused clusters should report ClusterProvisionStoppedType=True, but the implementation only checks DeletionTimestamp. A paused cluster will still surface NotStopped unless you also key off paused reconciliation.

Suggested adjustment
-	// Check if cluster is being deleted (has deletion timestamp)
-	isStopped := !hcluster.DeletionTimestamp.IsZero()
+	// Treat deletion or paused reconciliation as stopped.
+	isStopped := !hcluster.DeletionTimestamp.IsZero() ||
+		meta.IsStatusConditionFalse(hcluster.Status.Conditions, string(hyperv1.ReconciliationActive))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting function)

// computeProvisionStoppedCondition determines if provision has been stopped
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hyperv1.ClusterDeploymentCondition {
	// ProvisionStopped is True when the cluster is being deleted or has been paused
	condition := hyperv1.ClusterDeploymentCondition{
		Type:          hyperv1.ClusterProvisionStoppedType,
		LastProbeTime: probeTime,
	}
	// Treat deletion or paused reconciliation as stopped.
	isStopped := !hcluster.DeletionTimestamp.IsZero() ||
		meta.IsStatusConditionFalse(hcluster.Status.Conditions, string(hyperv1.ReconciliationActive))
	existingCondition := findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, hyperv1.ClusterProvisionStoppedType)
	if isStopped {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionTrue
		condition.Reason = "DeletionRequested"
		condition.Message = "Cluster deletion has been requested"
	} else {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionFalse
		condition.Reason = "NotStopped"
		condition.Message = "Provision has not been stopped"
	}
	return condition
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 5415 - 5448, The computeProvisionStoppedCondition currently only
marks ClusterProvisionStoppedType based on DeletionTimestamp but the comment
says paused clusters should also be considered stopped; update
computeProvisionStoppedCondition to treat a cluster as stopped when either
hcluster.DeletionTimestamp is set OR hcluster.Spec.Paused is true, preserve the
existing LastTransitionTime logic by comparing to existingCondition as done now,
and set distinct Reason/Message for the paused case (e.g., Reason "Paused" and
Message "Cluster reconciliation is paused") while keeping "DeletionRequested"
for deletion; ensure you still set Status to True for both deletion and paused
cases and leave the False branch unchanged.

@openshift-ci Bot removed the needs-rebase label Apr 14, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (3)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go (3)

1269-1270: ⚠️ Potential issue | 🟠 Major

Compute ClusterDeploymentConditions on the deletion path too.

This write only happens on the normal reconcile path. The early return on Lines 491-608 bypasses it, so external consumers can keep seeing stale deployment conditions during teardown instead of ClusterProvisionStoppedType=True.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 1269 - 1270, The ClusterDeploymentConditions assignment only runs
on the normal reconcile path; update the reconcile flow so
computeClusterDeploymentConditions(hcluster) is called and its result assigned
to hcluster.Status.ClusterDeploymentConditions even during the deletion/teardown
path (i.e., before any early return when hcluster.DeletionTimestamp is set or
when the controller is handling deletion). Locate the reconcile function and
ensure that wherever the code currently returns early during deletion you first
call computeClusterDeploymentConditions(hcluster) and set
hcluster.Status.ClusterDeploymentConditions to that value so external consumers
see the teardown condition (e.g., ClusterProvisionStoppedType=True).

5415-5448: ⚠️ Potential issue | 🟡 Minor

Include paused reconciliation in ClusterProvisionStoppedType.

The comment says paused clusters are stopped, but the predicate on Line 5424 only checks DeletionTimestamp. A paused HostedCluster will still report NotStopped unless this also keys off paused reconciliation, e.g. ReconciliationActive=False or Spec.PausedUntil.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 5415 - 5448, computeProvisionStoppedCondition currently only treats
a HostedCluster as stopped when DeletionTimestamp is set; update it to also
treat paused reconciliation as stopped by checking the cluster's reconciliation
pause indicators (e.g., if hcluster.Spec.PausedUntil is set and in the future
and/or the status contains a ReconciliationActive condition with Status=False).
In computeProvisionStoppedCondition (and when building the condition fields such
as Reason/Message), set isStopped = true when either
hcluster.DeletionTimestamp.IsZero() is false OR the cluster is paused (inspect
hcluster.Spec.PausedUntil and the ReconciliationActive condition via
findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions,
"ReconciliationActive")), and set an appropriate Reason like
"ReconciliationPaused" and message when paused rather than "DeletionRequested".
Ensure LastTransitionTime logic still reuses existingCondition when statuses
match.

5458-5462: ⚠️ Potential issue | 🟠 Major

Gate ClusterRequirementsMetType on every reconcile-blocking prerequisite.

This helper only checks ValidHostedClusterConfiguration, but reconcile still stops on SupportedHostedCluster and ValidReleaseImage on Lines 1325-1341. That allows ClusterRequirementsMetType=True for clusters the operator still cannot provision.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 5458 - 5462, The current check only inspects
ValidHostedClusterConfiguration (via meta.FindStatusCondition and variable
validConfigCondition) but must also ensure SupportedHostedCluster and
ValidReleaseImage are true before treating ClusterRequirementsMetType as
satisfied; update the requirementsMet logic to fetch the three conditions from
hcluster.Status.Conditions (e.g., using meta.FindStatusCondition for
hyperv1.ValidHostedClusterConfiguration, hyperv1.SupportedHostedCluster, and
hyperv1.ValidReleaseImage) and set requirementsMet = true only when all three
condition objects are non-nil and each has Status == metav1.ConditionTrue, so
reconcile-blocking prerequisites are correctly gated.
🧹 Nitpick comments (2)
api/hypershift/v1beta1/hostedcluster_types.go (2)

2230-2237: Add an explicit max size for clusterDeploymentConditions.

At Line 2236, this list is map-keyed but unbounded. Since only four condition types are expected, cap it to 4 for tighter API guarantees.

Proposed bound
 	// +optional
 	// +listType=map
 	// +listMapKey=type
+	// +kubebuilder:validation:MaxItems=4
 	ClusterDeploymentConditions []ClusterDeploymentCondition `json:"clusterDeploymentConditions,omitempty"`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2230 - 2237, Add
an explicit max items validation to the ClusterDeploymentConditions slice: the
field ClusterDeploymentConditions (type []ClusterDeploymentCondition) is
map-keyed but unbounded, so annotate it with a kubebuilder validation tag to cap
the list at 4 (e.g. add the comment +kubebuilder:validation:MaxItems=4
immediately above the ClusterDeploymentConditions declaration, keeping the
existing +listType=map and +listMapKey=type).

2111-2129: Constrain ClusterDeploymentConditionType to known values in schema.

ClusterDeploymentConditionType is currently open-ended. Since this contract is fixed to four Hive-compatible types, adding an enum marker will prevent accidental drift.

Proposed schema constraint
 // ClusterDeploymentCondition defines a special set of conditions used to track
 // the status of the cluster deployment. They are based on Hive's
 // ClusterDeploymentConditions
+// +kubebuilder:validation:Enum=Provisioned;ProvisionFailed;ProvisionStopped;RequirementsMet
 type ClusterDeploymentConditionType string
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2111 - 2129, Add
a kubebuilder enum validation to constrain the ClusterDeploymentConditionType to
the four allowed values: "Provisioned", "ProvisionFailed", "ProvisionStopped",
and "RequirementsMet". Update the type declaration for
ClusterDeploymentConditionType to include the kubebuilder marker (e.g.,
+kubebuilder:validation:Enum=...) above the type so the generated CRD schema
uses an enum; ensure the string literals in the marker exactly match the
constants ClusterProvisionedType, ClusterProvisionFailedType,
ClusterProvisionStoppedType, and ClusterRequirementsMetType.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 1269-1270: The ClusterDeploymentConditions assignment only runs on
the normal reconcile path; update the reconcile flow so
computeClusterDeploymentConditions(hcluster) is called and its result assigned
to hcluster.Status.ClusterDeploymentConditions even during the deletion/teardown
path (i.e., before any early return when hcluster.DeletionTimestamp is set or
when the controller is handling deletion). Locate the reconcile function and
ensure that wherever the code currently returns early during deletion you first
call computeClusterDeploymentConditions(hcluster) and set
hcluster.Status.ClusterDeploymentConditions to that value so external consumers
see the teardown condition (e.g., ClusterProvisionStoppedType=True).
- Around line 5415-5448: computeProvisionStoppedCondition currently only treats
a HostedCluster as stopped when DeletionTimestamp is set; update it to also
treat paused reconciliation as stopped by checking the cluster's reconciliation
pause indicators (e.g., if hcluster.Spec.PausedUntil is set and in the future
and/or the status contains a ReconciliationActive condition with Status=False).
In computeProvisionStoppedCondition (and when building the condition fields such
as Reason/Message), set isStopped = true when either
hcluster.DeletionTimestamp.IsZero() is false OR the cluster is paused (inspect
hcluster.Spec.PausedUntil and the ReconciliationActive condition via
findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions,
"ReconciliationActive")), and set an appropriate Reason like
"ReconciliationPaused" and message when paused rather than "DeletionRequested".
Ensure LastTransitionTime logic still reuses existingCondition when statuses
match.
- Around line 5458-5462: The current check only inspects
ValidHostedClusterConfiguration (via meta.FindStatusCondition and variable
validConfigCondition) but must also ensure SupportedHostedCluster and
ValidReleaseImage are true before treating ClusterRequirementsMetType as
satisfied; update the requirementsMet logic to fetch the three conditions from
hcluster.Status.Conditions (e.g., using meta.FindStatusCondition for
hyperv1.ValidHostedClusterConfiguration, hyperv1.SupportedHostedCluster, and
hyperv1.ValidReleaseImage) and set requirementsMet = true only when all three
condition objects are non-nil and each has Status == metav1.ConditionTrue, so
reconcile-blocking prerequisites are correctly gated.

---

Nitpick comments:
In `@api/hypershift/v1beta1/hostedcluster_types.go`:
- Around line 2230-2237: Add an explicit max items validation to the
ClusterDeploymentConditions slice: the field ClusterDeploymentConditions (type
[]ClusterDeploymentCondition) is map-keyed but unbounded, so annotate it with a
kubebuilder validation tag to cap the list at 4 (e.g. add the comment
+kubebuilder:validation:MaxItems=4 immediately above the
ClusterDeploymentConditions declaration, keeping the existing +listType=map and
+listMapKey=type).
- Around line 2111-2129: Add a kubebuilder enum validation to constrain the
ClusterDeploymentConditionType to the four allowed values: "Provisioned",
"ProvisionFailed", "ProvisionStopped", and "RequirementsMet". Update the type
declaration for ClusterDeploymentConditionType to include the kubebuilder marker
(e.g., +kubebuilder:validation:Enum=...) above the type so the generated CRD
schema uses an enum; ensure the string literals in the marker exactly match the
constants ClusterProvisionedType, ClusterProvisionFailedType,
ClusterProvisionStoppedType, and ClusterRequirementsMetType.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 2fa3034c-acf2-4708-9959-853ef7f6def9

📥 Commits

Reviewing files that changed from the base of the PR and between 7c70254 and 4c77b53.

⛔ Files ignored due to path filters (32)
  • api/go.sum is excluded by !**/*.sum
  • api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*.go, !**/zz_generated*
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/vendor/modules.txt is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !**/vendor/**
  • api/vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !**/vendor/**
  • client/applyconfiguration/hypershift/v1beta1/clusterdeploymentcondition.go is excluded by !client/**
  • client/applyconfiguration/hypershift/v1beta1/hostedclusterstatus.go is excluded by !client/**
  • client/applyconfiguration/utils.go is excluded by !client/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • go.sum is excluded by !**/*.sum
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !vendor/**, !**/vendor/**, !**/zz_generated*.go, !**/zz_generated*
  • vendor/modules.txt is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/schema/elements.go is excluded by !vendor/**, !**/vendor/**
  • vendor/sigs.k8s.io/structured-merge-diff/v6/typed/remove.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • api/go.mod
  • api/hypershift/v1beta1/hostedcluster_types.go
  • go.mod
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
✅ Files skipped from review due to trivial changes (2)
  • go.mod
  • api/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

@codecov

codecov Bot commented Apr 14, 2026

Codecov Report

❌ Patch coverage is 86.44860% with 29 lines in your changes missing coverage. Please review.
✅ Project coverage is 35.82%. Comparing base (9a6b857) to head (d4371c3).
⚠️ Report is 133 commits behind head on main.

Files with missing lines (patch % and lines):
  • ...trollers/hostedcluster/hostedcluster_controller.go: 86.44% patch coverage, 27 missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8228      +/-   ##
==========================================
+ Coverage   35.71%   35.82%   +0.11%     
==========================================
  Files         767      767              
  Lines       93402    93615     +213     
==========================================
+ Hits        33354    33540     +186     
- Misses      57346    57371      +25     
- Partials     2702     2704       +2     
Files with missing lines (coverage Δ):
  • ...trollers/hostedcluster/hostedcluster_controller.go: 45.51% <86.44%> (+2.24%) ⬆️

@openshift-ci Bot added the area/testing label (PR includes changes for e2e testing) Apr 15, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
api/hypershift/v1beta1/hostedcluster_types.go (1)

2108-2112: Align exported type comment with the identifier name.

The comment should start with ClusterDeploymentConditionType (current text starts with ClusterDeploymentCondition).

✏️ Proposed tweak
-// ClusterDeploymentCondition defines a special set of conditions used to track
+// ClusterDeploymentConditionType defines a special set of conditions used to track
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2108 - 2112,
Update the exported type comment to start with the exact identifier name
`ClusterDeploymentConditionType` instead of `ClusterDeploymentCondition`; locate
the comment above the `type ClusterDeploymentConditionType string` declaration
and change its opening sentence to begin with `ClusterDeploymentConditionType`
so it follows Go's exported identifier comment convention.
api/.golangci.yml (1)

276-295: Prefer schema validation over adding Type-related linter suppressions.

The new suppressions for ClusterDeploymentCondition.Type can be eliminated by adding explicit MinLength/MaxLength markers on the type. This keeps the API contract explicit and reduces config debt.

♻️ Suggested direction
diff --git a/api/hypershift/v1beta1/hostedcluster_types.go b/api/hypershift/v1beta1/hostedcluster_types.go
@@
 type ClusterDeploymentCondition struct {
 	// type is the type of the condition.
 	// +required
+	// +kubebuilder:validation:MinLength=1
+	// +kubebuilder:validation:MaxLength=32
 	Type ClusterDeploymentConditionType `json:"type"`
diff --git a/api/.golangci.yml b/api/.golangci.yml
@@
-      - linters:
-          - kubeapilinter
-        path: hypershift/v1beta1/hostedcluster_types.go
-        text: 'maxlength: field ClusterDeploymentCondition.Type type ClusterDeploymentConditionType must have a maximum length, add kubebuilder:validation:MaxLength marker'
@@
-      - linters:
-          - kubeapilinter
-        path: hypershift/v1beta1/hostedcluster_types.go
-        text: 'minlength: field ClusterDeploymentCondition.Type type ClusterDeploymentConditionType must have a minimum length, add kubebuilder:validation:MinLength marker'
@@
-      - linters:
-          - kubeapilinter
-        path: hypershift/v1beta1/hostedcluster_types.go
-        text: 'requiredfields: field ClusterDeploymentCondition.Type has a valid zero value \(""\), but the validation is not complete \(e.g. minimum length\). The field should be a pointer to allow the zero value to be set. If the zero value is not a valid use case, complete the validation and remove the pointer.'

Also applies to: 894-907

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/.golangci.yml` around lines 276 - 295, The kubeapilinter suppressions
should be replaced with explicit schema validation markers: add
kubebuilder:validation:MinLength and kubebuilder:validation:MaxLength to the
ClusterDeploymentConditionType (referenced by ClusterDeploymentCondition.Type)
to enforce string length, and add kubebuilder:validation:MinItems and
kubebuilder:validation:MaxItems to the slice fields
HostedClusterStatus.ClusterDeploymentConditions and ClusterVersionStatus.History
to enforce item count; update the corresponding type definitions
(ClusterDeploymentConditionType, HostedClusterStatus, ClusterVersionStatus) to
include these markers so the linter warnings are resolved without suppressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 5482-5513: The RequirementsMet false branch should propagate which
specific prerequisite is blocking instead of only copying
ValidHostedClusterConfiguration.Message; update the else branch that sets
condition.Message/Reason to inspect validConfigCondition,
supportedHostedCluster, and validReleaseImage (using their Status and Message
fields) and pick the first non-true condition (or nil) as the blocker, then set
condition.Message to that condition.Message (or a composed message including the
condition.Type) and set condition.Reason to a distinct value identifying the
blocker (e.g., "InvalidHostedClusterConfiguration", "UnsupportedHostedCluster",
or "InvalidReleaseImage") so downstream consumers receive the actual blocking
prerequisite.
- Around line 5383-5427: The code marks ClusterProvisionFailedType=True whenever
the cluster is degraded and not progressing, which can wrongly flag
already-provisioned clusters after a later control-plane outage; update
computeProvisionFailedCondition to only treat degradation as a provisioning
failure when it actually pertains to provisioning (e.g., the cluster is not yet
provisioned). Concretely, in computeProvisionFailedCondition add a guard using
the ClusterProvisioned condition (use meta.FindStatusCondition on
hcluster.Status.ClusterDeploymentConditions for hyperv1.ClusterProvisionedType
or findClusterDeploymentCondition) and only set isFailed from degradedCondition
if ClusterProvisioned is not True (or alternatively restrict by
degradedCondition.Reason to known provisioning-related reasons); keep the rest
of the LastTransitionTime/Status/Reason/Message handling unchanged.

---

Nitpick comments:
In `@api/.golangci.yml`:
- Around line 276-295: The kubeapilinter suppressions should be replaced with
explicit schema validation markers: add kubebuilder:validation:MinLength and
kubebuilder:validation:MaxLength to the ClusterDeploymentConditionType
(referenced by ClusterDeploymentCondition.Type) to enforce string length, and
add kubebuilder:validation:MinItems and kubebuilder:validation:MaxItems to the
slice fields HostedClusterStatus.ClusterDeploymentConditions and
ClusterVersionStatus.History to enforce item count; update the corresponding
type definitions (ClusterDeploymentConditionType, HostedClusterStatus,
ClusterVersionStatus) to include these markers so the linter warnings are
resolved without suppressions.

In `@api/hypershift/v1beta1/hostedcluster_types.go`:
- Around line 2108-2112: Update the exported type comment to start with the
exact identifier name `ClusterDeploymentConditionType` instead of
`ClusterDeploymentCondition`; locate the comment above the `type
ClusterDeploymentConditionType string` declaration and change its opening
sentence to begin with `ClusterDeploymentConditionType` so it follows Go's
exported identifier comment convention.
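To make the first inline comment concrete, here is a minimal sketch of the blocker-selection logic it asks for; the condition variables, reason strings, and helper name are assumptions drawn from the comment text, not the PR's actual code.

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pickBlocker returns the reason and message describing the first prerequisite
// condition that is not True, so RequirementsMet=False can report the actual
// blocker. Pass conditions in the same order as the reason strings below.
func pickBlocker(conds ...*metav1.Condition) (string, string, bool) {
	reasons := []string{"InvalidHostedClusterConfiguration", "UnsupportedHostedCluster", "InvalidReleaseImage"}
	for i, c := range conds {
		if c == nil || c.Status == metav1.ConditionTrue {
			continue
		}
		reason := "PrerequisiteNotMet" // fallback if more conditions than reasons
		if i < len(reasons) {
			reason = reasons[i]
		}
		return reason, fmt.Sprintf("%s: %s", c.Type, c.Message), true
	}
	return "", "", false
}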

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 59458895-a91f-49b5-95ab-fef131b42842

📥 Commits

Reviewing files that changed from the base of the PR and between 4c77b53 and f70539c.

⛔ Files ignored due to path filters (24)
  • api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*.go, !**/zz_generated*
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • client/applyconfiguration/hypershift/v1beta1/clusterdeploymentcondition.go is excluded by !client/**
  • client/applyconfiguration/hypershift/v1beta1/hostedclusterstatus.go is excluded by !client/**
  • client/applyconfiguration/utils.go is excluded by !client/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !vendor/**, !**/vendor/**, !**/zz_generated*.go, !**/zz_generated*
📒 Files selected for processing (5)
  • api/.golangci.yml
  • api/hypershift/v1beta1/hostedcluster_types.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
  • test/e2e/karpenter_test.go
✅ Files skipped from review due to trivial changes (1)
  • test/e2e/karpenter_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go

Comment on lines +5352 to +5378
	if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
		condition.LastTransitionTime = existingCondition.LastTransitionTime
	} else {
		condition.LastTransitionTime = probeTime
	}
	condition.Status = corev1.ConditionTrue
	condition.Reason = "Provisioned"
	condition.Message = "Cluster has been provisioned"
} else {
	if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
		condition.LastTransitionTime = existingCondition.LastTransitionTime
	} else {
		condition.LastTransitionTime = probeTime
	}
	condition.Status = corev1.ConditionFalse
	if hasInfra && !cvoAvailable {
		condition.Reason = "WaitingForClusterVersionOperator"
		if cvoCond != nil {
			condition.Message = fmt.Sprintf("Cluster Version Operator is not available: %s", cvoCond.Message)
		} else {
			condition.Message = "Waiting for Cluster Version Operator to become available"
		}
	} else {
		condition.Reason = "NotProvisioned"
		condition.Message = "Cluster is not yet provisioned"
	}
}
Contributor

If you change the ClusterDeploymentCondition to build on the standard conditions type, you can use the kube apimachinery meta package to update the conditions without having to define the lookup functions yourself: https://pkg.go.dev/k8s.io/apimachinery/pkg/api/meta#pkg-overview
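For reference, this is roughly what the apimachinery helpers look like when a status uses the standard []metav1.Condition type. A sketch only: as noted in the reply below, the Hive condition type does not satisfy this shape, and the "Provisioned" type string here is purely illustrative.

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markProvisioned records the condition; SetStatusCondition preserves
// LastTransitionTime when the Status value has not changed, which is the
// bookkeeping the hand-written branches above do manually.
func markProvisioned(conditions *[]metav1.Condition) {
	meta.SetStatusCondition(conditions, metav1.Condition{
		Type:    "Provisioned", // illustrative condition type
		Status:  metav1.ConditionTrue,
		Reason:  "Provisioned",
		Message: "Cluster has been provisioned",
	})
	// FindStatusCondition replaces a hand-rolled lookup helper.
	_ = meta.FindStatusCondition(*conditions, "Provisioned")
}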

Author

This is a little beyond my familiarity with kube apimachinery, but Claude says it still needs a findClusterDeploymentCondition helper because of type incompatibilities between the Hive conditions and Hypershift's conditions, so the meta package would not work.
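A plausible shape for that helper, assuming the status field holds Hive's []hivev1.ClusterDeploymentCondition (whose Status is a corev1.ConditionStatus rather than the metav1.ConditionStatus the meta package expects):

// findClusterDeploymentCondition returns the condition with the given type,
// or nil if no such condition is present.
func findClusterDeploymentCondition(conditions []hivev1.ClusterDeploymentCondition, condType hivev1.ClusterDeploymentConditionType) *hivev1.ClusterDeploymentCondition {
	for i := range conditions {
		if conditions[i].Type == condType {
			return &conditions[i]
		}
	}
	return nil
}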

Contributor

Ah, gotcha. Could we do something else to simplify this?
Here's a helper function suggested by Claude that could be used by all of these compute functions:

// buildClusterDeploymentCondition creates a ClusterDeploymentCondition with proper transition time handling.
// If the condition already exists with the same status, it preserves the LastTransitionTime.
// Otherwise, it sets LastTransitionTime to the current probeTime.
func buildClusterDeploymentCondition(
	existingConditions []hivev1.ClusterDeploymentCondition,
	conditionType hivev1.ClusterDeploymentConditionType,
	status corev1.ConditionStatus,
	reason string,
	message string,
	probeTime metav1.Time,
) hivev1.ClusterDeploymentCondition {
	condition := hivev1.ClusterDeploymentCondition{
		Type:               conditionType,
		Status:             status,
		Reason:             reason,
		Message:            message,
		LastProbeTime:      probeTime,
		LastTransitionTime: probeTime,
	}

	// Preserve LastTransitionTime if status hasn't changed
	existingCondition := findClusterDeploymentCondition(existingConditions, conditionType)
	if existingCondition != nil && existingCondition.Status == status {
		condition.LastTransitionTime = existingCondition.LastTransitionTime
	}

	return condition
}

and its usage

Suggested change
	if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
		condition.LastTransitionTime = existingCondition.LastTransitionTime
	} else {
		condition.LastTransitionTime = probeTime
	}
	condition.Status = corev1.ConditionTrue
	condition.Reason = "Provisioned"
	condition.Message = "Cluster has been provisioned"
} else {
	if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
		condition.LastTransitionTime = existingCondition.LastTransitionTime
	} else {
		condition.LastTransitionTime = probeTime
	}
	condition.Status = corev1.ConditionFalse
	if hasInfra && !cvoAvailable {
		condition.Reason = "WaitingForClusterVersionOperator"
		if cvoCond != nil {
			condition.Message = fmt.Sprintf("Cluster Version Operator is not available: %s", cvoCond.Message)
		} else {
			condition.Message = "Waiting for Cluster Version Operator to become available"
		}
	} else {
		condition.Reason = "NotProvisioned"
		condition.Message = "Cluster is not yet provisioned"
	}
}

// computeProvisionedCondition determines if the cluster is provisioned
func computeProvisionedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hivev1.ClusterDeploymentCondition {
	// Provisioned is True when the cluster has been successfully provisioned:
	// control plane endpoint and kubeconfig exist, and the Cluster Version Operator
	// reports available (ClusterVersionAvailable on HostedCluster mirrors CVO on HCP).
	hasInfra := hcluster.Status.ControlPlaneEndpoint.Host != "" && hcluster.Status.KubeConfig != nil
	cvoCond := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ClusterVersionAvailable))
	cvoAvailable := cvoCond != nil && cvoCond.Status == metav1.ConditionTrue
	isProvisioned := hasInfra && cvoAvailable
	if isProvisioned {
		return buildClusterDeploymentCondition(
			hcluster.Status.ClusterDeploymentConditions,
			hivev1.ProvisionedCondition,
			corev1.ConditionTrue,
			hivev1.ProvisionedReason,
			hivev1.ProvisionedMessage,
			probeTime,
		)
	}
	// Not provisioned - determine reason
	var reason, message string
	if hasInfra && !cvoAvailable {
		reason = "WaitingForClusterVersionOperator"
		if cvoCond != nil {
			message = fmt.Sprintf("Cluster Version Operator is not available: %s", cvoCond.Message)
		} else {
			message = "Waiting for Cluster Version Operator to become available"
		}
	} else {
		reason = "NotProvisioned"
		message = "Cluster is not yet provisioned"
	}
	return buildClusterDeploymentCondition(
		hcluster.Status.ClusterDeploymentConditions,
		hivev1.ProvisionedCondition,
		corev1.ConditionFalse,
		reason,
		message,
		probeTime,
	)
}

Author

I handed Claude the buildClusterDeploymentCondition helper and it refactored the individual condition functions. Thanks for the example!
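For illustration, a hypothetical sketch of how another of the compute functions might look once it goes through the shared helper; the reason and message strings are assumptions, not the PR's actual values.

// computeProvisionStoppedCondition reports ProvisionStopped=True while the
// HostedCluster is being deleted, following the same helper-based pattern.
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hivev1.ClusterDeploymentCondition {
	if !hcluster.DeletionTimestamp.IsZero() {
		return buildClusterDeploymentCondition(
			hcluster.Status.ClusterDeploymentConditions,
			hivev1.ProvisionStoppedCondition,
			corev1.ConditionTrue,
			"Deleting",                 // assumed reason string
			"Cluster is being deleted", // assumed message
			probeTime,
		)
	}
	return buildClusterDeploymentCondition(
		hcluster.Status.ClusterDeploymentConditions,
		hivev1.ProvisionStoppedCondition,
		corev1.ConditionFalse,
		"ProvisionNotStopped",
		"Provisioning has not been stopped",
		probeTime,
	)
}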

Contributor

@coderabbitai coderabbitai Bot left a comment

🧹 Nitpick comments (1)
api/go.mod (1)

50-93: Remove // indirect marker from github.com/openshift/hive/apis dependency on line 50 of api/go.mod.

The module is directly imported at api/hypershift/v1beta1/hostedcluster_types.go:10, so the indirect annotation is misaligned with actual usage.

Change
-	github.com/openshift/hive/apis v0.0.0-20260414220921-76f9a996844f // indirect
+	github.com/openshift/hive/apis v0.0.0-20260414220921-76f9a996844f
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/go.mod` around lines 50 - 93, The go.mod entry for
github.com/openshift/hive/apis is incorrectly marked as indirect; remove the
trailing "// indirect" from the github.com/openshift/hive/apis line in
api/go.mod so it is a direct dependency (it is imported by
hostedcluster_types.go), then run go mod tidy to update checksums and ensure the
module graph is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@api/go.mod`:
- Around line 50-93: The go.mod entry for github.com/openshift/hive/apis is
incorrectly marked as indirect; remove the trailing "// indirect" from the
github.com/openshift/hive/apis line in api/go.mod so it is a direct dependency
(it is imported by hostedcluster_types.go), then run go mod tidy to update
checksums and ensure the module graph is consistent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 4dff3e99-d44d-4352-acfd-b2e4cadf4ee1

📥 Commits

Reviewing files that changed from the base of the PR and between f70539c and e2d0527.

⛔ Files ignored due to path filters (25)
  • api/go.sum is excluded by !**/*.sum
  • api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !**/zz_generated*.go, !**/zz_generated*
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • client/applyconfiguration/hypershift/v1beta1/clusterdeploymentcondition.go is excluded by !client/**
  • client/applyconfiguration/hypershift/v1beta1/hostedclusterstatus.go is excluded by !client/**
  • client/applyconfiguration/utils.go is excluded by !client/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/zz_generated.deepcopy.go is excluded by !vendor/**, !**/vendor/**, !**/zz_generated*.go, !**/zz_generated*
📒 Files selected for processing (6)
  • api/.golangci.yml
  • api/go.mod
  • api/hypershift/v1beta1/hostedcluster_types.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
  • test/e2e/karpenter_test.go
✅ Files skipped from review due to trivial changes (3)
  • test/e2e/karpenter_test.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
  • hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/.golangci.yml

@cwilkers cwilkers marked this pull request as ready for review April 16, 2026 15:06
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 16, 2026
@openshift-ci openshift-ci Bot requested review from Nirshal and devguyio April 16, 2026 15:08
@CrystalChun
Contributor

Nice, this looks good to me!

@cwilkers
Author

/retest

@cwilkers cwilkers force-pushed the CNF-20619 branch 2 times, most recently from b2374d8 to b03d0e0 on April 20, 2026 17:27
@cwilkers
Author

/retest-required

@cwilkers
Author

/retest

Introduce ClusterDeploymentConditions to HostedCluster status to provide
a Hive-compatible interface for external consumers like the siteconfig operator.

The implementation synthesizes four Hive ClusterDeployment conditions from
HostedCluster state:
- Provisioned: True when cluster is fully provisioned and available
- ProvisionFailed: True when cluster provisioning has failed
- ProvisionStopped: True when cluster is being deleted or paused
- RequirementsMet: True when all prerequisite conditions are met

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: cwilkers <cwilkers@redhat.com>
"github.com/openshift/hypershift/api/util/ipnet"

configv1 "github.com/openshift/api/config/v1"
hivev1 "github.com/openshift/hive/apis/hive/v1"
Contributor

It is generally an anti-pattern to import another API. What is different about the ClusterDeployment condition compared with a standard metav1.Condition?

Why can consumers not just observe the existing conditions field?
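For context, the two condition shapes differ mainly in the status value type and an extra probe timestamp. A comparison sketch, with the Hive fields paraphrased here rather than imported:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hiveStyleCondition mirrors the field set of hivev1.ClusterDeploymentCondition
// for comparison purposes; the real type lives in the Hive API.
type hiveStyleCondition struct {
	Type               string                 // hivev1 uses its own ClusterDeploymentConditionType
	Status             corev1.ConditionStatus // True / False / Unknown
	LastProbeTime      metav1.Time            // no equivalent in metav1.Condition
	LastTransitionTime metav1.Time
	Reason             string
	Message            string
}

// By contrast, metav1.Condition uses metav1.ConditionStatus, adds
// ObservedGeneration, and makes Reason a required field.
var _ = metav1.Condition{}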

Author

The underlying work is to allow the siteconfig operator (SO) to update the status of its own CR (ClusterInstance) from the clusters it deploys. For siteconfig's original cluster types, there is a ClusterDeployment CR whose status is defined by Hive and represents the start and stop of a deployment in the way the SO was written around.
When I added HCP to siteconfig as a supported cluster type, the original idea was to have SO read the deployment status from the HC CR, but that required extra logic given how many different status conditions a hosted cluster can report. The maintenance of hypershift specific logic within SO was rejected as an anti-pattern, and it was suggested to put that logic under the hypershift operator, where it would be better maintained.
At the start of this PR, it synthesized regular metav1.Conditions to match the names of the Hivev1 DeploymentConditions, but that was questioned as a maintenance hassle for hypershift, since the Hive API could diverge. Hence the final form of this PR, where we import Hivev1.

Contributor

The maintenance of hypershift specific logic within SO was rejected as an anti-pattern

Got any links to where this was discussed?


@cwilkers
Author

/test images

@openshift-ci
Contributor

openshift-ci Bot commented Apr 21, 2026

@cwilkers: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@enxebre
Member

enxebre commented Apr 21, 2026

I would expect this logic to live on the consumer side that is interested in these particular semantics, or otherwise propose a generic enhancement to capture why these semantics are valuable and how they could impact any existing consumer.
Extending the core API to vendor and fit any consumer specifics doesn't scale well.

@cwilkers
Author

or otherwise propose a generic enhancement to capture why these semantics are valuable and how they could impact any existing consumer.

To a degree, that's the idea behind using the Hivev1 deploymentConditions. It is a common interface between agent-based and image-based cluster installs and any component (like SiteConfig in this instance) that is interested in the overall cluster's deployment status.

@openshift-ci
Contributor

openshift-ci Bot commented May 1, 2026

PR needs rebase.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 1, 2026
@hypershift-jira-solve-ci

hypershift-jira-solve-ci Bot commented May 1, 2026


Test Failure Analysis Complete

Job Information

  • Prow Job: tide (merge controller)
  • Build ID: N/A (tide is a merge controller, not a test job)
  • PR: #8228 (RFE-8777 Hypershift operator should provide DeploymentConditions)
  • Branch: CNF-20619 → main
  • Head SHA: d4371c39
  • Error State: error (reported by tide)

Test Failure Analysis

Error

tide: Not mergeable. PR has a merge conflict.

Summary

This is not a test execution failure — no CI test job failed. The "error" state comes from tide (Prow's merge controller), which detected that PR #8228 has merge conflicts with the main branch and cannot be merged. The PR's last commit is from April 15, 2026. All CI presubmit tests that ran (images, okd-scos-images, security, verify-deps, verify-workflows) passed successfully on April 21. The E2E test jobs (e2e-aws, e2e-aks, etc.) remain in "pending" state because they require lgtm/approve labels or explicit /test triggers, neither of which occurred. The PR has the needs-rebase label and lacks both lgtm and approved.

Root Cause

The root cause is a git merge conflict between the PR branch (CNF-20619, last updated April 15) and the main branch. Three PRs merged to main between April 21 (when CI tests last passed) and May 1 (when the conflict was detected) that modify the same files as PR #8228:

  1. PR #8286 (CNTRLPLANE-3308): deps: bump k8s.io 0.34 → 0.35 and openshift/api (merged Apr 29)

  2. PR #8166 (CNTRLPLANE-3160): Drop AutoNodeKarpenter feature gate and promote EC2NodeClass to v1 (merged Apr 30)

  3. PR #8265 (AUTOSCALE-615): include Karpenter node vCPUs in billing metric (merged May 1 at 03:00:51Z, 15 seconds before the needs-rebase label was applied at 03:01:06Z)

    • Modifies api/hypershift/v1beta1/hostedcluster_types.go, zz_generated.deepcopy.go, and all CRD manifest YAMLs
    • This was the immediate trigger: the needs-rebase label was added within seconds of this merge

The PR adds a new ClusterDeploymentConditions field to HostedClusterStatus in hostedcluster_types.go. All three conflicting PRs also modified this file and/or its generated outputs (CRD manifests, deepcopy). The combination of API type changes, generated CRD schema changes, and a major dependency version bump creates conflicts that cannot be automatically resolved.

Recommendations
  1. Rebase the PR onto current main: The PR author (cwilkers) needs to rebase branch CNF-20619 onto the current main branch to resolve the merge conflicts. Key files requiring conflict resolution:

    • api/hypershift/v1beta1/hostedcluster_types.go — reapply the ClusterDeploymentConditions field addition on top of Karpenter billing and feature gate changes
    • api/go.mod / api/go.sum — resolve against the k8s 0.34→0.35 dependency bump
    • api/hypershift/v1beta1/zz_generated.deepcopy.go — regenerate after resolving types
    • All zz_generated.featuregated-crd-manifests/ YAML files — regenerate CRDs after rebase
    • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/ — regenerate
  2. Regenerate after rebase: After resolving hostedcluster_types.go conflicts, run the full code generation pipeline (make generate, make verify) to ensure all generated files (deepcopy, CRD manifests, docs) are consistent.

  3. Seek review: The PR currently lacks lgtm and approved labels. After rebasing, request review to trigger the E2E test suite and proceed toward merge.

Evidence

  • Tide status: error — "Not mergeable. PR has a merge conflict." (set 2026-05-01T03:03:34Z)
  • needs-rebase label: applied 2026-05-01T03:01:06Z (was removed 2026-04-14, re-added May 1)
  • Trigger merge: PR #8265 merged 2026-05-01T03:00:51Z — 15 seconds before needs-rebase
  • PR last commit: d4371c39 — 2026-04-15T14:46:05Z (16 days stale)
  • CI tests passed: 2026-04-21T14:49:48Z — "all tests passed!" (images, verify-deps, security, okd-scos-images)
  • Conflicting PR #8286: k8s deps bump 0.34→0.35 (Apr 29) — 20 overlapping files with PR #8228
  • Conflicting PR #8166: drop AutoNodeKarpenter feature gate (Apr 30) — overlaps on hostedcluster_types.go
  • Conflicting PR #8265: Karpenter node billing (May 1) — overlaps on hostedcluster_types.go, deepcopy, all CRD YAMLs
  • E2E jobs: all 7 E2E jobs remain pending (never triggered — require lgtm/approve or /test)
  • Missing labels: PR has neither lgtm nor approved — only area labels and needs-rebase

@cwilkers
Copy link
Copy Markdown
Author

cwilkers commented May 1, 2026

Closing as the approach has been rejected.

@cwilkers cwilkers closed this May 1, 2026
