RFE-8777 Hypershift operator should provide DeploymentConditions #8228
cwilkers wants to merge 1 commit into openshift:main from
Conversation
Pipeline controller notification. For optional jobs, comment. This repository is configured in: LGTM mode
Skipping CI for Draft Pull Request. |
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
Added a new optional clusterDeploymentConditions field to HostedCluster status.
Sequence Diagram(s)
sequenceDiagram
participant KubeAPI as Kube API
participant Controller as HostedCluster Controller
participant HCluster as HostedCluster (status)
participant HCP as HostedControlPlane
KubeAPI->>Controller: Event (create/update/delete)
Controller->>KubeAPI: Get HostedCluster
alt HostedControlPlane exists
Controller->>HCP: Get HostedControlPlane
end
Controller->>Controller: computeClusterDeploymentConditions(hcluster[, hcp])
Controller->>HCluster: set Status.ClusterDeploymentConditions
Controller->>KubeAPI: Status().Update(HostedCluster)
KubeAPI-->>Controller: Persisted / Conflict / Error
alt Conflict
Controller->>Controller: return ctrl.Result{Requeue:true}
else Success
Controller->>KubeAPI: continue (finalizer removal or normal flow)
end
🚥 Pre-merge checks: ✅ 8 passed | ❌ 2 failed (warnings)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: cwilkers. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Actionable comments posted: 3
🧹 Nitpick comments (1)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go (1)
6976-6979: Please add focused unit tests for the remaining helper functions.
At Line 6976, the TODO leaves core helper logic without direct assertions (especially reason/message and transition-time behavior), which weakens regression localization.
If you want, I can draft those three table-driven tests and open a follow-up issue template.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go` around lines 6976 - 6979, Add three focused table-driven unit tests named TestComputeProvisionFailedCondition, TestComputeProvisionStoppedCondition, and TestComputeRequirementsMetCondition that call the helper functions ComputeProvisionFailedCondition, ComputeProvisionStoppedCondition, and ComputeRequirementsMetCondition with a variety of input cases and assert the full Condition fields (Type, Status, Reason, Message) and transition-time behavior; for transition times, use a deterministic test clock or fixed time values so you can assert that LastTransitionTime is set on status changes and remains unchanged when only message/reason change, and include cases that validate both True/False status branches and expected reason/message strings for each helper.
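As a rough illustration of the table-driven shape described above, here is a minimal sketch for one of the helpers, assuming the computeProvisionStoppedCondition signature and reason strings shown later in this review; the test name and cases are illustrative, not the PR's actual tests.

// Sketch only: a table-driven test in the controller package, assuming the
// computeProvisionStoppedCondition signature shown later in this review.
package hostedcluster

import (
	"testing"
	"time"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestComputeProvisionStoppedCondition(t *testing.T) {
	// Fixed probe time keeps LastTransitionTime assertions deterministic.
	probeTime := metav1.NewTime(time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC))
	deletionTime := metav1.NewTime(probeTime.Add(time.Minute))

	tests := []struct {
		name       string
		hcluster   *hyperv1.HostedCluster
		wantStatus corev1.ConditionStatus
		wantReason string
	}{
		{
			name:       "no deletion timestamp reports NotStopped",
			hcluster:   &hyperv1.HostedCluster{},
			wantStatus: corev1.ConditionFalse,
			wantReason: "NotStopped",
		},
		{
			name: "deletion timestamp reports DeletionRequested",
			hcluster: &hyperv1.HostedCluster{
				ObjectMeta: metav1.ObjectMeta{DeletionTimestamp: &deletionTime},
			},
			wantStatus: corev1.ConditionTrue,
			wantReason: "DeletionRequested",
		},
	}

	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			got := computeProvisionStoppedCondition(tc.hcluster, probeTime)
			if got.Status != tc.wantStatus || got.Reason != tc.wantReason {
				t.Fatalf("got status=%q reason=%q, want status=%q reason=%q",
					got.Status, got.Reason, tc.wantStatus, tc.wantReason)
			}
			// With no pre-existing condition, the transition time should be the probe time.
			if !got.LastTransitionTime.Equal(&probeTime) {
				t.Fatalf("expected LastTransitionTime to equal probeTime on first computation")
			}
		})
	}
}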
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@go.mod`:
- Line 130: Update the vendored module entry for
sigs.k8s.io/structured-merge-diff/v6 in hack/tools/vendor/modules.txt to match
go.mod by changing the version from v6.3.0 to v6.3.1 (the entry referenced
around the existing line for sigs.k8s.io/structured-merge-diff/v6). Ensure the
modules.txt record for that module now reads v6.3.1 so it is consistent with
go.mod, api/go.mod and other vendor directories.
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 1269-1270: The ClusterDeploymentConditions are only computed on
the normal reconcile path so deletion returns skip updating them; call
computeClusterDeploymentConditions(hcluster) and assign to
hcluster.Status.ClusterDeploymentConditions on the deletion path before
returning (i.e., in the branch where the HostedCluster has DeletionTimestamp /
the early return in the Reconcile handler), ensuring ClusterProvisionStoppedType
flips to True for external consumers during teardown; update the same status
write path used later so the status gets persisted.
- Around line 5458-5462: The current requirementsMet only checks
ValidHostedClusterConfiguration (validConfigCondition) but must also include the
SupportedHostedCluster and ValidReleaseImage conditions so
ClusterRequirementsMetType isn't set True when provisioning is still blocked;
update the logic that computes requirementsMet (and any code that sets
ClusterRequirementsMetType) to fetch meta.FindStatusCondition for
SupportedHostedCluster and ValidReleaseImage from hcluster.Status.Conditions and
require all three conditions to have Status == metav1.ConditionTrue before
treating requirementsMet as true.
⛔ Files ignored due to path filters (32)
📒 Files selected for processing (5)
api/go.mod, api/hypershift/v1beta1/hostedcluster_types.go, go.mod, hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go, hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
Actionable comments posted: 1
♻️ Duplicate comments (2)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go (2)
5450-5489: ⚠️ Potential issue | 🟠 Major
Gate RequirementsMet on the same prerequisites that actually gate reconciliation.
requirementsMet only checks ValidHostedClusterConfiguration, but Lines 1325-1341 also block on SupportedHostedCluster and ValidReleaseImage. That can publish ClusterRequirementsMetType=True for clusters the controller still refuses to provision. The failure reason/message should come from the first blocking condition too.
Suggested change
 	// Check for validation conditions
 	validConfigCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ValidHostedClusterConfiguration))
+	supportedHostedClusterCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.SupportedHostedCluster))
+	validReleaseImageCondition := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ValidReleaseImage))
-	requirementsMet := validConfigCondition != nil && validConfigCondition.Status == metav1.ConditionTrue
+	blockers := []*metav1.Condition{
+		validConfigCondition,
+		supportedHostedClusterCondition,
+		validReleaseImageCondition,
+	}
+	requirementsMet := true
+	var firstFailure *metav1.Condition
+	for _, blocker := range blockers {
+		if blocker == nil || blocker.Status != metav1.ConditionTrue {
+			requirementsMet = false
+			firstFailure = blocker
+			break
+		}
+	}
@@
 	} else {
@@
-		condition.Reason = "RequirementsNotMet"
-		if validConfigCondition != nil && validConfigCondition.Message != "" {
-			condition.Message = validConfigCondition.Message
-		} else {
-			condition.Message = "Cluster requirements have not been met"
-		}
+		condition.Reason = "RequirementsNotMet"
+		if firstFailure != nil {
+			if firstFailure.Reason != "" {
+				condition.Reason = firstFailure.Reason
+			}
+			if firstFailure.Message != "" {
+				condition.Message = firstFailure.Message
+			}
+		}
+		if condition.Message == "" {
+			condition.Message = "Cluster requirements have not been met"
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go` around lines 5450 - 5489, computeRequirementsMetCondition currently only inspects ValidHostedClusterConfiguration; update it to require all three prerequisite conditions (SupportedHostedCluster, ValidReleaseImage, and ValidHostedClusterConfiguration) be metav1.ConditionTrue before setting ClusterRequirementsMetType True (use meta.FindStatusCondition to read each condition from hcluster.Status.Conditions), and when false select the first blocking condition in the order SupportedHostedCluster, ValidReleaseImage, then ValidHostedClusterConfiguration to populate condition.Reason and condition.Message (preserve LastTransitionTime logic using existingCondition from findClusterDeploymentCondition). Ensure you reference computeRequirementsMetCondition, ClusterRequirementsMetType, SupportedHostedCluster, ValidReleaseImage, ValidHostedClusterConfiguration, meta.FindStatusCondition, and findClusterDeploymentCondition when making the changes.
1269-1270: ⚠️ Potential issue | 🟠 Major
Compute these conditions on the deletion path too.
ClusterDeploymentConditions is still only synthesized on the normal reconcile path. A deleting HostedCluster exits at Lines 491-608, so ClusterProvisionStoppedType never flips to True during teardown and external consumers keep seeing the old deployment state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go` around lines 1269 - 1270, The deletion path exits early before synthesizing ClusterDeploymentConditions, so ensure hcluster.Status.ClusterDeploymentConditions is set there too by calling computeClusterDeploymentConditions(hcluster) on the delete/teardown path (i.e., just before any early return in the deletion handling block inside the reconcile for the HostedCluster) so ClusterProvisionStoppedType flips to True for external consumers; update the deletion/teardown branch to assign hcluster.Status.ClusterDeploymentConditions = computeClusterDeploymentConditions(hcluster) and persist the status update before returning.
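A minimal sketch of what that could look like in the deletion branch of Reconcile; the surrounding names (ctx, r.Client, ctrl.Result) are assumed from typical controller-runtime code, and the single-argument computeClusterDeploymentConditions call follows the prompt above rather than a verified signature.

// Sketch only: a hypothetical excerpt of the deletion branch inside Reconcile.
// Names like r.Client and the one-argument computeClusterDeploymentConditions
// call are assumptions, not verified signatures from this file.
if !hcluster.DeletionTimestamp.IsZero() {
	// Synthesize the deployment conditions before the early return so that
	// ClusterProvisionStoppedType flips to True while teardown is in progress.
	hcluster.Status.ClusterDeploymentConditions = computeClusterDeploymentConditions(hcluster)
	if err := r.Client.Status().Update(ctx, hcluster); err != nil {
		return ctrl.Result{}, fmt.Errorf("failed to update status during deletion: %w", err)
	}
	// ...existing finalizer/teardown handling continues here...
}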
⛔ Files ignored due to path filters (32)
📒 Files selected for processing (5)
api/go.mod, api/hypershift/v1beta1/hostedcluster_types.go, go.mod, hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go, hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
✅ Files skipped from review due to trivial changes (3)
- go.mod
- api/go.mod
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
// computeProvisionStoppedCondition determines if provision has been stopped
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hyperv1.ClusterDeploymentCondition {
	// ProvisionStopped is True when the cluster is being deleted or has been paused
	condition := hyperv1.ClusterDeploymentCondition{
		Type:          hyperv1.ClusterProvisionStoppedType,
		LastProbeTime: probeTime,
	}

	// Check if cluster is being deleted (has deletion timestamp)
	isStopped := !hcluster.DeletionTimestamp.IsZero()

	existingCondition := findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, hyperv1.ClusterProvisionStoppedType)
	if isStopped {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionTrue
		condition.Reason = "DeletionRequested"
		condition.Message = "Cluster deletion has been requested"
	} else {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionFalse
		condition.Reason = "NotStopped"
		condition.Message = "Provision has not been stopped"
	}

	return condition
}
The stopped predicate and its comment disagree.
The comment says paused clusters should report ClusterProvisionStoppedType=True, but the implementation only checks DeletionTimestamp. A paused cluster will still surface NotStopped unless you also key off paused reconciliation.
Suggested adjustment
- // Check if cluster is being deleted (has deletion timestamp)
- isStopped := !hcluster.DeletionTimestamp.IsZero()
+ // Treat deletion or paused reconciliation as stopped.
+ isStopped := !hcluster.DeletionTimestamp.IsZero() ||
+        meta.IsStatusConditionFalse(hcluster.Status.Conditions, string(hyperv1.ReconciliationActive))
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// computeProvisionStoppedCondition determines if provision has been stopped
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hyperv1.ClusterDeploymentCondition {
	// ProvisionStopped is True when the cluster is being deleted or has been paused
	condition := hyperv1.ClusterDeploymentCondition{
		Type:          hyperv1.ClusterProvisionStoppedType,
		LastProbeTime: probeTime,
	}
	// Treat deletion or paused reconciliation as stopped.
	isStopped := !hcluster.DeletionTimestamp.IsZero() ||
		meta.IsStatusConditionFalse(hcluster.Status.Conditions, string(hyperv1.ReconciliationActive))
	existingCondition := findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, hyperv1.ClusterProvisionStoppedType)
	if isStopped {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionTrue
		condition.Reason = "DeletionRequested"
		condition.Message = "Cluster deletion has been requested"
	} else {
		if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
			condition.LastTransitionTime = existingCondition.LastTransitionTime
		} else {
			condition.LastTransitionTime = probeTime
		}
		condition.Status = corev1.ConditionFalse
		condition.Reason = "NotStopped"
		condition.Message = "Provision has not been stopped"
	}
	return condition
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`
around lines 5415 - 5448, The computeProvisionStoppedCondition currently only
marks ClusterProvisionStoppedType based on DeletionTimestamp but the comment
says paused clusters should also be considered stopped; update
computeProvisionStoppedCondition to treat a cluster as stopped when either
hcluster.DeletionTimestamp is set OR hcluster.Spec.Paused is true, preserve the
existing LastTransitionTime logic by comparing to existingCondition as done now,
and set distinct Reason/Message for the paused case (e.g., Reason "Paused" and
Message "Cluster reconciliation is paused") while keeping "DeletionRequested"
for deletion; ensure you still set Status to True for both deletion and paused
cases and leave the False branch unchanged.
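The prompt refers to hcluster.Spec.Paused; the HostedCluster API's pause knob is spec.pausedUntil, so a hedged sketch along those lines might look like the following. The provisionStopped helper name is hypothetical, and the PausedUntil parsing (boolean "true" or an RFC3339 timestamp) should be cross-checked against the pause-handling util helpers already in the repo.

// Sketch only: hypothetical helper treating deletion or paused reconciliation
// as "provision stopped". PausedUntil semantics are assumed, not verified here.
package hostedcluster

import (
	"time"

	hyperv1 "github.com/openshift/hypershift/api/hypershift/v1beta1"
)

func provisionStopped(hcluster *hyperv1.HostedCluster, now time.Time) bool {
	if !hcluster.DeletionTimestamp.IsZero() {
		return true
	}
	if hcluster.Spec.PausedUntil == nil {
		return false
	}
	if *hcluster.Spec.PausedUntil == "true" {
		return true
	}
	// PausedUntil may also carry an RFC3339 timestamp; treat a future time as paused.
	if until, err := time.Parse(time.RFC3339, *hcluster.Spec.PausedUntil); err == nil {
		return now.Before(until)
	}
	return false
}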
♻️ Duplicate comments (3)
hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go (3)
1269-1270: ⚠️ Potential issue | 🟠 Major
Compute ClusterDeploymentConditions on the deletion path too.
This write only happens on the normal reconcile path. The early return on Lines 491-608 bypasses it, so external consumers can keep seeing stale deployment conditions during teardown instead of ClusterProvisionStoppedType=True.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go` around lines 1269 - 1270, The ClusterDeploymentConditions assignment only runs on the normal reconcile path; update the reconcile flow so computeClusterDeploymentConditions(hcluster) is called and its result assigned to hcluster.Status.ClusterDeploymentConditions even during the deletion/teardown path (i.e., before any early return when hcluster.DeletionTimestamp is set or when the controller is handling deletion). Locate the reconcile function and ensure that wherever the code currently returns early during deletion you first call computeClusterDeploymentConditions(hcluster) and set hcluster.Status.ClusterDeploymentConditions to that value so external consumers see the teardown condition (e.g., ClusterProvisionStoppedType=True).
5415-5448: ⚠️ Potential issue | 🟡 Minor
Include paused reconciliation in ClusterProvisionStoppedType.
The comment says paused clusters are stopped, but the predicate on Line 5424 only checks DeletionTimestamp. A paused HostedCluster will still report NotStopped unless this also keys off paused reconciliation, e.g. ReconciliationActive=False or Spec.PausedUntil.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go` around lines 5415 - 5448, computeProvisionStoppedCondition currently only treats a HostedCluster as stopped when DeletionTimestamp is set; update it to also treat paused reconciliation as stopped by checking the cluster's reconciliation pause indicators (e.g., if hcluster.Spec.PausedUntil is set and in the future and/or the status contains a ReconciliationActive condition with Status=False). In computeProvisionStoppedCondition (and when building the condition fields such as Reason/Message), set isStopped = true when either hcluster.DeletionTimestamp.IsZero() is false OR the cluster is paused (inspect hcluster.Spec.PausedUntil and the ReconciliationActive condition via findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, "ReconciliationActive")), and set an appropriate Reason like "ReconciliationPaused" and message when paused rather than "DeletionRequested". Ensure LastTransitionTime logic still reuses existingCondition when statuses match.
5458-5462: ⚠️ Potential issue | 🟠 Major
Gate ClusterRequirementsMetType on every reconcile-blocking prerequisite.
This helper only checks ValidHostedClusterConfiguration, but reconcile still stops on SupportedHostedCluster and ValidReleaseImage on Lines 1325-1341. That allows ClusterRequirementsMetType=True for clusters the operator still cannot provision.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go` around lines 5458 - 5462, The current check only inspects ValidHostedClusterConfiguration (via meta.FindStatusCondition and variable validConfigCondition) but must also ensure SupportedHostedCluster and ValidReleaseImage are true before treating ClusterRequirementsMetType as satisfied; update the requirementsMet logic to fetch the three conditions from hcluster.Status.Conditions (e.g., using meta.FindStatusCondition for hyperv1.ValidHostedClusterConfiguration, hyperv1.SupportedHostedCluster, and hyperv1.ValidReleaseImage) and set requirementsMet = true only when all three condition objects are non-nil and each has Status == metav1.ConditionTrue, so reconcile-blocking prerequisites are correctly gated.
🧹 Nitpick comments (2)
api/hypershift/v1beta1/hostedcluster_types.go (2)
2230-2237: Add an explicit max size for clusterDeploymentConditions.
At Line 2236, this list is map-keyed but unbounded. Since only four condition types are expected, cap it to 4 for tighter API guarantees.
Proposed bound
 	// +optional
 	// +listType=map
 	// +listMapKey=type
+	// +kubebuilder:validation:MaxItems=4
 	ClusterDeploymentConditions []ClusterDeploymentCondition `json:"clusterDeploymentConditions,omitempty"`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2230 - 2237, Add an explicit max items validation to the ClusterDeploymentConditions slice: the field ClusterDeploymentConditions (type []ClusterDeploymentCondition) is map-keyed but unbounded, so annotate it with a kubebuilder validation tag to cap the list at 4 (e.g. add the comment +kubebuilder:validation:MaxItems=4 immediately above the ClusterDeploymentConditions declaration, keeping the existing +listType=map and +listMapKey=type).
2111-2129: Constrain ClusterDeploymentConditionType to known values in schema.
ClusterDeploymentConditionType is currently open-ended. Since this contract is fixed to four Hive-compatible types, adding an enum marker will prevent accidental drift.
Proposed schema constraint
 // ClusterDeploymentCondition defines a special set of conditions used to track
 // the status of the cluster deployment. They are based on Hive's
 // ClusterDeploymentConditions
+// +kubebuilder:validation:Enum=Provisioned;ProvisionFailed;ProvisionStopped;RequirementsMet
 type ClusterDeploymentConditionType string
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2111 - 2129, Add a kubebuilder enum validation to constrain the ClusterDeploymentConditionType to the four allowed values: "Provisioned", "ProvisionFailed", "ProvisionStopped", and "RequirementsMet". Update the type declaration for ClusterDeploymentConditionType to include the kubebuilder marker (e.g., +kubebuilder:validation:Enum=...) above the type so the generated CRD schema uses an enum; ensure the string literals in the marker exactly match the constants ClusterProvisionedType, ClusterProvisionFailedType, ClusterProvisionStoppedType, and ClusterRequirementsMetType.
⛔ Files ignored due to path filters (32)
📒 Files selected for processing (5)
api/go.mod, api/hypershift/v1beta1/hostedcluster_types.go, go.mod, hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go, hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
✅ Files skipped from review due to trivial changes (2)
- go.mod
- api/go.mod
🚧 Files skipped from review as they are similar to previous changes (1)
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #8228 +/- ##
==========================================
+ Coverage 35.71% 35.82% +0.11%
==========================================
Files 767 767
Lines 93402 93615 +213
==========================================
+ Hits 33354 33540 +186
- Misses 57346 57371 +25
- Partials 2702 2704 +2
Actionable comments posted: 2
🧹 Nitpick comments (2)
api/hypershift/v1beta1/hostedcluster_types.go (1)
2108-2112: Align exported type comment with the identifier name.
The comment should start with ClusterDeploymentConditionType (current text starts with ClusterDeploymentCondition).
✏️ Proposed tweak
-// ClusterDeploymentCondition defines a special set of conditions used to track
+// ClusterDeploymentConditionType defines a special set of conditions used to track
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 2108 - 2112, Update the exported type comment to start with the exact identifier name `ClusterDeploymentConditionType` instead of `ClusterDeploymentCondition`; locate the comment above the `type ClusterDeploymentConditionType string` declaration and change its opening sentence to begin with `ClusterDeploymentConditionType` so it follows Go's exported identifier comment convention.
api/.golangci.yml (1)
276-295: Prefer schema validation over adding Type-related linter suppressions.
The new suppressions for ClusterDeploymentCondition.Type can be eliminated by adding explicit MinLength/MaxLength markers on the type. This keeps the API contract explicit and reduces config debt.
♻️ Suggested direction
diff --git a/api/hypershift/v1beta1/hostedcluster_types.go b/api/hypershift/v1beta1/hostedcluster_types.go
@@
 type ClusterDeploymentCondition struct {
 	// type is the type of the condition.
 	// +required
+	// +kubebuilder:validation:MinLength=1
+	// +kubebuilder:validation:MaxLength=32
 	Type ClusterDeploymentConditionType `json:"type"`
diff --git a/api/.golangci.yml b/api/.golangci.yml
@@
-  - linters:
-      - kubeapilinter
-    path: hypershift/v1beta1/hostedcluster_types.go
-    text: 'maxlength: field ClusterDeploymentCondition.Type type ClusterDeploymentConditionType must have a maximum length, add kubebuilder:validation:MaxLength marker'
@@
-  - linters:
-      - kubeapilinter
-    path: hypershift/v1beta1/hostedcluster_types.go
-    text: 'minlength: field ClusterDeploymentCondition.Type type ClusterDeploymentConditionType must have a minimum length, add kubebuilder:validation:MinLength marker'
@@
-  - linters:
-      - kubeapilinter
-    path: hypershift/v1beta1/hostedcluster_types.go
-    text: 'requiredfields: field ClusterDeploymentCondition.Type has a valid zero value \(""\), but the validation is not complete \(e.g. minimum length\). The field should be a pointer to allow the zero value to be set. If the zero value is not a valid use case, complete the validation and remove the pointer.'
Also applies to: 894-907
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/.golangci.yml` around lines 276 - 295, The kubeapilinter suppressions should be replaced with explicit schema validation markers: add kubebuilder:validation:MinLength and kubebuilder:validation:MaxLength to the ClusterDeploymentConditionType (referenced by ClusterDeploymentCondition.Type) to enforce string length, and add kubebuilder:validation:MinItems and kubebuilder:validation:MaxItems to the slice fields HostedClusterStatus.ClusterDeploymentConditions and ClusterVersionStatus.History to enforce item count; update the corresponding type definitions (ClusterDeploymentConditionType, HostedClusterStatus, ClusterVersionStatus) to include these markers so the linter warnings are resolved without suppressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go`:
- Around line 5482-5513: The RequirementsMet false branch should propagate which
specific prerequisite is blocking instead of only copying
ValidHostedClusterConfiguration.Message; update the else branch that sets
condition.Message/Reason to inspect validConfigCondition,
supportedHostedCluster, and validReleaseImage (using their Status and Message
fields) and pick the first non-true condition (or nil) as the blocker, then set
condition.Message to that condition.Message (or a composed message including the
condition.Type) and set condition.Reason to a distinct value identifying the
blocker (e.g., "InvalidHostedClusterConfiguration", "UnsupportedHostedCluster",
or "InvalidReleaseImage") so downstream consumers receive the actual blocking
prerequisite.
- Around line 5383-5427: The code marks ClusterProvisionFailedType=True whenever
the cluster is degraded and not progressing, which can wrongly flag
already-provisioned clusters after a later control-plane outage; update
computeProvisionFailedCondition to only treat degradation as a provisioning
failure when it actually pertains to provisioning (e.g., the cluster is not yet
provisioned). Concretely, in computeProvisionFailedCondition add a guard using
the ClusterProvisioned condition (use meta.FindStatusCondition on
hcluster.Status.ClusterDeploymentConditions for hyperv1.ClusterProvisionedType
or findClusterDeploymentCondition) and only set isFailed from degradedCondition
if ClusterProvisioned is not True (or alternatively restrict by
degradedCondition.Reason to known provisioning-related reasons); keep the rest
of the LastTransitionTime/Status/Reason/Message handling unchanged.
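To make the two inline comments above concrete, here are two hedged sketches. They assume the controller package's existing imports (hyperv1, corev1, metav1, and meta from k8s.io/apimachinery/pkg/api/meta); firstBlockingCondition and provisioningFailed are hypothetical helper names, and the condition-type constants are taken from the existing HyperShift API but should be verified against the code.

// Sketch only: walk the reconcile-blocking prerequisites in a fixed order and
// return the first one that is missing or not True, so the caller can copy its
// Reason/Message into the RequirementsMet false branch.
func firstBlockingCondition(hcluster *hyperv1.HostedCluster) (blocked bool, blocker *metav1.Condition) {
	for _, condType := range []hyperv1.ConditionType{
		hyperv1.SupportedHostedCluster,
		hyperv1.ValidReleaseImage,
		hyperv1.ValidHostedClusterConfiguration,
	} {
		cond := meta.FindStatusCondition(hcluster.Status.Conditions, string(condType))
		if cond == nil || cond.Status != metav1.ConditionTrue {
			// cond may be nil when the prerequisite has not been reported yet;
			// callers should fall back to a generic reason/message in that case.
			return true, cond
		}
	}
	return false, nil
}

And for the ProvisionFailed guard, a simplified predicate along these lines; the PR's actual computeProvisionFailedCondition also consults progressing state, which this sketch omits.

// Sketch only: report a provisioning failure from the Degraded signal only
// while the cluster has not yet reached Provisioned=True.
func provisioningFailed(hcluster *hyperv1.HostedCluster) bool {
	provisioned := findClusterDeploymentCondition(hcluster.Status.ClusterDeploymentConditions, hyperv1.ClusterProvisionedType)
	if provisioned != nil && provisioned.Status == corev1.ConditionTrue {
		// Already provisioned: later degradation is an availability problem,
		// not a provisioning failure.
		return false
	}
	degraded := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.HostedClusterDegraded))
	return degraded != nil && degraded.Status == metav1.ConditionTrue
}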
⛔ Files ignored due to path filters (24)
📒 Files selected for processing (5)
api/.golangci.yml, api/hypershift/v1beta1/hostedcluster_types.go, hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go, hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go, test/e2e/karpenter_test.go
✅ Files skipped from review due to trivial changes (1)
- test/e2e/karpenter_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
if existingCondition != nil && existingCondition.Status == corev1.ConditionTrue {
    condition.LastTransitionTime = existingCondition.LastTransitionTime
} else {
    condition.LastTransitionTime = probeTime
}
condition.Status = corev1.ConditionTrue
condition.Reason = "Provisioned"
condition.Message = "Cluster has been provisioned"
} else {
    if existingCondition != nil && existingCondition.Status == corev1.ConditionFalse {
        condition.LastTransitionTime = existingCondition.LastTransitionTime
    } else {
        condition.LastTransitionTime = probeTime
    }
    condition.Status = corev1.ConditionFalse
    if hasInfra && !cvoAvailable {
        condition.Reason = "WaitingForClusterVersionOperator"
        if cvoCond != nil {
            condition.Message = fmt.Sprintf("Cluster Version Operator is not available: %s", cvoCond.Message)
        } else {
            condition.Message = "Waiting for Cluster Version Operator to become available"
        }
    } else {
        condition.Reason = "NotProvisioned"
        condition.Message = "Cluster is not yet provisioned"
    }
}
If you change the clusterdeploymentcondition to extend the condition package, you can use the kube apimachinery meta package to update the conditions without having to define the functions to find the condition itself: https://pkg.go.dev/k8s.io/apimachinery/pkg/api/meta#pkg-overview
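For illustration only (not code from this PR): if the status field were a plain []metav1.Condition slice, the apimachinery helpers would already handle lookup and LastTransitionTime bookkeeping. A minimal sketch, assuming imports of k8s.io/apimachinery/pkg/api/meta and k8s.io/apimachinery/pkg/apis/meta/v1:

// setProvisioned records a Provisioned condition on a []metav1.Condition slice.
// meta.SetStatusCondition finds any existing condition of the same type and only
// resets LastTransitionTime when the status actually changes.
func setProvisioned(conditions *[]metav1.Condition, provisioned bool, reason, message string) {
    status := metav1.ConditionFalse
    if provisioned {
        status = metav1.ConditionTrue
    }
    meta.SetStatusCondition(conditions, metav1.Condition{
        Type:    "Provisioned",
        Status:  status,
        Reason:  reason,
        Message: message,
    })
}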
This is a little beyond my familiarity with kube apimachinery, but Claude says it still needs a findClusterDeploymentCondition helper because there are type incompatibilities between the Hive conditions and Hypershift's conditions, so the meta package would not work.
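For context, that lookup helper is small. A minimal sketch of what findClusterDeploymentCondition could look like, assuming the PR's []hivev1.ClusterDeploymentCondition field and the same hivev1 import as the surrounding code (illustrative only, not copied from the PR):

// findClusterDeploymentCondition returns a pointer to the condition of the given
// type, or nil when no such condition is present.
func findClusterDeploymentCondition(conditions []hivev1.ClusterDeploymentCondition, conditionType hivev1.ClusterDeploymentConditionType) *hivev1.ClusterDeploymentCondition {
    for i := range conditions {
        if conditions[i].Type == conditionType {
            return &conditions[i]
        }
    }
    return nil
}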
Ah gotcha, could we do something else to simplify this?
Here's a suggestion from Claude for a helper function that could be used for all of these compute functions:
// buildClusterDeploymentCondition creates a ClusterDeploymentCondition with proper transition time handling.
// If the condition already exists with the same status, it preserves the LastTransitionTime.
// Otherwise, it sets LastTransitionTime to the current probeTime.
func buildClusterDeploymentCondition(
existingConditions []hivev1.ClusterDeploymentCondition,
conditionType hivev1.ClusterDeploymentConditionType,
status corev1.ConditionStatus,
reason string,
message string,
probeTime metav1.Time,
) hivev1.ClusterDeploymentCondition {
condition := hivev1.ClusterDeploymentCondition{
Type: conditionType,
Status: status,
Reason: reason,
Message: message,
LastProbeTime: probeTime,
LastTransitionTime: probeTime,
}
// Preserve LastTransitionTime if status hasn't changed
existingCondition := findClusterDeploymentCondition(existingConditions, conditionType)
if existingCondition != nil && existingCondition.Status == status {
condition.LastTransitionTime = existingCondition.LastTransitionTime
}
return condition
}
and its usage
// computeProvisionedCondition determines if the cluster is provisioned
func computeProvisionedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hivev1.ClusterDeploymentCondition {
    // Provisioned is True when the cluster has been successfully provisioned:
    // control plane endpoint and kubeconfig exist, and the Cluster Version Operator
    // reports available (ClusterVersionAvailable on HostedCluster mirrors CVO on HCP).
    hasInfra := hcluster.Status.ControlPlaneEndpoint.Host != "" && hcluster.Status.KubeConfig != nil
    cvoCond := meta.FindStatusCondition(hcluster.Status.Conditions, string(hyperv1.ClusterVersionAvailable))
    cvoAvailable := cvoCond != nil && cvoCond.Status == metav1.ConditionTrue
    isProvisioned := hasInfra && cvoAvailable

    if isProvisioned {
        return buildClusterDeploymentCondition(
            hcluster.Status.ClusterDeploymentConditions,
            hivev1.ProvisionedCondition,
            corev1.ConditionTrue,
            hivev1.ProvisionedReason,
            hivev1.ProvisionedMessage,
            probeTime,
        )
    }

    // Not provisioned - determine reason
    var reason, message string
    if hasInfra && !cvoAvailable {
        reason = "WaitingForClusterVersionOperator"
        if cvoCond != nil {
            message = fmt.Sprintf("Cluster Version Operator is not available: %s", cvoCond.Message)
        } else {
            message = "Waiting for Cluster Version Operator to become available"
        }
    } else {
        reason = "NotProvisioned"
        message = "Cluster is not yet provisioned"
    }
    return buildClusterDeploymentCondition(
        hcluster.Status.ClusterDeploymentConditions,
        hivev1.ProvisionedCondition,
        corev1.ConditionFalse,
        reason,
        message,
        probeTime,
    )
}
I handed Claude the buildClusterDeploymentCondition and it refactored the individual condition functions. Thanks for the example!
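As an example of the resulting shape, here is a hedged sketch of what the ProvisionStopped variant might look like with the helper in place. The paused/deleting check, reason strings, and messages are assumptions based on the commit description, not the PR's actual code; it reuses the hyperv1, hivev1, corev1, and metav1 imports from the controller file.

// computeProvisionStoppedCondition reports ProvisionStopped=True while the
// HostedCluster is being deleted or is paused, and False otherwise.
func computeProvisionStoppedCondition(hcluster *hyperv1.HostedCluster, probeTime metav1.Time) hivev1.ClusterDeploymentCondition {
    stopped := !hcluster.DeletionTimestamp.IsZero() || hcluster.Spec.PausedUntil != nil
    status := corev1.ConditionFalse
    reason := "ProvisionNotStopped"
    message := "Cluster provisioning has not been stopped"
    if stopped {
        status = corev1.ConditionTrue
        reason = "ProvisionStopped"
        message = "Cluster is being deleted or is paused"
    }
    return buildClusterDeploymentCondition(
        hcluster.Status.ClusterDeploymentConditions,
        hivev1.ProvisionStoppedCondition,
        status,
        reason,
        message,
        probeTime,
    )
}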
🧹 Nitpick comments (1)
api/go.mod (1)
50-93: Remove the // indirect marker from the github.com/openshift/hive/apis dependency on line 50 of api/go.mod. The module is directly imported at api/hypershift/v1beta1/hostedcluster_types.go:10, so the indirect annotation is misaligned with actual usage.
Change
- github.com/openshift/hive/apis v0.0.0-20260414220921-76f9a996844f // indirect
+ github.com/openshift/hive/apis v0.0.0-20260414220921-76f9a996844f
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@api/go.mod` around lines 50 - 93, The go.mod entry for github.com/openshift/hive/apis is incorrectly marked as indirect; remove the trailing "// indirect" from the github.com/openshift/hive/apis line in api/go.mod so it is a direct dependency (it is imported by hostedcluster_types.go), then run go mod tidy to update checksums and ensure the module graph is consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@api/go.mod`:
- Around line 50-93: The go.mod entry for github.com/openshift/hive/apis is
incorrectly marked as indirect; remove the trailing "// indirect" from the
github.com/openshift/hive/apis line in api/go.mod so it is a direct dependency
(it is imported by hostedcluster_types.go), then run go mod tidy to update
checksums and ensure the module graph is consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 4dff3e99-d44d-4352-acfd-b2e4cadf4ee1
⛔ Files ignored due to path filters (25)
api/go.sum, generated deepcopy code, feature-gated and installed CRD manifests, client applyconfiguration files, reference docs, and vendored API files (excluded by the repository's path-filter patterns)
📒 Files selected for processing (6)
- api/.golangci.yml
- api/go.mod
- api/hypershift/v1beta1/hostedcluster_types.go
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
- test/e2e/karpenter_test.go
✅ Files skipped from review due to trivial changes (3)
- test/e2e/karpenter_test.go
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller.go
- hypershift-operator/controllers/hostedcluster/hostedcluster_controller_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- api/.golangci.yml
|
Nice, this looks good to me! |
|
/retest |
b2374d8 to b03d0e0 — Compare
|
/retest-required |
|
/retest |
Introduce ClusterDeploymentConditions to HostedCluster status to provide a Hive-compatible interface for external consumers like the siteconfig operator. The implementation synthesizes four Hive ClusterDeployment conditions from HostedCluster state:
- Provisioned: True when cluster is fully provisioned and available
- ProvisionFailed: True when cluster provisioning has failed
- ProvisionStopped: True when cluster is being deleted or paused
- RequirementsMet: True when all prerequisite conditions are met

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: cwilkers <cwilkers@redhat.com>
| "github.com/openshift/hypershift/api/util/ipnet" | ||
|
|
||
| configv1 "github.com/openshift/api/config/v1" | ||
| hivev1 "github.com/openshift/hive/apis/hive/v1" |
It is generally an anti-pattern to import another API. What is different about the ClusterDeployment condition compared with a standard metav1.Condition?
Why can consumers not just observe the existing conditions field?
The underlying work is to allow the siteconfig operator (SO) to update its own CR (ClusterInstance)'s status from the clusters it deploys. In the case of siteconfig's original cluster types, there is a ClusterDeployment CR whose status is defined by Hive and represents the start and stop of a deployment in a way that the SO was written around.
When I added HCP to siteconfig as a supported cluster type, the original idea was to have SO read the deployment status from the HC CR, but it required extra logic given how many different status conditions a hosted cluster can report. The maintenance of hypershift specific logic within SO was rejected as an anti-pattern, and it was suggested to put that logic under the hypershift operator where it would be better maintained.
At the start of this PR, it was synthesizing regular metav1.Conditions to match the names of Hivev1 DeploymentConditions, but that was questioned as a maintenance hassle for hypershift, since the Hive API could diverge. Hence the final form of this PR, where we import Hivev1.
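To make the type boundary concrete: any bridge between the two shapes needs an explicit copy, because the Status enums are distinct string types and the remaining fields do not line up one-to-one (LastProbeTime on the Hive side, ObservedGeneration on the metav1 side). A hedged sketch of such a conversion, purely illustrative and not part of this PR:

// toClusterDeploymentCondition copies a metav1.Condition into Hive's condition type.
// The field-by-field copy is exactly why the generic apimachinery meta helpers,
// which operate on []metav1.Condition, cannot be used on the Hive slice directly.
func toClusterDeploymentCondition(c metav1.Condition, probeTime metav1.Time) hivev1.ClusterDeploymentCondition {
    return hivev1.ClusterDeploymentCondition{
        Type:               hivev1.ClusterDeploymentConditionType(c.Type),
        Status:             corev1.ConditionStatus(c.Status),
        Reason:             c.Reason,
        Message:            c.Message,
        LastProbeTime:      probeTime,
        LastTransitionTime: c.LastTransitionTime,
    }
}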
The maintenance of hypershift specific logic within SO was rejected as an anti-pattern
Got any links to where this was discussed?
|
/test images |
|
@cwilkers: all tests passed! Full PR test history. Your PR dashboard. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
|
I would expect this logic to live on the consumer side, which is interested in these particular semantics, or otherwise to propose a generic enhancement that captures why these semantics are valuable and how they could impact any existing consumer. |
To a degree, that's the idea behind using the Hivev1 deploymentConditions. It is a common interface between agent-based and image-based cluster installs and any component (like SiteConfig in this instance) that is interested in the overall cluster's deployment status. |
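For illustration of the consumer-side simplification being claimed here (a sketch, not actual SiteConfig code): a consumer only needs to scan the Hive-shaped slice on HostedCluster status rather than interpret HyperShift's many native conditions.

// isProvisioned returns true when the synthesized Provisioned condition is True.
func isProvisioned(hcluster *hyperv1.HostedCluster) bool {
    for _, c := range hcluster.Status.ClusterDeploymentConditions {
        if c.Type == hivev1.ProvisionedCondition && c.Status == corev1.ConditionTrue {
            return true
        }
    }
    return false
}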
|
PR needs rebase. Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. |
|
Now I have a complete picture. Let me compile the final analysis.
Test Failure Analysis Complete
Job Information
Test Failure Analysis
Error Summary
This is not a test execution failure — no CI test job failed. The "error" state comes from tide (Prow's merge controller), which detected that PR #8228 has merge conflicts with the base branch.
Root Cause
The root cause is a git merge conflict between the PR branch and the base branch.
The PR adds a new ClusterDeploymentConditions field.
Recommendations
Evidence
|
|
Closing as the approach has been rejected. |
What this PR does / why we need it:
To enable integration with the Siteconfig Operator, Hypershift clusters should present their deployment status in a way that is compatible with ClusterDeployments, namely hivev1 ClusterDeploymentConditions.
Special notes for your reviewer:
Checklist:
Co-authored-by: Cursor AI noreply@cursor.com
Summary by CodeRabbit
New Features
Tests
Chores