
HYPERFLEET-1064 - feat: e2e test to use LastKnownReconciled #99

Merged

openshift-merge-bot[bot] merged 1 commit into openshift-hyperfleet:main from rh-amarin:HYPERFLEET-1064 on May 13, 2026

Conversation

@rh-amarin
Contributor

Summary

openshift-hyperfleet/hyperfleet-api#127 changes resource.conditions.Available to resource.conditions.LastKnownReconciled, but the e2e tests were still checking Available, so they need to read the new condition once that PR is approved.

openshift-ci Bot requested review from crizzo71 and jsell-rh on May 13, 2026 06:49

coderabbitai Bot commented May 13, 2026

📝 Walkthrough

Summary by CodeRabbit

  • New Features

    • Added LastKnownReconciled status condition type for clusters and nodepools.
  • Tests

    • Updated cluster and nodepool end-to-end tests to validate the new LastKnownReconciled status condition.
    • Modified readiness checks to recognize resources meeting the new condition state.

Walkthrough

This pull request updates end-to-end test assertions and adds a new constant to the client package. A new condition type constant ConditionTypeLastKnownReconciled is introduced in pkg/client/constants.go. Across nine e2e test files, assertions checking for Available status are replaced with checks for LastKnownReconciled status. The changes affect cluster and nodepool creation, update, failure recovery, and concurrent operation test scenarios. Test control flow and polling mechanisms remain structurally unchanged; only the condition types being verified are modified.
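
A rough sketch of the change pattern described here; the constant's string value and the exact helper signature are assumptions inferred from the snippets quoted in the review comments below:

// pkg/client/constants.go (sketch): new condition type constant
const ConditionTypeLastKnownReconciled = "LastKnownReconciled"

// e2e assertion (sketch): the condition type being verified switches from
// Available to LastKnownReconciled; the surrounding polling flow is unchanged.
hasReady := h.HasResourceCondition(cluster.Status.Conditions,
	client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
Expect(hasReady).To(BeTrue(), "cluster should have LastKnownReconciled=True condition")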

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks — 5 passed
  • Title check — ✅ Passed: The title clearly identifies the main change: updating e2e tests to use the new LastKnownReconciled condition instead of Available.
  • Description check — ✅ Passed: The description directly explains the changeset: tests are being updated to reflect the API change from Available to LastKnownReconciled condition.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.
  • Linked Issues check — ✅ Passed: Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: Check skipped because no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@e2e/adapter/adapter_failover.go`:
- Around line 124-127: The current check uses
h.HasAnyResourceCondition(cluster.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) which is too permissive because it
permits one alias to be False while the other is True; change the assertion to
explicitly ensure neither alias is True — call HasAnyResourceCondition(...,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) and assert that is False (i.e. both are
not True), and optionally keep the original False-check to assert at least one
is False; reference the existing symbols HasAnyResourceCondition,
cluster.Status.Conditions, openapi.ResourceConditionStatusFalse/True,
client.ConditionTypeAvailable, and client.ConditionTypeLastKnownReconciled when
making the change.

In `@e2e/cluster/adapter_failure.go`:
- Around line 171-173: The current assertion uses h.HasAnyResourceCondition(...,
ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) which passes if either condition is
False and can mask regressions; change it to assert both conditions are False
explicitly by replacing the single HasAnyResourceCondition call with two
assertions that h.HasResourceCondition(cl.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable) and
h.HasResourceCondition(cl.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeLastKnownReconciled)
are true (or, if a helper exists, use a HasAll/HasBothResourceConditions helper
to require both condition types be False). Ensure you update the failure message
to reflect that both Available and LastKnownReconciled must remain False.

In `@e2e/nodepool/concurrent_creation.go`:
- Around line 114-117: The assertion immediately checks np.Status.Conditions
with h.HasAnyResourceCondition and Expect, which can flake if
Available/LastKnownReconciled lags Reconciled; change this to use Gomega's
Eventually to poll until h.HasAnyResourceCondition(np.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) returns true (or timeout/fail) instead
of a single Expect; locate the call site where HasAnyResourceCondition is
invoked (and the Expect that follows) and wrap that predicate in an Eventually
with a sensible timeout and interval to stabilize the test.
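
A minimal sketch of the Eventually wrapping suggested in the last comment above, assuming the existing HasAnyResourceCondition helper and the timeout/interval config fields (h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval) referenced later in this review:

// Poll until either alias condition reports True, instead of a one-shot check.
Eventually(func(g Gomega) {
	np, err := h.Client.GetNodePool(ctx, clusterID, npID)
	g.Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
	ready := h.HasAnyResourceCondition(np.Status.Conditions,
		openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
	g.Expect(ready).To(BeTrue(), "nodepool %d (%s) should have Available/LastKnownReconciled=True", i, npID)
}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())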

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Enterprise

Run ID: 48630597-53f8-4157-8de8-f41e497280a5

📥 Commits

Reviewing files that changed from the base of the PR and between 7efc760 and 337b3f4.

📒 Files selected for processing (12)
  • e2e/adapter/adapter_failover.go
  • e2e/cluster/adapter_failure.go
  • e2e/cluster/concurrent_creation.go
  • e2e/cluster/crash_recovery.go
  • e2e/cluster/creation.go
  • e2e/cluster/update_edge_cases.go
  • e2e/nodepool/concurrent_creation.go
  • e2e/nodepool/creation.go
  • e2e/nodepool/update_edge_cases.go
  • pkg/client/constants.go
  • pkg/helper/matchers.go
  • pkg/helper/validation.go


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
e2e/nodepool/concurrent_creation.go (1)

114-117: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use Eventually for the alias readiness assertion to avoid flakes.

This still performs a one-shot read of Available/LastKnownReconciled after a separate reconciled wait, so condition propagation lag can cause nondeterministic failures.

Proposed fix
-					np, err := h.Client.GetNodePool(ctx, clusterID, npID)
-					Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
-
-					hasAvailable := h.HasAnyResourceCondition(np.Status.Conditions,
-						openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
-					Expect(hasAvailable).To(BeTrue(),
-						"nodepool %d (%s) should have Available/LastKnownReconciled=True", i, npID)
+					Eventually(h.PollNodePool(ctx, clusterID, npID), h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).
+						Should(helper.HaveAnyResourceCondition(
+							openapi.ResourceConditionStatusTrue,
+							client.ConditionTypeAvailable,
+							client.ConditionTypeLastKnownReconciled,
+						))
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@e2e/nodepool/concurrent_creation.go` around lines 114 - 117, After waiting
for reconciliation, replace the one-shot Expect check on
h.HasAnyResourceCondition(np.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) with an Eventually that repeatedly
polls that predicate until it returns true; specifically, wrap the call to
HasAnyResourceCondition in a closure passed to Eventually and assert
.Should(BeTrue(), <timeout>) (optionally with an interval), so the
Available/LastKnownReconciled assertion does not flake due to propagation lag.
e2e/adapter/adapter_failover.go (1)

124-127: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Tighten the negative precondition for alias conditions.

This passes when one alias is False even if the other is already True. Assert that no alias is True, then optionally keep the False check.

Proposed fix
-					hasAvailableFalse := h.HasAnyResourceCondition(cluster.Status.Conditions,
-						openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
-					g.Expect(hasAvailableFalse).To(BeTrue(),
-						"initial cluster conditions should have Available/LastKnownReconciled=False")
+					hasAnyReadyTrue := h.HasAnyResourceCondition(cluster.Status.Conditions,
+						openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
+					g.Expect(hasAnyReadyTrue).To(BeFalse(),
+						"initial cluster conditions should NOT have Available/LastKnownReconciled=True")
+
+					hasAnyReadyFalse := h.HasAnyResourceCondition(cluster.Status.Conditions,
+						openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
+					g.Expect(hasAnyReadyFalse).To(BeTrue(),
+						"initial cluster conditions should have Available/LastKnownReconciled=False")
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@e2e/adapter/adapter_failover.go` around lines 124 - 127, The current check
h.HasAnyResourceCondition(cluster.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) allows one alias to be False while the
other is already True; tighten it by first asserting that no alias is True (e.g.
call h.HasAnyResourceCondition with openapi.ResourceConditionStatusTrue for the
same condition types and Expect it to be False) and then assert the False
precondition (keep the existing False check) so both aliases are guaranteed
not-True and at least one is False as intended; reference
h.HasAnyResourceCondition, cluster.Status.Conditions,
openapi.ResourceConditionStatusTrue/False, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@e2e/cluster/concurrent_creation.go`:
- Around line 96-99: The single GetCluster/HasAnyResourceCondition check for
Available/LastKnownReconciled should be converted into a polled assertion:
replace the one-shot call that uses
h.HasAnyResourceCondition(cluster.Status.Conditions,
client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled) with an
Eventually block that repeatedly calls GetCluster(i, clusterID) and re-evaluates
h.HasAnyResourceCondition until it returns true; use the same Eventually
timeout/interval pattern used elsewhere in tests and keep the
Expect(hasAvailable).To(BeTrue(), ...) message intact, referencing GetCluster,
HasAnyResourceCondition, Eventually, client.ConditionTypeAvailable and
client.ConditionTypeLastKnownReconciled.

In `@e2e/nodepool/creation.go`:
- Around line 133-135: Replace the immediate snapshot assertion that calls
h.HasAnyResourceCondition(...) and Expect(...) with the same polling pattern
used in update_edge_cases.go: repeatedly fetch the nodepool (update
finalNodePool) on an interval and call
h.HasAnyResourceCondition(finalNodePool.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) until it returns true or a timeout
elapses, then assert true; ensure you use the same timeout/interval values as
the existing poll in update_edge_cases.go so this check waits for the
Available/LastKnownReconciled condition to appear rather than racing on a single
snapshot.

---

Duplicate comments:
In `@e2e/adapter/adapter_failover.go`:
- Around line 124-127: The current check
h.HasAnyResourceCondition(cluster.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) allows one alias to be False while the
other is already True; tighten it by first asserting that no alias is True (e.g.
call h.HasAnyResourceCondition with openapi.ResourceConditionStatusTrue for the
same condition types and Expect it to be False) and then assert the False
precondition (keep the existing False check) so both aliases are guaranteed
not-True and at least one is False as intended; reference
h.HasAnyResourceCondition, cluster.Status.Conditions,
openapi.ResourceConditionStatusTrue/False, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled.

In `@e2e/nodepool/concurrent_creation.go`:
- Around line 114-117: After waiting for reconciliation, replace the one-shot
Expect check on h.HasAnyResourceCondition(np.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) with an Eventually that repeatedly
polls that predicate until it returns true; specifically, wrap the call to
HasAnyResourceCondition in a closure passed to Eventually and assert
.Should(BeTrue(), <timeout>) (optionally with an interval), so the
Available/LastKnownReconciled assertion does not flake due to propagation lag.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Enterprise

Run ID: 45c12f3b-0519-42d9-8ad4-37fcf7120284

📥 Commits

Reviewing files that changed from the base of the PR and between 337b3f4 and eb52369.

📒 Files selected for processing (12)
  • e2e/adapter/adapter_failover.go
  • e2e/cluster/adapter_failure.go
  • e2e/cluster/concurrent_creation.go
  • e2e/cluster/crash_recovery.go
  • e2e/cluster/creation.go
  • e2e/cluster/update_edge_cases.go
  • e2e/nodepool/concurrent_creation.go
  • e2e/nodepool/creation.go
  • e2e/nodepool/update_edge_cases.go
  • pkg/client/constants.go
  • pkg/helper/matchers.go
  • pkg/helper/validation.go


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
e2e/nodepool/creation.go (1)

125-135: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wrap the LastKnownReconciled readiness check in Eventually to prevent race conditions.

After polling Reconciled=True (lines 122-123), the single GetNodePool call at line 125 may capture a state where the API-computed LastKnownReconciled condition has not yet transitioned to True, causing intermittent test failures.

🔄 Proposed fix using Eventually pattern
-					finalNodePool, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
-					Expect(err).NotTo(HaveOccurred(), "failed to get final nodepool state")
-					Expect(finalNodePool.Status).NotTo(BeNil(), "nodepool status should be present")
-
-					hasReconciled := h.HasResourceCondition(finalNodePool.Status.Conditions,
-						client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
-					Expect(hasReconciled).To(BeTrue(), "nodepool should have Reconciled=True condition")
-
-					hasAvailable := h.HasResourceCondition(finalNodePool.Status.Conditions,
-						client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-					Expect(hasAvailable).To(BeTrue(), "nodepool should have LastKnownReconciled=True condition")
+					Eventually(func(g Gomega) {
+						finalNodePool, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
+						g.Expect(err).NotTo(HaveOccurred(), "failed to get final nodepool state")
+						g.Expect(finalNodePool.Status).NotTo(BeNil(), "nodepool status should be present")
+
+						hasReconciled := h.HasResourceCondition(finalNodePool.Status.Conditions,
+							client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
+						g.Expect(hasReconciled).To(BeTrue(), "nodepool should have Reconciled=True condition")
+
+						hasLastKnown := h.HasResourceCondition(finalNodePool.Status.Conditions,
+							client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+						g.Expect(hasLastKnown).To(BeTrue(), "nodepool should have LastKnownReconciled=True condition")
+					}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@e2e/nodepool/creation.go` around lines 125 - 135, The LastKnownReconciled
readiness check is racy because it uses a single GetNodePool after
Reconciled=True; change the check that uses HasResourceCondition(...,
ConditionTypeLastKnownReconciled, ResourceConditionStatusTrue) to be wrapped in
an Eventually that repeatedly calls h.Client.GetNodePool(ctx, clusterID,
nodepoolID) and asserts HasResourceCondition on the returned
finalNodePool.Status.Conditions until it becomes true (use the same
timeout/interval pattern as the prior Reconciled Eventually), keeping the
existing Expect messages.
e2e/nodepool/concurrent_creation.go (1)

111-117: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wrap the LastKnownReconciled readiness check in Eventually to prevent race conditions.

After polling Reconciled=True (lines 108-109), the single GetNodePool call at line 111 can capture a state where LastKnownReconciled has not yet transitioned to True, especially under concurrent load.

🔄 Proposed fix using Eventually pattern
-					np, err := h.Client.GetNodePool(ctx, clusterID, npID)
-					Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
-
-					hasAvailable := h.HasResourceCondition(np.Status.Conditions,
+					Eventually(func(g Gomega) {
+						np, err := h.Client.GetNodePool(ctx, clusterID, npID)
+						g.Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
+						hasLastKnown := h.HasResourceCondition(np.Status.Conditions,
-						client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-					Expect(hasAvailable).To(BeTrue(),
-						"nodepool %d (%s) should have LastKnownReconciled=True", i, npID)
+							client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+						g.Expect(hasLastKnown).To(BeTrue(),
+							"nodepool %d (%s) should have LastKnownReconciled=True", i, npID)
+					}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@e2e/nodepool/concurrent_creation.go` around lines 111 - 117, The current
direct GetNodePool/HasResourceCondition check can race; wrap this readiness
assertion in an Eventually poll to retry until LastKnownReconciled becomes true.
Replace the single GetNodePool call and immediate Expect with an Eventually
block that repeatedly calls h.Client.GetNodePool(ctx, clusterID, npID), inspects
np.Status.Conditions using h.HasResourceCondition(...,
client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue),
and only Expect(true) after the condition is observed (with a suitable timeout
and interval) so transient states under concurrent load are retried.
e2e/cluster/concurrent_creation.go (1)

93-99: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Wrap the LastKnownReconciled readiness check in Eventually to prevent race conditions.

After polling Reconciled=True (lines 90-91), the single GetCluster call at line 93 can capture a state where LastKnownReconciled has not yet transitioned to True, particularly under concurrent load where condition updates may lag.

🔄 Proposed fix using Eventually pattern
-					cluster, err := h.Client.GetCluster(ctx, clusterID)
-					Expect(err).NotTo(HaveOccurred(), "failed to get cluster %d (%s)", i, clusterID)
-
-					hasAvailable := h.HasResourceCondition(cluster.Status.Conditions,
+					Eventually(func(g Gomega) {
+						cluster, err := h.Client.GetCluster(ctx, clusterID)
+						g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster %d (%s)", i, clusterID)
+						hasLastKnown := h.HasResourceCondition(cluster.Status.Conditions,
-						client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-					Expect(hasAvailable).To(BeTrue(),
-						"cluster %d (%s) should have LastKnownReconciled=True", i, clusterID)
+							client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+						g.Expect(hasLastKnown).To(BeTrue(),
+							"cluster %d (%s) should have LastKnownReconciled=True", i, clusterID)
+					}, h.Cfg.Timeouts.Cluster.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@e2e/cluster/concurrent_creation.go` around lines 93 - 99, The
LastKnownReconciled readiness check can race; replace the single
h.Client.GetCluster + h.HasResourceCondition assertion with an Eventually
polling block that repeatedly calls h.Client.GetCluster(ctx, clusterID) and
checks h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
until it becomes true (use the same timeout/interval pattern used for the
Reconciled=True poll). Ensure the Eventually message references the cluster
index/ID for clarity and keep the Expect inside Eventually to assert BeTrue once
the condition stabilizes.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@pkg/client/constants.go`:
- Line 22: The comment for ConditionTypeLastKnownReconciled is misleadingly
saying "(renamed from Available)"; update the comment for
ConditionTypeLastKnownReconciled to state that it is a new API-computed
aggregated cluster/nodepool-level condition (not a rename), and clarify that
ConditionTypeAvailable remains an adapter-level condition. Locate the constant
ConditionTypeLastKnownReconciled in pkg/client/constants.go and replace the
parenthetical to reflect it is a new cluster/nodepool-level aggregated condition
and not renaming ConditionTypeAvailable.
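
A sketch of how the corrected comment might read; the constant's string value is an assumption:

const (
	// ConditionTypeLastKnownReconciled is a new API-computed condition aggregated at the
	// cluster/nodepool level; it does not rename the adapter-level Available condition.
	ConditionTypeLastKnownReconciled = "LastKnownReconciled"
)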

---

Duplicate comments:
In `@e2e/cluster/concurrent_creation.go`:
- Around line 93-99: The LastKnownReconciled readiness check can race; replace
the single h.Client.GetCluster + h.HasResourceCondition assertion with an
Eventually polling block that repeatedly calls h.Client.GetCluster(ctx,
clusterID) and checks h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
until it becomes true (use the same timeout/interval pattern used for the
Reconciled=True poll). Ensure the Eventually message references the cluster
index/ID for clarity and keep the Expect inside Eventually to assert BeTrue once
the condition stabilizes.

In `@e2e/nodepool/concurrent_creation.go`:
- Around line 111-117: The current direct GetNodePool/HasResourceCondition check
can race; wrap this readiness assertion in an Eventually poll to retry until
LastKnownReconciled becomes true. Replace the single GetNodePool call and
immediate Expect with an Eventually block that repeatedly calls
h.Client.GetNodePool(ctx, clusterID, npID), inspects np.Status.Conditions using
h.HasResourceCondition(..., client.ConditionTypeLastKnownReconciled,
openapi.ResourceConditionStatusTrue), and only Expect(true) after the condition
is observed (with a suitable timeout and interval) so transient states under
concurrent load are retried.

In `@e2e/nodepool/creation.go`:
- Around line 125-135: The LastKnownReconciled readiness check is racy because
it uses a single GetNodePool after Reconciled=True; change the check that uses
HasResourceCondition(..., ConditionTypeLastKnownReconciled,
ResourceConditionStatusTrue) to be wrapped in an Eventually that repeatedly
calls h.Client.GetNodePool(ctx, clusterID, nodepoolID) and asserts
HasResourceCondition on the returned finalNodePool.Status.Conditions until it
becomes true (use the same timeout/interval pattern as the prior Reconciled
Eventually), keeping the existing Expect messages.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Central YAML (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Enterprise

Run ID: fc0c1584-429d-4a36-b3d9-616099770516

📥 Commits

Reviewing files that changed from the base of the PR and between eb52369 and 0b12f46.

📒 Files selected for processing (10)
  • e2e/adapter/adapter_failover.go
  • e2e/cluster/adapter_failure.go
  • e2e/cluster/concurrent_creation.go
  • e2e/cluster/crash_recovery.go
  • e2e/cluster/creation.go
  • e2e/cluster/update_edge_cases.go
  • e2e/nodepool/concurrent_creation.go
  • e2e/nodepool/creation.go
  • e2e/nodepool/update_edge_cases.go
  • pkg/client/constants.go

@Mischulee

/lgtm


openshift-ci Bot commented May 13, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Mischulee

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

openshift-merge-bot Bot merged commit 686b1ed into openshift-hyperfleet:main on May 13, 2026.
5 checks passed
