HYPERFLEET-1064 - feat: e2e test to use LastKnownReconciled#99
Conversation
📝 Walkthrough (Summary by CodeRabbit)
This pull request updates end-to-end test assertions and adds a new condition type constant to the client package.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Force-pushed 337b3f4 to f6dbf51
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/adapter/adapter_failover.go`:
- Around line 124-127: The current check uses
h.HasAnyResourceCondition(cluster.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) which is too permissive because it
permits one alias to be False while the other is True; change the assertion to
explicitly ensure neither alias is True — call HasAnyResourceCondition(...,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) and assert that is False (i.e. both are
not True), and optionally keep the original False-check to assert at least one
is False; reference the existing symbols HasAnyResourceCondition,
cluster.Status.Conditions, openapi.ResourceConditionStatusFalse/True,
client.ConditionTypeAvailable, and client.ConditionTypeLastKnownReconciled when
making the change.
In `@e2e/cluster/adapter_failure.go`:
- Around line 171-173: The current assertion uses h.HasAnyResourceCondition(...,
ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) which passes if either condition is
False and can mask regressions; change it to assert both conditions are False
explicitly by replacing the single HasAnyResourceCondition call with two
assertions that h.HasResourceCondition(cl.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable) and
h.HasResourceCondition(cl.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeLastKnownReconciled)
are true (or, if a helper exists, use a HasAll/HasBothResourceConditions helper
to require both condition types be False). Ensure you update the failure message
to reflect that both Available and LastKnownReconciled must remain False.
In `@e2e/nodepool/concurrent_creation.go`:
- Around line 114-117: The assertion immediately checks np.Status.Conditions
with h.HasAnyResourceCondition and Expect, which can flake if
Available/LastKnownReconciled lags Reconciled; change this to use Gomega's
Eventually to poll until h.HasAnyResourceCondition(np.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) returns true (or timeout/fail) instead
of a single Expect; locate the call site where HasAnyResourceCondition is
invoked (and the Expect that follows) and wrap that predicate in an Eventually
with a sensible timeout and interval to stabilize the test.
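The two failure modes the comments above describe (a permissive "any alias is False" check, and a one-shot read that races condition propagation) hinge on the any-of semantics of the condition helper. The sketch below is a minimal, dependency-free stand-in: the `Condition` type and `hasAnyCondition` function are hypothetical simplifications of the repo's `HasAnyResourceCondition`, not its actual implementation.

```go
package main

import "fmt"

// Condition is a minimal stand-in for an openapi resource condition.
type Condition struct {
	Type   string
	Status string
}

// hasAnyCondition reports whether ANY of the given condition types carries
// the given status — the permissive semantics the review flags.
func hasAnyCondition(conds []Condition, status string, types ...string) bool {
	for _, c := range conds {
		for _, t := range types {
			if c.Type == t && c.Status == status {
				return true
			}
		}
	}
	return false
}

func main() {
	// One alias already True, the other still False: the state the review
	// says the original assertion fails to catch.
	conds := []Condition{
		{Type: "Available", Status: "True"},
		{Type: "LastKnownReconciled", Status: "False"},
	}

	// The original check passes here, masking the True alias.
	fmt.Println(hasAnyCondition(conds, "False", "Available", "LastKnownReconciled")) // true

	// The tightened precondition ("no alias is True") fails as intended,
	// because some alias IS True.
	fmt.Println(!hasAnyCondition(conds, "True", "Available", "LastKnownReconciled")) // false
}
```

This is why the review asks for both directions: asserting "at least one is False" alone says nothing about whether the other alias has already flipped to True.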
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: ASSERTIVE
Plan: Enterprise
Run ID: 48630597-53f8-4157-8de8-f41e497280a5
📒 Files selected for processing (12)
- e2e/adapter/adapter_failover.go
- e2e/cluster/adapter_failure.go
- e2e/cluster/concurrent_creation.go
- e2e/cluster/crash_recovery.go
- e2e/cluster/creation.go
- e2e/cluster/update_edge_cases.go
- e2e/nodepool/concurrent_creation.go
- e2e/nodepool/creation.go
- e2e/nodepool/update_edge_cases.go
- pkg/client/constants.go
- pkg/helper/matchers.go
- pkg/helper/validation.go
Force-pushed f6dbf51 to eb52369
Actionable comments posted: 2
♻️ Duplicate comments (2)
e2e/nodepool/concurrent_creation.go (1)
114-117: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Use `Eventually` for the alias readiness assertion to avoid flakes. This still performs a one-shot read for `Available`/`LastKnownReconciled` after a separate reconciled wait, so condition propagation lag can fail nondeterministically.

Proposed fix:

```diff
-np, err := h.Client.GetNodePool(ctx, clusterID, npID)
-Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
-
-hasAvailable := h.HasAnyResourceCondition(np.Status.Conditions,
-	openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
-Expect(hasAvailable).To(BeTrue(),
-	"nodepool %d (%s) should have Available/LastKnownReconciled=True", i, npID)
+Eventually(h.PollNodePool(ctx, clusterID, npID), h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).
+	Should(helper.HaveAnyResourceCondition(
+		openapi.ResourceConditionStatusTrue,
+		client.ConditionTypeAvailable,
+		client.ConditionTypeLastKnownReconciled,
+	))
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/concurrent_creation.go` around lines 114 - 117: After waiting for reconciliation, replace the one-shot Expect check on h.HasAnyResourceCondition(np.Status.Conditions, openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled) with an Eventually that repeatedly polls that predicate until it returns true; specifically, wrap the call to HasAnyResourceCondition in a closure passed to Eventually and assert .Should(BeTrue(), <timeout>) (optionally with an interval), so the Available/LastKnownReconciled assertion does not flake due to propagation lag.
e2e/adapter/adapter_failover.go (1)
124-127: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Tighten the negative precondition for alias conditions. This passes when one alias is `False` even if the other is already `True`. Assert that no alias is `True`, then optionally keep the `False` check.

Proposed fix:

```diff
-hasAvailableFalse := h.HasAnyResourceCondition(cluster.Status.Conditions,
-	openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
-g.Expect(hasAvailableFalse).To(BeTrue(),
-	"initial cluster conditions should have Available/LastKnownReconciled=False")
+hasAnyReadyTrue := h.HasAnyResourceCondition(cluster.Status.Conditions,
+	openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
+g.Expect(hasAnyReadyTrue).To(BeFalse(),
+	"initial cluster conditions should NOT have Available/LastKnownReconciled=True")
+
+hasAnyReadyFalse := h.HasAnyResourceCondition(cluster.Status.Conditions,
+	openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled)
+g.Expect(hasAnyReadyFalse).To(BeTrue(),
+	"initial cluster conditions should have Available/LastKnownReconciled=False")
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/adapter/adapter_failover.go` around lines 124 - 127: The current check h.HasAnyResourceCondition(cluster.Status.Conditions, openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled) allows one alias to be False while the other is already True; tighten it by first asserting that no alias is True (e.g. call h.HasAnyResourceCondition with openapi.ResourceConditionStatusTrue for the same condition types and Expect it to be False) and then assert the False precondition (keep the existing False check) so both aliases are guaranteed not-True and at least one is False as intended; reference h.HasAnyResourceCondition, cluster.Status.Conditions, openapi.ResourceConditionStatusTrue/False, client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@e2e/cluster/concurrent_creation.go`:
- Around line 96-99: The single GetCluster/HasAnyResourceCondition check for
Available/LastKnownReconciled should be converted into a polled assertion:
replace the one-shot call that uses
h.HasAnyResourceCondition(cluster.Status.Conditions,
client.ConditionTypeAvailable, client.ConditionTypeLastKnownReconciled) with an
Eventually block that repeatedly calls GetCluster(i, clusterID) and re-evaluates
h.HasAnyResourceCondition until it returns true; use the same Eventually
timeout/interval pattern used elsewhere in tests and keep the
Expect(hasAvailable).To(BeTrue(), ...) message intact, referencing GetCluster,
HasAnyResourceCondition, Eventually, client.ConditionTypeAvailable and
client.ConditionTypeLastKnownReconciled.
In `@e2e/nodepool/creation.go`:
- Around line 133-135: Replace the immediate snapshot assertion that calls
h.HasAnyResourceCondition(...) and Expect(...) with the same polling pattern
used in update_edge_cases.go: repeatedly fetch the nodepool (update
finalNodePool) on an interval and call
h.HasAnyResourceCondition(finalNodePool.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) until it returns true or a timeout
elapses, then assert true; ensure you use the same timeout/interval values as
the existing poll in update_edge_cases.go so this check waits for the
Available/LastKnownReconciled condition to appear rather than racing on a single
snapshot.
---
Duplicate comments:
In `@e2e/adapter/adapter_failover.go`:
- Around line 124-127: The current check
h.HasAnyResourceCondition(cluster.Status.Conditions,
openapi.ResourceConditionStatusFalse, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) allows one alias to be False while the
other is already True; tighten it by first asserting that no alias is True (e.g.
call h.HasAnyResourceCondition with openapi.ResourceConditionStatusTrue for the
same condition types and Expect it to be False) and then assert the False
precondition (keep the existing False check) so both aliases are guaranteed
not-True and at least one is False as intended; reference
h.HasAnyResourceCondition, cluster.Status.Conditions,
openapi.ResourceConditionStatusTrue/False, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled.
In `@e2e/nodepool/concurrent_creation.go`:
- Around line 114-117: After waiting for reconciliation, replace the one-shot
Expect check on h.HasAnyResourceCondition(np.Status.Conditions,
openapi.ResourceConditionStatusTrue, client.ConditionTypeAvailable,
client.ConditionTypeLastKnownReconciled) with an Eventually that repeatedly
polls that predicate until it returns true; specifically, wrap the call to
HasAnyResourceCondition in a closure passed to Eventually and assert
.Should(BeTrue(), <timeout>) (optionally with an interval), so the
Available/LastKnownReconciled assertion does not flake due to propagation lag.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: ASSERTIVE
Plan: Enterprise
Run ID: 45c12f3b-0519-42d9-8ad4-37fcf7120284
📒 Files selected for processing (12)
- e2e/adapter/adapter_failover.go
- e2e/cluster/adapter_failure.go
- e2e/cluster/concurrent_creation.go
- e2e/cluster/crash_recovery.go
- e2e/cluster/creation.go
- e2e/cluster/update_edge_cases.go
- e2e/nodepool/concurrent_creation.go
- e2e/nodepool/creation.go
- e2e/nodepool/update_edge_cases.go
- pkg/client/constants.go
- pkg/helper/matchers.go
- pkg/helper/validation.go
Force-pushed eb52369 to 0b12f46
Actionable comments posted: 1
♻️ Duplicate comments (3)
e2e/nodepool/creation.go (1)
125-135: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Wrap the `LastKnownReconciled` readiness check in `Eventually` to prevent race conditions. After polling `Reconciled=True` (lines 122-123), the single `GetNodePool` call at line 125 may capture a state where the API-computed `LastKnownReconciled` condition has not yet transitioned to True, causing intermittent test failures.

🔄 Proposed fix using Eventually pattern:

```diff
-finalNodePool, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
-Expect(err).NotTo(HaveOccurred(), "failed to get final nodepool state")
-Expect(finalNodePool.Status).NotTo(BeNil(), "nodepool status should be present")
-
-hasReconciled := h.HasResourceCondition(finalNodePool.Status.Conditions,
-	client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
-Expect(hasReconciled).To(BeTrue(), "nodepool should have Reconciled=True condition")
-
-hasAvailable := h.HasResourceCondition(finalNodePool.Status.Conditions,
-	client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-Expect(hasAvailable).To(BeTrue(), "nodepool should have LastKnownReconciled=True condition")
+Eventually(func(g Gomega) {
+	finalNodePool, err := h.Client.GetNodePool(ctx, clusterID, nodepoolID)
+	g.Expect(err).NotTo(HaveOccurred(), "failed to get final nodepool state")
+	g.Expect(finalNodePool.Status).NotTo(BeNil(), "nodepool status should be present")
+
+	hasReconciled := h.HasResourceCondition(finalNodePool.Status.Conditions,
+		client.ConditionTypeReconciled, openapi.ResourceConditionStatusTrue)
+	g.Expect(hasReconciled).To(BeTrue(), "nodepool should have Reconciled=True condition")
+
+	hasLastKnown := h.HasResourceCondition(finalNodePool.Status.Conditions,
+		client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+	g.Expect(hasLastKnown).To(BeTrue(), "nodepool should have LastKnownReconciled=True condition")
+}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/creation.go` around lines 125 - 135: The LastKnownReconciled readiness check is racy because it uses a single GetNodePool after Reconciled=True; change the check that uses HasResourceCondition(..., ConditionTypeLastKnownReconciled, ResourceConditionStatusTrue) to be wrapped in an Eventually that repeatedly calls h.Client.GetNodePool(ctx, clusterID, nodepoolID) and asserts HasResourceCondition on the returned finalNodePool.Status.Conditions until it becomes true (use the same timeout/interval pattern as the prior Reconciled Eventually), keeping the existing Expect messages.
e2e/nodepool/concurrent_creation.go (1)
111-117: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Wrap the `LastKnownReconciled` readiness check in `Eventually` to prevent race conditions. After polling `Reconciled=True` (lines 108-109), the single `GetNodePool` call at line 111 can capture a state where `LastKnownReconciled` has not yet transitioned to True, especially under concurrent load.

🔄 Proposed fix using Eventually pattern:

```diff
-np, err := h.Client.GetNodePool(ctx, clusterID, npID)
-Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
-
-hasAvailable := h.HasResourceCondition(np.Status.Conditions,
-	client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-Expect(hasAvailable).To(BeTrue(),
-	"nodepool %d (%s) should have LastKnownReconciled=True", i, npID)
+Eventually(func(g Gomega) {
+	np, err := h.Client.GetNodePool(ctx, clusterID, npID)
+	g.Expect(err).NotTo(HaveOccurred(), "failed to get nodepool %d (%s)", i, npID)
+	hasLastKnown := h.HasResourceCondition(np.Status.Conditions,
+		client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+	g.Expect(hasLastKnown).To(BeTrue(),
+		"nodepool %d (%s) should have LastKnownReconciled=True", i, npID)
+}, h.Cfg.Timeouts.NodePool.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/nodepool/concurrent_creation.go` around lines 111 - 117: The current direct GetNodePool/HasResourceCondition check can race; wrap this readiness assertion in an Eventually poll to retry until LastKnownReconciled becomes true. Replace the single GetNodePool call and immediate Expect with an Eventually block that repeatedly calls h.Client.GetNodePool(ctx, clusterID, npID), inspects np.Status.Conditions using h.HasResourceCondition(..., client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue), and only Expect(true) after the condition is observed (with a suitable timeout and interval) so transient states under concurrent load are retried.
e2e/cluster/concurrent_creation.go (1)
93-99: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Wrap the `LastKnownReconciled` readiness check in `Eventually` to prevent race conditions. After polling `Reconciled=True` (lines 90-91), the single `GetCluster` call at line 93 can capture a state where `LastKnownReconciled` has not yet transitioned to True, particularly under concurrent load where condition updates may lag.

🔄 Proposed fix using Eventually pattern:

```diff
-cluster, err := h.Client.GetCluster(ctx, clusterID)
-Expect(err).NotTo(HaveOccurred(), "failed to get cluster %d (%s)", i, clusterID)
-
-hasAvailable := h.HasResourceCondition(cluster.Status.Conditions,
-	client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
-Expect(hasAvailable).To(BeTrue(),
-	"cluster %d (%s) should have LastKnownReconciled=True", i, clusterID)
+Eventually(func(g Gomega) {
+	cluster, err := h.Client.GetCluster(ctx, clusterID)
+	g.Expect(err).NotTo(HaveOccurred(), "failed to get cluster %d (%s)", i, clusterID)
+	hasLastKnown := h.HasResourceCondition(cluster.Status.Conditions,
+		client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
+	g.Expect(hasLastKnown).To(BeTrue(),
+		"cluster %d (%s) should have LastKnownReconciled=True", i, clusterID)
+}, h.Cfg.Timeouts.Cluster.Reconciled, h.Cfg.Polling.Interval).Should(Succeed())
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@e2e/cluster/concurrent_creation.go` around lines 93 - 99: The LastKnownReconciled readiness check can race; replace the single h.Client.GetCluster + h.HasResourceCondition assertion with an Eventually polling block that repeatedly calls h.Client.GetCluster(ctx, clusterID) and checks h.HasResourceCondition(cluster.Status.Conditions, client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue) until it becomes true (use the same timeout/interval pattern used for the Reconciled=True poll). Ensure the Eventually message references the cluster index/ID for clarity and keep the Expect inside Eventually to assert BeTrue once the condition stabilizes.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@pkg/client/constants.go`:
- Line 22: The comment for ConditionTypeLastKnownReconciled misleadingly
says "(renamed from Available)"; update the comment for
ConditionTypeLastKnownReconciled to state that it is a new API-computed
aggregated cluster/nodepool-level condition (not a rename), and clarify that
ConditionTypeAvailable remains an adapter-level condition. Locate the constant
ConditionTypeLastKnownReconciled in pkg/client/constants.go and replace the
parenthetical to reflect it is a new cluster/nodepool-level aggregated condition
and not renaming ConditionTypeAvailable.
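The corrected comment the review asks for might read roughly as sketched below. The constant string values and the presence of other constants are assumptions for illustration; only the identifiers ConditionTypeAvailable, ConditionTypeReconciled, and ConditionTypeLastKnownReconciled appear in this PR.

```go
package main

import "fmt"

const (
	// ConditionTypeAvailable remains an adapter-level condition.
	ConditionTypeAvailable = "Available"

	// ConditionTypeReconciled reports whether the adapter has reconciled
	// the resource's current spec.
	ConditionTypeReconciled = "Reconciled"

	// ConditionTypeLastKnownReconciled is a new API-computed, aggregated
	// cluster/nodepool-level condition; it is NOT a rename of Available.
	ConditionTypeLastKnownReconciled = "LastKnownReconciled"
)

func main() {
	fmt.Println(ConditionTypeAvailable, ConditionTypeLastKnownReconciled)
}
```

The point of the wording change is that both conditions coexist at different levels, so a reader should not expect Available to disappear.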
---
Duplicate comments:
In `@e2e/cluster/concurrent_creation.go`:
- Around line 93-99: The LastKnownReconciled readiness check can race; replace
the single h.Client.GetCluster + h.HasResourceCondition assertion with an
Eventually polling block that repeatedly calls h.Client.GetCluster(ctx,
clusterID) and checks h.HasResourceCondition(cluster.Status.Conditions,
client.ConditionTypeLastKnownReconciled, openapi.ResourceConditionStatusTrue)
until it becomes true (use the same timeout/interval pattern used for the
Reconciled=True poll). Ensure the Eventually message references the cluster
index/ID for clarity and keep the Expect inside Eventually to assert BeTrue once
the condition stabilizes.
In `@e2e/nodepool/concurrent_creation.go`:
- Around line 111-117: The current direct GetNodePool/HasResourceCondition check
can race; wrap this readiness assertion in an Eventually poll to retry until
LastKnownReconciled becomes true. Replace the single GetNodePool call and
immediate Expect with an Eventually block that repeatedly calls
h.Client.GetNodePool(ctx, clusterID, npID), inspects np.Status.Conditions using
h.HasResourceCondition(..., client.ConditionTypeLastKnownReconciled,
openapi.ResourceConditionStatusTrue), and only Expect(true) after the condition
is observed (with a suitable timeout and interval) so transient states under
concurrent load are retried.
In `@e2e/nodepool/creation.go`:
- Around line 125-135: The LastKnownReconciled readiness check is racy because
it uses a single GetNodePool after Reconciled=True; change the check that uses
HasResourceCondition(..., ConditionTypeLastKnownReconciled,
ResourceConditionStatusTrue) to be wrapped in an Eventually that repeatedly
calls h.Client.GetNodePool(ctx, clusterID, nodepoolID) and asserts
HasResourceCondition on the returned finalNodePool.Status.Conditions until it
becomes true (use the same timeout/interval pattern as the prior Reconciled
Eventually), keeping the existing Expect messages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Central YAML (base), Organization UI (inherited)
Review profile: ASSERTIVE
Plan: Enterprise
Run ID: fc0c1584-429d-4a36-b3d9-616099770516
📒 Files selected for processing (10)
- e2e/adapter/adapter_failover.go
- e2e/cluster/adapter_failure.go
- e2e/cluster/concurrent_creation.go
- e2e/cluster/crash_recovery.go
- e2e/cluster/creation.go
- e2e/cluster/update_edge_cases.go
- e2e/nodepool/concurrent_creation.go
- e2e/nodepool/creation.go
- e2e/nodepool/update_edge_cases.go
- pkg/client/constants.go
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Mischulee. The full list of commands accepted by this bot can be found here; the pull request process is described here.
686b1ed merged into openshift-hyperfleet:main
Summary
openshift-hyperfleet/hyperfleet-api#127 changes `resource.conditions.Available` to `resource.conditions.LastKnownReconciled`, but the tests were checking `Available`, so they have to be changed to read the new condition once that PR is approved.