
OCPBUGS-84528: clarify pull secret in-place update behavior and add CP watches #8327

Merged
enxebre merged 2 commits into openshift:main from enxebre:enxebre/pull-secret-docs-and-watch
May 7, 2026

Conversation


@enxebre enxebre commented Apr 24, 2026

Summary

  • Clarify the HostedCluster.spec.pullSecret API godoc to accurately describe in-place Secret update propagation: no NodePool rollout occurs, but on AWS/Azure Replace NodePools the credentials still reach the guest cluster and kubelet via the Global Pull Secret DaemonSet path
  • Add CP pull-secret watch to the globalps HCCO controller so kube-system/original-pull-secret reconciles promptly on in-place updates (the resources controller already had a watch added for openshift-config/pull-secret)
  • Overhaul global-pull-secret.md to document platform/NodePool eligibility, positive-selection scheduling model, and baseline sync behavior without additional-pull-secret
  • Add "Pull secret cycling" section to AGENTS.md so AI agents treat spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when making changes
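The watch-to-reconcile wiring in the second bullet can be sketched as a plain mapping function. The `request` type and the `mapPullSecretEvent` helper below are hypothetical illustrations of the controller-runtime pattern, not the PR's actual code:

```go
package main

import "fmt"

// request mirrors controller-runtime's reconcile.Request in spirit:
// it names the object that should be re-reconciled.
type request struct {
	Namespace, Name string
}

// mapPullSecretEvent is a hypothetical stand-in for the map function a
// Watch on control-plane Secrets would use: an update to pull-secret in
// the hosted control plane namespace enqueues one reconcile, so that
// kube-system/original-pull-secret is refreshed promptly; every other
// Secret event is ignored.
func mapPullSecretEvent(eventNamespace, eventName, hcpNamespace string) []request {
	if eventNamespace != hcpNamespace || eventName != "pull-secret" {
		return nil
	}
	return []request{{Namespace: hcpNamespace, Name: "pull-secret"}}
}

func main() {
	fmt.Println(mapPullSecretEvent("clusters-example", "pull-secret", "clusters-example"))
	fmt.Println(mapPullSecretEvent("clusters-example", "some-other-secret", "clusters-example"))
}
```

In the real controller this mapping would be registered via a `Watches(...)` call with an enqueue handler; the sketch only shows the filtering decision that makes in-place updates trigger prompt reconciliation.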

Test plan

  • Verify make verify passes (API generation, CRD manifests, lint)
  • Verify HCCO starts cleanly with both new watches (resources + globalps)
  • Test in-place pull secret update on AWS Replace NodePool: confirm openshift-config/pull-secret, kube-system/original-pull-secret, and kubelet config.json all update promptly without NodePool rollout
  • Test on a non-AWS/Azure platform: confirm secrets update in-cluster but no DaemonSet pods schedule (positive-selection label absent)

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Documentation
    • Clarified pull secret propagation semantics and when node pool rollouts are triggered
    • Added guidance on platform and node pool eligibility for global pull secret synchronization
    • Updated global pull secret reconciliation behavior and merge mechanics

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To trigger manually all jobs from second stage use /pipeline required command.

This repository is configured in: LGTM mode

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) label Apr 24, 2026
@openshift-ci-robot

@enxebre: This pull request explicitly references no jira issue.

Details

In response to this:

Summary

  • Clarify HostedCluster.spec.pullSecret API godoc to accurately describe in-place Secret update propagation: no NodePool rollout, but on AWS/Azure Replace NodePools credentials still reach guest cluster and kubelet via the Global Pull Secret DaemonSet path
  • Add CP pull-secret watch to the globalps HCCO controller so kube-system/original-pull-secret reconciles promptly on in-place updates (the resources controller already had a watch added for openshift-config/pull-secret)
  • Overhaul global-pull-secret.md to document platform/NodePool eligibility, positive-selection scheduling model, and baseline sync behavior without additional-pull-secret
  • Add omitzero JSON tag to PullSecret field and update generated CRDs

Test plan

  • Verify make verify passes (API generation, CRD manifests, lint)
  • Verify HCCO starts cleanly with both new watches (resources + globalps)
  • Test in-place pull secret update on AWS Replace NodePool: confirm openshift-config/pull-secret, kube-system/original-pull-secret, and kubelet config.json all update promptly without NodePool rollout
  • Test on a non-AWS/Azure platform: confirm secrets update in-cluster but no DaemonSet pods schedule (positive-selection label absent)

🤖 Generated with Claude Code

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci openshift-ci Bot added the do-not-merge/needs-area, needs-rebase (indicates a PR cannot be merged because it has merge conflicts with HEAD), and approved (indicates a PR has been approved by an approver from all required OWNERS files) labels Apr 24, 2026
@openshift-ci openshift-ci Bot requested a review from Nirshal April 24, 2026 08:25
@coderabbitai
Contributor

coderabbitai Bot commented Apr 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


The pull request introduces comprehensive documentation and controller instrumentation for pull secret cycling in HyperShift. Changes include documentation of how HostedCluster.spec.pullSecret modifications trigger NodePool rollouts while in-place Secret data updates do not, clarification of propagation behavior across control-plane and guest namespaces, and updated global pull secret mechanics showing baseline reconciliation with optional merge logic. Two new controller watches are added to monitor pull-secret changes in the control-plane namespace and respond with reconciliation requests upon Secret modification events.

🚥 Pre-merge checks | ✅ 11 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (11 passed)

  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Stable And Deterministic Test Names — ✅ Passed: this PR contains no Ginkgo test code or test file modifications; it only includes documentation updates and controller implementation changes.
  • Test Structure And Quality — ✅ Passed: the custom check for Ginkgo test quality is not applicable; this PR contains standard Go unit tests with Gomega assertions, not Ginkgo tests.
  • Microshift Test Compatibility — ✅ Passed: the pull request does not add any new Ginkgo e2e tests; modifications include documentation updates, API type comments, and controller watch registration code only.
  • Single Node Openshift (SNO) Test Compatibility — ✅ Passed: this PR contains no new Ginkgo e2e tests, only documentation and controller code updates.
  • Topology-Aware Scheduling Compatibility — ✅ Passed: the PR introduces no scheduling constraints; changes are watch registrations and documentation updates only.
  • Ote Binary Stdout Contract — ✅ Passed: this PR does not contain OTE test binary code; it only modifies production controller code and documentation files.
  • Ipv6 And Disconnected Network Test Compatibility — ✅ Passed: the PR contains no new Ginkgo e2e tests; only documentation and API type updates.
  • Title check — ✅ Passed: the pull request title accurately summarizes the main changes: clarifying pull secret in-place update behavior and adding control-plane watches.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot requested a review from sjenning April 24, 2026 08:25
@openshift-ci openshift-ci Bot added the area/api (changes for the API), area/cli (changes for the CLI), area/control-plane-operator (changes for the control plane operator, in an OCP release), and area/documentation (changes for documentation) labels, and removed the do-not-merge/needs-area label Apr 24, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

106-107: Clarify the pull secret propagation path terminology.

The documentation alternates between "HostedControlPlane pull secret" and "HostedCluster.spec.pullSecret" which may confuse readers about the actual data flow. Consider adding a brief clarification early in this section explaining the path:

  1. HostedCluster.spec.pullSecret (field referencing a Secret in the management cluster)
  2. Secret reconciled into the HostedControlPlane namespace by HyperShift operator
  3. HCCO copies that Secret's data into kube-system/original-pull-secret in the data plane

This would help readers understand that "HostedControlPlane pull secret" and "the Secret referenced by HostedCluster.spec.pullSecret" have the same content but exist at different points in the reconciliation chain.

Also applies to: 112-112, 134-134

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 106 - 107, The
doc alternates terms and needs an explicit clarification of the pull-secret
propagation path: add a short paragraph early in this section that states the
chain as: the field HostedCluster.spec.pullSecret (a reference to a Secret in
the management cluster) → the Secret reconciled into the HostedControlPlane
namespace by the HyperShift operator → HCCO copies that Secret's data into
kube-system/original-pull-secret in the data plane (and note that
“HostedControlPlane pull secret” and the Secret referenced by
HostedCluster.spec.pullSecret contain the same content but live in different
namespaces/steps); reference HCCO, HostedControlPlane namespace,
HostedCluster.spec.pullSecret, kube-system/original-pull-secret,
additional-pull-secret and global-pull-secret so readers can map the terminology
used later in lines ~106, ~112 and ~134.

335-335: Minor phrasing ambiguity in MCD conflict explanation.

Line 335 states "without colliding with MCD on InPlace upgrade paths" which might be read as referring to InPlace paths on Replace workers. Consider rephrasing for clarity:

"so the DaemonSet can run there without the MCD conflicts that would occur on InPlace workers"

This makes it clearer that the conflict avoidance is about not running on InPlace workers, rather than about Replace workers having some relationship to InPlace upgrade paths.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` at line 335, Replace the
ambiguous phrase "without colliding with MCD on InPlace upgrade paths" (near the
sentence mentioning hypershift.openshift.io/nodepool-globalps-enabled=true) with
a clearer wording such as "so the DaemonSet can run there without the MCD
conflicts that would occur on InPlace workers" to make explicit that conflict
avoidance refers to not running on InPlace workers; update the sentence in the
same paragraph to use that phrasing.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: b724835d-8ccc-4f33-bd5d-703b48df6202

📥 Commits

Reviewing files that changed from the base of the PR and between 3e0e06f and eb4f4ec.

⛔ Files ignored due to path filters (19)
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • AGENTS.md
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md

Comment on lines +682 to +683
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
Contributor

@coderabbitai coderabbitai Bot Apr 24, 2026


⚠️ Potential issue | 🟡 Minor

Avoid a commit-pinned docs link in the API comment.

Hardcoding a blob URL to a specific SHA means the CRD/godoc description will drift as the docs evolve or get backported. Prefer the published docs path or a branch-relative reference.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/hypershift/v1beta1/hostedcluster_types.go` around lines 682 - 683, The
API comment contains a commit-pinned GitHub URL to the Hypershift docs (the link
with SHA 3e0e06f7...), which will drift as docs change; update the comment in
hostedcluster_types.go to point to a non-pinned docs location instead — either
the published docs path or a branch-relative URL (e.g., the docs path under the
repository root or a /tree/main or /blob/main link) so the CRD/godoc reference
stays current; replace the SHA-pinned URL in the comment block with the chosen
stable URL.

Contributor


Agree

Contributor


@jparrill, thanks for confirming! Here's the fix to replace the commit-pinned URL with a stable main-branch reference:

  • Replace the SHA-pinned blob URL in the PullSecret godoc with a /blob/main/ URL


Member Author

@enxebre enxebre Apr 24, 2026


that's intentional so behaviour is versioned. If you point to a floating link, a consumer reading this for version N will see the description of whatever the behaviour is for latest (if that link still exists), which doesn't necessarily apply

Contributor


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from eb4f4ec to 0c991ce on April 24, 2026 08:43
@openshift-ci openshift-ci Bot removed the needs-rebase (indicates a PR cannot be merged because it has merge conflicts with HEAD) label Apr 24, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 0c991ce to 88bfa0c on April 24, 2026 08:47
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 139-143: The docs text says that deleting additional-pull-secret
causes HCCO to remove global-pull-secret, revert to syncing only
original-pull-secret, and that the DaemonSet continues to run and keeps
/var/lib/kubelet/config.json aligned; update the architecture diagram to match
by removing the "Remove DaemonSet" cleanup path (or change it to a "DaemonSet
remains" state), and ensure arrows/labels show reversion to original-pull-secret
and retention of the DaemonSet rather than deletion.
- Around line 104-109: Update the paragraph describing HCCO watches to
explicitly list both Secret watch sources: the data-plane `kube-system` Secrets
(e.g., `kube-system/original-pull-secret`) and the
HostedControlPlane/HostedCluster namespace `pull-secret` (the
HostedCluster.spec.pullSecret source) so readers know HCCO watches both the
management-cluster Secret and the in-place HCP pull-secret for prompt updates;
keep references to `global-pull-secret`, `additional-pull-secret`, and the
DaemonSet sync behavior unchanged while adding the explicit mention that
reconciliation watches the HCP namespace `pull-secret` in addition to
`kube-system`.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 5bbc574d-9eca-4a3e-a997-901b97d9f287

📥 Commits

Reviewing files that changed from the base of the PR and between eb4f4ec and 0c991ce.

⛔ Files ignored due to path filters (19): the same 19 generated CRD manifest, vendored, and reference-doc files listed in the previous review.
📒 Files selected for processing (5)
  • AGENTS.md
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/hypershift/v1beta1/hostedcluster_types.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

104-109: ⚠️ Potential issue | 🟡 Minor

Document both watch sources for completeness.

Line 105 mentions only kube-system namespace watches, but the PR objectives state that a control-plane pull-secret watch was added to the globalps HCCO controller. To give readers the complete picture of how in-place Secret updates trigger reconciliation, please mention both:

  • the data-plane kube-system Secrets, and
  • the control-plane pull-secret (the HostedCluster.spec.pullSecret source in the HCP namespace).

This aligns with the "pull secret cycling" model where in-place management-cluster Secret data updates flow through HCCO reconciliation into the guest cluster.

📝 Suggested doc update
 ### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane pull-secret Secret in the HostedControlPlane namespace, ensuring in-place pull-secret updates trigger prompt reconciliation.
 - On every reconcile, HCCO copies the HostedControlPlane pull secret (sourced from **`HostedCluster.spec.pullSecret`**) into `kube-system/original-pull-secret` so the DaemonSet can mount it on the node.

Based on learnings: Treat HostedCluster spec.pullSecret, management-cluster Secret data, HCCO reconciliation, and optional Global Pull Secret as one system when changing worker authentication to registries.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 104 - 109,
Update the paragraph under "Automatic detection and baseline sync" to mention
that HCCO watches both the data-plane kube-system Secrets and the control-plane
HostedCluster pull-secret (HostedCluster.spec.pullSecret / HostedControlPlane
pull secret in the HCP namespace), so clarify that reconciliation is triggered
by in-place updates to either the kube-system secrets or the control-plane
pull-secret and that HCCO copies the HostedControlPlane pull secret into
kube-system/original-pull-secret, removes or merges the global-pull-secret
depending on the presence of additional-pull-secret, and then the DaemonSet
syncs the resulting secret into /var/lib/kubelet/config.json on eligible nodes.
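The copy-or-merge decision described above can be sketched as a small function over dockerconfigjson auth maps. The `mergeAuths` helper and its overlay precedence (additional entries win over the baseline) are illustrative assumptions, not necessarily HCCO's exact rule:

```go
package main

import "fmt"

// mergeAuths sketches the global pull secret merge mechanics: start from
// the original pull secret's registry auths, then overlay the optional
// additional pull secret. When additional is nil or empty, the result is
// just the baseline, matching the behavior where only
// original-pull-secret is synced.
// NOTE: the overlay precedence shown here is an illustrative assumption.
func mergeAuths(original, additional map[string]string) map[string]string {
	merged := make(map[string]string, len(original)+len(additional))
	for registry, auth := range original {
		merged[registry] = auth
	}
	for registry, auth := range additional {
		merged[registry] = auth // additional entries overlay the baseline
	}
	return merged
}

func main() {
	original := map[string]string{"registry.redhat.io": "auth-a"}
	additional := map[string]string{"quay.io/private": "auth-b"}
	fmt.Println(mergeAuths(original, additional))
	fmt.Println(mergeAuths(original, nil)) // baseline: original auths only
}
```

Deleting the additional pull secret then corresponds to calling the merge with a nil overlay: the global-pull-secret artifact goes away, but the baseline sync continues.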

301-302: ⚠️ Potential issue | 🟡 Minor

Update diagram to match documented cleanup behavior.

Line 302 shows the cleanup process removing the DaemonSet, but lines 139-143 clearly state: "The DaemonSet continues to run on eligible nodes and keeps /var/lib/kubelet/config.json aligned with that HostedCluster pull secret." The code (context snippet 1) confirms that the DaemonSet is reconciled even when additional-pull-secret doesn't exist—it simply syncs original-pull-secret instead of the merged global-pull-secret.

Please update the diagram to reflect that cleanup deletes global-pull-secret but the DaemonSet remains active and reverts to syncing only original-pull-secret.

🎨 Suggested diagram fix
     GlobalPSController --> |If additional PS deleted| CleanupProcess[Cleanup Process]
     CleanupProcess --> |Deletes global PS| DeleteGlobalPS[Delete global-pull-secret]
-    CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+    CleanupProcess --> |DaemonSet remains active| RevertToOriginal[Revert to syncing original-pull-secret only]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 301 - 302, The
diagram incorrectly shows CleanupProcess removing the DaemonSet; update it so
CleanupProcess only deletes the global-pull-secret (DeleteGlobalPS/Delete
global-pull-secret) while the DaemonSet (RemoveDaemonSet node should be removed
or changed) remains active and continues reconciling by syncing the
original-pull-secret when additional-pull-secret is absent. Specifically, remove
the arrow/statement that deletes the DaemonSet (RemoveDaemonSet) and instead
add/label a connection indicating the DaemonSet stays running and reverts to
syncing original-pull-secret (not global-pull-secret) when
additional-pull-secret is not present.
🧹 Nitpick comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

333-336: Minor wording improvement for positive selection explanation.

Lines 333-336 accurately describe the positive-selection model, but "not NodeAffinity on an InPlace label" could be clearer. Consider rephrasing to emphasize it's an opt-in approach where only labeled nodes are eligible, rather than defining it by what it's not.

✏️ Suggested rewording
-### How scheduling avoids InPlace conflicts
+### How node scheduling works (positive selection)

-Eligibility is **positive selection**, not NodeAffinity on an InPlace label: InPlace workers simply **never** receive `hypershift.openshift.io/nodepool-globalps-enabled=true`, so the sync DaemonSet does not place pods on them. Replace workers on AWS/Azure **do** receive the label so the DaemonSet can run there without colliding with MCD on InPlace upgrade paths.
+The DaemonSet uses **positive selection** via `nodeSelector`: only nodes labeled `hypershift.openshift.io/nodepool-globalps-enabled=true` are eligible to run sync pods. InPlace workers **never** receive this label (avoiding conflicts with Machine Config Daemon), while AWS/Azure Replace workers **do** receive it (enabling kubelet config sync via the DaemonSet).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 333 - 336,
Rephrase the sentence about positive selection to make it explicit that
scheduling is opt-in: replace the fragment "not NodeAffinity on an InPlace
label" with wording that states only nodes carrying the
hypershift.openshift.io/nodepool-globalps-enabled=true label are eligible (i.e.,
the sync DaemonSet only targets labeled nodes), and clarify that InPlace workers
never receive that label while Replace workers on AWS/Azure do so to avoid
colliding with the MCD daemonset.
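The positive-selection model discussed above reduces to a single opt-in label check: a node is eligible only if it carries the enablement label. A hypothetical Go sketch (the helper name is illustrative; only the label key comes from the docs):

```go
package main

import "fmt"

// globalPSEnabledLabel is the node label documented above; only nodes that
// carry it with value "true" are targeted by the sync DaemonSet's nodeSelector.
const globalPSEnabledLabel = "hypershift.openshift.io/nodepool-globalps-enabled"

// eligibleForGlobalPSSync is a hypothetical helper illustrating positive
// selection: eligibility is opt-in via the label, so an unlabeled (InPlace)
// node is never selected.
func eligibleForGlobalPSSync(nodeLabels map[string]string) bool {
	return nodeLabels[globalPSEnabledLabel] == "true"
}

func main() {
	replaceNode := map[string]string{globalPSEnabledLabel: "true"}
	inPlaceNode := map[string]string{} // label intentionally absent
	fmt.Println(eligibleForGlobalPSSync(replaceNode), eligibleForGlobalPSSync(inPlaceNode))
}
```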

274-277: Clarify conditional Secret reading in diagram.

Lines 276-277 show arrows suggesting both global-pull-secret and original-pull-secret are read concurrently, but the actual behavior (documented at lines 134-135) is: read global-pull-secret when present, otherwise read original-pull-secret.

Consider updating the diagram flow to show this is a conditional choice rather than parallel reads—perhaps by routing through a decision node first, or adding "(fallback)" to the original-pull-secret arrow.

💡 Suggested diagram improvement

Modify the flow to show the choice more clearly:

    SyncCommand --> |Reads mounted files| SyncController[sync-global-pullsecret loop]
    SyncController --> |Check presence| CheckGlobalPS{global-pull-secret exists?}
    CheckGlobalPS --> |Yes| ReadGlobalPS[Read global-pull-secret mount]
    CheckGlobalPS --> |No| ReadOriginalPS[Read original-pull-secret mount]


Or add labels to existing arrows to indicate the conditional nature:

    SyncController --> |Reads if present| ReadGlobalPS[Read global-pull-secret mount]
    SyncController --> |Reads as fallback| ReadOriginalPS[Read original-pull-secret mount]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 274 - 277, The
diagram incorrectly implies concurrent reads of global-pull-secret and
original-pull-secret; update the flow so SyncController (sync-global-pullsecret
loop) performs a conditional check: add a decision node (e.g.,
CheckGlobalPS{global-pull-secret exists?}) between SyncController and the read
nodes, routing Yes -> ReadGlobalPS[Read global-pull-secret mount] and No ->
ReadOriginalPS[Read original-pull-secret mount]; alternatively, if you prefer
minimal change, keep SyncController -> ReadGlobalPS and SyncController ->
ReadOriginalPS but change the arrow labels to "Reads if present" and "Reads as
fallback" respectively to clearly indicate the conditional/fallback behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Line 215: The "Node eligibility assessment" bullet incorrectly says "Labels
nodes from InPlace NodePools"—update this to reflect the actual behavior:
InPlace NodePools are intentionally NOT labeled with
hypershift.openshift.io/nodepool-globalps-enabled=true and only AWS/Azure
Replace NodePool nodes receive that label; change the text under "Node
eligibility assessment" to state that InPlace nodes are excluded from labeling
and that the DaemonSet scheduling restrictions apply only to nodes labeled
hypershift.openshift.io/nodepool-globalps-enabled=true (i.e., Replace NodePool
nodes).

---

Duplicate comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 104-109: Update the paragraph under "Automatic detection and
baseline sync" to mention that HCCO watches both the data-plane kube-system
Secrets and the control-plane HostedCluster pull-secret
(HostedCluster.spec.pullSecret / HostedControlPlane pull secret in the HCP
namespace), so clarify that reconciliation is triggered by in-place updates to
either the kube-system secrets or the control-plane pull-secret and that HCCO
copies the HostedControlPlane pull secret into kube-system/original-pull-secret,
removes or merges the global-pull-secret depending on the presence of
additional-pull-secret, and then the DaemonSet syncs the resulting secret into
/var/lib/kubelet/config.json on eligible nodes.
- Around line 301-302: The diagram incorrectly shows CleanupProcess removing the
DaemonSet; update it so CleanupProcess only deletes the global-pull-secret
(DeleteGlobalPS/Delete global-pull-secret) while the DaemonSet (RemoveDaemonSet
node should be removed or changed) remains active and continues reconciling by
syncing the original-pull-secret when additional-pull-secret is absent.
Specifically, remove the arrow/statement that deletes the DaemonSet
(RemoveDaemonSet) and instead add/label a connection indicating the DaemonSet
stays running and reverts to syncing original-pull-secret (not
global-pull-secret) when additional-pull-secret is not present.

---

Nitpick comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 333-336: Rephrase the sentence about positive selection to make it
explicit that scheduling is opt-in: replace the fragment "not NodeAffinity on an
InPlace label" with wording that states only nodes carrying the
hypershift.openshift.io/nodepool-globalps-enabled=true label are eligible (i.e.,
the sync DaemonSet only targets labeled nodes), and clarify that InPlace workers
never receive that label while Replace workers on AWS/Azure do so to avoid
colliding with the MCD daemonset.
- Around line 274-277: The diagram incorrectly implies concurrent reads of
global-pull-secret and original-pull-secret; update the flow so SyncController
(sync-global-pullsecret loop) performs a conditional check: add a decision node
(e.g., CheckGlobalPS{global-pull-secret exists?}) between SyncController and the
read nodes, routing Yes -> ReadGlobalPS[Read global-pull-secret mount] and No ->
ReadOriginalPS[Read original-pull-secret mount]; alternatively, if you prefer
minimal change, keep SyncController -> ReadGlobalPS and SyncController ->
ReadOriginalPS but change the arrow labels to "Reads if present" and "Reads as
fallback" respectively to clearly indicate the conditional/fallback behavior.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 96dc9db1-2b71-4010-8917-9daf5195b483

📥 Commits

Reviewing files that changed from the base of the PR and between 0c991ce and 88bfa0c.

⛔ Files ignored due to path filters (19)
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (5)
  • AGENTS.md
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (2)
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • api/hypershift/v1beta1/hostedcluster_types.go

Comment thread docs/content/how-to/common/global-pull-secret.md Outdated
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 88bfa0c to bd5fa9f Compare April 24, 2026 09:04
@codecov

codecov Bot commented Apr 24, 2026

Codecov Report

❌ Patch coverage is 50.00000% with 12 lines in your changes missing coverage. Please review.
✅ Project coverage is 37.23%. Comparing base (68106f0) to head (a5055b3).
⚠️ Report is 43 commits behind head on main.

Files with missing lines Patch % Lines
...lusterconfigoperator/controllers/globalps/setup.go 53.33% 7 Missing ⚠️
...rconfigoperator/controllers/resources/resources.go 44.44% 5 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8327   +/-   ##
=======================================
  Coverage   37.22%   37.23%           
=======================================
  Files         750      750           
  Lines       91789    91811   +22     
=======================================
+ Hits        34172    34184   +12     
- Misses      54978    54988   +10     
  Partials     2639     2639           
Files with missing lines Coverage Δ
...rconfigoperator/controllers/resources/resources.go 50.43% <44.44%> (-0.02%) ⬇️
...lusterconfigoperator/controllers/globalps/setup.go 10.25% <53.33%> (+10.25%) ⬆️
Flag Coverage Δ
cmd-support 32.06% <ø> (ø)
cpo-hostedcontrolplane 36.45% <ø> (ø)
cpo-other 37.76% <50.00%> (+0.03%) ⬆️
hypershift-operator 47.85% <ø> (ø)
other 27.77% <ø> (ø)

Flags with carried forward coverage won't be shown.


@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from bd5fa9f to c635b8b Compare April 24, 2026 09:07
Contributor

@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (2)
docs/content/how-to/common/global-pull-secret.md (2)

104-106: ⚠️ Potential issue | 🟡 Minor

Document both Secret watch sources in the baseline-sync section.

Line 105 currently mentions only kube-system watches, but the implementation also watches the control-plane namespace pull-secret for prompt reconciliation on in-place data updates. Please list both watch sources explicitly.

📝 Suggested doc patch
-### Automatic detection and baseline sync
-- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches Secrets in the `kube-system` namespace of the data plane.
+### Automatic detection and baseline sync
+- The Hosted Cluster Config Operator (HCCO) continuously reconciles Global Pull Secret resources and watches:
+  - Secrets in the `kube-system` namespace of the data plane, and
+  - the control-plane `pull-secret` Secret in the HostedControlPlane namespace.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 104 - 106, The
baseline-sync section should explicitly list both Secret watch sources: update
the paragraph under "Automatic detection and baseline sync" to state that HCCO
watches Secrets in the data plane `kube-system` namespace and also watches the
control-plane namespace `pull-secret` (the HostedControlPlane source) so it
reconciles promptly on in-place updates; keep references to
HostedControlPlane.spec.pullSecret and the copied target
`kube-system/original-pull-secret` to make the flow clear.

139-142: ⚠️ Potential issue | 🟡 Minor

Cleanup prose conflicts with the architecture diagram cleanup path.

Line 142 says the DaemonSet keeps running, but the diagram still shows a “Remove DaemonSet” branch. Please align the diagram with the documented behavior.

📝 Suggested diagram adjustment
-    CleanupProcess --> |Removes DaemonSet| RemoveDaemonSet[Remove DaemonSet]
+    CleanupProcess --> |DaemonSet remains scheduled by node eligibility| KeepDaemonSet[DaemonSet remains]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/content/how-to/common/global-pull-secret.md` around lines 139 - 142, The
prose and diagram conflict: the text under "Automatic cleanup" says deleting
`additional-pull-secret` leaves the DaemonSet running (syncing
`/var/lib/kubelet/config.json` from `original-pull-secret`), but the
architecture diagram shows a "Remove DaemonSet" branch; pick one behavior and
make both consistent. Either update the diagram to remove the "Remove DaemonSet"
branch and show the DaemonSet remaining active (annotate that
`global-pull-secret` is removed and the DaemonSet continues using
`original-pull-secret`), or change the prose to state that deleting
`additional-pull-secret` triggers DaemonSet removal; ensure references to
`additional-pull-secret`, `global-pull-secret`, `original-pull-secret`, and the
DaemonSet are consistent across text and diagram.
🧹 Nitpick comments (1)
api/.golangci.yml (1)

1529-1532: Replace the old PullSecret variant with the fully qualified field name to keep exclusions clean.

Lines 1526–1528 contain a PullSecret omitempty exclusion that is now superseded by the new variant at lines 1529–1532. The new entry uses the fully qualified field name (HostedClusterSpec.PullSecret) which matches the current kubeapilinter message format. Remove the old variant to avoid stale entries in the exclusion list.

Proposed cleanup

Replace lines 1526–1528 (old: field PullSecret) with the new fully qualified variant (lines 1529–1532: field HostedClusterSpec.PullSecret).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/.golangci.yml` around lines 1529 - 1532, Remove the stale kubeapilinter
exclusion that references the short field name "PullSecret" and keep the new
fully-qualified exclusion using "HostedClusterSpec.PullSecret"; specifically,
delete the old entry for field PullSecret and ensure only the fully qualified
text 'requiredfields: field HostedClusterSpec.PullSecret should have the
omitempty tag' remains in the linters exclusions so they match the current
kubeapilinter message format.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@docs/content/how-to/common/global-pull-secret.md`:
- Around line 104-106: The baseline-sync section should explicitly list both
Secret watch sources: update the paragraph under "Automatic detection and
baseline sync" to state that HCCO watches Secrets in the data plane
`kube-system` namespace and also watches the control-plane namespace
`pull-secret` (the HostedControlPlane source) so it reconciles promptly on
in-place updates; keep references to HostedControlPlane.spec.pullSecret and the
copied target `kube-system/original-pull-secret` to make the flow clear.
- Around line 139-142: The prose and diagram conflict: the text under "Automatic
cleanup" says deleting `additional-pull-secret` leaves the DaemonSet running
(syncing `/var/lib/kubelet/config.json` from `original-pull-secret`), but the
architecture diagram shows a "Remove DaemonSet" branch; pick one behavior and
make both consistent. Either update the diagram to remove the "Remove DaemonSet"
branch and show the DaemonSet remaining active (annotate that
`global-pull-secret` is removed and the DaemonSet continues using
`original-pull-secret`), or change the prose to state that deleting
`additional-pull-secret` triggers DaemonSet removal; ensure references to
`additional-pull-secret`, `global-pull-secret`, `original-pull-secret`, and the
DaemonSet are consistent across text and diagram.

---

Nitpick comments:
In `@api/.golangci.yml`:
- Around line 1529-1532: Remove the stale kubeapilinter exclusion that
references the short field name "PullSecret" and keep the new fully-qualified
exclusion using "HostedClusterSpec.PullSecret"; specifically, delete the old
entry for field PullSecret and ensure only the fully qualified text
'requiredfields: field HostedClusterSpec.PullSecret should have the omitempty
tag' remains in the linters exclusions so they match the current kubeapilinter
message format.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: 6114ac10-94b6-4911-8f79-b327d7ea779c

📥 Commits

Reviewing files that changed from the base of the PR and between 88bfa0c and bd5fa9f.

⛔ Files ignored due to path filters (19)
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AAA_ungated.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/AutoNodeKarpenter.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterUpdateAcceptRisks.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ClusterVersionOperatorConfiguration.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDC.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUIDAndExtraClaimMappings.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ExternalOIDCWithUpstreamParity.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/GCPPlatform.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HCPEtcdBackup.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/HyperShiftOnlyDynamicResourceAllocation.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/ImageStreamImportMode.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/KMSEncryptionProvider.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • api/hypershift/v1beta1/zz_generated.featuregated-crd-manifests/hostedclusters.hypershift.openshift.io/OpenStack.yaml is excluded by !**/zz_generated.featuregated-crd-manifests/**
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-CustomNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-Default.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • cmd/install/assets/crds/hypershift-operator/zz_generated.crd-manifests/hostedclusters-Hypershift-TechPreviewNoUpgrade.crd.yaml is excluded by !**/zz_generated.crd-manifests/**, !cmd/install/assets/**/*.yaml
  • docs/content/reference/aggregated-docs.md is excluded by !docs/content/reference/aggregated-docs.md
  • docs/content/reference/api.md is excluded by !docs/content/reference/api.md
  • vendor/github.com/openshift/hypershift/api/hypershift/v1beta1/hostedcluster_types.go is excluded by !vendor/**, !**/vendor/**
📒 Files selected for processing (6)
  • AGENTS.md
  • api/.golangci.yml
  • api/hypershift/v1beta1/hostedcluster_types.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/globalps/setup.go
  • control-plane-operator/hostedclusterconfigoperator/controllers/resources/resources.go
  • docs/content/how-to/common/global-pull-secret.md
✅ Files skipped from review due to trivial changes (1)
  • AGENTS.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/hypershift/v1beta1/hostedcluster_types.go

Contributor

@jparrill jparrill left a comment

Dropped a minor thing and agreeing with bot about one thing to fix. Otherwise lgtm

Comment on lines +682 to +683
// See https://github.com/openshift/hypershift/blob/3e0e06f78460ac1100dd8432dd7eee4e980a87d0/docs/content/how-to/common/global-pull-secret.md
// for details.
Contributor

Agree

@@ -197,16 +215,15 @@ The implementation consists of several key components working together:
- **Node eligibility assessment**: Labels nodes from InPlace NodePools and configures DaemonSet scheduling restrictions
Contributor

This is not accurate

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch 2 times, most recently from 9987cf7 to 0c704a1 Compare April 24, 2026 10:36
@openshift-ci openshift-ci Bot added the area/testing Indicates the PR includes changes for e2e testing label Apr 24, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch 2 times, most recently from 7065e93 to 87b1ccf Compare April 27, 2026 07:46
@enxebre
Member Author

enxebre commented Apr 29, 2026

/test e2e-aks

@enxebre
Member Author

enxebre commented Apr 29, 2026

/test e2e-aws-upgrade-hypershift-operator

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 9f69bce to 65c3081 Compare April 29, 2026 16:08
@enxebre
Member Author

enxebre commented Apr 29, 2026

/test e2e-aks
/test e2e-aks-4-22

@enxebre
Member Author

enxebre commented Apr 30, 2026

/test e2e-aks-4-22
/test e2e-aws

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 30, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 65c3081 to cd21c98 Compare May 4, 2026 12:11
@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 4, 2026
@jparrill
Contributor

jparrill commented May 4, 2026

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label May 4, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from cd21c98 to cef4150 Compare May 4, 2026 12:41
@openshift-ci openshift-ci Bot removed the lgtm Indicates that a PR is ready to be merged. label May 4, 2026
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from cef4150 to 268b27d Compare May 4, 2026 12:52
enxebre and others added 2 commits May 4, 2026 15:07
Clarify spec.pullSecret API godoc, add CP pull-secret watches
to HCCO resources and globalps controllers, extract testable
predicates with unit tests, and overhaul global-pull-secret docs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@enxebre enxebre force-pushed the enxebre/pull-secret-docs-and-watch branch from 268b27d to a5055b3 Compare May 4, 2026 13:07
@Nirshal
Contributor

Nirshal commented May 4, 2026

/lgtm

@openshift-ci openshift-ci Bot added the lgtm Indicates that a PR is ready to be merged. label May 4, 2026
@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@hypershift-jira-solve-ci

AI Test Failure Analysis

Job: pull-ci-openshift-hypershift-main-e2e-aks | Build: 2051297216613584896 | Cost: $2.77051895 | Failed step: hypershift-azure-run-e2e

View full analysis report


Generated by hypershift-analyze-e2e-failure post-step using Claude claude-opus-4-6

@enxebre
Member Author

enxebre commented May 6, 2026

/pipeline required

@openshift-merge-bot
Contributor

Scheduling tests matching the pipeline_run_if_changed or not excluded by pipeline_skip_if_only_changed parameters:
/test e2e-aks-4-22
/test e2e-aws-4-22
/test e2e-aks
/test e2e-aws
/test e2e-aws-upgrade-hypershift-operator
/test e2e-azure-self-managed
/test e2e-kubevirt-aws-ovn-reduced
/test e2e-v2-aws

@enxebre
Member Author

enxebre commented May 6, 2026

/retest

@openshift-ci
Contributor

openshift-ci Bot commented May 6, 2026

@enxebre: all tests passed!

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@hypershift-jira-solve-ci

Test Failure Analysis Complete

Job Information

  • Prow Job: Red Hat Konflux / enterprise-contract-mce-50 / hypershift-release-mce-50
  • Build ID: enterprise-contract-mce-50-5r5tr (pipeline run)
  • Snapshot: release-mce-50-20260506-161211-000
  • Component: hypershift-release-mce-50
  • Image: quay.io/redhat-user-workloads/crt-redhat-acm-tenant/hypershift-release-mce-50:on-pr-a5055b3742cff6a4cc302a4d902c6ea427b01722
  • Started: 2026-05-06T16:10:56Z
  • Completed: 2026-05-06T16:24:44Z

Test Failure Analysis

Error

Integration test for component hypershift-release-mce-50 snapshot
release-mce-50-20260506-161211-000 and scenario enterprise-contract-mce-50 has failed

Task: verify | Duration: 17s | Status: ✗ Failed | Details: ✗ Failed

Summary

This is a transient Konflux infrastructure failure, not a code issue in PR #8327. The Enterprise Contract (EC) verify Tekton task failed without producing any evaluation results (no success/warning/violation counts), which indicates the task itself crashed or encountered an error before completing its policy evaluation. In contrast, passing runs of the same check report detailed results (e.g., "262 successes, 13 warnings"). The PR's code changes (API godoc clarification, CRD manifest updates, HCCO controller watch addition) are documentation/operational changes unrelated to the container build or EC policy compliance. Identical build-time scan results (ecosystem-cert-preflight-checks: 2 failures, sast-shell-check: 38 failures) exist in both passing and failing PRs, confirming the build output is consistent — only the EC verify task execution diverged.

Root Cause

The Enterprise Contract verify Tekton task in pipeline run enterprise-contract-mce-50-5r5tr failed at the task execution level — it exited non-zero without completing its policy evaluation against the SLSA attestation of the built image.

This is proven by the absence of any success/warning/violation counts in the check run output. When EC successfully evaluates policies (even with violations), it reports counts like "262 successes, 13 warnings" (as seen on PR #8396). When the task itself crashes, the output simply says "Failed" with no counts.

The failure is transient infrastructure, not a policy violation. Evidence:

  1. Same-day PR passed: PR #8434 (OCPBUGS-84308: fix(cpo) delete terminated MCD pods to retry in-place upgrades) passed this exact check at 10:34 UTC on the same day (May 6)
  2. Next-day PRs passed: PRs #8454 (OCPBUGS-85192: Add retry logic for Azure DNS cleanup in hcp destroy, 02:02 UTC May 7) and #8458 (OCPBUGS-85243: Set aws-load-balancer-scheme on public HCP router service, 12:57 UTC May 7) both pass
  3. Branch consistently passes: All 5 most recent release-5.0 branch commits pass with "Warning" (neutral)
  4. Identical build outputs: The SLSA attestation for PR #8327 (OCPBUGS-84528: clarify pull secret in-place update behavior and add CP watches) records the exact same task results as PR #8396 (CNTRLPLANE-3343: Extract support/k8sutil package from support/util), which passed EC. Both have ecosystem-cert-preflight-checks: FAILURE (6 successes, 2 failures) and sast-shell-check: FAILURE (0 successes, 38 failures) — these are scan-level findings, not EC violations
  5. All task bundles are trusted: Every Tekton task bundle used in the build pipeline (task-init:0.4, task-ecosystem-cert-preflight-checks:0.2, task-sast-shell-check-oci-ta:0.1) is present in the data-acceptable-bundles trusted list with expires: never

The most likely infrastructure cause is a transient network issue, registry timeout, or resource constraint in the Konflux cluster (stone-prd-rh01) that prevented the verify task from completing its attestation fetch or policy evaluation during the ~17-second execution window on 2026-05-06 16:10–16:24 UTC.
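The crash-vs-violation distinction this analysis leans on can be expressed as a small check: a completed EC evaluation always reports counts (even when it finds violations), while a task-level crash reports none. A minimal sketch; the sample strings are illustrative, not output captured from this pipeline run:

```python
import re

def ec_outcome(check_output: str) -> str:
    """Classify an Enterprise Contract check result: a completed policy
    evaluation always reports counts, a task-level crash reports none."""
    if re.search(r"\d+\s+successes?,\s*\d+\s+(warnings?|violations?)", check_output):
        return "evaluation completed"
    return "task-level failure (transient?)"

print(ec_outcome("262 successes, 13 warnings"))  # evaluation completed
print(ec_outcome("Status: Failed"))              # task-level failure (transient?)
```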

Recommendations
  1. Re-trigger the Konflux build: Push an empty commit or re-push the branch to trigger a new build + EC check. Given that the same check passes for other PRs and on the branch, the re-run should succeed.

    git commit --allow-empty -m "chore: retrigger CI"
    git push
  2. No code changes needed: The PR's changes (API godoc updates, CRD manifest regeneration, HCCO controller watch addition, documentation overhaul) are not related to the EC failure.

  3. If the failure persists on retry: Check the Konflux pipeline logs at https://konflux-ui.apps.stone-prd-rh01.pg1f.p1.openshiftapps.com/ns/crt-redhat-acm-tenant/pipelinerun/enterprise-contract-mce-50-5r5tr/logs/verify for the specific error message (requires Konflux cluster authentication).

  4. For Konflux team: The verify task should produce structured error output even when it fails at the task level (e.g., "failed to fetch attestation: connection timeout") rather than silently failing, to aid in diagnosing transient vs. systematic issues.

Evidence

  • Failed check: Red Hat Konflux / enterprise-contract-mce-50 / hypershift-release-mce-50 → conclusion: failure
  • Verify task output: Status: ✗ Failed with no success/warning counts (task-level crash)
  • Passing check (same check, PR #8396): 262 successes, 13 warnings → conclusion: success
  • Passing check (PR #8434, same day 10:34 UTC): conclusion: neutral (Warning)
  • Passing check (PR #8454, next day 02:02 UTC): conclusion: neutral (Warning)
  • Passing check (PR #8458, next day 12:57 UTC): conclusion: neutral (Warning)
  • release-5.0 branch (5 recent commits): all neutral (Warning), consistently passing
  • Build-time preflight scan (PR #8327): FAILURE: 6 successes, 2 failures
  • Build-time preflight scan (PR #8396, passed EC): FAILURE: 6 successes, 2 failures (identical)
  • Build-time SAST shell check (PR #8327): FAILURE: 0 successes, 38 failures
  • Build-time SAST shell check (PR #8396, passed EC): FAILURE: 0 successes, 38 failures (identical)
  • Task bundles trusted: all 3 checked bundles found in data-acceptable-bundles with expires: never
  • Image signed: yes; .sig and .att artifacts exist for sha256:b7935e7c...
  • SLSA attestation: valid application/vnd.in-toto+json with https://slsa.dev/provenance/v0.2 predicate
  • Pipeline run: enterprise-contract-mce-50-5r5tr in crt-redhat-acm-tenant namespace
  • Time window: 2026-05-06 16:10–16:24 UTC
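The SLSA attestation evidence above can be spot-checked offline: an attestation downloaded with cosign is a DSSE envelope whose payload field is base64-encoded in-toto JSON, and the predicateType inside it is what identifies the provenance version. A sketch using a synthetic envelope (field names follow the in-toto/DSSE formats; this is not data captured from this pipeline run):

```python
import base64
import json

def predicate_type(envelope: dict) -> str:
    """Decode a DSSE envelope's base64 payload and return the in-toto
    statement's predicateType (e.g. the SLSA provenance version)."""
    statement = json.loads(base64.b64decode(envelope["payload"]))
    return statement["predicateType"]

# Synthetic envelope for illustration; a real one also carries signatures.
statement = {
    "_type": "https://in-toto.io/Statement/v0.1",
    "predicateType": "https://slsa.dev/provenance/v0.2",
    "predicate": {"buildType": "tekton.dev/v1beta1/PipelineRun"},
}
envelope = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
}
print(predicate_type(envelope))  # → https://slsa.dev/provenance/v0.2
```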

@enxebre enxebre merged commit 37f46b9 into openshift:main May 7, 2026
50 of 52 checks passed
@openshift-ci-robot

@enxebre: Jira Issue OCPBUGS-84528: Some pull requests linked via external trackers have merged:

The following pull request, linked via external tracker, has not merged:

All associated pull requests must be merged or unlinked from the Jira bug in order for it to move to the next state. Once unlinked, request a bug refresh with /jira refresh.

Jira Issue OCPBUGS-84528 has not been moved to the MODIFIED state.

Details

In response to this: (the PR description quoted above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/api: Indicates the PR includes changes for the API
  • area/cli: Indicates the PR includes changes for CLI
  • area/control-plane-operator: Indicates the PR includes changes for the control plane operator - in an OCP release
  • area/documentation: Indicates the PR includes changes for documentation
  • area/testing: Indicates the PR includes changes for e2e testing
  • jira/valid-bug: Indicates that a referenced Jira bug is valid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
  • lgtm: Indicates that a PR is ready to be merged.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

7 participants