
Adds the e2e coverage collector for the operator using codecov #138

Open

siddhibhor-56 wants to merge 1 commit into openshift:main from siddhibhor-56:codecov

Conversation

@siddhibhor-56
Contributor

@siddhibhor-56 siddhibhor-56 commented Apr 12, 2026

Adds the e2e coverage collector for the operator using codecov

Summary by CodeRabbit

  • Chores
    • Added Codecov integration for automated coverage uploads.
    • Enabled end-to-end test coverage collection and publishing.
    • Improved e2e test diagnostics with enhanced tracing and node-event visibility.

@openshift-ci

openshift-ci Bot commented Apr 12, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: siddhibhor-56
Once this PR has been reviewed and has the lgtm label, please assign mytreya-rh for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai Bot commented Apr 12, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: da142044-b902-4442-92b2-0526add09c1c

📥 Commits

Reviewing files that changed from the base of the PR and between 0c0adfd and 8f666be.

📒 Files selected for processing (1)
  • Makefile

Walkthrough

The Makefile adds Codecov variables and a phony e2e-coverage-publish target that collects e2e coverage from the operator pod, downloads a pinned Codecov uploader and its SHA256 checksum, verifies the uploader, and then converts and uploads the coverage files. The test-e2e flags are extended with -ginkgo.trace and -ginkgo.show-node-events, and the GOFLAGS dependency list is updated.

Changes

Codecov Integration and E2E Coverage Publishing

  • Configuration Variables (Makefile): Adds CODECOV_VERSION, CODECOV_URL, CODECOV_SHA_URL, E2E_COVERAGE_DIR, and E2E_COVERAGE_NAMESPACE.
  • Test Flags / GOFLAGS Wiring (Makefile): test-e2e invocation extended with -ginkgo.trace and -ginkgo.show-node-events; GOFLAGS dependency list updated to include e2e-coverage-publish.
  • Coverage Publish Target Implementation (Makefile): Adds a phony e2e-coverage-publish target that locates the operator pod, copies coverage artifacts to E2E_COVERAGE_DIR, converts the coverage data, downloads the pinned Codecov uploader and its SHA256 checksum, verifies the uploader, and uploads coverage to Codecov, with error handling for missing pods or data.
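The pod-lookup and error-handling portion of the target can be sketched in shell. This is a sketch only: kubectl is stubbed as a shell function so the flow runs without a cluster, and the namespace and pod names are hypothetical, not taken from the actual Makefile.

```shell
#!/usr/bin/env bash
# Sketch of the pod-lookup step of e2e-coverage-publish. kubectl is stubbed
# as a shell function so the missing-pod error handling can be exercised
# locally; the namespace and pod names below are hypothetical.
set -euo pipefail

E2E_COVERAGE_NAMESPACE=external-secrets-operator   # assumed value
E2E_COVERAGE_DIR=$(mktemp -d)

kubectl() {  # stub: pretend exactly one operator pod exists
  echo "pod/external-secrets-operator-abc123"
}

POD=$(kubectl -n "$E2E_COVERAGE_NAMESPACE" get pods -o name | head -n1)
if [ -z "$POD" ]; then
  echo "error: no operator pod found in namespace $E2E_COVERAGE_NAMESPACE" >&2
  exit 1
fi
echo "copying coverage from ${POD#pod/} into $E2E_COVERAGE_DIR"
```

With a real kubectl, an empty result from the pod lookup takes the error branch instead, which is the missing-pod handling the walkthrough mentions.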

Sequence Diagram

sequenceDiagram
    participant CI as CI Runner
    participant K8s as Kubernetes (operator Pod)
    participant FS as Local FS (E2E_COVERAGE_DIR)
    participant Codecov as Codecov Server

    CI->>K8s: kubectl exec & copy /coverage/*.out
    K8s-->>CI: coverage files stream
    CI->>FS: write coverage files
    CI->>CI: convert coverage files to codecov format
    CI->>Codecov: download uploader and SHA
    CI->>CI: verify uploader SHA
    CI->>Codecov: upload coverage via uploader
    Codecov-->>CI: upload response/status

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks | ✅ 12
✅ Passed checks (12 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title accurately describes the main change: adding e2e coverage collection using Codecov. It directly matches the changeset which introduces Codecov integration variables and an e2e-coverage-publish target.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.
Stable And Deterministic Test Names ✅ Passed All Ginkgo test names are stable with no dynamic content, generated suffixes, timestamps, UUIDs, or runtime-specific values. Makefile changes only add output flags affecting test display, not naming.
Test Structure And Quality ✅ Passed The PR only modifies the Makefile for Codecov integration and does not contain any Ginkgo test code changes. The custom check is not applicable as it specifically reviews Ginkgo test quality.
Microshift Test Compatibility ✅ Passed PR adds e2e tests using only standard Kubernetes APIs and operator's own custom CRDs (ExternalSecretsConfig in operator.openshift.io/v1alpha1). No MicroShift-unavailable OpenShift APIs referenced.
Single Node Openshift (Sno) Test Compatibility ✅ Passed PR adds Codecov integration and coverage publishing to Makefile only. No new Ginkgo e2e tests were added. SNO compatibility check is not applicable as it only applies when new tests are defined.
Topology-Aware Scheduling Compatibility ✅ Passed PR only modifies Makefile for CI/CD (Codecov integration). No deployment manifests, operator code, or controllers are changed. Topology-aware scheduling check does not apply.
Ote Binary Stdout Contract ✅ Passed PR complies with OTE Binary Stdout Contract. fmt.Fprintf(GinkgoWriter) in TestE2E is allowed per check. No non-JSON stdout in process-level code.
Ipv6 And Disconnected Network Test Compatibility ✅ Passed New Ginkgo tests are IPv6 and disconnected-network compatible. No IPv4 assumptions or external internet connectivity detected. Tests use cluster-internal DNS only.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@images/ci/Dockerfile`:
- Around line 30-31: The Dockerfile currently uses "RUN mkdir -p /tmp/e2e-cover
&& chmod 777 /tmp/e2e-cover" which creates a world-writable directory; instead
create the directory, chown it to UID/GID 65534 and set restrictive permissions
(e.g. 0700 or 0750) so only the intended user can write; update the RUN that
creates /tmp/e2e-cover to use chown 65534:65534 and chmod 700 (or 750 if group
write is needed) rather than chmod 777.

In `@Makefile`:
- Around line 234-243: The kubectl cp invocation in the Makefile is causing a
nested e2e-cover directory, so change the kubectl cp usage to copy the contents
of /tmp/e2e-cover directly into $(E2E_COVERAGE_DIR); update the kubectl cp line
that references $(KUBECTL) cp "$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover"
$(E2E_COVERAGE_DIR) to use a trailing dot on the source (e.g.
"$(E2E_COVERAGE_NAMESPACE)/$$POD:/tmp/e2e-cover/." $(E2E_COVERAGE_DIR)) so
covmeta and covcounters land directly in $(E2E_COVERAGE_DIR) for go tool covdata
to consume.
- Around line 245-248: Replace the unsafe "latest" downloader invocation in the
Makefile (the curl/chmod/./codecov sequence) with a pinned release workflow:
download a specific versioned uploader binary URL (not "latest") and its
corresponding SHA256SUM (or .sha256) file, verify the binary against the
checksum (using sha256sum -c or equivalent) before making it executable and
running it (the ./codecov invocation), and fail the step if verification fails;
keep the existing flags (--file=$(OUTPUTS_PATH)/coverage-e2e.out --flags=e2e
--name="E2E Coverage" --verbose) and still remove the binary afterwards. Ensure
the Makefile references the chosen version string so it is explicit and
reproducible.
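The nesting behavior the kubectl cp comment describes follows tar-style copy semantics. A local cp analogy (not kubectl itself, and with made-up file names) shows the difference the trailing /. makes:

```shell
# Local analogy for the suggested kubectl cp fix: copying the directory itself
# nests it under the destination, while "dir/." copies only its contents.
# kubectl cp treats the pod-side source path the same way.
set -euo pipefail
workdir=$(mktemp -d); cd "$workdir"
mkdir -p e2e-cover nested direct
touch e2e-cover/covmeta.1 e2e-cover/covcounters.1   # stand-in coverage files

cp -r e2e-cover nested/     # nested/e2e-cover/covmeta.1 (extra directory level)
cp -r e2e-cover/. direct/   # direct/covmeta.1 (contents land directly)
```

With the trailing dot, the covmeta and covcounters files land directly in the destination, which is what go tool covdata expects to find.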
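The pin-and-verify flow the uploader comment asks for can be sketched as follows. Since this sketch runs offline, a stand-in file and a locally computed checksum replace the real pinned release URL and its published SHA256SUM file:

```shell
# Offline sketch of download-pin-verify: the "uploader" and its checksum file
# are fabricated locally; in the real Makefile both would be fetched from a
# specific versioned Codecov release URL, never "latest".
set -euo pipefail
workdir=$(mktemp -d); cd "$workdir"

printf 'stand-in uploader binary\n' > codecov   # stands in for the pinned download
sha256sum codecov > codecov.SHA256SUM           # stands in for the published checksum

# sha256sum -c exits non-zero on mismatch, so with set -e the step fails
# before the binary is ever made executable.
sha256sum -c codecov.SHA256SUM
chmod +x codecov
```

Tampering with the codecov file before the verification line makes sha256sum -c fail, aborting the step, which is exactly the failure mode the review asks for.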
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9ff32fe0-a38f-4c43-ab77-dfdadbf0a804

📥 Commits

Reviewing files that changed from the base of the PR and between 2a65cb8 and 5670473.

📒 Files selected for processing (2)
  • Makefile
  • images/ci/Dockerfile

Comment thread images/ci/Dockerfile Outdated
Comment on lines +30 to +31
# Create the directory where Go will write coverage data at runtime.
RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover


⚠️ Potential issue | 🟠 Major

Avoid world-writable permissions for /tmp/e2e-cover.

chmod 777 weakens container security unnecessarily. Since the process runs as UID/GID 65534, set ownership and restrictive perms instead.

Suggested hardening
-RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover
+RUN mkdir -p /tmp/e2e-cover && chown 65534:65534 /tmp/e2e-cover && chmod 700 /tmp/e2e-cover

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
# Create the directory where Go will write coverage data at runtime.
RUN mkdir -p /tmp/e2e-cover && chmod 777 /tmp/e2e-cover
# Create the directory where Go will write coverage data at runtime.
RUN mkdir -p /tmp/e2e-cover && chown 65534:65534 /tmp/e2e-cover && chmod 700 /tmp/e2e-cover
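The suggested mode can be checked quickly outside a container build. The chown to 65534:65534 needs root, so this sketch verifies only the 0700 mode; stat -c assumes GNU coreutils:

```shell
# Verify the restrictive mode suggested for the coverage directory.
# chown 65534:65534 is omitted here since it requires root; stat -c is
# GNU-coreutils-specific.
set -euo pipefail
workdir=$(mktemp -d); cd "$workdir"
mkdir -p e2e-cover && chmod 700 e2e-cover
stat -c '%a' e2e-cover    # 700: owner-only read/write/search
```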

Comment thread Makefile
Comment thread Makefile Outdated

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (10)
test/utils/bitwarden_resources.go (2)

26-32: Constants are duplicated with aws_resources.go.

The constants externalSecretsAPIVersionV1 and clusterSecretStoreKind are also defined in test/utils/aws_resources.go. Consider extracting these to a shared constants file within the utils package to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_resources.go` around lines 26 - 32, The constants
externalSecretsAPIVersionV1 and clusterSecretStoreKind are duplicated across
bitwarden_resources.go and aws_resources.go; extract these into a shared utils
package constants file (e.g., constants.go) and replace the local definitions in
both bitwarden_resources.go and aws_resources.go with references to the new
shared constants (externalSecretsAPIVersionV1, clusterSecretStoreKind), removing
the duplicate declarations so both files import/use the single source of truth.

70-96: Consider handling SetNestedField errors.

While SetNestedField typically only fails if the path is structurally invalid (which won't happen with these compile-time known paths), explicitly checking the error would make the code more defensive.

-	_ = unstructured.SetNestedField(u.Object, []interface{}{
+	if err := unstructured.SetNestedField(u.Object, []interface{}{
 		map[string]interface{}{
 			"secretKey": "value",
 			"remoteRef": map[string]interface{}{
 				"key": remoteKey,
 			},
 		},
-	}, "spec", "data")
+	}, "spec", "data"); err != nil {
+		panic(fmt.Sprintf("failed to set spec.data: %v", err))
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_resources.go` around lines 70 - 96, The SetNestedField
calls in BitwardenExternalSecretByName and BitwardenExternalSecretByUUID ignore
returned errors; update both functions to capture the error from
unstructured.SetNestedField and handle it (e.g., if err != nil {
panic(fmt.Errorf("failed to set spec.data for %s: %w", name, err)) }) so
failures are surfaced; reference the SetNestedField call sites in
BitwardenExternalSecretByName and BitwardenExternalSecretByUUID and include the
original error text in the panic/log message for diagnosability.
test/utils/bitwarden_api_runner.go (1)

35-38: Consider pinning the curl image tag for reproducibility.

Using docker.io/curlimages/curl:latest can lead to non-reproducible test behavior if the image is updated with breaking changes. Consider pinning to a specific version.

 const (
-	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:latest"
+	bitwardenAPITestRunnerImage = "docker.io/curlimages/curl:8.7.1"
 	bitwardenCredMountPath      = "/etc/bitwarden-cred"
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden_api_runner.go` around lines 35 - 38, The constant
bitwardenAPITestRunnerImage currently uses the mutable tag
"docker.io/curlimages/curl:latest" which can cause flaky tests; change the value
of bitwardenAPITestRunnerImage to a specific, immutable curlimages tag (e.g., a
known semver like :8.x.y) and add a short comment noting why the tag is pinned;
ensure any CI/test docs referencing this constant are updated when you
intentionally upgrade the pinned tag.
Makefile (2)

182-182: Add PHONY declaration for the targets on this line.

The static analysis tool flagged that targets listed here should be declared as .PHONY to ensure they're always executed regardless of whether a file with that name exists.

+.PHONY: fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep
 fmt vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep: GOFLAGS=
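The effect of the missing .PHONY declaration can be reproduced in isolation. The two throwaway makefiles below are illustrative, not the project's actual Makefile:

```shell
# Why .PHONY matters: with a file named "fmt" in the tree, a non-phony "fmt"
# target is considered up to date and its recipe never runs.
set -euo pipefail
workdir=$(mktemp -d); cd "$workdir"
printf 'fmt:\n\t@echo ran-fmt\n' > nophony.mk
printf '.PHONY: fmt\nfmt:\n\t@echo ran-fmt\n' > phony.mk
touch fmt                        # a file shadowing the target name

make -f nophony.mk fmt           # "make: 'fmt' is up to date." (recipe skipped)
make -f phony.mk fmt             # prints ran-fmt
```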
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` at line 182, The Makefile line defining the combined targets "fmt
vet test test-unit test-e2e e2e-coverage-publish run update-vendor update-dep:
GOFLAGS=" lacks a .PHONY declaration, so add a .PHONY rule listing these exact
target names (fmt vet test test-unit test-e2e e2e-coverage-publish run
update-vendor update-dep) elsewhere in the Makefile to ensure they always run
regardless of files with those names; update the .PHONY block to include all
these targets.

219-222: Codecov uploader pinning and SHA256 verification are correct.

The implementation correctly pins v0.7.6 and verifies the SHA256 checksum before execution. v0.7.6 is a valid release. However, v0.8.0 is now the latest available version. If this version is not pinned for specific compatibility reasons, consider upgrading to v0.8.0 to benefit from any bug fixes or security patches.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Makefile` around lines 219 - 222, Update the pinned Codecov uploader by
changing the CODECOV_VERSION variable from v0.7.6 to v0.8.0; the derived URLs
(CODECOV_URL and CODECOV_SHA_URL) will follow automatically. If there is a
deliberate compatibility constraint, add a brief comment next to CODECOV_VERSION
explaining why v0.7.6 must remain pinned instead of upgrading to v0.8.0.
test/utils/bitwarden.go (2)

182-201: Consider validating organization and project IDs.

FetchBitwardenCredsFromSecret validates that token is non-empty but silently returns empty strings for orgID and projectID if they're missing. If these are required for Bitwarden API operations, consider validating them as well.

 	if v, ok := secret.Data[BitwardenCredKeyOrgID]; ok {
 		orgID = string(v)
+	} else {
+		return "", "", "", fmt.Errorf("bitwarden creds secret %s/%s missing required key %q", secretNamespace, secretName, BitwardenCredKeyOrgID)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden.go` around lines 182 - 201,
FetchBitwardenCredsFromSecret currently only enforces token presence but
silently allows orgID and projectID to be empty; update the function to validate
BitwardenCredKeyOrgID and BitwardenCredKeyProjectID like TokenSecretKey and
return a descriptive error if either is missing or empty (use the same error
formatting pattern as the existing token check, referencing
secretNamespace/secretName and the missing key names) so callers receive a clear
failure when required organization or project IDs are absent.

326-341: Permissive error handling may mask connectivity issues.

The reachability check treats http_code=000 (connection refused/timeout/TLS failure) and empty logs as success to avoid failing the suite. While this is pragmatic, it could mask real connectivity problems that would cause subsequent tests to fail with less informative errors.

Consider at minimum logging a warning when these permissive paths are taken, so operators can investigate if tests fail later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/bitwarden.go` around lines 326 - 341, In the PodFailed case of the
reachability check (the switch handling corev1.PodFailed), add a warning log
before returning true on the permissive paths so connectivity issues aren't
silently ignored: when getPodLogs(...) contains "http_code=000" or when logs
contain "error on the server" or are empty/whitespace, call the test/logger
warning function to record the pod/container identity (use
formatPodContainerStatus(p) and podName/BitwardenOperandNamespace) and the raw
logs, then return true,nil as before.
test/utils/dynamic_resources.go (1)

112-127: Input mutation may surprise callers.

getResourceInterface mutates the input unstructuredObj by calling SetNamespace (Line 121). This side effect could be unexpected for callers who pass an object they intend to reuse. Consider documenting this behavior or working with a copy.

 func (d DynamicResourceLoader) getResourceInterface(unstructuredObj *unstructured.Unstructured, overrideNamespace string) dynamic.ResourceInterface {
 	gvk := unstructuredObj.GroupVersionKind()
 	gr, err := restmapper.GetAPIGroupResources(d.KubeClient.Discovery())
 	require.NoError(d.t, err)
 	mapper := restmapper.NewDiscoveryRESTMapper(gr)
 	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
 	require.NoError(d.t, err)
 	if mapping.Scope.Name() == meta.RESTScopeNameNamespace {
 		if overrideNamespace != "" {
-			unstructuredObj.SetNamespace(overrideNamespace)
+			unstructuredObj.SetNamespace(overrideNamespace) // Note: mutates input object
 		}
 		require.NotEmpty(d.t, unstructuredObj.GetNamespace(), "Namespace can not be empty for namespaced resource")
 		return d.DynamicClient.Resource(mapping.Resource).Namespace(unstructuredObj.GetNamespace())
 	}
 	return d.DynamicClient.Resource(mapping.Resource)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/dynamic_resources.go` around lines 112 - 127, The
getResourceInterface function currently mutates the caller's unstructuredObj by
calling SetNamespace on it when applying overrideNamespace; avoid surprising
side-effects by operating on a copy: create a deep copy of unstructuredObj
(e.g., via unstructuredObj.DeepCopy()), set the namespace on the copy rather
than the original, and use the copy's GetNamespace when selecting the Resource
Interface; alternatively, clearly document that getResourceInterface will mutate
unstructuredObj if overrideNamespace is provided. Ensure references to
SetNamespace, unstructuredObj, overrideNamespace and getResourceInterface are
updated accordingly.
test/utils/artifact_dump.go (1)

171-177: Consider pre-compiling the regex for sanitizeFilename.

The regex is compiled on every call. While acceptable for e2e test utilities where this runs infrequently, pre-compiling at package level would be slightly more efficient.

♻️ Optional: Pre-compile regex
+var sanitizeFilenameRe = regexp.MustCompile(`[^a-zA-Z0-9_-]`)
+
 func sanitizeFilename(s string) string {
-	s = regexp.MustCompile(`[^a-zA-Z0-9_-]`).ReplaceAllString(s, "_")
+	s = sanitizeFilenameRe.ReplaceAllString(s, "_")
 	if len(s) > 100 {
 		s = s[:100]
 	}
 	return s
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/utils/artifact_dump.go` around lines 171 - 177, The sanitizeFilename
function currently compiles the regex on every call; move the
regexp.MustCompile(`[^a-zA-Z0-9_-]`) to a package-level variable (e.g.,
filenameSanitizeRegex) and update sanitizeFilename to use
filenameSanitizeRegex.ReplaceAllString(s, "_") so the regex is compiled once and
behavior (truncation to 100 chars) remains unchanged.
test/e2e/bitwarden_api_test.go (1)

139-144: Consider documenting why both 200 and 400 are acceptable.

The test accepts either HTTP 200 or 400 for empty ids list. Adding a brief comment explaining the expected API behavior variance would improve clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/bitwarden_api_test.go` around lines 139 - 144, Add a short comment
above the It("GET /rest/api/1/secrets-by-ids with empty ids should return 200 or
400", ...) test explaining why both HTTP 200 and 400 are accepted (e.g.,
different implementations may treat an empty ids list as a valid no-op returning
200 or as a client error returning 400), so future readers of the test (and
maintainers of runAPIJob/script and the API contract) understand the intended
variance; reference the test name, the script variable that posts '{"ids":[]}',
and runAPIJob to locate the exact spot to insert the comment.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: a9d37d64-68e3-464b-9b91-29402719f719

📥 Commits

Reviewing files that changed from the base of the PR and between 5670473 and 0c0adfd.

📒 Files selected for processing (20)
  • Makefile
  • README.md
  • hack/govulncheck.sh
  • images/ci/Dockerfile.codecoverage
  • test/README.md
  • test/apis/README.md
  • test/e2e/README.md
  • test/e2e/bitwarden_api_test.go
  • test/e2e/bitwarden_es_test.go
  • test/e2e/e2e_suite_test.go
  • test/e2e/e2e_test.go
  • test/go.mod
  • test/utils/artifact_dump.go
  • test/utils/aws_resources.go
  • test/utils/bitwarden.go
  • test/utils/bitwarden_api_runner.go
  • test/utils/bitwarden_resources.go
  • test/utils/cleanup.go
  • test/utils/conditions.go
  • test/utils/dynamic_resources.go
✅ Files skipped from review due to trivial changes (6)
  • README.md
  • test/README.md
  • test/apis/README.md
  • hack/govulncheck.sh
  • test/go.mod
  • test/e2e/README.md

@siddhibhor-56 siddhibhor-56 force-pushed the codecov branch 2 times, most recently from e371145 to 3604751 on May 6, 2026 at 18:22
Adds the e2e-coverage-publish Makefile target to collect coverage data
from the operator pod after e2e tests and upload it to Codecov using a
pinned, SHA256-verified uploader binary. Adds empty Dockerfile.codecoverage
placeholder for the coverage CI image.

Co-authored-by: Cursor <cursoragent@cursor.com>
@openshift-ci

openshift-ci Bot commented May 6, 2026

@siddhibhor-56: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test: ci/prow/verify (commit 8f666be, required). Rerun command: /test verify

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
