Conversation
Replace silent catch {} blocks with descriptive error logging
or explanatory comments across CLI helper modules:
- nim.js: Log GPU detection failures and NIM health check errors
at debug level via pino logger for troubleshooting
- credentials.js: Warn on corrupted credentials file (verbose mode),
add comment explaining gh CLI fallthrough
- registry.js: Warn on corrupted sandbox registry (verbose mode)
- policies.js: Add comment explaining empty catch intent
These catch blocks previously swallowed all errors silently,
making it impossible to diagnose failures in GPU detection,
NIM container management, and credential/registry loading.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Automatically fast-forward fork's main branch to match NVIDIA/NemoClaw:main daily at 6:17 AM UTC. Can also be triggered manually via workflow_dispatch. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… snapshot Add 62 new tests across JavaScript and Python modules: JavaScript (29 tests): - credentials-unit.test.js: loadCredentials, saveCredential, getCredential, file persistence, corrupt file handling, env var precedence - policies-unit.test.js: extractPresetEntries, parseCurrentPolicy, getAppliedPresets, applyPreset input validation Python (33 tests): - test_runner.py: log/progress output format, run_cmd safety (never shell=True), load_blueprint validation, action_plan structure/validation/ endpoint override/progress emission, action_status with/without runs - test_snapshot.py: create_snapshot manifest generation, cutover_host archiving, rollback_from_snapshot restoration, list_snapshots ordering Also fixes nim.js to use NEMOCLAW_VERBOSE console output instead of pino logger (which is not available in the upstream codebase). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reduce contributor friction with consistent tooling: - Add .nvmrc (22) and .python-version (3.11) so nvm/fnm/pyenv auto-select the correct runtime versions - Add `make test` to root Makefile that runs both JS and Python tests in one command (`make test-js` and `make test-py` for running them individually) - Add `npm run test:all` to root package.json for JS unit + TypeScript vitest in one command - Add `npm run check` to root package.json delegating to nemoclaw's lint + format-check + type-check - Add pytest configuration to nemoclaw-blueprint/pyproject.toml (testpaths, pythonpath) - Add `make test` target to nemoclaw-blueprint/Makefile Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New tests (27 total): - runner-capture.test.js: 7 tests for runCapture covering stdout capture, trimming, error handling with ignoreError, env var merging, and stderr isolation - resolve-openshell.test.js: 10 tests for openshell binary resolution covering commandV absolute path validation, fallback candidate priority order, relative path rejection, and home directory handling New CI workflow: - test-python.yaml: runs Python tests on blueprint changes and npm audit on every PR (fills gap in upstream pr.yaml which only runs JS tests) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New CI workflow: - docker-smoke.yaml: builds the production Dockerfile on PRs that touch Dockerfile/plugin/blueprint files, then verifies the image starts, plugin files exist, Python venv works, and sandbox user permissions are correct. Catches build failures before release. Developer tooling: - Add `make dead-code` target using tsc --noUnusedLocals and ruff F401/F841 checks (no new dependencies needed) Improved error message: - Gateway health check failure in onboard.js now shows troubleshooting steps, common causes, and retry instructions instead of a single opaque line Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CONTRIBUTING.md: - Add make test, make test-js, make test-py, make dead-code, npm run test:all, and npm run check to the task table - Update PR checklist to reference make test instead of just npm test .dockerignore: - Exclude .git, .github, docs/, IDE files, .env files, and dev config from Docker build context for faster builds - Keep nemoclaw/src/ and test/ accessible for builder stage and test Dockerfiles Onboard error messages: - Docker not running: show platform-specific start commands (Docker Desktop/Colima on macOS, systemctl on Linux) instead of a generic "start Docker" message Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Solves the problem of agents stopping to ask 'what to do next'. What's included: - Complete autonomous agent guide (docs/autonomous-agents.md) - 600+ lines - Configuration examples for non-interactive mode - Dashboard and log streaming instructions - Example autonomous config (.openclaw/autonomous-config.json) - Quick reference added to AGENTS.md This enables users to observe agents working without interruption. Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Test coverage, CI hardening, and developer experience improvements
docs: autonomous agent configuration guide
Replaces tls: terminate with access: full for NVIDIA and Anthropic inference endpoints. TLS termination causes the proxy to fail decoding chunked streaming responses, resulting in "error decoding response body" warnings. Endpoints fixed: - integrate.api.nvidia.com - inference-api.nvidia.com - api.anthropic.com - statsig.anthropic.com - sentry.io This follows the same fix applied to GitHub/npm in commit 24a1b4e. access: full allows CONNECT tunneling to pass through without L7 inspection, which is required for streaming APIs. Fixes: upstream protocol error warnings in sandbox logs Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Updates all 10 preset policy files to use access: full instead of tls: terminate, preventing the same streaming decode errors when users apply preset policies. Presets fixed: - discord.yaml (Discord API + gateway) - docker.yaml (Docker Hub + nvcr.io) - github.yaml (GitHub + API + raw content) - huggingface.yaml (Hub + LFS + Inference API) - jira.yaml (Atlassian Cloud) - npm.yaml (npm + Yarn registries) - outlook.yaml (Microsoft Graph + Office365) - pypi.yaml (Python Package Index) - slack.yaml (Slack API + webhooks) - telegram.yaml (Telegram Bot API) This ensures consistency with the main openclaw-sandbox.yaml policy and prevents streaming errors for any API that uses chunked transfer encoding. Related: previous commit fixed main policy Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Updates tsconfig.json to exclude *.test.ts files from the build output. Test files should not be compiled into dist/ as they are only used during development and testing. This resolves TypeScript compilation errors caused by test files using vitest dependencies and mocking that aren't needed in production builds. Before: tsc failed with "Cannot find module 'vitest'" errors After: tsc builds successfully, only compiling source files Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
Add mtime-based file cache to avoid redundant readFileSync + JSON.parse
on every function call. Cache invalidated automatically on writes.
Replaces bare catch {} with diagnostic logging under NEMOCLAW_VERBOSE.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace 6 spawnSync("sleep") calls in nim.js and onboard.js with
non-blocking await sleepMs(). The Node event loop is no longer frozen
during gateway health checks, DNS propagation, and Ollama startup.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add default 120s timeout to run_cmd() to prevent indefinite hangs when openshell CLI or other external commands become unresponsive. Timeout is configurable per-call and logs a clear error on TimeoutExpired. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace 7 top-level imports with dynamic import() inside each command's .action() callback. Running 'nemoclaw status' no longer loads migrate.ts (which pulls tar, JSON5, heavy fs ops). Cuts plugin startup overhead. TypeScript compiles clean (tsc --noEmit verified manually). Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Reorder COPY: package.json first → npm install → then dist/blueprint so source changes no longer bust the 60MB node_modules cache layer - Replace --break-system-packages with proper Python venv for PyYAML - Expand .dockerignore to exclude docs/, .github/, test/, IDE files, TS source (~50MB less context sent to Docker daemon per build) Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
perf: caching, async delays, subprocess timeouts, lazy imports, Docker optimization
Remove files that require NVIDIA secrets or hardware: - nightly-e2e.yaml (requires NVIDIA_API_KEY) - ci.yml (duplicate of pr.yaml) - publish-npm.yml (requires NVIDIA npm scope) - check-spdx-headers.sh (NVIDIA copyright policy) - setup-spark.sh (DGX Spark hardware only) - install-openshell.sh (NVIDIA product installer) Also remove SPDX header checks from lint-staged and update repository URL to fork. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- CODEOWNERS: replace @NVIDIA/ teams with @quanticsoul4772 - dependabot.yml: update reviewers/assignees - pr-limit.yaml: update maintainer exemption list - CODE_OF_CONDUCT.md: replace NVIDIA contact with fork maintainer - SECURITY.md: rewrite with fork-appropriate disclosure process Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add community fork header, upstream sync badge, and Fork Changes section documenting performance, DX, and CI/CD improvements. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…1774220088615 Add Claude Code GitHub Workflow
chore: clean up fork — remove NVIDIA-only files, update governance
…cement Add comprehensive test coverage for both TypeScript and Python codebases, bringing file-level coverage from 26% to 100% (TypeScript) and function-level from 75% to 100% (Python). Foundation: - Shared test helpers: mock-fs (cross-platform in-memory fs), mock-child-process (configurable spawn/exec mocks), factories (makeLogger, makeApi, makeState, etc.) - Coverage thresholds enforced: 80% lines, 70% branches, 85% functions (TS); 80% line coverage (Python via pytest-cov) - New CI workflow (ci-test.yml) runs both TS and Python tests with coverage on PRs New test files (98 tests across 14 TS files, 12 tests across 2 Python files): - blueprint: verify, resolve, exec, fetch - commands: connect, status, eject, logs, launch, migrate, onboard - onboard: validate, prompt - cli command registration - Python: action_apply, action_rollback, main, restore_into_sandbox Fixes 33 pre-existing test failures on Windows caused by path separator mismatches in fs mocks and assertions. Applied norm() path normalization in fs mock stores and POSIX path mock for migration-state tests. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The cli.ts module imports PluginCliContext from index.ts but the type was never exported, causing tsc --noEmit to fail and blocking the pre-push hook. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pyright strict mode reports many false positives on pytest test files (unknown fixture types, mock parameter types, etc.). Exclude tests/ from pyright analysis since test code is validated by pytest, not the type checker. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 WalkthroughWalkthroughComprehensive repository restructuring introducing development containerization, extensive automation documentation, observability instrumentation (logging/tracing/metrics/error tracking), expanded test coverage, policy simplifications, and GitHub workflow automation for CI/CD and release management. Changes
Sequence Diagram(s)sequenceDiagram
actor User
User->>CLI: nemoclaw onboard
CLI->>PrefLight: Validate environment
PrefLight-->>CLI: Checks passed
CLI->>OnboardWizard: Interactive feature selection
OnboardWizard-->>CLI: Profile (vLLM/Ollama/Cloud)
CLI->>FeatureFlags: Auto-select if enabled
FeatureFlags-->>CLI: Provider choice
CLI->>Blueprint: Execute sandbox creation
Blueprint->>Gateway: Deploy via OpenShell
Gateway->>Sandbox: Create & configure container
Sandbox-->>Gateway: Ready
Gateway-->>Blueprint: Connection established
Blueprint-->>CLI: Sandbox details
CLI-->>User: Onboarding complete
sequenceDiagram
participant App as Application
participant Router as Inference Router
participant Auth as Auth Manager
participant Cache as Response Cache
participant Provider as Inference Provider
participant Observability as Observability Stack
App->>Router: Inference request
Router->>Cache: Check for cached response
alt Cache hit
Cache-->>App: Return cached response
else Cache miss
Router->>Auth: Get credentials
Auth-->>Router: Token/API key
Router->>Provider: Send request with auth
Provider-->>Router: Inference response
Router->>Cache: Store response
Router->>Observability: Record metrics + logs + trace
Observability-->>Router: Ack
Router-->>App: Return response
end
Estimated code review effort🎯 4 (Complex) | ⏱️ ~50 minutes
✨ Finishing Touches🧪 Generate unit tests (beta)
|
There was a problem hiding this comment.
Actionable comments posted: 18
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
bin/lib/nim.js (1)
134-152:⚠️ Potential issue | 🔴 CriticalReintroduce quoting or argv-based execution for these Docker commands.
This file already relies on shell features (
|, redirection,|| true), sorun()/runCapture()is definitely executing through a shell. InterpolatingsandboxName/name/image/portdirectly into these command strings reopens command injection and also breaks on whitespace.Also applies to: 182-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/nim.js` around lines 134 - 152, The Docker command invocations in startNimContainer (and similar calls around 182-190) interpolate variables directly into shell strings, creating command-injection and whitespace bugs; change these to argv-based calls to run()/runCapture() (e.g., run(['docker','pull', image]) and run(['docker','run','-d','--gpus','all','-p', `${port}:8000`, '--name', name, '--shm-size', '16g', image]) so arguments are passed safely without shell interpolation, and replace the shell-only `docker rm -f ${name} 2>/dev/null || true` with an argv call like run(['docker','rm','-f', name], { ignoreError: true }) to preserve the intended behavior without shell redirection/|| usage.docs/reference/network-policies.md (1)
21-21:⚠️ Potential issue | 🟡 MinorH1 does not match
title.pagefrontmatter.The H1 heading is "Network Policies" but
title.pagein frontmatter is "NemoClaw Network Policies — Baseline Rules and Operator Approval". Per page structure guidelines, H1 heading should match thetitle.pagefrontmatter value.Proposed fix (update H1 to match frontmatter)
-# Network Policies +# NemoClaw Network Policies — Baseline Rules and Operator ApprovalAlternatively, update the frontmatter
title.pageto match the simpler H1.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/reference/network-policies.md` at line 21, The H1 "Network Policies" doesn't match the frontmatter key title.page ("NemoClaw Network Policies — Baseline Rules and Operator Approval"); update the H1 at the top of the document to exactly match the title.page frontmatter string, or alternatively change the frontmatter title.page value to "Network Policies" so both values are identical; locate the H1 (the single leading "# Network Policies") and the frontmatter key title.page to ensure they are synchronized.bin/lib/policies.js (2)
43-50:⚠️ Potential issue | 🟠 MajorPotential path traversal in
loadPreset.The AI summary notes that "path traversal guard" was removed. The
nameparameter is used directly inpath.joinwithout validation. Ifnamecontains../, an attacker could read files outsidePRESETS_DIR.Example:
loadPreset("../../../etc/passwd")would attempt to read/etc/passwd(depending onPRESETS_DIRdepth).🔒 Proposed fix to validate preset name
function loadPreset(name) { + // Prevent path traversal + if (name.includes('/') || name.includes('\\') || name.includes('..')) { + console.error(` Invalid preset name: ${name}`); + return null; + } const file = path.join(PRESETS_DIR, `${name}.yaml`); if (!fs.existsSync(file)) {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/policies.js` around lines 43 - 50, The loadPreset function currently uses the incoming name directly with path.join(PRESETS_DIR, `${name}.yaml`), allowing path traversal; validate and sanitize name before reading: reject or normalize any name containing path separators or ".." (e.g. enforce a whitelist regex like /^[A-Za-z0-9_-]+$/), or compute const resolved = path.resolve(PRESETS_DIR, `${name}.yaml`) and ensure resolved.startsWith(path.resolve(PRESETS_DIR) + path.sep) before calling fs.existsSync/fs.readFileSync; return null or throw on invalid names to prevent reading files outside PRESETS_DIR.
143-146:⚠️ Potential issue | 🔴 CriticalAssignment to
constvariable will cause runtime error.Line 144 attempts to reassign
currentPolicy, but it was declared withconston line 102. This will throw aTypeError: Assignment to constant variableat runtime.🐛 Proposed fix
- const currentPolicy = parseCurrentPolicy(rawPolicy); + let currentPolicy = parseCurrentPolicy(rawPolicy);Apply this change at line 102.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/policies.js` around lines 143 - 146, The code reassigns currentPolicy (used later to build merged) but currentPolicy was declared as a const, causing a runtime TypeError; change the declaration of currentPolicy to let (or alternatively avoid reassignment by creating a new variable, e.g. updatedPolicy) so the assignment in the block that prepends "version: 1\n" succeeds; ensure merged still concatenates the correct variable (currentPolicy or your new updatedPolicy) with "\n\nnetwork_policies:\n" + presetEntries.
🟡 Minor comments (31)
.github/ISSUE_TEMPLATE/security.yml-10-11 (1)
10-11:⚠️ Potential issue | 🟡 MinorRemove emoji from technical prose.
The heading contains an emoji (
⚠️) which violates the coding guidelines for markdown content.Proposed fix
value: | - ## ⚠️ IMPORTANT - Read Before Proceeding + ## IMPORTANT - Read Before ProceedingAs per coding guidelines: "No emoji in technical prose."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/ISSUE_TEMPLATE/security.yml around lines 10 - 11, The YAML template contains a heading string value "## ⚠️ IMPORTANT - Read Before Proceeding" that includes an emoji; update that value to remove the emoji so it reads something like "## IMPORTANT - Read Before Proceeding" (edit the `value` entry in .github/ISSUE_TEMPLATE/security.yml), ensuring no other emoji remain in the technical prose of that same `value`.bin/lib/feature-flags.js-128-156 (1)
128-156:⚠️ Potential issue | 🟡 MinorMake the status helpers report effective flag state.
If only
NEMOCLAW_EXPERIMENTAL=1is set,isLocalInferenceEnabled()andisAutoSelectEnabled()both returntrue, butgetAllFlags()/printStatus()still show those flags as disabled because they only callisEnabled(flag.name). That makes the debug output misleading when the umbrella flag is doing the work.One way to align reporting with behavior
+function getEffectiveEnabled(flagName) { + if (flagName === "localInference") return isLocalInferenceEnabled(); + if (flagName === "autoSelectProviders") return isAutoSelectEnabled(); + return isEnabled(flagName); +} + function getAllFlags() { const result = {}; for (const [key, flag] of Object.entries(FLAGS)) { result[flag.name] = { - enabled: isEnabled(flag.name), + enabled: getEffectiveEnabled(flag.name), env: flag.env, description: flag.description, status: flag.status, since: flag.since,Also applies to: 163-170
.factory/skills/build-project/SKILL.md-348-360 (1)
348-360:⚠️ Potential issue | 🟡 MinorAvoid
killall nodein the workflow example.That kills every Node process on the machine, not just the watcher started in step 1. Capture
$!and kill that PID instead, or run the watcher in the foreground.Safer example
-# 1. Start watch mode -cd nemoclaw && npm run dev & +# 1. Start watch mode +cd nemoclaw && npm run dev & +watch_pid=$! @@ -# 4. Stop watch mode -killall node +# 4. Stop watch mode +kill "$watch_pid"🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.factory/skills/build-project/SKILL.md around lines 348 - 360, The workflow example uses "killall node" which indiscriminately kills all Node processes; change the steps to either run the watcher in the foreground (so you can stop it with Ctrl+C) or, if running in the background as in the "npm run dev &" step, capture the background PID with $! and use kill on that PID instead of killall (update the example around the "npm run dev &" and the final "killall node" line to show $! handling or a foreground run)..github/ISSUE_TEMPLATE/documentation.yml-5-6 (1)
5-6:⚠️ Potential issue | 🟡 MinorReplace NVIDIA-specific defaults with fork-local values.
Lines 5-6 default to
NVIDIA/docs-teamfor assignees, and line 35's placeholder example points togithub.com/NVIDIA/NemoClaw. In a fork, both will misroute documentation issues or send reporters to the wrong repository.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/ISSUE_TEMPLATE/documentation.yml around lines 5 - 6, Replace the NVIDIA-specific defaults in the documentation issue template: change the assignees entry (key "assignees" currently set to "NVIDIA/docs-team") to a fork-appropriate value (e.g., an empty list, the fork's team, or maintainers) and update the placeholder repository URL example (the example pointing to "github.com/NVIDIA/NemoClaw") to reference the fork's own repo or a neutral placeholder; modify the "assignees" field and the placeholder example in .github/ISSUE_TEMPLATE/documentation.yml so they no longer route issues or reporters to NVIDIA..pre-commit-config.yaml-67-72 (1)
67-72:⚠️ Potential issue | 🟡 MinorRemove the
filesconstraint to catch TypeScript config-only changes.The
files: ^nemoclaw/src/.*\.ts$pattern gates the hook execution, so changes to config files liketsconfig.jsonorpackage.jsonwithout touching source files won't trigger the check. Sincepass_filenames: falsemeans the command runs independently anyway, the regex only prevents execution when it shouldn't be skipped.Simple fix
- id: tsc-check name: tsc --noEmit (TypeScript type check) entry: bash -c 'cd nemoclaw && npm run check --silent || exit 1' language: system - files: ^nemoclaw/src/.*\.ts$ pass_filenames: false🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.pre-commit-config.yaml around lines 67 - 72, The pre-commit hook entry with id "tsc-check" currently includes a files: ^nemoclaw/src/.*\.ts$ constraint which prevents the hook from running on config-only changes; remove the files: line from the tsc-check block so the hook always runs (it already uses pass_filenames: false and runs an independent command), ensuring TypeScript config/package changes trigger the tsc --noEmit check.docs/code-quality.md-358-358 (1)
358-358:⚠️ Potential issue | 🟡 MinorAvoid duplicate H2 headings.
Line [358] repeats the heading used earlier (“For Autonomous Agents”), which hurts navigation and lint compliance.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/code-quality.md` at line 358, The file contains a duplicate H2 heading "For Autonomous Agents" at the second occurrence; remove or rename the repeated H2 so each H2 is unique (e.g., merge content under the original "For Autonomous Agents" or change the second heading text to a distinct title), ensuring the duplicate heading string "For Autonomous Agents" no longer appears twice to satisfy navigation and lint rules.docs/code-quality.md-269-272 (1)
269-272:⚠️ Potential issue | 🟡 MinorAdd explicit language tags to fenced code blocks.
These fenced blocks are missing language identifiers; add appropriate tags (for example
textfor output samples).Also applies to: 275-278, 416-420
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/code-quality.md` around lines 269 - 272, The fenced code blocks showing the unused dependency report (the block containing "Unused dependencies (1)" and the line "yaml package.json:29:6") lack language tags; update those triple-backtick fences to include appropriate language identifiers (e.g., ```text for plain output, ```json for JSON snippets, or ```bash for command output). Apply the same change to the other missing-tag blocks noted in the comment (the blocks around the occurrences described as 275-278 and 416-420) so every fenced code block in docs/code-quality.md has an explicit language tag..c8rc.json-3-5 (1)
3-5:⚠️ Potential issue | 🟡 MinorRemove
testfrom coverage source; consider whethernemoclaw/srcshould be included instead.Including
testinsrcarray while excluding test files creates contradictory configuration. Test directories should not be instrumented for coverage. Additionally, the exclude pattern**/*.test.jsonly matches JavaScript test files—TypeScript test files likenemoclaw/src/cli.test.tswon't be caught by this pattern.🔧 Suggested fix
- "src": ["bin", "test"], + "src": ["bin", "nemoclaw/src"],Or remove
testand include only actual source directories being tested.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.c8rc.json around lines 3 - 5, Remove "test" from the "src" array in the c8 configuration and instead list only actual source directories (e.g., include "nemoclaw/src" if that is the project source); also broaden the "exclude" patterns to cover TypeScript test filenames (add patterns like "**/*.test.ts" and "**/*.spec.ts") so test files are not instrumented. Update the "src" array entry and the "exclude" list accordingly to eliminate the contradiction between including test directories and excluding only .test.js files..github/workflows/test-python.yaml-38-51 (1)
38-51:⚠️ Potential issue | 🟡 MinorMisplaced npm audit job in Python test workflow.
The
auditjob runsnpm auditfor Node.js dependencies, but this workflow is namedtest-pythonand only triggers onnemoclaw-blueprint/**changes. This job will never provide meaningful results since it:
- Doesn't relate to Python testing
- Triggers only when Python files change, not when npm dependencies change
Consider moving this job to a dedicated workflow (e.g.,
test-js.yaml) or a general CI workflow that triggers onpackage-lock.jsonchanges.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-python.yaml around lines 38 - 51, The 'audit' job in the 'test-python' workflow is misplaced: it runs 'npm audit --audit-level=high || true' under the 'audit' job but the workflow is named 'test-python' and only triggers on 'nemoclaw-blueprint/**' changes. Move the 'audit' job out of this Python workflow into a JavaScript-specific workflow (e.g., create 'test-js.yaml' with the same 'audit' job), or change its triggers to run on Node dependency changes (e.g., 'package-lock.json' or 'package.json' pushes) instead of the current Python-only trigger; reference the job name 'audit' and the command 'npm audit --audit-level=high || true' when relocating or adjusting triggers..github/workflows/test-python.yaml-31-32 (1)
31-32:⚠️ Potential issue | 🟡 MinorAdopt project's declared tool chain for dependency installation.
The workflow should use
uv syncto install dependencies, consistent with the project'spyproject.toml(which declarestool.uv.managed = true), CONTRIBUTING.md, and postCreate.sh. While the current manual installation ofpytestandpyyamlis functional,uv syncautomatically manages all project and dev dependencies, making the workflow more maintainable.🔧 Suggested fix using uv
- name: Install dependencies + working-directory: nemoclaw-blueprint - run: pip install pytest pyyaml + run: | + pip install uv + uv sync🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/test-python.yaml around lines 31 - 32, Replace the hard-coded pip install step in the "Install dependencies" workflow step with the project's managed dependency command by running "uv sync" (so the step that currently runs "pip install pytest pyyaml" should instead run "uv sync") to ensure the workflow uses the project's declared toolchain (tool.uv.managed = true) and installs all project and dev dependencies consistently; update the "Install dependencies" step in the workflow to execute "uv sync" accordingly.docs/README.md-109-121 (1)
109-121:⚠️ Potential issue | 🟡 MinorAdd language specifier to fenced code block.
The directory structure code block on line 109 lacks a language specifier. Use
textorplaintextfor non-code content.Proposed fix
-``` +```text docs/ ├── conf.py # Sphinx configuration🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/README.md` around lines 109 - 121, The fenced code block showing the docs directory tree in README.md is missing a language specifier; update that block (the triple-backtick block containing "docs/ ├── conf.py ..." in README.md) to include a plain text language like "text" or "plaintext" (e.g., change ``` to ```text) so the renderer treats it as non-code/plain text and preserves formatting..agents/skills/update-docs-from-commits/SKILL.md-118-131 (1)
118-131:⚠️ Potential issue | 🟡 MinorAdd language specifier to fenced code blocks.
Several code blocks in this file lack language specifiers (flagged by markdownlint). Use
textormarkdownfor structured output examples:
- Line 118-131: Example output block
Proposed fix for lines 118-131
-``` +```text ## Doc Updates from Commits ### Updated pagesApply similar fixes to other unlabeled code blocks in the file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.agents/skills/update-docs-from-commits/SKILL.md around lines 118 - 131, The fenced code block under the "## Doc Updates from Commits" section (the example output block showing Updated pages / New pages needed / Commits with no doc impact) lacks a language specifier; update that block (and any other unlabeled fenced blocks in this file) to use a language tag such as text or markdown (e.g., change ``` to ```text) so markdownlint warnings are resolved, ensuring the block around the "Updated pages" and "Commits with no doc impact" examples is labeled consistently.docs/troubleshooting/streaming-errors.md-1-6 (1)
1-6:⚠️ Potential issue | 🟡 MinorMissing SPDX header and frontmatter.
Per the docs style guide, new pages require:
- SPDX license header after frontmatter
- Frontmatter with title, description, keywords, topics, tags, content type, difficulty, audience, and status fields
- H1 heading matching
title.pagefrontmatter value🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/troubleshooting/streaming-errors.md` around lines 1 - 6, The markdown page docs/troubleshooting/streaming-errors.md is missing frontmatter and an SPDX license header; add YAML frontmatter at the top containing title, description, keywords, topics, tags, content type, difficulty, audience, and status (ensure title.page matches the H1), then add the required SPDX license header immediately after the frontmatter, and ensure the existing H1 ("Troubleshooting Streaming Inference Errors") matches the title field; update any metadata values as appropriate for the page.docs/feature-flags.md-1-4 (1)
1-4:⚠️ Potential issue | 🟡 MinorMissing SPDX header and frontmatter.
Per the docs style guide, new pages require:
- SPDX license header after frontmatter
- Frontmatter with title, description, keywords, topics, tags, content type, difficulty, audience, and status fields
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/feature-flags.md` around lines 1 - 4, The file is missing the required YAML frontmatter and SPDX header: add a YAML frontmatter block at the very top containing title: "Feature Flags", description, keywords, topics, tags, content_type, difficulty, audience, and status fields (populate sensible values), then immediately after the closing frontmatter add the SPDX license header (e.g., SPDX-License-Identifier: Apache-2.0 or the repo standard). Ensure the existing "# Feature Flags" heading remains in the body below the SPDX header and that the frontmatter keys use the exact names (title, description, keywords, topics, tags, content_type, difficulty, audience, status) so the docs generator can pick them up.docs/feature-flags.md-22-29 (1)
22-29:⚠️ Potential issue | 🟡 MinorUse
consolelanguage tag with$prompt for CLI examples.The docs style guide requires CLI code blocks to use the
consolelanguage tag with$prompt prefix, notbash.Proposed fix
-```bash +```console # Enable experimental features -export NEMOCLAW_EXPERIMENTAL=1 -nemoclaw onboard +$ export NEMOCLAW_EXPERIMENTAL=1 +$ nemoclaw onboard # Or set for a single command -NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard +$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard</details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/feature-flags.mdaround lines 22 - 29, Update the CLI example block to
use the console language tag and include the $ prompt prefix for each command;
replace the triple-backtick fence language from bash to console and add a
leading "$ " before commands like "export NEMOCLAW_EXPERIMENTAL=1" and "nemoclaw
onboard" (and the single-command form "NEMOCLAW_EXPERIMENTAL=1 nemoclaw
onboard") so the snippet follows the docs style guide.</details> </blockquote></details> <details> <summary>docs/troubleshooting/streaming-errors.md-58-62 (1)</summary><blockquote> `58-62`: _⚠️ Potential issue_ | _🟡 Minor_ **Use `console` language tag with `$` prompt for CLI examples.** The docs style guide requires CLI code blocks to use the `console` language tag with `$` prompt prefix, not `bash` or `shell`. <details> <summary>Proposed fix</summary> ```diff -```bash +```console # Update your policy file # Then apply it to your sandbox -openshell policy set <sandbox-name> --policy path/to/updated-policy.yaml --wait +$ openshell policy set <sandbox-name> --policy path/to/updated-policy.yaml --wait</details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/troubleshooting/streaming-errors.mdaround lines 58 - 62, Change the
CLI example code fence from "bash" to "console" and prepend a "$ " prompt to the
command string "openshell policy set --policy
path/to/updated-policy.yaml --wait" so the block uses the console language tag
and shows the shell prompt (i.e., replace the fence with ```console and update
the line to "$ openshell policy set --policy
path/to/updated-policy.yaml --wait").</details> </blockquote></details> <details> <summary>.factory/skills/check-code-quality/SKILL.md-228-236 (1)</summary><blockquote> `228-236`: _⚠️ Potential issue_ | _🟡 Minor_ **Add language identifier to fenced code block.** Line 228 has a code block without a language specifier. Since this shows a TODO comment example, use an appropriate language tag. <details> <summary>Proposed fix</summary> ```diff **Best practice**: -``` +```typescript // ❌ BAD: Vague TODO // TODO: fix this // ✅ GOOD: Specific TODO with context // TODO(`#456`): Add timeout to prevent infinite loops (target: v0.2.0) ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In @.factory/skills/check-code-quality/SKILL.md around lines 228 - 236, The
fenced code block that begins with the comment line "// ❌ BAD: Vague TODO"
(showing the TODO examples) is missing a language identifier; update the opening
fence fromtotypescript so the block is marked as TypeScript (i.e.,
apply the proposed fix: change the code fence for the TODO example todocs/autonomous-agents.md-1-4 (1)
1-4:⚠️ Potential issue | 🟡 MinorMissing required page structure elements.
This page lacks:
- SPDX license header
- Frontmatter (title, description, keywords, topics, tags, content type, difficulty, audience, status)
- "Next Steps" section linking to related pages
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/autonomous-agents.md` around lines 1 - 4, The file "Autonomous Agent Configuration" is missing required repository documentation scaffolding; add an SPDX license header at the top (e.g., SPDX-License-Identifier: Apache-2.0), insert YAML frontmatter including title, description, keywords, topics, tags, content_type, difficulty, audience, and status, and append a "Next Steps" section that links to related docs (e.g., onboarding, agent-configuration, troubleshooting) so the page conforms to repo standards and navigation. Ensure the frontmatter keys exactly match the project's naming convention and the "Next Steps" section uses the same link style as other docs.docs/architecture/README.md-1-4 (1)
1-4:⚠️ Potential issue | 🟡 MinorMissing required page structure elements.
This page lacks:
- SPDX license header
- Frontmatter with required fields
- "Next Steps" section
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/architecture/README.md` around lines 1 - 4, The README titled "NemoClaw Architecture Documentation" is missing the SPDX license header, required frontmatter, and a "Next Steps" section; add an SPDX short identifier comment at the top (e.g., SPDX-License-Identifier: Apache-2.0), insert YAML frontmatter beneath the header including required fields (title: "NemoClaw Architecture Documentation", description, sidebar_label, and any required tags or sidebar_position), and append a "Next Steps" H2 section at the bottom with guidance/links for contributors or follow-up documentation; update the existing "# NemoClaw Architecture Documentation" header and the new "Next Steps" heading to match project conventions.docs/product-analytics.md-1-4 (1)
1-4:⚠️ Potential issue | 🟡 MinorMissing required page structure and filler introduction.
This page lacks:
- SPDX license header
- Frontmatter with required fields
- "Next Steps" section
Additionally, line 3 uses a filler introduction pattern: "This document explains how to..." LLM pattern detected. Per guidelines: "Filler introductions ('In this section, we will explore...'). Start with the content."
Consider revising to something like: "Instrument product analytics in NemoClaw to measure feature usage and understand user behavior."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/product-analytics.md` around lines 1 - 4, Add the required SPDX license header at the top of docs/product-analytics.md, insert the project frontmatter block with the required fields (title: "Product Analytics for NemoClaw", description, date, and sidebar/category entries) immediately after the SPDX line, remove the filler intro sentence ("This document explains how to instrument product analytics in NemoClaw to measure feature usage and understand user behavior.") and replace it with the concise lead sentence "Instrument product analytics in NemoClaw to measure feature usage and understand user behavior.", and append a "Next Steps" section at the end outlining actionable follow-ups (e.g., instrumentation checklist, event naming conventions, and links to implementation guides).docs/deployment.md-1-3 (1)
1-3:⚠️ Potential issue | 🟡 MinorMissing required page structure elements and title formatting.
This page is missing:
- SPDX license header after frontmatter
- Frontmatter (title, description, keywords, topics, tags, content type, difficulty, audience, status)
- One- or two-sentence introduction
- "Next Steps" section at the bottom
Additionally, the title contains a colon ("Deployment and Release Automation"), which violates formatting rules. Consider "Deployment and Release Automation" → "Release and Deployment Automation" or simply "Automated Releases".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/deployment.md` around lines 1 - 3, Update the docs page to include required page structure: add YAML frontmatter with title, description, keywords, topics, tags, content type, difficulty, audience, and status at the top, then add an SPDX license header immediately after the frontmatter; replace the existing H1 "Deployment and Release Automation" with one of the approved titles (e.g., "Release and Deployment Automation" or "Automated Releases"); add a one- to two-sentence introduction paragraph under the title; and append a "Next Steps" section at the bottom with links or pointers to related pages. Ensure the frontmatter fields are populated with meaningful values and the SPDX header follows the frontmatter.docs/runbooks.md-1-3 (1)
1-3:⚠️ Potential issue | 🟡 MinorMissing required page structure elements.
This page lacks:
- SPDX license header
- Frontmatter with required fields
- "Next Steps" section
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/runbooks.md` around lines 1 - 3, The document titled "NemoClaw Runbooks" is missing required structural elements: add an SPDX license header (e.g., "// SPDX-License-Identifier: Apache-2.0" or the org-standard SPDX line) at the top, insert YAML frontmatter directly below the header with required fields such as title: "NemoClaw Runbooks", description, category, tags (array), and last_reviewed (ISO date), and append a "Next Steps" section near the end outlining follow-up actions and owners; update the frontmatter keys to match the site schema used by the site generator and ensure the header, frontmatter, and "Next Steps" section are formatted consistently with other markdown runbooks.docs/architecture/README.md-226-240 (1)
226-240:⚠️ Potential issue | 🟡 MinorCode blocks missing language specification.
Static analysis correctly identifies that the code blocks at lines 226, 232, and 238 lack language tags. These show conceptual data flows and should use
textor another appropriate identifier.🛠️ Suggested fix
-``` +```text User Input → CLI Validation → Preflight Checks → Blueprint Execution → Gateway Deployment → Sandbox ReadyApply similar changes to lines 232 and 238. </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/architecture/README.mdaround lines 226 - 240, The three
language-agnostic fenced code blocks containing the flow diagrams (the blocks
starting with "User Input → CLI Validation → Preflight Checks → Blueprint
Execution → Gateway Deployment → Sandbox Ready", "IDE Request → Plugin → Gateway
→ Provider Selection → Inference API → Response → Cache → IDE", and "Blueprint
Plan → OpenShell Apply → State Snapshot → Persistent Storage") are missing
language specifiers; update each triple-backtick fence to include a language tag
such as text (e.g., ```text) so the blocks are properly annotated and rendered
consistently.</details> </blockquote></details> <details> <summary>.devcontainer/devcontainer.json-66-68 (1)</summary><blockquote> `66-68`: _⚠️ Potential issue_ | _🟡 Minor_ **Remove all deprecated Python settings.** The `python.linting.enabled`, `python.linting.ruffEnabled`, and `python.formatting.provider` settings are deprecated in recent versions of the VS Code Python extension. Linting and formatting are now handled by the Ruff extension (which is already configured at line 70 as the default formatter). <details> <summary>🛠️ Suggested fix</summary> ```diff // Python "python.defaultInterpreterPath": "/usr/local/bin/python", - "python.linting.enabled": true, - "python.linting.ruffEnabled": true, - "python.formatting.provider": "none", "[python]": { "editor.defaultFormatter": "charliermarsh.ruff", ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In @.devcontainer/devcontainer.json around lines 66 - 68, Remove the deprecated VS Code Python settings by deleting "python.linting.enabled", "python.linting.ruffEnabled", and "python.formatting.provider" from the devcontainer.json; instead rely on the Ruff extension already configured as the default formatter (see the existing Ruff formatter setting) and any workspace-level linter configuration so linting/formatting is driven by the extension rather than these deprecated keys. ``` </details> </blockquote></details> <details> <summary>docs/observability.md-1-16 (1)</summary><blockquote> `1-16`: _⚠️ Potential issue_ | _🟡 Minor_ **Missing SPDX header and frontmatter.** This comprehensive documentation page needs the required SPDX license header and frontmatter metadata. As per coding guidelines: "SPDX license header is present after frontmatter" and frontmatter should include `title`, `description`, `keywords`, `topics`, `tags`, `content type`, `difficulty`, `audience`, and `status` fields. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@docs/observability.md` around lines 1 - 16, Add the required frontmatter block at the top of the "Observability and Logging" document containing title, description, keywords, topics, tags, content type, difficulty, audience, and status fields, and then insert the SPDX license header immediately after the frontmatter (per guideline "SPDX license header is present after frontmatter"); ensure the frontmatter keys match the exact names specified and the SPDX header includes the appropriate license identifier and copyright year/owner. ``` </details> </blockquote></details> <details> <summary>docs/error-to-insight-pipeline.md-1-4 (1)</summary><blockquote> `1-4`: _⚠️ Potential issue_ | _🟡 Minor_ **Missing SPDX header and frontmatter.** This documentation page lacks the required SPDX license header and frontmatter metadata fields. As per coding guidelines: "SPDX license header is present after frontmatter" and frontmatter must include `title`, `description`, `keywords`, `topics`, `tags`, `content type`, `difficulty`, `audience`, and `status` fields. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@docs/error-to-insight-pipeline.md` around lines 1 - 4, Add the required YAML frontmatter and SPDX license header to this document: insert a YAML frontmatter block at the top containing title: "Error to Insight Pipeline", description, keywords, topics, tags, content type, difficulty, audience, and status fields (populate sensible values), then place the SPDX license header immediately after the closing frontmatter delimiter as required by the guideline; ensure the existing H1 "Error to Insight Pipeline" remains in the body after the SPDX header. ``` </details> </blockquote></details> <details> <summary>docs/releases.md-1-4 (1)</summary><blockquote> `1-4`: _⚠️ Potential issue_ | _🟡 Minor_ **Missing SPDX header and frontmatter.** This new documentation page is missing the required SPDX license header and frontmatter fields (`title`, `description`, `keywords`, `topics`, `tags`, `content type`, `difficulty`, `audience`, `status`). As per coding guidelines: "SPDX license header is present after frontmatter" and "Frontmatter includes title, description, keywords, topics, tags, content type, difficulty, audience, and status fields." <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@docs/releases.md` around lines 1 - 4, The docs/releases.md page is missing the required frontmatter and SPDX license header; add YAML frontmatter at the top containing the fields title, description, keywords, topics, tags, content type, difficulty, audience, and status, and then add the SPDX license header immediately after the frontmatter (as required by guidelines). Locate the top of docs/releases.md (the "# Release Notes and Changelog Automation" heading) and replace/precede it with the YAML block and SPDX header so the page conforms to the repository's documentation standards. ``` </details> </blockquote></details> <details> <summary>docs/error-to-insight-pipeline.md-339-341 (1)</summary><blockquote> `339-341`: _⚠️ Potential issue_ | _🟡 Minor_ **Fenced code block missing language specification.** Add a language identifier to this flow diagram code block. <details> <summary>Suggested fix</summary> ```diff -``` +```text Critical Error → Sentry → PagerDuty Page → GitHub Issue ``` ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/error-to-insight-pipeline.mdaround lines 339 - 341, The fenced code
block containing "Critical Error → Sentry → PagerDuty Page → GitHub Issue" is
missing a language identifier; update the opening backticks for that block (the``` to ```text) so the flow diagram has an explicit language specifier.docs/error-to-insight-pipeline.md-97-99 (1)
97-99:⚠️ Potential issue | 🟡 MinorFenced code block missing language specification.
This code block showing a GitHub issue title should have a language identifier.
so the snippet (theSuggested fix
-``` +```text [Sentry] ConnectionError: Failed to connect to inference API</details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/error-to-insight-pipeline.mdaround lines 97 - 99, The fenced code
block containing "[Sentry] ConnectionError: Failed to connect to inference API"
lacks a language identifier; update that backtick-fenced block to include a
language specifier (e.g., add "text" after the openingso it readstext)
to ensure proper rendering and accessibility in the docs file that contains the
error example.</details> </blockquote></details> <details> <summary>docs/error-to-insight-pipeline.md-10-12 (1)</summary><blockquote> `10-12`: _⚠️ Potential issue_ | _🟡 Minor_ **Fenced code block missing language specification.** Add a language identifier to this code block (e.g., `text` or `plaintext` for flow diagrams). As per coding guidelines: "Missing SPDX header, broken cross-references, or incorrect code block language: flag as issues." <details> <summary>Suggested fix</summary> ```diff -``` +```text Production Error → Sentry → GitHub Issue → Developer Action → Fix → Deploy ``` ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@docs/error-to-insight-pipeline.mdaround lines 10 - 12, The fenced code
block containing the flow "Production Error → Sentry → GitHub Issue → Developer
Action → Fix → Deploy" is missing a language specifier; update the fenced block
to include a plaintext language tag (e.g.,text orplaintext) so the code
block is explicitly typed and follows the docs guideline; locate the fenced
block by searching for the exact flow string and add the language identifier on
the opening ``` fence.</details> </blockquote></details> <details> <summary>AGENTS.md-1356-1386 (1)</summary><blockquote> `1356-1386`: _⚠️ Potential issue_ | _🟡 Minor_ **Malformed code block - missing backtick.** Line 1356 uses double backticks (` `` `) instead of triple backticks for the fenced code block, which will cause rendering issues. <details> <summary>🐛 Fix the code fence</summary> ```diff -``bash +```bash # 1. Open dashboard in browser # http://127.0.0.1:18789 ``` And the closing fence at line 1386: ```diff -`` +``` ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary>Verify each finding against the current code and only fix it if needed.
In
@AGENTS.mdaround lines 1356 - 1386, The fenced code block in AGENTS.md uses
an opening double backtick (``bash) instead of a proper triple backtick fence;
change the opening fence to ```bash and ensure the block is closed with a
matching triple backtick before the closing
commands that start with "# 1. Open dashboard in browser" and the JSON heredoc)
renders correctly.</details> </blockquote></details> </blockquote></details> --- <details> <summary>ℹ️ Review info</summary> <details> <summary>⚙️ Run configuration</summary> **Configuration used**: Path: .coderabbit.yaml **Review profile**: CHILL **Plan**: Pro **Run ID**: `87228af2-949c-4dad-a588-41b0d6eba295` </details> <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between c55a3099577b4c6d2001b0a8e4324114c89d322c and db67cd4c95ad4ae2badd2932bf6290431a522036. </details> <details> <summary>📒 Files selected for processing (159)</summary> * `.agents/skills/update-docs-from-commits/SKILL.md` * `.c8rc.json` * `.devcontainer/README.md` * `.devcontainer/devcontainer.json` * `.devcontainer/postCreate.sh` * `.dockerignore` * `.factory/settings.json` * `.factory/skills/build-project/SKILL.md` * `.factory/skills/check-code-quality/SKILL.md` * `.factory/skills/generate-release-notes/SKILL.md` * `.factory/skills/lint-and-format-code/SKILL.md` * `.factory/skills/run-full-test-suite/SKILL.md` * `.github/BRANCH_PROTECTION.md` * `.github/CODEOWNERS` * `.github/DEPLOYMENT.md` * `.github/ISSUE_TEMPLATE/documentation.yml` * `.github/ISSUE_TEMPLATE/security.yml` * `.github/branch-protection.json` * `.github/dependabot.yml` * `.github/workflows/ci-test.yml` * `.github/workflows/claude-code-review.yml` * `.github/workflows/claude.yml` * `.github/workflows/docker-smoke.yaml` * `.github/workflows/docs-validation.yml` * `.github/workflows/docs.yml` * `.github/workflows/markdown-link-check-config.json` * `.github/workflows/nightly-e2e.yaml` * `.github/workflows/pr-limit.yaml` * `.github/workflows/publish-docker.yml` * `.github/workflows/release.yml` * `.github/workflows/sync-upstream.yml` * `.github/workflows/test-python.yaml` * `.jscpd.json` * `.jscpd/html/index.html` * `.jscpd/html/js/prism.js` * `.jscpd/html/jscpd-report.json` * `.jscpd/html/styles/prism.css` * `.jscpd/html/styles/tailwind.css` * `.nvmrc` * `.openclaw/autonomous-config.json` * `.pre-commit-config.yaml` * `.python-version` * `.secrets.baseline` * `AGENTS.md` * `CHANGELOG.md` * `CODE_OF_CONDUCT.md` * `CONTRIBUTING.md` * `Dockerfile` * `Makefile` * `README.md` * `SECURITY.md` * `bin/lib/credentials.js` * `bin/lib/feature-flags.js` * `bin/lib/logger.js` * `bin/lib/metrics.js` * `bin/lib/nim.js` * `bin/lib/onboard.js` * `bin/lib/policies.js` * `bin/lib/registry.js` * `bin/lib/sentry.js` * `bin/lib/trace-context.js` * `docs/README.md` * `docs/architecture/README.md` * `docs/architecture/component-interactions.mermaid` * `docs/architecture/deployment-model.mermaid` * `docs/architecture/inference-routing.mermaid` * `docs/architecture/onboarding-flow.mermaid` * `docs/architecture/system-overview.mermaid` * `docs/autonomous-agents.md` * `docs/code-quality.md` * `docs/deployment.md` * `docs/error-to-insight-pipeline.md` * `docs/feature-flags.md` * `docs/observability.md` * `docs/product-analytics.md` * `docs/reference/network-policies.md` * `docs/releases.md` * `docs/runbooks.md` * `docs/testing.md` * `docs/troubleshooting/streaming-errors.md` * `nemoclaw-blueprint/.vulture` * `nemoclaw-blueprint/Makefile` * `nemoclaw-blueprint/orchestrator/runner.py` * `nemoclaw-blueprint/policies/openclaw-sandbox.yaml` * `nemoclaw-blueprint/policies/presets/discord.yaml` * `nemoclaw-blueprint/policies/presets/docker.yaml` * `nemoclaw-blueprint/policies/presets/github.yaml` * `nemoclaw-blueprint/policies/presets/huggingface.yaml` * `nemoclaw-blueprint/policies/presets/jira.yaml` * `nemoclaw-blueprint/policies/presets/npm.yaml` * `nemoclaw-blueprint/policies/presets/outlook.yaml` * `nemoclaw-blueprint/policies/presets/pypi.yaml` * `nemoclaw-blueprint/policies/presets/slack.yaml` * `nemoclaw-blueprint/policies/presets/telegram.yaml` * `nemoclaw-blueprint/pyproject.toml` * `nemoclaw-blueprint/tests/__init__.py` * `nemoclaw-blueprint/tests/test_runner.py` * `nemoclaw-blueprint/tests/test_runner_extended.py` * `nemoclaw-blueprint/tests/test_snapshot.py` * `nemoclaw-blueprint/tests/test_snapshot_extended.py` * `nemoclaw/eslint.config.mjs` * `nemoclaw/knip.json` * `nemoclaw/package.json` * `nemoclaw/src/__test-helpers__/factories.ts` * `nemoclaw/src/__test-helpers__/mock-child-process.ts` * `nemoclaw/src/__test-helpers__/mock-fs.ts` * `nemoclaw/src/blueprint/exec.test.ts` * `nemoclaw/src/blueprint/exec.ts` * `nemoclaw/src/blueprint/fetch.test.ts` * `nemoclaw/src/blueprint/fetch.ts` * `nemoclaw/src/blueprint/resolve.test.ts` * `nemoclaw/src/blueprint/resolve.ts` * `nemoclaw/src/blueprint/state.test.ts` * `nemoclaw/src/blueprint/verify.test.ts` * `nemoclaw/src/blueprint/verify.ts` * `nemoclaw/src/cli.test.ts` * `nemoclaw/src/cli.ts` * `nemoclaw/src/commands/connect.test.ts` * `nemoclaw/src/commands/connect.ts` * `nemoclaw/src/commands/eject.test.ts` * `nemoclaw/src/commands/eject.ts` * `nemoclaw/src/commands/launch.test.ts` * `nemoclaw/src/commands/launch.ts` * `nemoclaw/src/commands/logs.test.ts` * `nemoclaw/src/commands/logs.ts` * `nemoclaw/src/commands/migrate.test.ts` * `nemoclaw/src/commands/migrate.ts` * `nemoclaw/src/commands/migration-state.test.ts` * `nemoclaw/src/commands/onboard.test.ts` * `nemoclaw/src/commands/onboard.ts` * `nemoclaw/src/commands/status.test.ts` * `nemoclaw/src/commands/status.ts` * `nemoclaw/src/index.ts` * `nemoclaw/src/onboard/config.test.ts` * `nemoclaw/src/onboard/prompt.test.ts` * `nemoclaw/src/onboard/prompt.ts` * `nemoclaw/src/onboard/validate.test.ts` * `nemoclaw/src/onboard/validate.ts` * `nemoclaw/tsconfig.json` * `nemoclaw/typedoc.json` * `nemoclaw/vitest.config.ts` * `package.json` * `scripts/check-spdx-headers.sh` * `scripts/generate-changelog.js` * `scripts/install-openshell.sh` * `scripts/setup-spark.sh` * `scripts/telegram-bridge-external.js` * `scripts/tg-bridge-simple.js` * `scripts/write-openclaw-config.py` * `test/credentials-unit.test.js` * `test/integration/cli-workflow.test.js` * `test/integration/policy-workflow.test.js` * `test/integration/runner-blueprint.test.js` * `test/metrics.test.js` * `test/policies-unit.test.js` * `test/policies.test.js` * `test/resolve-openshell.test.js` * `test/runner-capture.test.js` * `test/sentry.test.js` </details> <details> <summary>💤 Files with no reviewable changes (1)</summary> * .github/workflows/nightly-e2e.yaml </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
| permissions: | ||
| contents: read | ||
| pull-requests: read | ||
| issues: read | ||
| id-token: write |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check anthropics/claude-code-action documentation for required permissions
curl -s https://raw.githubusercontent.com/anthropics/claude-code-action/main/README.md | head -200Repository: NVIDIA/NemoClaw
Length of output: 4852
🏁 Script executed:
#!/bin/bash
# Fetch the configuration documentation which should list permissions
curl -s https://raw.githubusercontent.com/anthropics/claude-code-action/main/docs/configuration.md | head -300Repository: NVIDIA/NemoClaw
Length of output: 9387
🏁 Script executed:
#!/bin/bash
# Also fetch security documentation for permission details
curl -s https://raw.githubusercontent.com/anthropics/claude-code-action/main/docs/security.md | head -300Repository: NVIDIA/NemoClaw
Length of output: 7925
Missing pull-requests: write permission for posting review comments.
The workflow only has pull-requests: read permission, but the Claude Code Action requires pull-requests: write to post comments and create/update pull requests. According to the action's documentation, Pull Requests (Read & Write) permissions are required for "creating/updating pull requests."
Proposed fix
permissions:
contents: read
- pull-requests: read
+ pull-requests: write
issues: read
id-token: write📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| permissions: | |
| contents: read | |
| pull-requests: read | |
| issues: read | |
| id-token: write | |
| permissions: | |
| contents: read | |
| pull-requests: write | |
| issues: read | |
| id-token: write |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/claude-code-review.yml around lines 22 - 26, Update the
workflow's permissions so the "pull-requests" key grants write access instead of
read: change the permissions block to set pull-requests: write (replacing
pull-requests: read) so the Claude Code Action can post review comments and
create/update PRs; keep the other keys (contents, issues, id-token) as-is.
| - name: Verify plugin files exist | ||
| run: | | ||
| docker run --rm nemoclaw-smoke-test -c " | ||
| test -f /opt/nemoclaw/dist/index.js && echo '✓ Plugin JS built' | ||
| test -f /opt/nemoclaw/openclaw.plugin.json && echo '✓ Plugin manifest present' | ||
| test -d /opt/nemoclaw/node_modules && echo '✓ Node modules installed' | ||
| test -f /opt/nemoclaw-blueprint/blueprint.yaml && echo '✓ Blueprint present' | ||
| test -f /sandbox/.openclaw/openclaw.json && echo '✓ OpenClaw config written' | ||
| " |
There was a problem hiding this comment.
Verification step may pass even if required files are missing.
The test -f commands return non-zero if files are missing, but the script continues executing subsequent lines. Since the last command is an echo that always succeeds, the step passes regardless of missing files.
🐛 Proposed fix to fail on missing files
- name: Verify plugin files exist
run: |
docker run --rm nemoclaw-smoke-test -c "
+ set -e
test -f /opt/nemoclaw/dist/index.js && echo '✓ Plugin JS built'
test -f /opt/nemoclaw/openclaw.plugin.json && echo '✓ Plugin manifest present'
test -d /opt/nemoclaw/node_modules && echo '✓ Node modules installed'
test -f /opt/nemoclaw-blueprint/blueprint.yaml && echo '✓ Blueprint present'
test -f /sandbox/.openclaw/openclaw.json && echo '✓ OpenClaw config written'
"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - name: Verify plugin files exist | |
| run: | | |
| docker run --rm nemoclaw-smoke-test -c " | |
| test -f /opt/nemoclaw/dist/index.js && echo '✓ Plugin JS built' | |
| test -f /opt/nemoclaw/openclaw.plugin.json && echo '✓ Plugin manifest present' | |
| test -d /opt/nemoclaw/node_modules && echo '✓ Node modules installed' | |
| test -f /opt/nemoclaw-blueprint/blueprint.yaml && echo '✓ Blueprint present' | |
| test -f /sandbox/.openclaw/openclaw.json && echo '✓ OpenClaw config written' | |
| " | |
| - name: Verify plugin files exist | |
| run: | | |
| docker run --rm nemoclaw-smoke-test -c " | |
| set -e | |
| test -f /opt/nemoclaw/dist/index.js && echo '✓ Plugin JS built' | |
| test -f /opt/nemoclaw/openclaw.plugin.json && echo '✓ Plugin manifest present' | |
| test -d /opt/nemoclaw/node_modules && echo '✓ Node modules installed' | |
| test -f /opt/nemoclaw-blueprint/blueprint.yaml && echo '✓ Blueprint present' | |
| test -f /sandbox/.openclaw/openclaw.json && echo '✓ OpenClaw config written' | |
| " |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/docker-smoke.yaml around lines 53 - 61, The verification
step "Verify plugin files exist" currently runs multiple test -f/-d checks
inside docker run -c but a trailing echo masks failures; prepend a fail-fast
shell option so any missing file causes the container command to exit non‑zero.
Edit the docker run command string (the one containing test -f
/opt/nemoclaw/dist/index.js, test -f /opt/nemoclaw/openclaw.plugin.json, test -d
/opt/nemoclaw/node_modules, test -f /opt/nemoclaw-blueprint/blueprint.yaml, test
-f /sandbox/.openclaw/openclaw.json) to start with set -euo pipefail (or at
minimum set -e) before the test lines so the step fails when any test fails.
Ensure the step name "Verify plugin files exist" remains the same.
| - name: Build and push Docker image | ||
| uses: docker/build-push-action@v5 | ||
| with: | ||
| context: . | ||
| platforms: linux/amd64,linux/arm64 | ||
| push: true | ||
| tags: ${{ steps.meta.outputs.tags }} | ||
| labels: ${{ steps.meta.outputs.labels }} | ||
| cache-from: type=gha | ||
| cache-to: type=gha,mode=max | ||
| build-args: | | ||
| VERSION=${{ steps.get_version.outputs.version }} | ||
|
|
||
| - name: Image digest | ||
| run: echo "Image pushed with digest ${{ steps.build.outputs.digest }}" |
There was a problem hiding this comment.
Missing step id causes broken digest output.
Line 94 references ${{ steps.build.outputs.digest }}, but the docker/build-push-action step has no id assigned. The digest will always be empty.
Suggested fix
- name: Build and push Docker image
uses: docker/build-push-action@v5
+ id: build
with:
context: .
platforms: linux/amd64,linux/arm64🧰 Tools
🪛 actionlint (1.7.11)
[error] 94-94: property "build" is not defined in object type {get_version: {conclusion: string; outcome: string; outputs: {string => string}}; meta: {conclusion: string; outcome: string; outputs: {annotations: string; bake-file: string; bake-file-annotations: string; bake-file-labels: string; bake-file-tags: string; json: string; labels: string; tag-names: string; tags: string; version: string}}}
(expression)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish-docker.yml around lines 80 - 94, The workflow
references steps.build.outputs.digest but the Docker build-push step using
docker/build-push-action@v5 has no id, so the digest is empty; add an id "build"
to the "Build and push Docker image" step (the step that uses
docker/build-push-action@v5) so the output can be referenced as
steps.build.outputs.digest, and keep the step name and outputs unchanged so the
subsequent "Image digest" run can read the digest.
| "threshold": 3, | ||
| "reporters": ["console", "html"], | ||
| "ignore": [ | ||
| "**/.git/**", | ||
| "**/node_modules/**", | ||
| "**/dist/**", | ||
| "**/build/**", | ||
| "**/*.min.js", | ||
| "**/package-lock.json", | ||
| "**/uv.lock", | ||
| "**/.venv/**", | ||
| "**/__pycache__/**", | ||
| "**/docs/_build/**", | ||
| "**/.factory/**" | ||
| ], | ||
| "format": ["typescript", "javascript", "python"], | ||
| "minLines": 5, | ||
| "minTokens": 50, | ||
| "maxLines": 500, | ||
| "maxSize": "100kb", | ||
| "output": ".jscpd", | ||
| "absolute": false, | ||
| "gitignore": true, | ||
| "blame": false, | ||
| "silent": false, | ||
| "verbose": false, | ||
| "skipComments": true, | ||
| "exitCode": 0 | ||
| } |
There was a problem hiding this comment.
exitCode: 0 nullifies duplicate-threshold enforcement.
With Line [29] set to 0, scans never fail even when duplication exceeds Line [2] threshold. Also add .jscpd to ignore to avoid scanning generated reports.
Suggested config adjustment
"ignore": [
@@
- "**/.factory/**"
+ "**/.factory/**",
+ "**/.jscpd/**"
],
@@
- "exitCode": 0
+ "exitCode": 1
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "threshold": 3, | |
| "reporters": ["console", "html"], | |
| "ignore": [ | |
| "**/.git/**", | |
| "**/node_modules/**", | |
| "**/dist/**", | |
| "**/build/**", | |
| "**/*.min.js", | |
| "**/package-lock.json", | |
| "**/uv.lock", | |
| "**/.venv/**", | |
| "**/__pycache__/**", | |
| "**/docs/_build/**", | |
| "**/.factory/**" | |
| ], | |
| "format": ["typescript", "javascript", "python"], | |
| "minLines": 5, | |
| "minTokens": 50, | |
| "maxLines": 500, | |
| "maxSize": "100kb", | |
| "output": ".jscpd", | |
| "absolute": false, | |
| "gitignore": true, | |
| "blame": false, | |
| "silent": false, | |
| "verbose": false, | |
| "skipComments": true, | |
| "exitCode": 0 | |
| } | |
| "threshold": 3, | |
| "reporters": ["console", "html"], | |
| "ignore": [ | |
| "**/.git/**", | |
| "**/node_modules/**", | |
| "**/dist/**", | |
| "**/build/**", | |
| "**/*.min.js", | |
| "**/package-lock.json", | |
| "**/uv.lock", | |
| "**/.venv/**", | |
| "**/__pycache__/**", | |
| "**/docs/_build/**", | |
| "**/.factory/**", | |
| "**/.jscpd/**" | |
| ], | |
| "format": ["typescript", "javascript", "python"], | |
| "minLines": 5, | |
| "minTokens": 50, | |
| "maxLines": 500, | |
| "maxSize": "100kb", | |
| "output": ".jscpd", | |
| "absolute": false, | |
| "gitignore": true, | |
| "blame": false, | |
| "silent": false, | |
| "verbose": false, | |
| "skipComments": true, | |
| "exitCode": 1 | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.jscpd.json around lines 2 - 30, The config currently sets "exitCode": 0
which prevents jscpd from failing CI when duplication exceeds the "threshold";
change "exitCode" to a non-zero value (e.g., 1) so duplicate-threshold
violations cause a non-zero exit, and add ".jscpd" to the "ignore" array so
generated reports are not scanned; update the .jscpd.json keys "exitCode" and
"ignore" accordingly.
| "controlUi": { | ||
| "allowedOrigins": [ | ||
| "http://127.0.0.1:18789", | ||
| "http://localhost:18789" | ||
| ], | ||
| "allowInsecureAuth": true, | ||
| "dangerouslyDisableDeviceAuth": true |
There was a problem hiding this comment.
Don't ship insecure gateway auth as the checked-in default.
allowInsecureAuth: true plus dangerouslyDisableDeviceAuth: true makes the local control UI opt out of both transport and device-level protections by default. For a committed config, that is too permissive; keep these false and document local-only overrides separately.
Suggested hardening
"allowedOrigins": [
"http://127.0.0.1:18789",
"http://localhost:18789"
],
- "allowInsecureAuth": true,
- "dangerouslyDisableDeviceAuth": true
+ "allowInsecureAuth": false,
+ "dangerouslyDisableDeviceAuth": false📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "controlUi": { | |
| "allowedOrigins": [ | |
| "http://127.0.0.1:18789", | |
| "http://localhost:18789" | |
| ], | |
| "allowInsecureAuth": true, | |
| "dangerouslyDisableDeviceAuth": true | |
| "controlUi": { | |
| "allowedOrigins": [ | |
| "http://127.0.0.1:18789", | |
| "http://localhost:18789" | |
| ], | |
| "allowInsecureAuth": false, | |
| "dangerouslyDisableDeviceAuth": false |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.openclaw/autonomous-config.json around lines 44 - 50, The committed default
config currently disables transport and device-level protections in controlUi by
setting allowInsecureAuth and dangerouslyDisableDeviceAuth to true; change both
keys to false in the controlUi section (keep allowedOrigins as-is) and add
documentation or comments explaining how developers can enable these insecure,
local-only overrides for testing (separate developer-only config or env var)
rather than shipping them enabled by default.
|
|
||
| **Incident Response:** | ||
| - **Runbooks**: [docs/runbooks.md](../runbooks.md) | ||
| - Comprehensive incident response playbooks for production operations | ||
| - Covers sandbox, inference, deployment, performance, and security incidents | ||
|
|
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if runbooks.md exists
fd -t f "runbooks.md" docs/Repository: NVIDIA/NemoClaw
Length of output: 75
Fix incorrect relative path in runbook cross-reference.
The file docs/runbooks.md exists, but the reference ../runbooks.md on line 2542 is incorrect. Change it to ./runbooks.md to point to the runbook file in the same directory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/observability.md` around lines 2540 - 2545, In the "Incident Response:"
section where the runbook link currently points to "../runbooks.md", update the
reference to "./runbooks.md" so the link correctly targets the runbooks.md file
in the same docs directory (replace the string "../runbooks.md" with
"./runbooks.md" in the "Incident Response" block).
|
|
||
| NemoClaw uses multiple automated documentation systems: | ||
|
|
||
| ### 1. Sphinx with Autodoc (Python API) |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Remove numbers from section titles.
Section titles should not be numbered. Per coding guidelines: "Do not number section titles. Flag 'Section 1: ...', 'Step 3: ...', etc."
Examples to fix:
### 1. Sphinx with Autodoc (Python API)→### Sphinx with Autodoc (Python API)### 2. TypeDoc (TypeScript API)→### TypeDoc (TypeScript API)### 3. GitHub Actions CI (Continuous Integration)→### GitHub Actions CI### 4. AI-Powered Doc Updates→### AI-Powered Doc Updates
Also applies to: 28-28, 42-42, 58-58
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/README.md` at line 9, Remove the numeric prefixes from the markdown
section headers so they are not numbered; locate the heading strings like "###
1. Sphinx with Autodoc (Python API)", "### 2. TypeDoc (TypeScript API)", "### 3.
GitHub Actions CI (Continuous Integration)", and "### 4. AI-Powered Doc Updates"
(and the other occurrences mentioned) and change them to "### Sphinx with
Autodoc (Python API)", "### TypeDoc (TypeScript API)", "### GitHub Actions CI",
and "### AI-Powered Doc Updates" respectively, preserving the exact header text
after the number and the same markdown level.
| ```bash | ||
| make docs # From repository root | ||
| # or | ||
| uv run --group docs sphinx-build -b html docs docs/_build/html | ||
| ``` |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Use console language tag with $ prompt prefix for CLI examples.
CLI code blocks must use the console language tag, not bash. Per coding guidelines: "CLI code blocks must use the console language tag with $ prompt prefix. Flag ```bash or ```shell for CLI examples."
Example fix for lines 14-18
-```bash
-make docs # From repository root
-# or
-uv run --group docs sphinx-build -b html docs docs/_build/html
-```
+```console
+$ make docs # From repository root
+# or
+$ uv run --group docs sphinx-build -b html docs docs/_build/html
+```Apply similar changes to all other CLI code blocks in the file.
Also applies to: 33-36, 80-83, 87-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/README.md` around lines 14 - 18, Replace the fenced CLI blocks in
README.md that currently use ```bash or ```shell with ```console and add a
leading "$ " prompt to each command line (e.g., change "make docs" to "$ make
docs" and "uv run --group docs sphinx-build -b html docs docs/_build/html" to "$
uv run --group docs sphinx-build -b html docs docs/_build/html"), and apply the
same change to the other CLI blocks referenced (the blocks containing the "make
docs" / "uv run ..." snippet and those at the ranges containing the other CLI
examples noted: 33-36, 80-83, 87-98) so all CLI examples use the console
language tag and $ prompt prefix.
| - ✅ On every commit to `main` (GitHub Actions) | ||
| - ✅ On every pull request (validation) | ||
| - ✅ From Python docstrings (Sphinx Autodoc) | ||
| - ✅ From TypeScript comments (TypeDoc) | ||
| - ✅ From git commits (AI skill) |
There was a problem hiding this comment.
🛠️ Refactor suggestion | 🟠 Major
Remove emoji from documentation prose.
Per coding guidelines: "Emoji in documentation prose" should be flagged as an LLM-generated pattern. Replace checkmarks with plain text or a bullet list.
Proposed fix
Documentation is built automatically:
-- ✅ On every commit to `main` (GitHub Actions)
-- ✅ On every pull request (validation)
-- ✅ From Python docstrings (Sphinx Autodoc)
-- ✅ From TypeScript comments (TypeDoc)
-- ✅ From git commits (AI skill)
-
-No manual documentation steps required - everything is automated!
+- On every commit to `main` (GitHub Actions)
+- On every pull request (validation)
+- From Python docstrings (Sphinx Autodoc)
+- From TypeScript comments (TypeDoc)
+- From git commits (AI skill)
+
+No manual documentation steps are required.LLM pattern detected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/README.md` around lines 134 - 138, Remove emoji checkmarks from the
README checklist and replace them with plain text/bullets; update the five items
("On every commit to `main` (GitHub Actions)", "On every pull request
(validation)", "From Python docstrings (Sphinx Autodoc)", "From TypeScript
comments (TypeDoc)", "From git commits (AI skill)") so they use standard
markdown bullets or plain status words (e.g., "Yes" or "-") instead of ✅,
ensuring the prose contains no emoji.
| access: full | ||
| - host: api.slack.com | ||
| port: 443 | ||
| protocol: rest | ||
| enforcement: enforce | ||
| tls: terminate | ||
| rules: | ||
| - allow: { method: GET, path: "/**" } | ||
| - allow: { method: POST, path: "/**" } | ||
| access: full | ||
| - host: hooks.slack.com | ||
| port: 443 | ||
| protocol: rest | ||
| enforcement: enforce | ||
| tls: terminate | ||
| rules: | ||
| - allow: { method: GET, path: "/**" } | ||
| - allow: { method: POST, path: "/**" } | ||
| access: full |
There was a problem hiding this comment.
access: full is too permissive for this Slack REST preset.
For Slack-style REST API presets, this change drops request-level guardrails by granting full tunnel access. Please keep the REST/TLS-termination pattern and scoped allow rules for this preset class.
Based on learnings: "Reserve tls: terminate for plain HTTPS REST request/response API presets (e.g., discord, slack, jira) where termination without CONNECT tunneling is appropriate."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@nemoclaw-blueprint/policies/presets/slack.yaml` around lines 14 - 20, The
Slack REST preset was made too permissive by setting access: full for the
api.slack.com and hooks.slack.com host entries; revert to the
REST/TLS-termination pattern by changing each host entry to use tls: terminate
(or equivalent scoped REST access) instead of access: full and reintroduce
scoped allow rules that restrict to HTTPS REST request/response semantics (e.g.,
only standard REST methods and port 443) for the hosts api.slack.com and
hooks.slack.com so CONNECT/tunneling is not permitted.
Summary
Related Issue
Changes
Type of Change
Testing
make checkpasses.npm testpasses.make docsbuilds without warnings. (for doc-only changes)Checklist
General
Code Changes
make formatapplied (TypeScript and Python).Doc Changes
update-docsagent skill to draft changes while complying with the style guide. For example, prompt your agent with "/update-docscatch up the docs for the new changes I made in this PR."Summary by CodeRabbit
New Features
Documentation
Tests
Chores