feat: add CRUD and schema discovery commands to infrahubctl#900
petercrocker wants to merge 32 commits into `stable` from `001-end-user-cli`
Conversation
Add a new `infrahub` command as a separate entry point alongside `infrahubctl`. While `infrahubctl` targets developers building on top of Infrahub, the new `infrahub` command targets end users who need to query, create, update, and delete data in the Infrahub database.

Commands:

- `infrahub get <kind> [identifier]` — list or detail view with table/JSON/CSV/YAML output
- `infrahub create <kind>` — create objects via `--set` flags or `--file` (JSON/YAML)
- `infrahub update <kind> <identifier>` — update objects via `--set` flags or `--file`
- `infrahub delete <kind> <identifier>` — delete with confirmation prompt (`--yes` to skip)
- `infrahub schema list/show` — discover available kinds and their attributes

Key features:

- Four output formats: table (default on a TTY), JSON (default when piped), CSV, YAML
- YAML output uses Infrahub Object format (apiVersion/kind/spec) for round-tripping
- Unified `--set` flag for both attributes and relationships
- Reuses existing SDK client, config, and AsyncTyper infrastructure
- 87 unit tests, 16 integration tests
Walkthrough

This pull request introduces a comprehensive end-user CLI command implementation for Infrahub. It adds new CLI commands for CRUD operations.

🚥 Pre-merge checks: ✅ 3 passed
Deploying infrahub-sdk-python with Cloudflare Pages

| | |
| --- | --- |
| Latest commit: | bc4bfb6 |
| Status: | ✅ Deploy successful! |
| Preview URL: | https://f04a89e0.infrahub-sdk-python.pages.dev |
| Branch Preview URL: | https://001-end-user-cli.infrahub-sdk-python.pages.dev |
Codecov Report

❌ Patch coverage is …

```
@@           Coverage Diff           @@
##           stable     #900   +/-  ##
==========================================
+ Coverage   80.68%   80.98%   +0.29%
==========================================
  Files         119      132      +13
  Lines       10336    10887     +551
  Branches     1551     1638      +87
==========================================
+ Hits         8340     8817     +477
- Misses       1473     1530      +57
- Partials      523      540      +17
```
Flags with carried forward coverage won't be shown.
- Fix MD060 table column style in plan.md and quickstart.md
- Fix MD032 blanks-around-lists in research.md
- Fix MD050 strong-style (escaped `__init__.py` underscores) in tasks.md
- Split integration test class to stay under PLR0904 method limit
- Make CLI runner tests sync to avoid asyncio.run() nested loop error
- Add 47 new tests (134 total, up from 87)
- schema.py: test list/show with mocked client, filter, branch, empty schema
- create.py: test --set inline, --file, invalid field, multiple args, branch
- update.py: test --set attributes/relationships, --file, branch
- delete.py: test --yes, -y, branch, confirmation abort/accept
- formatters/__init__.py: test detect_output_format TTY/non-TTY, get_formatter
- yaml/csv/table: test many-cardinality rels, empty peers, None values
- Fix MD032 blanks-around-lists in cli-commands.md and data-model.md
- Revert SDK ref docs to match stable (CI regenerates with its own mdxify version)
- Add type coercion in parse_set_args (int, float, bool, null) so `--set height=190` sends an integer instead of the string "190"
- Fix MD032 blanks-around-lists in cli-commands.md and data-model.md
- Revert SDK ref docs to stable baseline (CI regenerates with its own mdxify version)
- Add error output to integration test assertions for debugging
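The type coercion in the first bullet could look roughly like the sketch below. `parse_set_args` is named in the commit; `coerce_value` is a hypothetical helper introduced here for illustration:

```python
def coerce_value(raw: str):
    """Coerce a --set string into int, float, bool, or None where possible."""
    lowered = raw.lower()
    if lowered == "null":
        return None
    if lowered in ("true", "false"):
        return lowered == "true"
    try:
        return int(raw)
    except ValueError:
        pass
    try:
        return float(raw)
    except ValueError:
        # Not a number: keep the raw string
        return raw


def parse_set_args(pairs: list[str]) -> dict:
    """Turn ["key=value", ...] pairs into a dict with coerced values."""
    data = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        data[key] = coerce_value(value)
    return data
```

With this in place, `--set height=190` produces `{"height": 190}` rather than a string payload.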
Instead of a separate `infrahub` entry point, register get, create, update, and delete as top-level commands on the existing `infrahubctl` CLI. Add schema list and schema show as subcommands of the existing `infrahubctl schema` group alongside load, check, and export.

- Remove infrahub entry point from pyproject.toml
- Remove enduser_cli.py and enduser_commands.py
- Register commands in cli_commands.py
- Merge schema list/show into ctl/schema.py
- Update all tests to use cli_commands.app
- Expose command functions as module-level names (get, create, update, delete) in cli_commands.py so `typer --func` can find them for doc gen
- Generate CLI docs for the new commands (get, create, update, delete) and updated schema docs (list, show subcommands added)
- Fix test_version to use the `version` subcommand instead of the `--version` flag
Remove Args sections from command docstrings since typer generates help text from Option/Argument annotations. The Args sections leaked Python parameter names (filter_args, set_args, filter_text) and technical terms (substring) into the generated MDX docs, triggering vale spelling errors in CI.
Add --all-columns flag to `infrahubctl get`. By default, columns where every value is empty are suppressed in table and CSV output, making wide schemas with sparse data much more readable. Use --all-columns to show every column. JSON and YAML output always includes all fields.
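A minimal sketch of the empty-column suppression; the `visible_columns` helper name and dict-based row shape are assumptions for illustration, not the PR's actual code:

```python
def visible_columns(rows: list[dict], all_columns: bool = False) -> list[str]:
    """Column names to render; drop columns whose every value is empty."""
    if not rows:
        return []
    columns = list(rows[0].keys())
    if all_columns:
        return columns
    # Keep a column only if at least one row has a non-empty value in it
    return [
        col
        for col in columns
        if any(row.get(col) not in (None, "") for row in rows)
    ]
```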
- Exit code 80 when a query succeeds but returns zero results (exit 0 when results are found, exit 1 on errors)
- Print "No objects of kind <X> found." to stderr on empty results
- Show "<N> object(s) found." footer in table output
- An empty table still renders column headers, so the user sees the kind name and knows the command ran
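The exit-code convention above can be sketched as follows; `report_results` is a hypothetical helper, and the real command wires the code into typer rather than returning it:

```python
import sys

EXIT_OK = 0
EXIT_NO_RESULTS = 80  # query succeeded but returned zero results


def report_results(kind: str, results: list) -> int:
    """Print the footer / empty message and return the matching exit code."""
    if not results:
        print(f"No objects of kind {kind} found.", file=sys.stderr)
        return EXIT_NO_RESULTS
    print(f"{len(results)} object(s) found.")
    return EXIT_OK
```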
Add resolve_node() helper that tries multiple lookup strategies:

1. UUID, if the identifier looks like one
2. Schema default_filter (e.g., name__value) as a keyword filter
3. Human-friendly ID (single or multi-component with `/` separator)

This allows commands like `infrahubctl get DcimDevice arista-switch-01` to work using the device name, not just the UUID. Applied to the get, update, and delete commands.
- Omit empty/null attribute values instead of writing empty strings (fixes BigInt validation errors on reload)
- Omit unset relationships instead of writing empty strings or `{data: []}`
- Use HFID for relationship references: single-component as a string, multi-component as a list (fixes HFID element count mismatch errors)
- Preserve falsy-but-valid values (0, False)
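The omission and HFID rules above can be sketched as two small helpers; `clean_attributes` and `hfid_reference` are hypothetical names introduced here, not the PR's actual functions:

```python
def clean_attributes(raw: dict) -> dict:
    """Drop empty/null attribute values, keeping falsy-but-valid ones (0, False)."""
    return {key: value for key, value in raw.items() if value is not None and value != ""}


def hfid_reference(components: list[str]):
    """Render an HFID reference: single-component as a string, multi as a list."""
    return components[0] if len(components) == 1 else components
```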
Replace typer.BadParameter with console.print + typer.Exit for create/update validation errors. BadParameter was caught by the generic exception handler which printed a full traceback. Now shows a clean error message with usage example.
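A plain-Python sketch of the clean-error pattern; the real code uses rich's `console.print` plus `typer.Exit`, while this standalone stand-in (`fail_validation`, a hypothetical name) falls back to `SystemExit`:

```python
import sys


def fail_validation(message: str, usage: str) -> None:
    """Print a clean error plus a usage hint, then exit 1 with no traceback."""
    print(f"Error: {message}", file=sys.stderr)
    print(f"Usage: {usage}", file=sys.stderr)
    raise SystemExit(1)
```

Because `SystemExit` is not a subclass of `Exception`, a generic `except Exception` handler (the one that printed full tracebacks for `BadParameter`) never intercepts it.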
When using `--set location=LON-1`, the CLI now resolves the relationship target by searching for the node across all schema kinds. This handles generic peer types (e.g., LocationHosting) where the concrete kind (LocationBuilding) differs from the relationship declaration.

Lookup order for relationship values:

1. UUID, if it looks like one
2. Direct lookup on the declared peer kind (default_filter, then HFID)
3. Search all node schemas by default_filter and HFID
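The cross-kind search (step 3) could be sketched as below, using in-memory stores in place of API calls. All names here are hypothetical, and this sketch raises a clear error on ambiguous matches rather than silently returning the first hit:

```python
def search_across_kinds(identifier: str, stores: dict[str, dict[str, str]]) -> str:
    """Search every kind's identifier->id store; fail loudly on ambiguity."""
    matches = [
        (kind, node_id)
        for kind, nodes in stores.items()
        if (node_id := nodes.get(identifier)) is not None
    ]
    if not matches:
        raise LookupError(f"No node found for {identifier!r}")
    if len(matches) > 1:
        kinds = ", ".join(kind for kind, _ in matches)
        raise ValueError(f"{identifier!r} is ambiguous across kinds: {kinds}")
    return matches[0][1]
```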
infrahub CLI for CRUD and schema discovery
Actionable comments posted: 19
🧹 Nitpick comments (5)
tests/unit/ctl/formatters/test_json.py (1)
34-38: Make relationship mock resolution name-aware in the test helper.
`schema.get_relationship.return_value` is reassigned in a loop, so only the last relationship is effectively used. This can hide bugs when future tests include multiple relationships.

♻️ Suggested helper fix

```diff
 def _make_mock_schema(
     @@
 ) -> MagicMock:
     @@
     schema = MagicMock()
     schema.kind = kind
     schema.attribute_names = attr_names
     schema.relationship_names = rel_names
-    for _name in rel_names:
-        rel = MagicMock()
-        rel.cardinality = "one"
-        schema.get_relationship.return_value = rel
+    rel_by_name: dict[str, MagicMock] = {}
+    for rel_name in rel_names:
+        rel = MagicMock()
+        rel.cardinality = "one"
+        rel_by_name[rel_name] = rel
+    schema.get_relationship.side_effect = lambda rel_name: rel_by_name[rel_name]
     return schema
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/formatters/test_json.py` around lines 34-38: the test helper currently sets schema.get_relationship.return_value in a loop (iterating rel_names), so every call returns the last MagicMock; update the helper so schema.get_relationship is name-aware by assigning schema.get_relationship.side_effect to a function or lambda that accepts a relationship name and returns the corresponding MagicMock created for each name (use a dict mapping each name in rel_names to its MagicMock), ensuring calls to schema.get_relationship(name) return the correct per-name mock instead of the final one.

tests/unit/ctl/commands/test_get.py (1)
40-44: Mock return value may not match actual type.
`detect_output_format()` returns an `OutputFormat` enum (see `infrahub_sdk/ctl/formatters/__init__.py` line 49), but the mock returns a plain string `"json"`. While this may work if `get_formatter()` handles both types, it's better to mock with the actual type for accuracy:

♻️ Suggested fix

```diff
+from infrahub_sdk.ctl.formatters import OutputFormat
+
 with (
     patch("infrahub_sdk.ctl.commands.get.initialize_client", return_value=mock_client),
-    patch("infrahub_sdk.ctl.commands.get.detect_output_format", return_value="json"),
+    patch("infrahub_sdk.ctl.commands.get.detect_output_format", return_value=OutputFormat.JSON),
     patch("infrahub_sdk.ctl.commands.get.get_formatter", return_value=mock_formatter),
 ):
```

Apply similar changes to lines 72 and 112.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/commands/test_get.py` around lines 40-44: the test mocks use a plain string for detect_output_format() but the real function returns the OutputFormat enum; update the patches to return the actual enum value (use OutputFormat.JSON from infrahub_sdk.ctl.formatters) when mocking detect_output_format() and adjust the other occurrences where detect_output_format is mocked (the other test cases that patch detect_output_format and get_formatter) so they return the enum instead of a string; ensure imports in the test include OutputFormat or reference it fully so get_formatter(mocked enum) receives the correct type.

infrahub_sdk/ctl/commands/create.py (1)
61-66: Overly broad exception suppression may hide real errors.

Using `contextlib.suppress(Exception)` silences all exceptions, including unexpected failures like network errors, authentication issues, or server errors. This makes debugging harder when the pre-check fails for non-404 reasons.

Consider narrowing the suppression to the specific exception type indicating "not found":

♻️ Proposed fix

```diff
+from infrahub_sdk.exceptions import NodeNotFoundError
+
 # Check if node already exists to distinguish create from upsert
 existing = None
 name_value = data.get("name")
 if name_value is not None:
-    with contextlib.suppress(Exception):
+    with contextlib.suppress(NodeNotFoundError):
         existing = await resolve_node(client, kind, str(name_value), schema=schema, branch=branch)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@infrahub_sdk/ctl/commands/create.py` around lines 61-66: the current use of contextlib.suppress(Exception) around the resolve_node pre-check (with variables name_value and existing) suppresses all errors; replace it with a targeted try/except that only swallows the "not found" error thrown by resolve_node (e.g., the SDK's NotFoundError/ResourceNotFoundError or the HTTP client error with status==404) and re-raises or lets propagate any other exceptions (network/auth/server errors). Identify the actual exception type raised by resolve_node (or check for ClientResponseError.status == 404) and catch that specific exception, leaving other exceptions unhandled so callers can see real failures.

tests/unit/ctl/formatters/test_table.py (1)
29-33: Mock schema returns same relationship spec for all relationship names.

The mock uses a single `return_value` for `get_relationship`, so all relationships will have `cardinality="one"`. This works for current tests but won't support testing mixed cardinalities. Consider using `side_effect` if future tests need different cardinalities per relationship.

```python
# Example for mixed cardinality testing:
def get_rel(name):
    rel = MagicMock()
    rel.cardinality = cardinalities.get(name, "one")
    return rel

schema.get_relationship.side_effect = get_rel
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/formatters/test_table.py` around lines 29-33: the mock currently sets schema.get_relationship.return_value so every name in rel_names gets the same MagicMock with rel.cardinality="one"; change this to use schema.get_relationship.side_effect that accepts the relationship name and returns a fresh MagicMock with rel.cardinality taken from a per-name mapping (e.g., a dict keyed by names) so tests can exercise mixed cardinalities; update the test helper that builds schema (the loop using rel_names and schema.get_relationship) to install that side_effect instead of a single return_value and ensure each returned mock is unique.

tests/unit/ctl/commands/test_update.py (1)
79-112: Assert the mutation, not just `save()`.

Both success-path tests would still pass if `_update_with_set_args()` skipped `attr.value = ...` or `setattr(node, key, ...)` and only called `save()`. Please assert the final attribute value / relationship payload as well.

Also applies to: 218-259
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/commands/test_update.py` around lines 79 - 112, The tests currently only assert mock_node.save was awaited, which misses verifying that the attribute/relationship was actually mutated; update the test_update_with_set_args_attribute_applied (and the other tests referenced around lines 218-259) to also assert the mutation itself by checking the attribute or relationship on mock_node after invoking the command: for attribute tests assert getattr(mock_node, "description").value (or mock_attr.value) equals "new description", and for relationship tests assert the relationship payload or that setattr(mock_node, "<relationship_name>") (or the relationship object) contains the expected target id(s) — use the existing mocks (mock_node, mock_attr) and the functions resolve_node / _update_with_set_args as reference points to locate where to add these assertions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/docs/infrahubctl/infrahubctl-get.mdx`:
- Line 22: Update the IDENTIFIER argument description to explicitly state that
it accepts HFIDs (including multi-component forms like "a/b") in addition to
object IDs and display names; edit the line that currently reads
"`[IDENTIFIER]`: Object ID or display name for detail view" to mention HFID
support and give an example of multi-component HFIDs so users can discover and
use that resolver feature.
- Line 11: Update the docs line about exit codes to clarify that exit code 80
applies only to empty list results (list mode), while detail lookups that fail
to resolve an identifier raise NodeNotFoundError and exit with code 1; reference
the get command behavior implemented in the get command handler
(NodeNotFoundError handling in infrahub_sdk/ctl/commands/get.py and exit logic
in infrahub_sdk/ctl/utils.py) so readers understand the different behaviors for
list vs detail modes.
In `@infrahub_sdk/ctl/commands/update.py`:
- Around line 123-143: _update_with_file currently ignores the CLI target object
(kind/identifier) and will blindly apply any objects in the file; change its
signature to accept the CLI target (e.g., add parameters kind: str, identifier:
str), then after loading files via ObjectFile.load_from_disk iterate and ensure
exactly one object is present or that each obj_file matches the supplied kind
and identifier (or raise a clear error), and pass the target through to
obj_file.validate_format(...) and obj_file.process(...) (or call dedicated
match/validation helpers) so the update only applies to the specified CLI
target.
- Around line 104-110: actual_changes is comparing the old relationship
display/id to the raw --set string (new_value) causing false positives; instead,
for relationship keys (schema.relationship_names) compute the "new" display/id
from the resolved relationship target and compare that to the existing
relationship before recording a change. Concretely: in the block handling
relationship keys (where you have rel = getattr(node, key) and
resolved_data[key]), derive new_display/new_id from resolved_data[key] using the
same getters used for old_display (e.g., getattr(resolved_target,
"display_label", getattr(resolved_target, "id", ...))) and append (key,
old_display, new_display) to changes (or adjust actual_changes) so
actual_changes compares old vs resolved target identity rather than the raw
input string, then call setattr(node, key, resolved_data[key]) only if the
resolved identities differ.
In `@infrahub_sdk/ctl/commands/utils.py`:
- Around line 125-135: The current except Exception around resolve_node(...)
hides all errors; change it to only catch the specific "not found" case and
re-raise everything else. Update the try/except in the block using
resolve_node(client, peer_kind, str_value, branch=branch) so that you catch the
lookup-miss error only (e.g., a NotFound/LookupError or the client-specific
exception class) and then call _search_generic_peer(...) as the fallback; for
any other exception types (auth/network/schema errors) re-raise the exception
instead of falling back. Ensure you reference resolve_node,
_search_generic_peer, peer_kind, str_value and branch when locating and updating
the code.
- Around line 161-193: _search_generic_peer currently returns the first node
found across schemas, which can silently pick the wrong object if multiple kinds
match the same HFID or default-filter; change its behavior to collect all
successful matches from the client.get calls (for paths using
schema.default_filter and schema.human_friendly_id) into a list, and if more
than one match is found raise a clear AmbiguousIdentifierError (or ValueError)
that includes the identifier and the matching schema kinds; keep the existing
single-match return path and the no-match behavior, but do not return the first
hit silently—use the symbols _search_generic_peer, NodeSchemaAPI,
schema.default_filter, schema.human_friendly_id, and client.get to locate where
to aggregate results and implement the ambiguity check.
In `@infrahub_sdk/ctl/formatters/table.py`:
- Around line 98-104: The detail renderer in table.py currently uses
display_label blindly and shows empty strings when labels are missing; update
the rendering in the block that handles rel_detail (used by extract_node_detail
outputs) so that for cardinality "one" you use rel_detail.get("display_label")
or fall back to rel_detail.get("id"), and for multi-peers build labels using
p.get("display_label") or p.get("id") for each peer; ensure table.add_row still
receives the label string (joining multi labels with ", ").
In `@infrahub_sdk/ctl/formatters/yaml.py`:
- Around line 65-73: The loop in _node_to_data_entry that builds the entry dict
is adding Attribute.value directly, which leaves
ipaddress.IPv4Interface/IPv4Network objects in the dict and causes yaml.dump to
emit Python tags; detect when the value is an ipaddress object (e.g.,
isinstance(value, ipaddress.BaseAddress) or ipaddress.BaseNetwork/IPv4Interface
types) and replace it with str(value) before assigning to entry[attr_name] so
IPHost/IPNetwork attributes are stored as plain strings for safe YAML
round-tripping.
In `@specs/001-end-user-cli/contracts/cli-commands.md`:
- Line 17: Update the contract line that documents the `get` list mode exit
codes so it reflects the PR behavior: replace the current "Exit 0: results found
| Exit 0: no results (empty table) | Exit 1: invalid kind" with "Exit 0: results
found | Exit 80: no results (empty table) | Exit 1: invalid kind" to document
that zero-result responses return exit code 80.
- Around line 11-64: The contract uses the wrong CLI namespace; replace every
occurrence of the command prefix "infrahub" with "infrahubctl" across this spec
so it matches the implemented CLI (e.g. update `infrahub get <kind>`, `infrahub
create <kind>`, `infrahub update <kind> <identifier>`, `infrahub delete <kind>
<identifier>`, `infrahub schema list`, and `infrahub schema show <kind>` and any
inline backticked examples or descriptions) and ensure headers, examples, and
exit-code descriptions reflect the new "infrahubctl" namespace consistently.
In `@specs/001-end-user-cli/plan.md`:
- Around line 8-12: Update the plan to reflect that commands are integrated into
the existing infrahubctl CLI (not a separate infrahub entry point) and remove
references to enduser_* modules; replace descriptions that propose a new
entrypoint with language describing adding get/create/update/delete/schema
commands into infrahubctl, reusing the existing SDK client, configuration, and
AsyncTyper framework, and adjust the "structure" section to list new command
groups (e.g., infrahubctl commands: get, create, update, delete, schema) and
output formats (including Infrahub Object YAML) so the document matches the
implemented architecture.
In `@specs/001-end-user-cli/quickstart.md`:
- Around line 1-2: The quickstart uses the outdated CLI name "infrahub" in
command examples; update all command invocations to "infrahubctl" (e.g., replace
occurrences like `infrahub ...` with `infrahubctl ...`) throughout the document
(including the blocks around lines 11-12, 30-98, and 113-116) so examples
reflect the integrated command binary; ensure any code blocks, inline commands,
and headings use the new executable name consistently.
In `@specs/001-end-user-cli/research.md`:
- Around line 5-6: Update the research doc to remove or mark as outdated the
decision that adds `infrahub` as a second entry point in `[project.scripts]`
pointing to `infrahub_sdk/ctl/enduser_cli.py`; instead document the current
implementation that registers the end-user commands on the existing
`infrahubctl` entry point (reference `infrahubctl` and the command registration
code in `infrahub_sdk/ctl/enduser_cli.py`), and add a short note describing that
the design evolved from a separate entry point (`infrahub`) to command
registration on `infrahubctl`.
In `@specs/001-end-user-cli/spec.md`:
- Line 114: FR-001 in the spec conflicts with the implementation: it currently
mandates a separate `infrahub` CLI entry point, but the PR integrated commands
into `infrahubctl`; update the FR-001 requirement text to reflect the
implemented design by replacing the hard MUST for a separate `infrahub` entry
with a statement that the CLI functionality is exposed via the `infrahubctl`
entry point (or change MUST to SHOULD if you want to allow either approach), and
make sure to reference `infrahubctl` by name (FR-001) so the spec and
implementation are consistent.
In `@specs/001-end-user-cli/tasks.md`:
- Around line 28-31: Update the checked-off task entries to reflect the final
CLI wiring: replace references to the old infrahub entry point and modules
(infrahub_sdk/ctl/enduser_cli.py, infrahub_sdk/ctl/enduser_commands.py) with the
new entry point name infrahubctl and the module
infrahub_sdk/ctl/cli_commands.py; ensure the pyproject.toml [project.scripts]
entry uses "infrahubctl" pointing to infrahub_sdk.ctl.cli_commands:app and
update task bullets to mention the command-group modules now registered from
cli_commands.py (branch, schema, object) instead of the deprecated files so
follow-up work is not misled.
In `@tests/integration/test_enduser_cli.py`:
- Around line 176-189: The tests test_create_inline and
test_create_inline_verify (and related pairs like
test_update_inline/test_update_inline_verify and test_delete_* tests) rely on
execution order and leave persistent state; make each write scenario
self-contained by generating a unique id per test (e.g., append a UUID or
timestamp to "Integration Test Person"), perform the operation via runner.invoke
or client (refer to runner.invoke in test_create_inline and client.get in
test_create_inline_verify), verify the result, and then clean up in test
teardown either by deleting the created object via the client or by converting
the test to use a yield fixture that creates the object and deletes it in the
fixture finalizer so no state persists between tests. Ensure the same unique id
is used across the create/verify steps within the single test or fixture and
remove dependence on test ordering for the update/delete flows as well.
- Around line 53-68: The fixture ctl_client_config mutates global state but only
conditionally restores environment variables and never resets
config.SETTINGS._settings.server_address; update the teardown to capture
originals for server_address, INFRAHUB_USERNAME, and INFRAHUB_PASSWORD before
mutation, then after yield restore server_address to its original value and
restore the env vars by either setting them back to the original values or
deleting the keys if they were originally unset so no leftover credentials or
server address persist for later tests.
In `@tests/unit/ctl/commands/test_update.py`:
- Around line 93-95: The test is setting a class attribute via
type(mock_node).description = mock_attr which mutates MagicMock's class; instead
set the description on the mock instance itself by assigning the attribute on
mock_node (or use mock_node.configure_mock(description=mock_attr)) so
getattr(mock_node, "description") returns mock_attr without modifying the
MagicMock class.
In `@tests/unit/ctl/commands/test_utils.py`:
- Around line 24-25: Remove the redundant `@pytest.mark.anyio` decorator from the
async test functions in this file (specifically above async def
test_resolve_by_uuid) so pytest's asyncio_mode="auto" can detect them; locate
any other async test functions in tests/unit/ctl/commands/test_utils.py that use
`@pytest.mark.anyio` and delete those decorators (leave the async def signatures
and test bodies unchanged).
---
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2a38f5f1-0e3c-44fc-a15d-d4540d43b466
📒 Files selected for processing (43)
- `docs/docs/infrahubctl/infrahubctl-create.mdx`
- `docs/docs/infrahubctl/infrahubctl-delete.mdx`
- `docs/docs/infrahubctl/infrahubctl-get.mdx`
- `docs/docs/infrahubctl/infrahubctl-schema.mdx`
- `docs/docs/infrahubctl/infrahubctl-update.mdx`
- `infrahub_sdk/ctl/cli_commands.py`
- `infrahub_sdk/ctl/commands/__init__.py`
- `infrahub_sdk/ctl/commands/create.py`
- `infrahub_sdk/ctl/commands/delete.py`
- `infrahub_sdk/ctl/commands/get.py`
- `infrahub_sdk/ctl/commands/update.py`
- `infrahub_sdk/ctl/commands/utils.py`
- `infrahub_sdk/ctl/formatters/__init__.py`
- `infrahub_sdk/ctl/formatters/base.py`
- `infrahub_sdk/ctl/formatters/csv.py`
- `infrahub_sdk/ctl/formatters/json.py`
- `infrahub_sdk/ctl/formatters/table.py`
- `infrahub_sdk/ctl/formatters/yaml.py`
- `infrahub_sdk/ctl/parsers.py`
- `infrahub_sdk/ctl/schema.py`
- `specs/001-end-user-cli/checklists/requirements.md`
- `specs/001-end-user-cli/contracts/cli-commands.md`
- `specs/001-end-user-cli/data-model.md`
- `specs/001-end-user-cli/plan.md`
- `specs/001-end-user-cli/quickstart.md`
- `specs/001-end-user-cli/research.md`
- `specs/001-end-user-cli/spec.md`
- `specs/001-end-user-cli/tasks.md`
- `tests/integration/test_enduser_cli.py`
- `tests/unit/ctl/commands/__init__.py`
- `tests/unit/ctl/commands/test_create.py`
- `tests/unit/ctl/commands/test_delete.py`
- `tests/unit/ctl/commands/test_get.py`
- `tests/unit/ctl/commands/test_schema.py`
- `tests/unit/ctl/commands/test_update.py`
- `tests/unit/ctl/commands/test_utils.py`
- `tests/unit/ctl/formatters/__init__.py`
- `tests/unit/ctl/formatters/test_csv.py`
- `tests/unit/ctl/formatters/test_init.py`
- `tests/unit/ctl/formatters/test_json.py`
- `tests/unit/ctl/formatters/test_table.py`
- `tests/unit/ctl/formatters/test_yaml.py`
- `tests/unit/ctl/test_parsers.py`
# Make getattr(node, "description") return mock_attr
type(mock_node).description = mock_attr
|
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
from unittest.mock import MagicMock
a = MagicMock()
b = MagicMock()
type(a).description = "leaked"
print("a.description =", a.description)
print("b.description =", b.description)
PYRepository: opsmill/infrahub-sdk-python
Length of output: 166
🏁 Script executed:
# Find and examine the test file
git ls-files | grep -i "test_update.py"
Repository: opsmill/infrahub-sdk-python
Length of output: 109
🏁 Script executed:
# Read the test file to understand the context around lines 93-95
wc -l tests/unit/ctl/commands/test_update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 113
🏁 Script executed:
# Read the relevant section around lines 93-95
sed -n '80,110p' tests/unit/ctl/commands/test_update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 1150
🏁 Script executed:
# Read more context to see the full test function and assertions
sed -n '75,130p' tests/unit/ctl/commands/test_update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 1943
🏁 Script executed:
# Check if there are shared fixtures or if mock_node is used across tests
sed -n '1,40p' tests/unit/ctl/commands/test_update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 1323
🏁 Script executed:
# Check all test functions and search for any potential shared mock state or fixture issues
grep -n "def test_" tests/unit/ctl/commands/test_update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 611
🏁 Script executed:
# Look at what the actual command implementation does with the node object
find . -name "*.py" -path "*/ctl/commands/*" | grep -E "(update|__init__)" | head -5
Repository: opsmill/infrahub-sdk-python
Length of output: 227
🏁 Script executed:
# Examine the actual update command implementation
cat -n infrahub_sdk/ctl/commands/update.py
Repository: opsmill/infrahub-sdk-python
Length of output: 6369
Use an instance attribute instead of a class attribute on MagicMock.
type(mock_node).description = mock_attr sets a class attribute on MagicMock, which is unnecessary and poor practice. Set the attribute directly on the instance instead:
Suggested fix
- # Make getattr(node, "description") return mock_attr
- type(mock_node).description = mock_attr
+ mock_node.description = mock_attr

This is cleaner and avoids class-level mutations entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/ctl/commands/test_update.py` around lines 93 - 95, The test is
setting a class attribute via type(mock_node).description = mock_attr which
mutates MagicMock's class; instead set the description on the mock instance
itself by assigning the attribute on mock_node (or use
mock_node.configure_mock(description=mock_attr)) so getattr(mock_node,
"description") returns mock_attr without modifying the MagicMock class.
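A standalone sketch of the difference between instance and class attribute assignment on MagicMock:

```python
from unittest.mock import MagicMock

mock_attr = MagicMock()
node_a = MagicMock()
node_b = MagicMock()

# Instance assignment affects only node_a
node_a.description = mock_attr
assert node_a.description is mock_attr
# node_b.description auto-creates its own child mock, untouched by node_a
assert node_b.description is not mock_attr

# type(node_a).description = mock_attr would instead set the attribute on the
# MagicMock class itself, leaking it into every other MagicMock instance.
```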
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/docs/infrahubctl/infrahubctl-schema.mdx (1)
1-3: ⚠️ Potential issue | 🟠 Major
Missing required MDX frontmatter (title).
This page should include frontmatter metadata at the top.
Suggested fix
+---
+title: infrahubctl schema
+---
+
 # `infrahubctl schema`

As per coding guidelines: docs/**/{python-sdk,infrahubctl}/**/*.mdx must include frontmatter with title.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/docs/infrahubctl/infrahubctl-schema.mdx` around lines 1 - 3, Add required MDX frontmatter to the top of this document by inserting a YAML block containing at least a title (e.g., title: "infrahubctl schema") before the existing content; update the file that currently begins with the heading `# \`infrahubctl schema\`` to include the frontmatter block so it complies with the docs/**/{python-sdk,infrahubctl}/**/*.mdx requirement.
🧹 Nitpick comments (6)
tests/unit/ctl/test_parsers.py (1)
11-53: Add a regression test for leading-zero numeric strings.
Given coercion behavior, add coverage for inputs like code=00123 to lock in the intended handling and prevent accidental data-shape regressions.
✅ Suggested test case

 class TestCoerceValue:
@@
     def test_empty_string(self) -> None:
         result = parse_set_args(["name="])
         assert not result["name"]
         assert isinstance(result["name"], str)
+
+    def test_numeric_string_with_leading_zero_preserved(self) -> None:
+        result = parse_set_args(["code=00123"])
+        assert result["code"] == "00123"
+        assert isinstance(result["code"], str)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/test_parsers.py` around lines 11 - 53, Add a regression test in the TestCoerceValue test class that verifies leading-zero numeric strings are not coerced to numbers: call parse_set_args(["code=00123"]) and assert the returned value for "code" equals the string "00123" and is an instance of str; this will lock in the intended handling of values like "00123" and prevent accidental coercion to int in the parse_set_args implementation.
tests/unit/ctl/formatters/test_yaml.py (1)
31-35: Relationship mock setup may not work correctly for multiple relationships.
The loop sets schema.get_relationship.return_value = rel on each iteration, meaning only the last relationship's mock configuration is retained. If a test needs get_relationship to return different mocks for different relationship names, this won't work. Consider using side_effect with a lambda or dict lookup if multi-relationship scenarios are needed.
Current tests use single relationships, so this works for now.
🔧 Potential improvement for future extensibility
- for _name in rel_names:
-     rel = MagicMock()
-     rel.cardinality = "one"
-     schema.get_relationship.return_value = rel
+ rel_mocks = {}
+ for name in rel_names:
+     rel = MagicMock()
+     rel.cardinality = "one"
+     rel_mocks[name] = rel
+ schema.get_relationship.side_effect = lambda n: rel_mocks.get(n)
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/formatters/test_yaml.py` around lines 31 - 35, The loop that assigns schema.get_relationship.return_value = rel overwrites the return for every iteration so only the last MagicMock is returned; instead create a dict mapping each relationship name (rel_names) to its MagicMock (rel) and set schema.get_relationship.side_effect to a callable (e.g., lambda name: rel_map[name]) so get_relationship(name) returns the correct mock per relationship name; update the code that builds the mocks (where rel_names and MagicMock are used) to populate the map and assign the side_effect on schema.get_relationship.
tests/unit/ctl/commands/test_get.py (1)
87-118: Assert option forwarding, not just exit behavior.
Line 118 only validates the empty-results exit code. Please also assert that each option is correctly propagated to client.filters(...) to catch parser/plumbing regressions.
✅ Suggested test tightening

 `@pytest.mark.parametrize`(
-    "extra_args",
+    ("extra_args", "expected"),
     [
-        ["--branch", "my-branch"],
-        ["--limit", "10"],
-        ["--offset", "5"],
-        ["--filter", "name__value=router1"],
+        (["--branch", "my-branch"], {"branch": "my-branch"}),
+        (["--limit", "10"], {"limit": 10}),
+        (["--offset", "5"], {"offset": 5}),
+        (["--filter", "name__value=router1"], {"name__value": "router1"}),
     ],
 )
-def test_get_list_mode_with_options(extra_args: list[str]) -> None:
+def test_get_list_mode_with_options(extra_args: list[str], expected: dict[str, object]) -> None:
@@
     assert result.exit_code == 80
+    called = mock_client.filters.await_args.kwargs
+    for key, value in expected.items():
+        assert called.get(key) == value
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/commands/test_get.py` around lines 87 - 118, The test currently only checks the exit code; update test_get_list_mode_with_options to also assert that mock_client.filters was called with the expected arguments derived from each extra_args case (e.g., "--branch" -> branch="my-branch", "--limit" -> limit=10, "--offset" -> offset=5, "--filter" -> filters=["name__value=router1"] or equivalent parser output) by mapping extra_args to expected kwargs and using mock_client.filters.assert_awaited_once_with(...) (or assert_called_once_with for a sync mock) to ensure the CLI forwarded options into client.filters; keep the existing patches and exit_code assertion.
tests/unit/ctl/commands/test_schema.py (1)
165-181: Add a filtered heterogeneous-entry regression test.
Please add a schema list --filter ... case where a non-NodeSchemaAPI entry has no .kind attribute. That guards the filter-ordering edge case and prevents future regressions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/commands/test_schema.py` around lines 165 - 181, Extend the existing test_schema_list_skips_non_node_schema_entries to add a new case invoking runner.invoke(app, ["schema", "list", "--filter", "Infra"]) (or a separate test function) where mock_client.schema.all returns a dict including a non-NodeSchemaAPI MagicMock entry that deliberately has no .kind attribute (e.g., delattr(generic_schema, "kind")) to exercise the filter path; assert the command exits with code 0, that the valid NodeSchemaAPI entry still appears in result.stdout and that the missing-.kind entry does not cause an exception or appear in output, ensuring the code that iterates/filters the results (mock_client.schema.all and the CLI handler that calls it) correctly guards attribute access on non-NodeSchemaAPI objects.
tests/unit/ctl/formatters/test_csv.py (1)
82-248: Add regression coverage for formula-like values and many relationships.
Current tests don’t protect the risky/branchy cases: values starting with spreadsheet formula prefixes and detail rendering for multi-peer relationships.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/formatters/test_csv.py` around lines 82 - 248, Add tests to cover two regression cases: (1) in TestCsvFormatterFormatList add a case where an attribute value begins with spreadsheet formula prefixes (e.g., "=SUM(1,2)" or values starting with "+", "-", "@") using CsvFormatter.format_list and _make_mock_node/_make_mock_schema and assert the CSV output preserves the exact string in the data row; (2) in TestCsvFormatterFormatDetail add a case for a relationship marked as many (provide _make_mock_node with a relationship value list like ["DC1","DC2"] and _make_mock_schema with that relationship) and call CsvFormatter.format_detail to assert the detail output contains the combined/displayed relationship labels (i.e., both peers appear in the "value" for the relationship row). Ensure tests reference CsvFormatter.format_list and CsvFormatter.format_detail so these branches remain covered.
tests/unit/ctl/commands/test_update.py (1)
218-258: Add a no-op relationship case here.
This only exercises the save path. A case where the existing relation is referenced via a different identifier but resolves to the same node would catch false-positive updates and protect the "No changes" path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/ctl/commands/test_update.py` around lines 218 - 258, Add a new unit test (e.g., test_update_with_set_args_relationship_noop) that mirrors test_update_with_set_args_relationship but has resolve_relationship_values/resolve_node return a relationship mapping that resolves to the same existing node id (simulate a different identifier resolving to "old-site-id"); call the CLI with the same --set argument that would resolve to the same target and assert the command exits 0, prints a "No changes" message (or similar), and that mock_node.save.assert_not_awaited() to ensure no save occurs; reference the existing test_update_with_set_args_relationship, resolve_relationship_values, resolve_node, and mock_node.save to locate where to add and model the new test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/docs/infrahubctl/infrahubctl-create.mdx`:
- Around line 1-9: Add a YAML frontmatter block with a title (e.g., begin the
file with --- then title: "infrahubctl create" then ---) so the document has
required frontmatter, and remove the hidden/non-printable character immediately
before the word "Examples" to ensure markdownlint cleanliness; update the top of
docs/docs/infrahubctl/infrahubctl-create.mdx to include the title frontmatter
and delete the stray invisible character.
In `@docs/docs/infrahubctl/infrahubctl-delete.mdx`:
- Line 8: Remove the non-printable backspace character that precedes the
"Examples:" heading in the document; open the content containing "Examples:",
delete the hidden control character (e.g., U+0008) before the word, and verify
there are no other invisible characters around the heading (or run a
search/replace for control characters) so the markdown renders cleanly.
- Around line 1-3: This page is missing the required MDX frontmatter; add a
frontmatter block at the top containing a title field (e.g., title: "infrahubctl
delete") so the document under the infrahubctl docs (the page starting with the
"# `infrahubctl delete`" heading) includes the required metadata; ensure the
frontmatter is the very first content in the file and uses standard YAML
frontmatter format.
In `@docs/docs/infrahubctl/infrahubctl-get.mdx`:
- Around line 1-10: Add a YAML frontmatter block with a title key at the top of
the document (e.g., `title: "infrahubctl get"`) to satisfy docs frontmatter
requirements, and remove the invisible control character immediately before the
"Examples:" line so the file is markdownlint-clean; update the top of the
infrahubctl-get.mdx content to begin with the frontmatter block and ensure no
stray control characters remain before the Examples section.
In `@docs/docs/infrahubctl/infrahubctl-schema.mdx`:
- Line 72: Remove the non-printable backspace characters that appear immediately
before the "Examples:" headings in the infrahubctl schema doc (the two
occurrences of the literal string "Examples:" near lines 72 and 119); open the
MDX source, locate the two "Examples:" tokens and delete any hidden/control
characters (e.g., U+0008 or other non-printables) preceding them, and verify
both "Examples:" lines are plain ASCII text and render correctly.
In `@docs/docs/infrahubctl/infrahubctl-update.mdx`:
- Around line 1-9: Add a YAML frontmatter block with a title (e.g., title:
"infrahubctl update") at the very top of the MDX page and remove the stray
non-printable/control character that appears before the "Examples:" line; ensure
the resulting file begins with the frontmatter block followed by the heading and
that the file passes markdownlint rules for
docs/**/{python-sdk,infrahubctl}/**/*.mdx.
In `@infrahub_sdk/ctl/commands/create.py`:
- Around line 27-31: Validate that when a file is provided the file's declared
kind matches the required positional kind: in the command handler that receives
the typer Argument "kind" and Option "file", after loading the file (the object
parsed into obj_file or similar with obj_file.spec.kind), compare
obj_file.spec.kind to the positional "kind" and raise a user-facing error
(typer.Exit or typer.BadParameter) if they differ; apply the same validation in
the other create code path referenced around the other handler block (the code
handling the --file branch at the second occurrence) so a mismatched kind (e.g.,
create Foo --file bar.yml where bar.yml declares a different kind) is rejected
rather than silently honored.
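A plain-function sketch of that guard (typer wiring omitted; the helper name is invented for illustration):

```python
def ensure_kind_matches(positional_kind: str, file_kind: str) -> None:
    """Reject --file payloads whose declared kind differs from the positional kind."""
    if positional_kind != file_kind:
        raise ValueError(
            f"kind mismatch: command targets {positional_kind!r} "
            f"but the file declares {file_kind!r}"
        )

ensure_kind_matches("InfraDevice", "InfraDevice")  # ok, no error
try:
    ensure_kind_matches("InfraDevice", "InfraSite")
except ValueError as exc:
    print(exc)  # → kind mismatch: command targets 'InfraDevice' but the file declares 'InfraSite'
```

In the real command this would translate into a typer-style error and non-zero exit rather than a bare ValueError.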
In `@infrahub_sdk/ctl/commands/update.py`:
- Around line 30-35: The command currently ignores the positional kind and
identifier when --file is passed; instead validate or disable those args: when
file is provided (the Path option named file) parse the JSON/YAML payload and
extract its target kind/identifier and compare them against the positional
parameters kind and identifier, and if they differ raise a user-facing error and
abort; alternatively, if you prefer disabling, reject invocation that supplies
positional args together with --file. Apply the same validation/guard in the
other update command code paths referenced (the blocks around lines 67-72 and
128-148) so all code paths either validate file contents against the
kind/identifier or reject mixed usage.
- Around line 102-125: The comparison for changes currently uses the raw
`new_value` from the CLI which causes relationships to always appear changed;
update the relationship branch to compute a normalized new identity (e.g. use
resolved_data[key]'s display_label or id) and append that normalized new value
to `changes` (instead of the raw `new_value`), so that `actual_changes =
[(f,o,n) for f,o,n in changes if str(o) != str(n)]` will correctly filter
no-ops; adjust the relationship handling around `rel`, `old_display`,
`resolved_data[key]`, and `changes` to use the same normalization logic used for
`old_display` when deciding and recording the new value before calling `await
node.save()`.
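The normalization idea can be shown with plain data (all identifiers here are invented):

```python
# Compare resolved node identities, not the raw CLI string, when recording changes.
old_peer_id = "site-123"       # id behind the current relationship
raw_new_value = "LON-1"        # what the user typed with --set
resolved_new_id = "site-123"   # what that value resolves to via the client

changes = []
if str(old_peer_id) != str(resolved_new_id):
    changes.append(("site", old_peer_id, resolved_new_id))

# Same underlying node, so no change is recorded and save() can be skipped
assert changes == []
```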
In `@infrahub_sdk/ctl/commands/utils.py`:
- Around line 125-136: The try/except around resolve_node in the async block
swallows all Exceptions which can hide network/auth/rate-limit errors; change
the except to only catch the expected "not found" or resolution-specific
exception(s) (e.g., the exception type raised by resolve_node) and let other
exceptions propagate (or re-raise them), and if you want visibility add a
debug/processLogger.debug statement inside the except before falling back to
_search_generic_peer; specifically update the block that calls resolve_node and
sets resolved[key] to only handle resolution-not-found errors, otherwise either
re-raise or log and propagate the original exception so transient failures
aren't masked.
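The narrowed fallback could look like the following self-contained sketch (the exception and helper names stand in for the SDK's own):

```python
import asyncio

class NodeNotFoundError(Exception):
    """Stand-in for the SDK's lookup-miss exception (illustrative)."""

async def resolve_node(identifier: str) -> str:
    raise NodeNotFoundError(identifier)  # simulate a miss on the direct lookup

async def search_generic_peer(identifier: str) -> str:
    return f"peer:{identifier}"  # simulate the fallback search

async def resolve(identifier: str) -> str:
    try:
        return await resolve_node(identifier)
    except (NodeNotFoundError, ValueError, IndexError):
        # Only lookup misses fall back; auth/network errors keep propagating
        return await search_generic_peer(identifier)

print(asyncio.run(resolve("LON-1")))  # → peer:LON-1
```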
In `@infrahub_sdk/ctl/formatters/csv.py`:
- Around line 45-51: Sanitize any cell value before writing to CSV to prevent
spreadsheet formula injection: implement a small sanitizer used where
writer.writerow(columns) and where writer.writerow([str(row_data.get(col, ""))
...]) are called that checks string values and, if they start with any of the
characters '=', '+', '-', or '@', prefixes them (e.g. with a single quote or
another safe prefix) or otherwise escapes them; apply this sanitizer to both
header and data cell values (reference the variables/methods writer.writerow,
columns, rows, and row_data.get) so no raw user-controlled string beginning with
those characters is emitted.
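A minimal sanitizer sketch using only the stdlib csv module (function name is illustrative):

```python
import csv
import io

FORMULA_PREFIXES = ("=", "+", "-", "@")

def sanitize_cell(value: str) -> str:
    """Neutralize spreadsheet formula triggers by prefixing a single quote."""
    return "'" + value if value.startswith(FORMULA_PREFIXES) else value

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([sanitize_cell(c) for c in ["name", "command"]])
writer.writerow([sanitize_cell(v) for v in ["router1", '=HYPERLINK("http://evil")']])

# The formula cell is emitted with a leading quote instead of a bare "="
assert "'=HYPERLINK" in buf.getvalue()
```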
In `@infrahub_sdk/ctl/parsers.py`:
- Around line 20-30: The current naive numeric coercion that directly tries
int(value) and float(value) can turn leading-zero identifiers like "00123" into
123; update the parsing logic that attempts numeric coercion (the block using
int(value) and float(value) with the local variable value) to first validate the
string with stricter rules: only coerce to int if it matches the pattern
^(?:0|[1-9][0-9]*)$ (allow "0" but not numbers with leading zeros), and only
coerce to float if it matches a safe decimal/float regex (no leading-zero
ambiguity) or when an explicit opt-in flag is set; otherwise return the original
string unchanged. Ensure this change is applied to both places where the
try-int/try-float sequence appears.
In `@infrahub_sdk/ctl/schema.py`:
- Around line 284-289: The current filter_text check accesses s.kind before
ensuring s is a NodeSchemaAPI, which can raise AttributeError for
non-NodeSchemaAPI entries; fix by first restricting items to instances of
NodeSchemaAPI (use isinstance(s, NodeSchemaAPI)) and then apply the filter_text
check against s.kind, and finally sort by s.kind (affecting the variables/items,
NodeSchemaAPI, filter_text, and s.kind).
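The ordering fix can be demonstrated with stand-in classes (the real NodeSchemaAPI lives in the SDK; these are illustrative):

```python
class NodeSchemaAPI:
    """Stand-in for the SDK class, for illustration only."""
    def __init__(self, kind: str) -> None:
        self.kind = kind

class GenericSchemaAPI:
    """Deliberately has no .kind attribute."""

entries = [NodeSchemaAPI("InfraSite"), GenericSchemaAPI(), NodeSchemaAPI("InfraDevice")]
filter_text = "infra"

# Type-check first, so .kind is only ever read from NodeSchemaAPI instances
nodes = [s for s in entries if isinstance(s, NodeSchemaAPI)]
matches = sorted(
    (s for s in nodes if filter_text in s.kind.lower()),
    key=lambda s: s.kind,
)
assert [s.kind for s in matches] == ["InfraDevice", "InfraSite"]
```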
In `@specs/001-end-user-cli/contracts/cli-commands.md`:
- Line 17: The contract text "Exit 0: results found | Exit 0: no results (empty
table) | Exit 1: invalid kind" conflicts with the implementation and PR
summaries which use exit code 80 for empty-result queries; decide whether empty
results should return 0 or 80, then make a single-source change: either update
the CLI implementation (the code path that returns exit codes for query results)
to return 0 for empty results, or update this contract line in cli-commands.md
to state "Exit 80: no results (empty table)" and also update the PR summary/AI
summary to match; finally run/update any tests or docs referencing the old
behavior so all places (contract text, PR summary, AI summary, and tests)
consistently reflect the chosen exit code.
In `@specs/001-end-user-cli/research.md`:
- Around line 5-7: Update the Decision paragraph to reflect the actual
implemented CLI surface by removing or replacing the separate `infrahub` entry
point mention and documenting the single entry `infrahubctl` instead; edit the
Decision text under `[project.scripts]` to state that CLI commands are exposed
via `infrahubctl` (not a second `infrahub` entry) and reference the implemented
module `infrahub_sdk/ctl/enduser_cli.py` or the `infrahubctl` entry point symbol
so the spec matches the code.
In `@specs/001-end-user-cli/spec.md`:
- Around line 1-7: The spec currently uses the command name literal "infrahub"
in multiple places; update every occurrence to the intended "infrahubctl" so the
feature header, input description, and all acceptance/doc sections match this
PR's target; search for the string "infrahub" in the spec document and replace
it with "infrahubctl" (e.g., the top-level title, the input line, and the
sections referenced around the command entry) to ensure consistency between the
spec and implementation.
In `@tests/integration/test_enduser_cli.py`:
- Around line 176-224: The verify tests depend on state created by other tests
and leak data; make each write-related test manage its own create/update/delete
lifecycle. For test_create_inline_verify: create the object within that test (or
call the CLI create command with a unique id/name, e.g., include a uuid), then
verify and delete it via client.delete or CLI. For test_update_inline_verify:
create a fresh TestingPerson with the original height before invoking the CLI
update (use a unique id), verify the height changed, then delete it. For
test_delete_verify: create the "Delete Me" object inside that test (or use a
unique name), run the CLI delete with --yes (or call client.delete), then verify
it no longer exists. Update test_create_inline, test_update_inline, and
test_delete_with_yes to use unique ids or ensure they clean up after themselves
so tests no longer rely on ordering.
- Around line 53-67: The fixture ctl_client_config overrides
config.SETTINGS._settings.server_address and environment variables but only
partially restores them; save the original server_address (e.g., orig_addr =
config.SETTINGS._settings.server_address) before mutating, set
INFRAHUB_USERNAME/PASSWORD only when client provides them, and in the teardown
restore server_address back to orig_addr and restore or delete the environment
variables based on whether original_username/original_password were present
(delete the env var if original was None); update the teardown in
ctl_client_config to perform these restores so subsequent tests don't inherit
the mutated state.
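The save/restore pattern described above, sketched directly against os.environ (variable and helper names are illustrative; a pytest fixture would wrap the same logic in setup/teardown):

```python
import os

VAR = "INFRAHUB_USERNAME"  # illustrative variable name

def apply_env(updates):
    """Set environment variables and return their original values."""
    saved = {key: os.environ.get(key) for key in updates}
    os.environ.update(updates)
    return saved

def restore_env(saved):
    """Restore each variable; delete it if it was originally unset."""
    for key, original in saved.items():
        if original is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = original

before = os.environ.get(VAR)
saved = apply_env({VAR: "admin"})
assert os.environ[VAR] == "admin"
restore_env(saved)
assert os.environ.get(VAR) == before
```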
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 50a4cdc5-c375-446e-b1e6-d3b3ccfcc7c9
Only catch lookup-miss errors (NodeNotFoundError, SchemaNotFoundError, ValueError, IndexError) before falling back to generic peer search. Auth, network, and other unexpected errors now propagate instead of being silently swallowed.
asyncio_mode=auto in pyproject.toml already detects async tests. The anyio markers caused each test to run under both asyncio and trio, doubling the count unnecessarily.
Values like '00123' are now kept as strings instead of being coerced to int 123. The coercion only applies when str(int(v)) == v, ensuring no data loss for zip codes, serial numbers, rack unit names, etc.
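That round-trip rule can be sketched as:

```python
def coerce(value: str):
    """Coerce to int only when the text round-trips exactly (sketch of the rule above)."""
    try:
        if str(int(value)) == value:
            return int(value)
    except ValueError:
        pass
    return value

assert coerce("00123") == "00123"  # leading zero preserved as a string
assert coerce("123") == 123
assert coerce("height") == "height"
```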
Summary by CodeRabbit
New Features
infrahub CLI command with CRUD operations: get, create, update, and delete for managing objects
schema commands (list, show) for schema discovery and browsing
Delete confirmation prompt (--yes to skip)
Documentation
get, create, update, delete as top-level commands and schema list/schema show as subcommands to the existing infrahubctl CLI
Commands
infrahubctl get <kind> [id]
infrahubctl create <kind>: --set key=value or --file (JSON/YAML)
infrahubctl update <kind> <id>: --set key=value or --file
infrahubctl delete <kind> <id>: confirmation prompt (--yes to skip)
infrahubctl schema list (--filter)
infrahubctl schema show <kind>
Key features
Cisco/NX-OS)
--set location=LON-1 resolves the name to a node ID, even for generic/polymorphic peer types
infrahubctl get ... --output yaml > backup.yml then infrahubctl object load backup.yml: uses HFID references, omits empty values
--all-columns to show all)
--set height=190 sends integer 190, not string "190"
Test plan
uv run invoke format and uv run invoke lint-code pass
🤖 Generated with Claude Code