IFC-2275 Allow using artifact composition Jinja2 filters#8705

Open
gmazoyer wants to merge 7 commits into develop from gma-20260326-ifc2275

Conversation


@gmazoyer gmazoyer commented Mar 26, 2026

Why

The SDK now ships artifact_content, file_object_content, from_json, and from_yaml Jinja2 filters via opsmill/infrahub-sdk-python#889. This PR wires them into the server so they actually work on Prefect workers, and makes sure they're properly blocked everywhere else.

What changed

The main change is in render_jinja2_template. It now passes an SDK client when creating the Jinja2 template, which gives the new filters access to the object store API during rendering. It also validates the template against ExecutionContext.CORE | ExecutionContext.WORKER before rendering, which blocks LOCAL-only filters from being used in artifact templates.
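The context gating described above can be sketched as follows. ExecutionContext, FILTER_CONTEXTS, and validate_template_filters here are illustrative stand-ins, not the SDK's actual types or API:

```python
from enum import Flag, auto

class ExecutionContext(Flag):
    CORE = auto()    # in-process evaluation (e.g. computed attributes)
    WORKER = auto()  # Prefect worker rendering artifacts
    LOCAL = auto()   # local CLI usage only

# Each filter declares the contexts in which it may run.
FILTER_CONTEXTS = {
    "upper": ExecutionContext.CORE | ExecutionContext.WORKER | ExecutionContext.LOCAL,
    "artifact_content": ExecutionContext.WORKER,      # needs the object store API
    "file_object_content": ExecutionContext.WORKER,
    "read_local_file": ExecutionContext.LOCAL,        # hypothetical LOCAL-only filter
}

def validate_template_filters(used_filters: list[str], context: ExecutionContext) -> None:
    """Reject any filter whose allowed contexts don't intersect the given one."""
    for name in used_filters:
        allowed = FILTER_CONTEXTS.get(name, ExecutionContext(0))
        if not allowed & context:
            raise ValueError(f"filter {name!r} is not allowed in context {context}")

# A worker-rendered artifact template may use artifact_content...
validate_template_filters(["upper", "artifact_content"], ExecutionContext.CORE | ExecutionContext.WORKER)
# ...but a computed attribute, validated against CORE only, may not.
try:
    validate_template_filters(["artifact_content"], ExecutionContext.CORE)
except ValueError as exc:
    print(exc)
```

Validating against the union CORE | WORKER accepts any filter usable in at least one of those contexts, which is why worker-only filters pass artifact rendering but fail computed-attribute validation.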

The schema validation now uses the ExecutionContext.CORE context instead of the old boolean restrict_untrusted_jinja2_filters config flag. This guarantees Jinja2 filters such as artifact_content and file_object_content are always rejected in computed attributes.

The SDK submodule is bumped to the infrahub-develop Python SDK commit, which contains the filter implementations.

How to test

```bash
uv run pytest backend/tests/component/core/schema/schema_branch/test_validate_computed_attributes.py backend/tests/component/git/test_artifact_composition.py -v
```

```bash
uv run invoke dev.build  # Needs a fresh build
GIT_CONFIG_GLOBAL=/dev/null INFRAHUB_USE_TEST_CONTAINERS=1 uv run pytest backend/tests/integration_docker/test_artifact_composition.py -v
```

The integration test generates section artifacts via one repo, adds a second repo whose templates use artifact_content to inline the section content, and verifies the composed output.
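A minimal sketch of what that composition amounts to, with the object store mocked as a dict and the filters written as plain functions (all names here are illustrative, not the SDK implementation):

```python
import json

# Mock object store holding the section artifacts produced by the first repo.
OBJECT_STORE = {
    "section-alice": json.dumps({"name": "Alice", "role": "admin"}),
    "section-bob": json.dumps({"name": "Bob", "role": "viewer"}),
}

def artifact_content(name: str) -> str:
    """Stand-in for the artifact_content filter: fetch a stored artifact's content."""
    return OBJECT_STORE[name]

def from_json(raw: str) -> dict:
    """Stand-in for the from_json filter: parse fetched content into structured data."""
    return json.loads(raw)

# The second repo's template inlines the section artifacts into a composed output.
sections = [from_json(artifact_content(n)) for n in ("section-alice", "section-bob")]
composite = "\n".join(f"{s['name']}: {s['role']}" for s in sections)
print(composite)
# Alice: admin
# Bob: viewer
```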

Remaining question

The restrict_untrusted_jinja2_filters config setting is no longer referenced but still defined. It should either be deprecated or wired back into the logic, which no longer uses it.

Checklist

  • Tests added/updated
  • Changelog entry added
  • I have reviewed AI generated content

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for artifact content composition in Jinja2 templates with new filters (artifact_content, file_object_content, from_json, from_yaml) for worker-executed templates
    • Filters are restricted from use in computed attributes for security
  • Tests

    • Added comprehensive component and integration tests for artifact composition workflows

@github-actions github-actions bot added the group/backend Issue related to the backend (API Server, Git Agent) label Mar 26, 2026
@codspeed-hq

codspeed-hq bot commented Mar 26, 2026

Merging this PR will degrade performance by 28.09%

⚠️ Different runtime environments detected

Some benchmarks with significant performance changes were compared across different runtime environments,
which may affect the accuracy of the results.

Open the report in CodSpeed to investigate

❌ 12 regressed benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

| Benchmark | BASE | HEAD | Efficiency |
| --- | --- | --- | --- |
| test_relationshipmanager_getpeer | 133.8 µs | 156.2 µs | -14.34% |
| test_query_rel_one | 499.5 ms | 661.7 ms | -24.51% |
| test_get_menu | 193.3 ms | 252.2 ms | -23.36% |
| test_graphql_generate_schema | 377.9 ms | 525.5 ms | -28.09% |
| test_load_node_to_db_node_schema | 53.6 ms | 69.1 ms | -22.41% |
| test_query_rel_many | 522 ms | 692 ms | -24.57% |
| test_query_one_model | 352.2 ms | 464.1 ms | -24.1% |
| test_nodemanager_querypeers | 1.2 ms | 1.5 ms | -17.62% |
| test_get_schema | 262.9 ms | 343.6 ms | -23.48% |
| test_schemabranch_duplicate | 5.8 ms | 7.6 ms | -23.86% |
| test_base_schema_duplicate_CoreProposedChange | 1.7 ms | 2.1 ms | -19.29% |
| test_schemabranch_process | 840.1 ms | 1,108.7 ms | -24.22% |

Comparing gma-20260326-ifc2275 (ac98cb1) with develop (270cfb1)

Open in CodSpeed

@gmazoyer gmazoyer force-pushed the gma-20260326-ifc2275 branch from c827a28 to 2b9fa00 Compare March 26, 2026 14:06

coderabbitai bot commented Mar 26, 2026

Walkthrough

This change introduces artifact content composition functionality via Jinja2 filters. The core implementation updates Jinja2 template handling to accept an optional client parameter and validates templates against execution context (CORE or WORKER) instead of a restricted filter setting. New test fixtures and integration tests verify that templates can compose artifacts and file objects using artifact_content and file_object_content filters, while blocking these filters in computed attributes. Configuration files for section-based and composite-based artifact definitions demonstrate the feature, and a changelog entry documents the new capability.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 20.00% which is insufficient. The required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The pull request title clearly and specifically describes the main change: enabling Jinja2 artifact composition filters in the system. |
| Description check | ✅ Passed | The pull request description covers all critical sections: why (new filters from SDK), what changed (client passing, context-based validation), how to test (with specific commands), and includes a note about remaining questions. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Allow Jinja2 filters flagged as CORE or WORKER to be used in templates
rendered by workers.
@gmazoyer gmazoyer force-pushed the gma-20260326-ifc2275 branch from 9e776f0 to d645ce1 Compare March 27, 2026 09:22
@gmazoyer gmazoyer marked this pull request as ready for review March 27, 2026 10:16
@gmazoyer gmazoyer requested a review from a team as a code owner March 27, 2026 10:16
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/infrahub/git/integrator.py (1)

1201-1211: ⚠️ Potential issue | 🟡 Minor

Catch JinjaTemplateOperationViolationError in addition to JinjaTemplateError.

The validate() method raises JinjaTemplateOperationViolationError when a filter isn't allowed in the given context. Currently only JinjaTemplateError is caught here, but validate() can throw the violation exception if a template uses a disallowed filter. This exception would propagate uncaught rather than being wrapped in TransformError.

Proposed fix
```diff
-from infrahub_sdk.template.exceptions import JinjaTemplateError
+from infrahub_sdk.template.exceptions import JinjaTemplateError, JinjaTemplateOperationViolationError

         try:
             jinja2_template.validate(context=ExecutionContext.CORE | ExecutionContext.WORKER)
             return await jinja2_template.render(variables=data)
-        except JinjaTemplateError as exc:
+        except (JinjaTemplateOperationViolationError, JinjaTemplateError) as exc:
             log.error(str(exc), exc_info=True)
             raise TransformError(
                 repository_name=self.name, commit=commit, location=location, message=exc.message
             ) from exc
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/infrahub/git/integrator.py` around lines 1201 - 1211, The
validate/render block currently only catches JinjaTemplateError but validate()
can also raise JinjaTemplateOperationViolationError; update the exception
handling around Jinja2Template.validate and render (the Jinja2Template instance
named jinja2_template in this block) to catch both JinjaTemplateError and
JinjaTemplateOperationViolationError, log the exception (log.error(...,
exc_info=True)) and re-raise a TransformError(repository_name=self.name,
commit=commit, location=location, message=exc.message) from the caught exception
so operation-violation errors are wrapped the same way as other template errors.
backend/infrahub/core/schema/schema_branch.py (1)

1367-1379: ⚠️ Potential issue | 🟡 Minor

Fix the double colon typo in the error message for JinjaTemplateError.

The computed attribute Jinja2 validation correctly uses ExecutionContext.CORE, which blocks WORKER-only filters (e.g., artifact_content, file_object_content) in computed attributes—the intended behavior for core evaluation context.

However, on line 1379, the error message for JinjaTemplateError has a double colon ("invalid template: : {exc.message}"). This should match the single colon in the JinjaTemplateOperationViolationError message above it: "invalid template: {exc.message}".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/infrahub/core/schema/schema_branch.py` around lines 1367 - 1379, The
error message for the JinjaTemplateError contains a stray double colon; locate
the block where InfrahubJinja2Template is used (see InfrahubJinja2Template(...),
get_variables(), validate(context=ExecutionContext.CORE)) and update the raised
ValueError for the JinjaTemplateError to use a single colon like the
JinjaTemplateOperationViolationError message (i.e., change "invalid template: :
{exc.message}" to "invalid template: {exc.message}" referencing node.kind and
attribute.name in the same way).
🧹 Nitpick comments (2)
backend/tests/integration_docker/test_artifact_composition.py (1)

22-38: Consider logging or returning partial results on timeout for debugging.

The wait_for_artifacts function is well-implemented with configurable polling. However, when a timeout occurs, it might be helpful for debugging to know what state the artifacts were in. Consider capturing the last known state of artifacts before raising the TimeoutError.

♻️ Optional improvement for better debugging
```diff
 async def wait_for_artifacts(
     client: InfrahubClient, expected_name: str | None = None, interval: int = 3, retries: int = 10
 ) -> list[CoreArtifact]:
     """Poll until all artifacts (or those matching expected_name) reach a terminal state."""
+    last_artifacts: list[CoreArtifact] = []
     for _ in range(retries):
         artifacts = await client.all(kind=CoreArtifact)

         if expected_name:
             artifacts = [a for a in artifacts if a.name.value == expected_name]

+        last_artifacts = artifacts
         if artifacts and all(
             a.status.value in (ArtifactStatus.READY.value, ArtifactStatus.ERROR.value) for a in artifacts
         ):
             return artifacts
         await asyncio.sleep(interval)

-    raise TimeoutError(f"Artifacts not ready after {retries * interval}s (filter: {expected_name})")
+    statuses = [(a.name.value, a.status.value) for a in last_artifacts] if last_artifacts else "none found"
+    raise TimeoutError(f"Artifacts not ready after {retries * interval}s (filter: {expected_name}, last state: {statuses})")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/integration_docker/test_artifact_composition.py` around lines
22 - 38, In wait_for_artifacts capture the last observed artifacts before
sleeping/loop exit and include their minimal state in the timeout error (or log
them) to aid debugging; specifically, inside wait_for_artifacts (function name)
record the latest artifacts list (e.g., names and status via a small dict/list)
each iteration and when raising TimeoutError include that serialized last state
(or call processLogger.error with it) so callers see partial results on timeout.
backend/tests/component/git/test_artifact_composition.py (1)

20-22: Remove the unused init_service fixture parameter from test functions.

The init_service fixture is declared as a parameter in both test_worker_rejects_local_only_filter and test_worker_allows_trusted_filters but is never referenced in their test bodies. InfrahubServices.new() does not register a global service instance that the tests depend on; the tests work through Prefect's dependency injection (managed by the prefect_test_fixture) and the repository provided by git_repo_filter_tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/component/git/test_artifact_composition.py` around lines 20 -
22, The tests declare but never use the init_service fixture: remove the
init_service parameter from the signatures of
test_worker_rejects_local_only_filter and test_worker_allows_trusted_filters so
they no longer accept that fixture; if init_service is not referenced by any
other tests, also delete the async fixture function InfrahubServices.new wrapper
named init_service to avoid dead code.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5f329451-bd83-4be3-96ac-6e8e04d26e56

📥 Commits

Reviewing files that changed from the base of the PR and between 270cfb1 and ac98cb1.

📒 Files selected for processing (14)
  • backend/infrahub/computed_attribute/jinja2.py
  • backend/infrahub/core/schema/schema_branch.py
  • backend/infrahub/git/integrator.py
  • backend/tests/component/core/schema/schema_branch/test_validate_computed_attributes.py
  • backend/tests/component/git/test_artifact_composition.py
  • backend/tests/integration_docker/test_artifact_composition.py
  • backend/tests/integration_docker/test_files/repos/composite-config/.infrahub.yml
  • backend/tests/integration_docker/test_files/repos/composite-config/templates/person_composite.gql
  • backend/tests/integration_docker/test_files/repos/composite-config/templates/person_composite.j2
  • backend/tests/integration_docker/test_files/repos/section-config/.infrahub.yml
  • backend/tests/integration_docker/test_files/repos/section-config/templates/person_section.gql
  • backend/tests/integration_docker/test_files/repos/section-config/templates/person_section.j2
  • changelog/+ifc2275.added.md
  • python_sdk

```diff
@@ -1 +1 @@
-Subproject commit aecc70819067be0000b4100227fb841d3f555220
+Subproject commit d9af0fe5e72e3e8684c94d9d8c83fa51b5bcfc37
```
Contributor

@coderabbitai coderabbitai bot Mar 27, 2026


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Description: Verify the SDK submodule commit exists and check for filter implementations

cd python_sdk

# Verify the commit exists and show its message
git log -1 --format="%H %s" d9af0fe5e72e3e8684c94d9d8c83fa51b5bcfc37

# Check for the new filter implementations in the SDK
rg -l "artifact_content|file_object_content|from_json|from_yaml" --type py
```

Repository: opsmill/infrahub

Length of output: 380


🏁 Script executed:

```bash
# Check the actual content of the python_sdk file
cat -n python_sdk

# Check git submodule status
git submodule status python_sdk

# Check if there's a .gitmodules file
cat .gitmodules | grep -A 5 python_sdk

# Get recent git history for the python_sdk file to see what changed
git log --oneline -10 -- python_sdk
```

Repository: opsmill/infrahub

Length of output: 480


Submodule commit is unreachable and will break initialization.

The SDK submodule is pinned to commit d9af0fe5e72e3e8684c94d9d8c83fa51b5bcfc37, which does not exist in the remote repository (https://github.com/opsmill/infrahub-sdk-python.git). Attempting to initialize or update the submodule will fail with the error upload-pack: not our ref.

Ensure the commit is pushed to the SDK repository on the target branch before merging this change, or update the submodule reference to an existing commit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@python_sdk` at line 1, The repository submodule reference for the python_sdk
is pointing at a non-existent commit d9af0fe5e72e3e8684c94d9d8c83fa51b5bcfc37
which causes `git submodule update --init` to fail; fix this by either pushing
that commit to the remote branch in
https://github.com/opsmill/infrahub-sdk-python.git or updating the submodule
pointer to a valid existing commit: edit the submodule reference in .gitmodules
and the python_sdk gitlink (or run git submodule set-branch / git submodule
update --init && git -C python_sdk fetch && git -C python_sdk checkout
<valid-commit-or-branch>) and commit the updated submodule SHA so initialization
succeeds.


@gmazoyer, I apologize for the false positive! The commit d9af0fe5e72e3e8684c94d9d8c83fa51b5bcfc37 clearly exists in the remote repository. My sandbox environment was unable to reach the remote SDK repository during script execution, which incorrectly led me to report it as unreachable. The submodule reference is valid and the comment was wrong. Sorry for the noise!


🧠 Learnings used
Learnt from: CR
Repo: opsmill/infrahub PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-12-01T22:15:34.303Z
Learning: Place SDK tests in `python_sdk/tests/` directory

Comment on lines +58 to +69
```python
    async def test_load_schema(self, default_branch: str, client: InfrahubClient, initial_schema: SchemaRoot) -> None:
        await client.schema.wait_until_converged(branch=default_branch)
        resp = await client.schema.load(
            schemas=[initial_schema.to_schema_dict()], branch=default_branch, wait_until_converged=True
        )
        assert resp.errors == {}

    async def test_create_data(self, client: InfrahubClient, default_branch: str) -> None:
        persons = await self.create_persons(client=client, branch=default_branch)

        group = await client.create(kind="CoreStandardGroup", name="people", members=[p.id for p in persons])
        await group.save()
```

If these methods are actually tests, they should probably assert something; otherwise, maybe they are more like an initial_data fixture or something.


Labels

group/backend Issue related to the backend (API Server, Git Agent)
