
Add search_files tool #11

Open
bigcat88 wants to merge 8 commits into main from feature/p1-search-files

Conversation


@bigcat88 bigcat88 commented Mar 24, 2026

Summary

  • Add search_files tool (READ) using WebDAV SEARCH API
  • Supports filename search, MIME type filtering, directory scoping, and pagination
  • 11 integration tests, 30 tools total, 250 tests

Tool added

Tool: search_files
Permission: READ
Description: Search files by name pattern and/or MIME type, scoped to a directory, with pagination (limit/offset). Results are sorted by last modified (newest first).

Examples

  • search_files(query="report") — find all files with "report" in the name
  • search_files(mimetype="image") — find all images
  • search_files(query="notes", mimetype="text", path="Documents") — find text files matching "notes" in Documents
  • search_files(query="budget", limit=5) — first 5 results
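A caller consuming these results would parse the structured {data, pagination} response this PR introduces; a minimal sketch (the per-entry fields "path" and "size" are illustrative assumptions, not the tool's documented schema):

```python
import json

# Illustrative payload in the {data, pagination} shape; entry keys are assumed.
raw = json.dumps({
    "data": [
        {"path": "Documents/report-2026.md", "size": 2048},
        {"path": "Documents/report-old.md", "size": 512},
    ],
    "pagination": {"count": 2, "offset": 0, "has_more": False},
})

result = json.loads(raw)
paths = [entry["path"] for entry in result["data"]]
if result["pagination"]["has_more"]:
    # Fetch the next page by advancing offset by the returned count.
    next_offset = result["pagination"]["offset"] + result["pagination"]["count"]
```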

Test plan

  • All 11 search tests pass locally
  • ruff + pyright clean
  • CI passes on NC 32 + NC 33

Summary by CodeRabbit

  • New Features

    • Added a recursive file search tool supporting filename and MIME-type filters, path scoping, sorting, and pagination.
  • Behavior Changes

    • Activity and comments now return structured JSON with top-level data and pagination (count/offset/has_more/since) instead of textual footers.
  • Tests

    • Added/updated integration tests for search, activity, and comments covering filtering, pagination, permissions, and edge cases.
  • Documentation

    • Marked the file-search tool completed in progress tracking and updated aggregate counts.
  • Chores

    • Adjusted coverage/reporting comment configuration.


coderabbitai bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a completed PROGRESS.md entry for a WebDAV SEARCH search_files tool; implements search_files with XML builder and WebDAV SEARCH integration; refactors tool registration; changes activity and comments tools to return {data,pagination} objects; updates integration tests and codecov configuration.

Changes

  • Documentation (PROGRESS.md): Mark search_files (WebDAV SEARCH) completed; update Files/tools/test counts and overall totals.
  • Files tool implementation (src/nextcloud_mcp/tools/files.py): Add _build_search_xml and a new search_files READ tool (validation, limit/offset clamping, WebDAV SEARCH request, result parsing, pagination); refactor register(mcp) into _register_read_tools, _register_write_tools, and _register_destructive_tools; update imports (NextcloudClient, get_config, xml.sax.saxutils.escape).
  • Activity & Comments tools (src/nextcloud_mcp/tools/activity.py, src/nextcloud_mcp/tools/comments.py): Change return payloads from raw arrays/strings to structured JSON objects { "data": ..., "pagination": { ... } } and adjust pagination computation.
  • Integration tests, files (tests/integration/test_files.py): Add TestSearchFiles covering query/mimetype/path filters, combined queries, pagination, empty results, argument validation, and read-only permission behavior; assert response shape and pagination fields.
  • Integration tests, activity/comments/server (tests/integration/test_activity.py, test_comments.py, test_server.py): Update tests to parse {data, pagination} responses; adapt pagination assertions; append search_files to EXPECTED_TOOLS.
  • CI config (codecov.yml): Remove coverage.status.*.after_n_builds entries and add comment.after_n_builds: 5.

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant MCP as MCP Tool\n(search_files)
    participant Config as Config\nProvider
    participant NCClient as NextcloudClient
    participant WebDAV as Nextcloud\nWebDAV

    User->>MCP: call search_files(query, path, mimetype, limit, offset)
    MCP->>Config: get_config()
    Config-->>MCP: return config/user info
    MCP->>NCClient: build SEARCH XML (_build_search_xml)
    NCClient->>WebDAV: DAV SEARCH request (XML body)
    WebDAV-->>NCClient: SEARCH response (multistatus)
    NCClient->>MCP: parse response -> structured results
    MCP-->>User: return JSON { data: [...], pagination: {...} }

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

"I hopped through code with curious paws,
Crafted XML and counted test laws,
Search leapt to life on WebDAV trails,
Pagination hummed in tidy details. 🐇"

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 23.21%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2)

  • Description Check ✅: Skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅: The PR title 'Add search_files tool' directly and clearly summarizes the main change: introducing a new search_files READ tool using the WebDAV SEARCH API, which is the primary focus of the changeset.



codecov bot commented Mar 24, 2026

Codecov Report

❌ Patch coverage is 97.87234% with 1 line in your changes missing coverage. Please review.
✅ Project coverage is 96.53%. Comparing base (67f3b69) to head (b1fbe2d).
⚠️ Report is 2 commits behind head on main.

Files with missing lines:
  • src/nextcloud_mcp/tools/files.py: patch 97.56%; 1 line missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #11      +/-   ##
==========================================
+ Coverage   96.34%   96.53%   +0.19%     
==========================================
  Files          13       13              
  Lines         629      664      +35     
==========================================
+ Hits          606      641      +35     
  Misses         23       23              
Flag Coverage Δ
integration 95.18% <97.87%> (+0.42%) ⬆️
nc32 95.18% <97.87%> (+0.42%) ⬆️
nc33 95.18% <97.87%> (+0.42%) ⬆️
py3.12 16.41% <0.00%> (-0.92%) ⬇️
py3.13 16.41% <0.00%> (-0.92%) ⬇️
py3.14 16.41% <0.00%> (-0.92%) ⬇️
unit 16.41% <0.00%> (-0.92%) ⬇️

Flags with carried forward coverage won't be shown.

@oleksandr-nc oleksandr-nc marked this pull request as ready for review March 24, 2026 15:51
@bigcat88 (Contributor, Author)

(AI) Ready for review. All CI jobs green (lint, unit tests py3.12/3.13/3.14, integration NC 32 + NC 33).

Codecov checks failing due to repo rename — needs re-setup on codecov.io.

@coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
PROGRESS.md (1)

46-58: ⚠️ Potential issue | 🟡 Minor

Test count discrepancy: sum of module tests (245) doesn't match total (250).

The individual module test counts sum to 245 (35+13+11+48+31+18+25+6+34+10+12+2), but the total shows 250. Please verify the counts are accurate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@PROGRESS.md` around lines 46 - 58, The total tests count in PROGRESS.md is
inconsistent: the module rows (Files, Users, Notifications, Talk, Talk Polls,
Activity, Comments, Server, Permissions, Errors, Config, State) sum to 245 but
the bolded Total shows 250; recalculate the sum of the right-column values and
either update the individual module counts to reflect true values or correct the
bolded Total to 245 so they match, ensuring the change is made in the
PROGRESS.md table row for **Total** (or the specific module rows if one of those
numbers was mistyped).
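The arithmetic behind the finding checks out; summing the quoted module counts:

```python
# Module test counts as listed in the review comment above
module_counts = [35, 13, 11, 48, 31, 18, 25, 6, 34, 10, 12, 2]
total = sum(module_counts)
print(total)  # 245, five short of the bolded total of 250
```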
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/nextcloud_mcp/tools/files.py`:
- Around line 17-50: The _build_search_xml function interpolates unescaped user
inputs (query, mimetype, path, and user) directly into XML, causing XML
injection/malformed XML; fix by escaping these values (use
xml.sax.saxutils.escape or html.escape) before building where_parts,
mime_pattern, and scope so special XML chars (<, >, &, ", ') are replaced
safely, while preserving the intended wildcard logic (e.g., apply escape after
constructing the percent-wrapped query and after forming mime_pattern) and then
use the escaped variables when constructing the final XML string.
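A minimal sketch of the escaping the prompt calls for, assuming a helper that builds one where-clause fragment (the function name is hypothetical; escaping is applied after the percent-wildcard wrapping so the wildcards survive):

```python
from xml.sax.saxutils import escape


def build_filename_clause(query: str) -> str:
    # Wrap with SQL-style wildcards first, then escape so <, >, and &
    # in user input cannot break out of the <d:literal> element.
    pattern = escape(f"%{query}%")
    return (
        "<d:like>"
        "<d:prop><d:displayname/></d:prop>"
        f"<d:literal>{pattern}</d:literal>"
        "</d:like>"
    )
```

Note that escape() replaces &, <, and > by default; pass an entities mapping such as {'"': "&quot;"} when a value must also be attribute-safe.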


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: da1b4208-d28c-452a-8c93-224f45a6c20d

📥 Commits

Reviewing files that changed from the base of the PR and between 693d546 and a20c145.

📒 Files selected for processing (4)
  • PROGRESS.md
  • src/nextcloud_mcp/tools/files.py
  • tests/integration/test_files.py
  • tests/integration/test_server.py

@coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (1)
src/nextcloud_mcp/tools/files.py (1)

15-20: ⚠️ Potential issue | 🟠 Major

Escape the SEARCH inputs before interpolating them into XML.

query, mimetype, path, and user are still inserted raw into <d:literal> / <d:href>. Values containing &, <, or > will produce malformed XML or alter the generated SEARCH clause.

Also applies to: 28-39

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/nextcloud_mcp/tools/files.py` around lines 15 - 20, Escape all
user-controlled values before inserting into XML literals/refs: apply an
XML-escaping function (e.g. xml.sax.saxutils.escape or html.escape) to query,
mimetype, path, and user prior to the f-string interpolation that builds
where_parts and any <d:href> strings; for mimetype build mime_pattern first
(with your % suffix logic) then escape the resulting mime_pattern, and use the
escaped values in the f-strings that create the <d:literal> and <d:href>
fragments so characters like &, <, and > cannot break the generated SEARCH XML.
🧹 Nitpick comments (1)
src/nextcloud_mcp/tools/files.py (1)

124-132: Consider moving SEARCH transport/parsing into NextcloudClient.

The tool layer now owns the raw XML body, the SEARCH call, the header, and a dependency on NextcloudClient._parse_propfind. A dav_search(...) client method would keep the DAV protocol boundary in one place and avoid new callers reaching into a private parser.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/nextcloud_mcp/tools/files.py` around lines 124 - 132, The SEARCH request
and XML handling should be moved into the Nextcloud client to keep DAV protocol
logic encapsulated: add a new NextcloudClient.dav_search(user, query, path,
limit, offset, mimetype) method that internally calls _build_search_xml (or
embeds that logic), performs client.dav_request("SEARCH", ...,
headers={"Content-Type": "text/xml; charset=utf-8"}, body=...), and returns
parsed results by calling the existing NextcloudClient._parse_propfind on the
response; then update callers that currently build the body and call
client.dav_request plus NextcloudClient._parse_propfind to call
client.dav_search(...) instead and remove direct references to _build_search_xml
and _parse_propfind from the tool layer.
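A sketch of the encapsulation this nitpick suggests; the names dav_search, dav_request, _build_search_xml, and _parse_propfind come from the review text, while the stub bodies below are purely illustrative:

```python
class NextcloudClient:
    """Illustrative stub; the real client performs HTTP DAV requests."""

    def dav_request(self, method: str, path: str, headers=None, body=None):
        # The real implementation would send the request; here we just echo it.
        return {"method": method, "path": path, "headers": headers, "body": body}

    @staticmethod
    def _parse_propfind(response):
        # Placeholder for the real multistatus parser.
        return response

    def dav_search(self, user, query, path="", limit=25, offset=0, mimetype=""):
        # Keep the DAV protocol boundary inside the client: build the body,
        # send SEARCH, and parse the multistatus response in one place.
        body = f"<d:searchrequest>...{query}...</d:searchrequest>"  # stands in for _build_search_xml
        response = self.dav_request(
            "SEARCH",
            f"/files/{user}",
            headers={"Content-Type": "text/xml; charset=utf-8"},
            body=body,
        )
        return self._parse_propfind(response)
```

With this shape, the tool layer calls client.dav_search(...) and never touches the raw XML or the private parser.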
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/nextcloud_mcp/tools/files.py`:
- Around line 119-124: The code allows negative offsets which produce invalid
XML like <d:firstresult>-1</d:firstresult>; in the function where body is built
(call to _build_search_xml in files.py) validate and sanitize the offset before
use (e.g., coerce to an int and set offset = max(0, offset)) so only
non-negative offsets are passed to _build_search_xml; update any callers or
docstring if needed to reflect that offset is clamped to zero minimum.
- Around line 133-137: The current code JSON-serializes `results` then appends a
text footer which breaks JSON; instead build a JSON-serializable structure (e.g.
an object with keys "results" and "pagination") and dump that: keep `results` as
the array, compute `next_offset = offset + len(results)` and include pagination
fields like `count`, `offset`, and `next_offset` (or null when no more pages),
then return json.dumps(...) of that object (use the existing `results`,
`offset`, and `next_offset` names to locate where to change the return value).
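The two inline findings above can be sketched together: clamp the inputs up front and return one JSON envelope instead of JSON plus a text footer. In the real tool the clamped offset would also feed the SEARCH request; field names follow the prompt, and the search itself is stubbed out here:

```python
import json


def search_files_response(results: list, limit: int, offset: int) -> str:
    # Clamp inputs so invalid values never reach the SEARCH XML.
    limit = max(1, min(100, limit))
    offset = max(0, offset)
    next_offset = offset + len(results)
    return json.dumps({
        "results": results,
        "pagination": {
            "count": len(results),
            "offset": offset,
            # null when the page was not full, i.e. there are no more results.
            "next_offset": next_offset if len(results) == limit else None,
        },
    })
```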


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b5b66783-7ae2-48de-a78a-bf347592df41

📥 Commits

Reviewing files that changed from the base of the PR and between a20c145 and 5787bb0.

📒 Files selected for processing (1)
  • src/nextcloud_mcp/tools/files.py

@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
src/nextcloud_mcp/tools/comments.py (1)

113-118: ⚠️ Potential issue | 🟡 Minor

Clamp negative comment offsets before building the REPORT body.

Now that list_comments() exposes explicit pagination state, Line 118 still serializes negative offsets unchanged. That can send <oc:offset>-1</oc:offset> to Nextcloud and makes this pager behave differently from search_files(), which already normalizes offset.

🩹 Minimal fix
         limit = max(1, min(100, limit))
+        offset = max(0, offset)
         client = get_client()
         body = FILTER_COMMENTS_BODY.format(limit=limit, offset=offset)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/nextcloud_mcp/tools/comments.py` around lines 113 - 118, In
list_comments(), normalize the offset before building the REPORT body the same
way limit is clamped: ensure offset is at least 0 (e.g., offset = max(0,
offset)) so FILTER_COMMENTS_BODY.format(limit=limit, offset=offset) never
inserts a negative <oc:offset> value; update the code path that uses
FILTER_COMMENTS_BODY to reference the normalized offset variable and keep the
existing limit clamping logic intact.
src/nextcloud_mcp/tools/files.py (1)

28-29: ⚠️ Potential issue | 🟡 Minor

Escape user in the SEARCH scope.

Line 29 still interpolates the raw username into <d:href>. A username containing & or < will break the XML even though the other inputs are already escaped.

🛡️ Minimal fix
-    safe_path = xml_escape(path.strip("/"))
-    scope = f"/files/{user}/{safe_path}" if safe_path else f"/files/{user}"
+    safe_user = xml_escape(user)
+    safe_path = xml_escape(path.strip("/"))
+    scope = f"/files/{safe_user}/{safe_path}" if safe_path else f"/files/{safe_user}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/nextcloud_mcp/tools/files.py` around lines 28 - 29, The code builds the
WebDAV scope string using the raw user which can inject XML special characters
into <d:href>; fix by XML-escaping the username before interpolation (e.g., call
xml_escape on user to produce a safe_user) and use that safe_user when
constructing scope (update the scope assignment where safe_path and user are
used); keep safe_path as-is and ensure xml_escape is applied to the username
wherever <d:href> or scope is composed.
🧹 Nitpick comments (1)
tests/integration/test_files.py (1)

306-315: Make the offset test prove page 2 is actually different.

Lines 310-315 only show that the second call returned something. An implementation that ignores offset and always returns the first page would still pass. Compare the returned paths (or IDs) and assert the pages do not overlap.

🧪 Tighten the assertion
         page1 = await nc_mcp.call("search_files", query="page-file", limit=2, offset=0)
         data1 = json.loads(page1)["data"]
         assert len(data1) == 2
         page2 = await nc_mcp.call("search_files", query="page-file", limit=2, offset=2)
         data2 = json.loads(page2)["data"]
         assert len(data2) >= 1
+        paths1 = {e["path"] for e in data1}
+        paths2 = {e["path"] for e in data2}
+        assert paths1.isdisjoint(paths2)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/integration/test_files.py` around lines 306 - 315, The
test_search_with_offset test currently only checks that page2 has items but
doesn't verify offset semantics; change it to extract and compare unique
identifiers (e.g., file paths or IDs) from the JSON results returned by
nc_mcp.call("search_files") for page1 and page2 (variables page1/page2 ->
data1/data2), then assert that the sets/lists do not overlap and that page2
contains the expected remaining items (e.g., different paths than page1) to
ensure offset is respected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/nextcloud_mcp/tools/activity.py`:
- Around line 113-119: The pagination cursor currently always uses
min(activity_id) (oldest_id) which only works for sort=="desc"; update the
cursor computation in src/nextcloud_mcp/tools/activity.py to be sort-aware: if
sort == "desc" set since to min(a["activity_id"] for a in activities) (as
before), otherwise set since to max(a["activity_id"] for a in activities) so the
cursor points to the last item returned for ascending pages; adjust the response
pagination "since" assignment (the variable oldest_id can be renamed or
replaced) and update the docstring for the since parameter to state that the
cursor returned is the minimum ID for desc and the maximum ID for asc.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ca1f22e7-6fc3-45a8-99a5-ee5848d8891a

📥 Commits

Reviewing files that changed from the base of the PR and between dc9eadd and 3f380f3.

📒 Files selected for processing (6)
  • src/nextcloud_mcp/tools/activity.py
  • src/nextcloud_mcp/tools/comments.py
  • src/nextcloud_mcp/tools/files.py
  • tests/integration/test_activity.py
  • tests/integration/test_comments.py
  • tests/integration/test_files.py

Comment on lines +113 to +119
oldest_id = min(a["activity_id"] for a in activities) if activities else None
response: dict[str, Any] = {
"data": activities,
"pagination": {
"count": len(activities),
"has_more": len(activities) == limit,
"since": oldest_id,

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

For the Nextcloud Activity OCS API, when paginating with the since parameter and sort=asc, should the next-page cursor be the maximum activity_id from the current page rather than the minimum?

💡 Result:

Yes, for the Nextcloud Activity OCS API when paginating with the since parameter and sort=asc, the next-page cursor should be the maximum activity_id from the current page. The API fetches activities starting after the provided since ID (exclusive), ordered by timestamp and activity_id both ASC. It sets the X-Activity-Last-Given header to the activity_id of the last activity returned in the page (which is the maximum ID in that page since sorted ASC). The next page is constructed using since equal to this X-Activity-Last-Given value, as shown in the Link header example and controller code. This ensures the next page fetches activities strictly after the last one seen, avoiding duplicates even if new activities are inserted.


🏁 Script executed:

cd src/nextcloud_mcp/tools && head -150 activity.py | tail -70

Repository: cloud-py-api/nc-mcp-server

Length of output: 3303


🏁 Script executed:

rg -A 40 "def get_activity" src/nextcloud_mcp/tools/activity.py

Repository: cloud-py-api/nc-mcp-server

Length of output: 2160


Fix pagination cursor to be sort-aware.

Lines 113–119 always calculate the pagination cursor as the minimum activity_id, which is correct only for sort="desc". When sort="asc", the cursor must be the maximum activity_id from the current page (the last activity returned); otherwise, the next page request repeats or stalls.

The sort parameter is available in scope and should be checked when computing the cursor:

🔁 Sort-aware cursor calculation
-        oldest_id = min(a["activity_id"] for a in activities) if activities else None
+        next_since = (
+            (
+                min(a["activity_id"] for a in activities)
+                if sort == "desc"
+                else max(a["activity_id"] for a in activities)
+            )
+            if activities
+            else None
+        )
         response: dict[str, Any] = {
             "data": activities,
             "pagination": {
                 "count": len(activities),
                 "has_more": len(activities) == limit,
-                "since": oldest_id,
+                "since": next_since,
             },
         }

Also update the docstring for the since parameter to clarify the cursor behavior depends on sort order.


@coderabbitai bot left a comment

🧹 Nitpick comments (1)
src/nextcloud_mcp/tools/files.py (1)

135-135: Consider exposing _parse_propfind as a public method or module function.

Calling the protected method NextcloudClient._parse_propfind from outside the class works but is unconventional. Since it's a pure static method with no side effects, you could either rename it to parse_propfind (without underscore) or extract it to a module-level utility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/nextcloud_mcp/tools/files.py` at line 135, The code calls the protected
static method NextcloudClient._parse_propfind from outside the class; make this
function public or module-level to follow convention and clarity: either rename
NextcloudClient._parse_propfind to NextcloudClient.parse_propfind (remove
leading underscore) and update all callers (e.g., the call in files.py that
currently uses NextcloudClient._parse_propfind), or extract the logic into a
top-level function parse_propfind in the same module and import/use that from
files.py; ensure you update references and any tests to use the new public
symbol.
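A thin public alias is one low-churn way to apply this nitpick without breaking existing callers (the class body here is a stub standing in for the real client):

```python
class NextcloudClient:
    @staticmethod
    def _parse_propfind(xml_body: str) -> list:
        # Placeholder for the real multistatus parser.
        return [xml_body]

    # Public name so tool modules stop reaching into a protected member;
    # the underscored name stays for backward compatibility.
    parse_propfind = _parse_propfind
```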

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 92ee8027-456d-4cfe-a851-4388bffddfbf

📥 Commits

Reviewing files that changed from the base of the PR and between 3f380f3 and b1fbe2d.

📒 Files selected for processing (3)
  • PROGRESS.md
  • src/nextcloud_mcp/tools/comments.py
  • src/nextcloud_mcp/tools/files.py
✅ Files skipped from review due to trivial changes (1)
  • PROGRESS.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/nextcloud_mcp/tools/comments.py
