Problem
Monday.com's GraphQL API returns a `complexity` object with every response, containing `query`, `before`, `after`, and `reset_in_x_seconds`. This data is critical for monitoring API usage and avoiding rate limits, but the MCP server strips it before returning results to the client.
When using the MCP heavily (especially with AI agents making many calls), there's no visibility into how much of the complexity budget is being consumed per call or cumulatively. Users can silently approach rate limits with no warning until calls start failing.
Proposed Solution
Surface the `complexity` object from Monday's API responses in the MCP tool output metadata. Two levels of usefulness:
Minimum viable
Include the complexity data in each tool response's metadata so MCP clients can access it:
```json
{
  "content": "...",
  "metadata": {
    "complexity": {
      "query": 12500,
      "before": 9950000,
      "after": 9937500,
      "reset_in_x_seconds": 42
    }
  }
}
```
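A minimal sketch of the pass-through, assuming the server keeps the `complexity` field from the GraphQL response instead of discarding it. The type and function names here are hypothetical, not the MCP server's actual internals:

```typescript
// Field names mirror Monday's complexity object.
interface Complexity {
  query: number;
  before: number;
  after: number;
  reset_in_x_seconds: number;
}

// Assumed shape: the raw GraphQL response with complexity preserved.
interface GraphQLResponse {
  data: unknown;
  complexity?: Complexity;
}

// Assumed shape of a tool result before it is returned to the client.
interface ToolResult {
  content: string;
  metadata?: { complexity?: Complexity };
}

// Copy the complexity object (when present) into the tool result metadata,
// leaving any existing metadata fields intact.
function withComplexity(result: ToolResult, response: GraphQLResponse): ToolResult {
  if (!response.complexity) return result;
  return {
    ...result,
    metadata: { ...result.metadata, complexity: response.complexity },
  };
}
```

Because the object is attached as-is, the change is purely additive: clients that ignore `metadata.complexity` behave exactly as before.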
Nice to have
- A `--log-complexity` CLI flag that writes a `.jsonl` log file with timestamp, tool name, and complexity per call
- A warning emitted when remaining budget drops below a configurable threshold (e.g. 20%)
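Both nice-to-haves are small once the complexity object is available. The sketch below shows one possible shape: an append-only JSONL logger plus a threshold check. All names are hypothetical, and the total budget is taken as a parameter rather than hard-coded, since it depends on the account's plan:

```typescript
import * as fs from "node:fs";

interface Complexity {
  query: number;
  before: number;
  after: number;
  reset_in_x_seconds: number;
}

// Append one JSON line per call: timestamp, tool name, and the complexity snapshot.
function logComplexity(path: string, tool: string, c: Complexity): void {
  const line = JSON.stringify({ ts: new Date().toISOString(), tool, complexity: c });
  fs.appendFileSync(path, line + "\n");
}

// Return a warning string when the remaining budget drops below `thresholdPct`
// of `totalBudget` (the account's per-window complexity limit), else null.
function checkBudget(c: Complexity, totalBudget: number, thresholdPct = 20): string | null {
  const remainingPct = (c.after / totalBudget) * 100;
  return remainingPct < thresholdPct
    ? `complexity budget low: ${remainingPct.toFixed(1)}% remaining, resets in ${c.reset_in_x_seconds}s`
    : null;
}
```

Emitting the warning on stderr (rather than in tool output) would keep it visible in logs without polluting the content an AI agent consumes.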
Why This Matters
- AI agents make many rapid API calls and can burn through complexity budgets quickly without any feedback loop
- Rate limit debugging is currently blind: users only discover issues when calls start failing with `maxComplexityExceeded`
- The data already exists in every API response — it just needs to be passed through rather than discarded
Current Workaround
None within the MCP. Users would need to build a stdio proxy wrapper to intercept HTTP responses, or manually query `{ complexity { after reset_in_x_seconds } }` via dynamic API tools after each operation.
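For completeness, the manual workaround amounts to an extra round trip per operation. A sketch of that polling call against Monday's public GraphQL endpoint (`https://api.monday.com/v2`); the function names are illustrative:

```typescript
// Build the request body for the standalone complexity query.
function complexityQueryBody(): string {
  return JSON.stringify({ query: "{ complexity { after reset_in_x_seconds } }" });
}

// Fetch the current complexity budget directly from Monday's GraphQL API.
// `token` is a Monday API token; requires Node 18+ for global fetch.
async function fetchComplexity(token: string): Promise<unknown> {
  const res = await fetch("https://api.monday.com/v2", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: token },
    body: complexityQueryBody(),
  });
  const json = (await res.json()) as { data?: { complexity?: unknown } };
  return json.data?.complexity;
}
```

The extra query itself consumes complexity budget, which is exactly why pass-through of data the server already has is preferable.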