
docs: DoS prevention security guide #98

Merged

marc0olo merged 2 commits into main from docs/guides-security-dos-prevention on Apr 16, 2026

Conversation

@marc0olo
Member

Summary

  • Reverse gas model attack surface and 7-item defensive checklist
  • Cycle drain attacks: inspect_message as first-pass filter (Motoko + Rust)
  • Rate limiting and per-caller locking: CallerGuard pattern with cleanup caveat
  • Proof-of-work and captchas for public unauthenticated endpoints
  • Resource limits table (from IC spec), per-user memory quotas, wasm_memory_limit
  • Freezing threshold configuration via CLI and icp.yaml
  • Noisy neighbor protection: compute and memory allocation
  • Monitoring cycle consumption with CLI commands
  • Caller-pays cycle pattern for expensive operations

Sync recommendation

Informed by dfinity/icskills (skills/canister-security/SKILL.md) and dfinity/portal (docs/building-apps/security/dos.mdx).

@marc0olo
Member Author

Review: DoS Prevention

Must fix

  • Inaccurate base fee claim: Line 25 states "A base fee per message (~5M cycles on a 13-node subnet)". This is misleading. According to .sources/portal/docs/references/cycles-cost-formulas.mdx, the 5M figure is the update message execution fee (charged per message executed). There is an additional ingress message reception fee of 1,200,000 cycles plus a per-byte fee of 2,000 cycles/byte on top of that. For a typical ingress call, total base cost is ~6.2M+ cycles. The description conflates "base execution fee" with the total base cost of an ingress message. Suggest: "A base execution fee of 5M cycles per update message on a 13-node subnet, plus an ingress reception fee of ~1.2M cycles" — or omit the specific number and direct readers to the cycles costs reference.

  • Deprecated API reference: Line 322 references ic_cdk::api::call::msg_cycles_accept128. Both msg_cycles_accept and msg_cycles_accept128 in the ic_cdk::api::call module are marked #[deprecated] in .sources/cdk-rs/ic-cdk/src/api/call.rs with the note "Please use ic_cdk::api::msg_cycles_accept instead." The canonical modern API is ic_cdk::api::msg_cycles_accept(max_amount: u128) -> u128 (defined at .sources/cdk-rs/ic-cdk/src/api.rs, line 140). Fix: change the reference to ic_cdk::api::msg_cycles_accept.
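The fee breakdown behind the first must-fix item reduces to simple arithmetic. A minimal sketch using the 13-node-subnet figures cited in the review (the payload sizes are illustrative assumptions):

```python
# Ingress message base cost on a 13-node subnet (cycles), per the figures
# cited in the review: execution fee + reception fee + per-byte fee.
UPDATE_EXECUTION_FEE = 5_000_000   # per update message executed
INGRESS_RECEPTION_FEE = 1_200_000  # per ingress message received
INGRESS_BYTE_FEE = 2_000           # per ingress payload byte

def ingress_base_cost(payload_bytes: int) -> int:
    """Total base cost of one ingress update call, before instruction fees."""
    return (UPDATE_EXECUTION_FEE
            + INGRESS_RECEPTION_FEE
            + INGRESS_BYTE_FEE * payload_bytes)

# Even an empty-payload call costs ~6.2M cycles, not 5M:
print(ingress_base_cost(0))    # 6200000
print(ingress_base_cost(100))  # 6400000
```

This is why describing the 5M figure alone as "the base fee" understates the real floor by roughly 20 percent.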

Suggestions

  • icp.yaml settings structure needs context: The icp.yaml snippets (lines 234–239, 258–263, 278–280, 288–290) show a bare settings: block. Per .sources/icp-cli/docs/reference/configuration.md, a standalone settings: block is valid at the canister file level (in canister.yaml) or as the per-canister settings reference section. In the project-level icp.yaml, settings must be nested under a canister name (e.g., settings: { backend: { wasm_memory_limit: 3gib } }). The snippets use the comment # icp.yaml — ... which implies they go directly in icp.yaml, but their flat structure only works in a standalone canister.yaml. Consider clarifying: "# In canister.yaml or equivalent canister settings block".

  • Checklist item vs. body text inconsistency for wasm_memory_threshold: The checklist (line 18) does not mention wasm_memory_threshold, but the icp.yaml snippet (line 238) includes wasm_memory_threshold: 512mib. Either add a checklist item ("Set wasm_memory_threshold to get notified before the limit is hit via the on_low_wasm_memory hook") or remove the threshold from the snippet to keep body and checklist aligned.

  • Motoko catch (e) with unused variable: The Motoko concurrency lock example (lines 135–144) uses catch (e) but discards e (returns a static string). This may generate an unused-variable warning in Motoko. Consider catch _ to make the intent explicit.

  • Rust CallerGuard example omits bounded-wait context: The Rust rate-limiting snippet (lines 149–197) calls do_expensive_work().await? as a placeholder. Unlike the canister-security skill's version which explicitly uses Call::bounded_wait, this version doesn't show how to avoid unbounded waits. The page mentions unbounded waits blocking upgrades is a risk (via link to inter-canister-calls.md), but a brief note here — "use Call::bounded_wait rather than unbounded-wait calls inside a guarded method" — would make the connection explicit for readers who don't follow the link.

  • compute_allocation value 10 — scheduling precision: Line 281 says "A value of 10 means the canister is scheduled at least every 10 rounds." This is correct (the formula is every 100 / allocation rounds per the portal source), but consider adding "consensus rounds" for clarity rather than just "rounds."
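Expanding the inline example from the first suggestion into block form, as a sketch only: the canister name `backend` is a placeholder, and the keys shown are the ones the review verifies against the canister-yaml schema.

```yaml
# Project-level icp.yaml: per-canister settings keyed by canister name,
# following the review's inline example (settings: { backend: { ... } }).
settings:
  backend:
    wasm_memory_limit: 3gib
    wasm_memory_threshold: 512mib
    freezing_threshold: 90d
```

A flat, un-nested `settings:` block of the same keys would only be valid at the standalone canister.yaml level, which is the distinction the suggestion asks the snippets to make explicit.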

Verified

  • All CLI commands verified against .sources/icp-cli/docs/reference/cli.md: icp canister settings update backend --freezing-threshold 7776000 -e ic and icp canister status backend -e ic are valid commands with correct flag names.
  • icp.yaml setting keys (wasm_memory_limit, wasm_memory_threshold, compute_allocation, memory_allocation, freezing_threshold) confirmed in .sources/icp-cli/docs/schemas/canister-yaml-schema.json and the configuration.md example.
  • freezing_threshold: 90d duration suffix confirmed valid per DurationAmount schema (string type accepted) and CLI reference ("d (days)" supported).
  • 7776000 seconds = 90 days confirmed correct.
  • Resource limits table (lines 215–223) cross-checked against .sources/portal/docs/building-apps/canister-management/resource-limits.mdx: 40B instructions/update, 5B/query, 200M/inspect_message, 2 MiB ingress payload, 4 GiB Wasm heap, 500 GiB stable memory — all correct.
  • Motoko imports use mo:core throughout (no mo:base). Confirmed: mo:core/Principal, mo:core/Map, mo:core/Result.
  • No dfx references anywhere in the page.
  • ic_cdk::api::in_replicated_execution() confirmed present in .sources/cdk-rs/ic-cdk/src/api.rs line 453.
  • ic0.in_replicated_execution() system API name is correct.
  • Call::bounded_wait from ic_cdk::call::Call confirmed in .sources/cdk-rs/ic-cdk/src/call.rs.
  • Internal links verified: concepts/reverse-gas-model.md exists; guides/security/inter-canister-calls.md exists; reference/cycles-costs.md exists; concepts/security.md exists. access-management.md and canister-management/settings.md resolve via .mdx fallback (Astro .md.mdx resolution).
  • <!-- Upstream: --> comment present at end of file (CI-required).
  • File is .md (correct — no interactive components used).
  • Frontmatter complete: title, description, sidebar.order all present.
  • No links to internetcomputer.org/docs/ or docs.internetcomputer.org.
  • 10M cycles per percentage point per second claim on line 282 confirmed against .sources/portal/docs/references/cycles-cost-formulas.mdx ("Compute percent allocated per second: 10,000,000 cycles").
  • Internet Identity captcha link on line 209 is a commit-pinned GitHub URL — stable reference, no internal page for this topic.
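Two of the numeric claims in the verified list (the 90-day freezing threshold and the compute-allocation fee) are quick to re-derive. A sketch; the daily-cost figure at the end is derived here, not taken from the source:

```python
# freezing_threshold: 90 days expressed in seconds.
SECONDS_PER_DAY = 24 * 60 * 60   # 86_400
print(90 * SECONDS_PER_DAY)      # 7776000

# Compute allocation fee: 10M cycles per percentage point per second
# (13-node subnet figure cited above). Daily cost at compute_allocation = 10:
CYCLES_PER_PERCENT_PER_SECOND = 10_000_000
daily_cost = 10 * CYCLES_PER_PERCENT_PER_SECOND * SECONDS_PER_DAY
print(daily_cost)                # 8640000000000  (~8.64T cycles/day)
```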

- Fix inaccurate base fee description: distinguish update message execution
  fee (5M cycles) from ingress reception fee (~1.2M cycles + 2,000/byte)
- Fix deprecated API reference: replace ic_cdk::api::call::msg_cycles_accept128
  with ic_cdk::api::msg_cycles_accept (canonical modern API)
- Fix icp.yaml snippets: show settings nested under canister name in canisters array
- Add wasm_memory_threshold to checklist to match body text
- Use catch _ instead of catch (e) in Motoko where error variable is unused
- Add note about Call::bounded_wait in Rust CallerGuard example
- Clarify compute_allocation scheduling: "10 consensus rounds" not just "10 rounds"
@marc0olo
Member Author

Feedback addressed for PR #98 — DoS Prevention

Changes applied

Must-fix items (both applied):

  1. Inaccurate base fee claim (line 26): Fixed the cycle cost description to distinguish the update message execution fee (5M cycles) from the ingress reception fee (~1.2M cycles + 2,000 cycles/byte). The original text conflated both into a single "base fee" of ~5M cycles. Verified against .sources/portal/docs/references/cycles-cost-formulas.mdx.

  2. Deprecated API reference (line 322): Replaced ic_cdk::api::call::msg_cycles_accept128 with ic_cdk::api::msg_cycles_accept(max_amount: u128). Verified in .sources/cdk-rs/ic-cdk/src/api/call.rs, where msg_cycles_accept128 carries #[deprecated(note = "Please use ic_cdk::api::msg_cycles_accept instead.")].

Suggestions (all applied):

  1. icp.yaml snippets structure: Fixed all four settings: snippets to show the correct nested structure — settings: must be nested under a named canister entry in the canisters: array of icp.yaml.

  2. Checklist inconsistency for wasm_memory_threshold: Added new checklist item for wasm_memory_threshold to match the body text which already referenced it in the YAML snippet.

  3. Motoko catch (e) with unused variable: Changed catch (e) to catch _ in the Motoko concurrency lock example to avoid an unused-variable warning. Valid Motoko syntax confirmed in .sources/motoko/doc/md/examples/todo-error.mo.

  4. Rust CallerGuard example — bounded-wait context: Added a comment noting that Call::bounded_wait should be used for inter-canister calls to avoid unbounded waits that would block canister upgrades.

  5. compute_allocation scheduling precision: Changed "every 10 rounds" to "every 10 consensus rounds" for clarity.
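The per-caller locking pattern discussed in items 3 and 4 (the CallerGuard with its cleanup caveat) can be sketched language-agnostically. This is an illustration of the idea, not the guide's Rust or Motoko code; all names here are hypothetical:

```python
# Per-caller lock: reject re-entrant calls from the same caller, and always
# release the lock on every exit path (the "cleanup caveat" from the review:
# a leaked lock would permanently block that caller).
in_flight: set[str] = set()

class CallerGuard:
    def __init__(self, caller: str):
        if caller in in_flight:
            raise RuntimeError("call already in progress for this caller")
        in_flight.add(caller)
        self.caller = caller

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Runs on success AND on error, so the lock cannot leak.
        in_flight.discard(self.caller)

def guarded_method(caller: str) -> str:
    with CallerGuard(caller):
        # Placeholder for the expensive work; in a canister this is where
        # the bounded-wait inter-canister call would go.
        return "done"
```

Sequential calls from the same caller succeed, while a second call arriving before the first completes is rejected up front instead of competing for resources.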

Items skipped

None — all feedback items were factually correct and verified against source material.

Build note

The build in this worktree fails due to a pre-existing issue in docs/guides/backends/https-outcalls.mdx which requires the examples submodule (could not be cloned due to SSH restrictions in sandbox). The dos-prevention.md page does not use snippet files and is not the cause of the failure. The main repo builds successfully.

@marc0olo marc0olo merged commit dd2d74c into main Apr 16, 2026
1 check passed
@marc0olo marc0olo deleted the docs/guides-security-dos-prevention branch April 16, 2026 19:09