From c7bb100de745e3f3089af789aee29d9a0e07b78a Mon Sep 17 00:00:00 2001 From: Marco Walz Date: Thu, 23 Apr 2026 16:12:27 +0200 Subject: [PATCH 1/4] chore(brand): remove web3 jargon and em-dashes across docs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Per ICP brand guidelines: - Replace all em-dashes (—) with colons, periods, or parentheses across all editable docs (78 files, ~1400 substitutions) - Remove "seamlessly" from glossary boundary-nodes definition - Change "Web-native smart contracts" card title → "Onchain web apps" - Replace "smart contract" when used as the primary descriptor for ICP canisters (canisters.md, network-overview.md, chain-key-tokens.mdx); keep in cross-chain comparisons where technically accurate - Replace "dapp/dApp/dapps" → "app/apps" in all prose; preserve Candid field names (RegisterDappCanisters, DeregisterDappCanisters, dapps:) and the NNS product name - Change "Decentralized oracle" → "Trustless oracle" in chain-fusion.md - Remove "fully decentralized application" pitch from choose-your-path.md - Fix N/A em-dash table cells → hyphens (chain-fusion, ethereum, protocol) - Fix "a dapp" → "an app" article agreement throughout Excluded from changes (synced or spec files): - docs/languages/motoko/ (auto-synced) - docs/guides/tools/migrating-from-dfx.md (synced from icp-cli) - docs/reference/ic-interface-spec.md (protocol spec) - docs/reference/internet-identity-spec.md (protocol spec) - docs/reference/candid-spec.md (protocol spec) - docs/reference/glossary.md "smart contract" and "dapp" entries (definitions) --- docs/concepts/app-architecture.md | 38 ++-- docs/concepts/canisters.md | 54 ++--- docs/concepts/chain-fusion.md | 56 ++--- docs/concepts/chain-key-cryptography.md | 36 ++-- docs/concepts/governance.md | 42 ++-- docs/concepts/https-outcalls.md | 42 ++-- docs/concepts/index.md | 4 +- docs/concepts/network-overview.md | 28 +-- docs/concepts/onchain-randomness.md | 28 +-- docs/concepts/reverse-gas-model.md | 34 +-- docs/concepts/security.md | 38 ++-- docs/concepts/timers.md | 34 +-- docs/getting-started/choose-your-path.md | 4 +- docs/getting-started/project-structure.mdx | 6 +- docs/getting-started/quickstart.md | 32 +-- .../authentication/internet-identity.mdx | 34 +-- .../authentication/verifiable-credentials.md | 48 ++--- docs/guides/backends/certified-variables.md | 36 ++-- docs/guides/backends/data-persistence.mdx | 58 +++--- docs/guides/backends/https-outcalls.mdx | 32 +-- docs/guides/backends/onchain-ai.mdx | 22 +- docs/guides/backends/randomness.md | 26 +-- docs/guides/backends/timers.mdx | 32 +-- docs/guides/canister-calls/candid.mdx | 34 +-- docs/guides/canister-calls/offchain-calls.md | 28 +-- docs/guides/canister-calls/onchain-calls.mdx | 24 +-- docs/guides/canister-calls/parallel-calls.mdx | 28 +-- .../canister-management/cycles-management.mdx | 72 +++---- docs/guides/canister-management/large-wasm.md | 46 ++-- docs/guides/canister-management/lifecycle.mdx | 64 +++--- docs/guides/canister-management/logs.md | 20 +- .../canister-management/optimization.md | 44 ++-- .../reproducible-builds.md | 30 +-- docs/guides/canister-management/settings.mdx | 10 +- docs/guides/canister-management/snapshots.md | 16 +- .../canister-management/subnet-selection.md | 46 ++-- docs/guides/chain-fusion/bitcoin.mdx | 50 ++--- .../chain-fusion/chain-fusion-signer.md | 20 +- docs/guides/chain-fusion/dogecoin.md | 48 ++--- docs/guides/chain-fusion/ethereum.mdx | 32 +-- .../chain-fusion/offline-key-derivation.md | 12 +- docs/guides/chain-fusion/solana.mdx | 44 ++-- docs/guides/defi/chain-key-tokens.mdx | 42 ++-- docs/guides/defi/rosetta.md | 88 ++++---- docs/guides/defi/token-ledgers.mdx | 32 +-- docs/guides/defi/wallet-integration.md | 50 ++--- docs/guides/frontends/asset-canister.md | 2 +- docs/guides/frontends/certification.md | 40 ++-- docs/guides/frontends/custom-domains.md | 24 +-- docs/guides/frontends/frameworks.md | 26 +-- docs/guides/governance/launching.md | 70 +++---- docs/guides/governance/managing.md | 44 ++-- docs/guides/governance/testing.md | 68 +++--- docs/guides/index.md | 4 +- docs/guides/security/access-management.mdx | 32 +-- docs/guides/security/canister-upgrades.md | 46 ++-- docs/guides/security/data-integrity.md | 48 ++--- docs/guides/security/dos-prevention.md | 78 +++---- docs/guides/security/encryption.md | 2 +- docs/guides/security/inter-canister-calls.md | 38 ++-- docs/guides/testing/pocket-ic.md | 22 +- docs/guides/testing/strategies.md | 28 +-- docs/guides/tools/ai-coding-agents.md | 18 +- docs/guides/tools/overview.md | 26 +-- docs/index.mdx | 12 +- docs/languages/rust/index.md | 26 +-- docs/languages/rust/stable-structures.md | 46 ++-- docs/languages/rust/testing.md | 40 ++-- docs/reference/application-canisters.md | 44 ++-- docs/reference/cycles-costs.md | 30 +-- docs/reference/execution-errors.md | 12 +- docs/reference/glossary.md | 2 +- docs/reference/http-gateway-spec.md | 4 +- docs/reference/management-canister.md | 196 +++++++++--------- docs/reference/protocol-canisters.md | 54 ++--- docs/reference/subnet-types.md | 20 +- docs/reference/system-canisters.md | 24 +-- docs/reference/token-standards.md | 48 ++--- 78 files changed, 1409 insertions(+), 1409 deletions(-) diff --git a/docs/concepts/app-architecture.md b/docs/concepts/app-architecture.md index d4bbb621..80fa59c2 100644 --- a/docs/concepts/app-architecture.md +++ b/docs/concepts/app-architecture.md @@ -5,14 +5,14 @@ sidebar: order: 3 --- -An application on the Internet Computer typically consists of one or more [canisters](canisters.md) that handle backend logic, store data, and optionally serve a web frontend — all without external servers, databases, or CDNs. This page explains how these pieces fit together and what architectural patterns are available as your application grows. +An application on the Internet Computer typically consists of one or more [canisters](canisters.md) that handle backend logic, store data, and optionally serve a web frontend: all without external servers, databases, or CDNs. This page explains how these pieces fit together and what architectural patterns are available as your application grows. ## The default two-canister model Most ICP applications start with two canisters: -- **Backend canister** — contains your application logic and data. You write it in Motoko or Rust (the official CDKs). Community-supported languages like TypeScript and Python are also available — see [Languages](../languages/index.md). Your code is compiled locally to WebAssembly and executed by the network. -- **Frontend (asset) canister** — serves your web UI. It is a standard canister that hosts static files (HTML, CSS, JavaScript, images) and delivers them over HTTP. +- **Backend canister**: contains your application logic and data. You write it in Motoko or Rust (the official CDKs). Community-supported languages like TypeScript and Python are also available: see [Languages](../languages/index.md). Your code is compiled locally to WebAssembly and executed by the network. +- **Frontend (asset) canister**: serves your web UI. It is a standard canister that hosts static files (HTML, CSS, JavaScript, images) and delivers them over HTTP. When a user opens your application in a browser: @@ -22,7 +22,7 @@ When a user opens your application in a browser: 4. The backend canister processes the message, updates its state if needed, and returns a response. 5. The frontend renders the result. -This flow replaces the traditional web stack. There is no separate web server, application server, or database — the backend canister handles all three roles, and the frontend canister replaces your CDN. +This flow replaces the traditional web stack. There is no separate web server, application server, or database. The backend canister handles all three roles, and the frontend canister replaces your CDN. ## How ICP compares to traditional architectures @@ -54,15 +54,15 @@ If you have built on Ethereum or other EVM chains, here is how ICP concepts map: | Bridges / oracles | [Chain-key signing](chain-fusion.md) | Canisters sign transactions on other chains natively | | Immutable by default | Upgradeable by default | Canisters can be upgraded while preserving state | -The biggest shift: on Ethereum, smart contracts are minimal programs that rely on offchain infrastructure for anything beyond basic state transitions. On ICP, a canister can be an entire application — frontend, backend, database, and scheduled jobs — all onchain. +The biggest shift: on Ethereum, smart contracts are minimal programs that rely on offchain infrastructure for anything beyond basic state transitions. On ICP, a canister can be an entire application (frontend, backend, database, and scheduled jobs) all onchain. ## Architectural patterns -As your application grows, you can choose from several patterns. Start simple and evolve as needed — over-architecting from the start is a common mistake. +As your application grows, you can choose from several patterns. Start simple and evolve as needed: over-architecting from the start is a common mistake. ### Single canister -Everything — assets, logic, and data — lives in one canister. This is the simplest architecture and works well for applications serving up to thousands of users. +Everything (assets, logic, and data) lives in one canister. This is the simplest architecture and works well for applications serving up to thousands of users. **When to use:** recommended for most applications. A single canister provides atomic operations and minimal maintenance overhead (no cycle management across canisters, no inter-canister call complexity). Consider multi-canister only when you need separation of concerns or hit a single canister's platform limits. @@ -73,7 +73,7 @@ Separate canisters handle distinct responsibilities. The two-canister setup (fro **When to use:** when you need separation of concerns between components or hit a single canister's platform limits (memory, compute, or storage). **Things to know:** -- Inter-canister calls are asynchronous. Code before and after an `await` executes in separate message rounds — this affects atomicity. +- Inter-canister calls are asynchronous. Code before and after an `await` executes in separate message rounds: this affects atomicity. - Request and response payloads are limited to 2 MiB per call. - Cross-subnet calls add one consensus round of latency compared to same-subnet calls. @@ -89,7 +89,7 @@ For maximum throughput, distribute canisters across multiple [subnets](network-o ## Data storage -Canisters store data in heap memory during execution and can persist data across upgrades using [stable memory](../guides/backends/data-persistence.md) — there is no external database. Libraries provide familiar data-structure abstractions on top of raw stable memory: +Canisters store data in heap memory during execution and can persist data across upgrades using [stable memory](../guides/backends/data-persistence.md): there is no external database. Libraries provide familiar data-structure abstractions on top of raw stable memory: - **Motoko:** the [`core` standard library](https://mops.one/core/docs) includes persistent data structures designed for upgrade-safe storage. - **Rust:** [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) provides `StableBTreeMap` and other structures for stable memory. @@ -100,27 +100,27 @@ For small to medium datasets, stable memory is straightforward. For applications Not every ICP application needs the default asset canister. Your options: -- **Asset canister** — the standard approach. Deploy your built frontend (React, Svelte, vanilla JS, etc.) to an asset canister that serves it over HTTP. See [Asset canister](../guides/frontends/asset-canister.md). -- **Framework-specific canister** — use a framework like Juno that provides a more opinionated hosting solution on ICP. -- **Offchain frontend** — host your frontend on traditional infrastructure (Vercel, Netlify, etc.) and call ICP canisters from JavaScript using [`@icp-sdk/core/agent`](https://js.icp.build). Useful during migration or when you need features that asset canisters don't support. -- **No frontend** — backend-only canisters that expose a Candid API for other canisters or CLI tools to call. +- **Asset canister**: the standard approach. Deploy your built frontend (React, Svelte, vanilla JS, etc.) to an asset canister that serves it over HTTP. See [Asset canister](../guides/frontends/asset-canister.md). +- **Framework-specific canister**: use a framework like Juno that provides a more opinionated hosting solution on ICP. +- **Offchain frontend**: host your frontend on traditional infrastructure (Vercel, Netlify, etc.) and call ICP canisters from JavaScript using [`@icp-sdk/core/agent`](https://js.icp.build). Useful during migration or when you need features that asset canisters don't support. +- **No frontend**: backend-only canisters that expose a Candid API for other canisters or CLI tools to call. ## Choosing an architecture | Question | If yes | If no | |----------|--------|-------| -| Start here | [Single canister](#single-canister) — recommended for most applications | — | +| Start here | [Single canister](#single-canister): recommended for most applications | - | | Does the app have a web UI? | Add an [asset canister](#the-default-two-canister-model) | Backend-only canister | | Do you need separation of concerns or hit platform limits? | [Canister-per-service](#canister-per-service) | Stay with a single canister | | Do you need to scale beyond one subnet? | [Canister-per-subnet](#canister-per-subnet) | Stay on one subnet | -Start with the simplest architecture that meets your requirements. You can always split a canister into multiple canisters later — it is much harder to merge canisters that were split prematurely. +Start with the simplest architecture that meets your requirements. You can always split a canister into multiple canisters later: it is much harder to merge canisters that were split prematurely. ## Next steps -- [Quickstart](../getting-started/quickstart.md) — deploy your first application -- [Onchain calls](../guides/canister-calls/onchain-calls.md) — inter-canister communication patterns -- [Asset canister](../guides/frontends/asset-canister.md) — frontend deployment -- [Canisters](canisters.md) — canister internals +- [Quickstart](../getting-started/quickstart.md): deploy your first application +- [Onchain calls](../guides/canister-calls/onchain-calls.md): inter-canister communication patterns +- [Asset canister](../guides/frontends/asset-canister.md): frontend deployment +- [Canisters](canisters.md): canister internals diff --git a/docs/concepts/canisters.md b/docs/concepts/canisters.md index 8294aab8..768e4cc5 100644 --- a/docs/concepts/canisters.md +++ b/docs/concepts/canisters.md @@ -1,20 +1,20 @@ --- title: "Canisters" -description: "Smart contracts that run WebAssembly, hold state, serve HTTP, and pay for their own compute" +description: "Compute units that run WebAssembly, hold state, serve HTTP, and pay for their own compute" sidebar: order: 2 --- -Canisters are smart contracts on the Internet Computer. Each canister bundles compiled WebAssembly code with its own persistent state into a single unit that the network executes, replicates, and secures. You deploy code to a canister, send it messages, and the network guarantees that every honest node in the subnet reaches the same result. +Canisters are the compute units of the Internet Computer. Each canister bundles compiled WebAssembly code with its own persistent state into a single unit that the network executes, replicates, and secures. You deploy code to a canister, send it messages, and the network guarantees that every honest node in the subnet reaches the same result. -Unlike smart contracts on most blockchains, canisters can serve web pages over HTTP, store gigabytes of data, make calls to external APIs, sign transactions on other chains, and run scheduled tasks autonomously — all without external infrastructure. +Unlike smart contracts on most blockchains, canisters can serve web pages over HTTP, store gigabytes of data, make calls to external APIs, sign transactions on other chains, and run scheduled tasks autonomously, all without external infrastructure. ## How canisters differ from traditional smart contracts Canisters share the core properties of smart contracts: their execution is governed by protocol consensus, their state is tamperproof, and their behavior is auditable. But they go further: - **Upgradeable code.** A canister's Wasm module can be replaced with a new version while preserving its data. This lets you ship bug fixes and new features without redeploying from scratch. -- **HTTP serving.** Canisters handle HTTP requests directly, so a single canister can serve a full web application — frontend and backend — with no servers in between. +- **HTTP serving.** Canisters handle HTTP requests directly, so a single canister can serve a full web application (frontend and backend) with no servers in between. - **Large state.** Canisters can hold up to 500 GiB of stable memory, far beyond what most blockchains allow per contract. - **Outbound calls.** Canisters make HTTPS requests to external services (see [HTTPS outcalls](https-outcalls.md)) and sign transactions on other blockchains using chain-key cryptography (see [Chain Fusion](chain-fusion.md)). - **Autonomous execution.** Timers let canisters schedule their own work without any external trigger. @@ -22,7 +22,7 @@ Canisters share the core properties of smart contracts: their execution is gover ## Execution model -Canisters are WebAssembly module instances. You write code in Motoko or Rust (the official CDKs), or community-supported languages like TypeScript and Python — any language that compiles to Wasm works. The network runs your code in a sandboxed Wasm virtual machine. +Canisters are WebAssembly module instances. You write code in Motoko or Rust (the official CDKs), or community-supported languages like TypeScript and Python: any language that compiles to Wasm works. The network runs your code in a sandboxed Wasm virtual machine. Each canister runs on a single thread. It processes messages one at a time in sequence, which means there are no data races within a canister. However, many canisters execute concurrently across (and within) subnets, so the network as a whole achieves high throughput. @@ -32,35 +32,35 @@ Canisters follow the [actor model](https://en.wikipedia.org/wiki/Actor_model): t All interaction with a canister happens through messages. There are two categories: -- **Ingress messages** — sent by external users (through a browser, CLI, or agent library). -- **Inter-canister messages** — sent from one canister to another within the network. +- **Ingress messages**: sent by external users (through a browser, CLI, or agent library). +- **Inter-canister messages**: sent from one canister to another within the network. Ingress messages use one of two call types: ### Update calls -Update calls can modify canister state. They go through consensus: every node in the subnet executes the call, and the subnet collectively signs the response. This provides strong authenticity guarantees — a single malicious node cannot forge results. +Update calls can modify canister state. They go through consensus: every node in the subnet executes the call, and the subnet collectively signs the response. This provides strong authenticity guarantees: a single malicious node cannot forge results. Update calls typically complete in 1–2 seconds. They cost cycles. ### Query calls -Query calls read state without modifying it. A single node executes the call and returns the result directly, without consensus. Because the response is not threshold-signed by the subnet, query results should be treated as unverified unless you use certified variables. The tradeoff is speed — results come back in milliseconds. +Query calls read state without modifying it. A single node executes the call and returns the result directly, without consensus. Because the response is not threshold-signed by the subnet, query results should be treated as unverified unless you use certified variables. The tradeoff is speed: results come back in milliseconds. -For applications that need authenticated reads (for example, a governance dapp showing proposal text that a user will vote on), you have two options: +For applications that need authenticated reads (for example, a governance app showing proposal text that a user will vote on), you have two options: - Issue the query as an update call for full consensus, at the cost of higher latency. - Use [certified variables](../guides/backends/certified-variables.md) to pre-sign data during updates and serve proofs in query responses. ### Composite queries -Composite queries let a query call other queries on canisters **within the same subnet**, then combine the results into a single response — all without going through consensus. This is useful for aggregating data across multiple canisters at query speed. +Composite queries let a query call other queries on canisters **within the same subnet**, then combine the results into a single response, all without going through consensus. This is useful for aggregating data across multiple canisters at query speed. Key constraints: -- **Same subnet only** — composite queries cannot call canisters on other subnets. -- **Ingress only** — only external clients (browsers, CLI tools) can invoke composite queries. Other canisters cannot call them. -- **No replicated mode** — unlike regular queries, composite queries cannot be executed as update calls for stronger authenticity. +- **Same subnet only**: composite queries cannot call canisters on other subnets. +- **Ingress only**: only external clients (browsers, CLI tools) can invoke composite queries. Other canisters cannot call them. +- **No replicated mode**: unlike regular queries, composite queries cannot be executed as update calls for stronger authenticity. ## Memory model @@ -71,17 +71,17 @@ Each canister has two storage regions: | **Heap (Wasm) memory** | 4 GiB (wasm32) / 6 GiB (wasm64) | No (cleared on upgrade, unless using Motoko's orthogonal persistence) | Standard Wasm memory instructions | | **Stable memory** | 500 GiB | Yes | System API calls | -**Heap memory** is standard Wasm linear memory. It holds your program's heap-allocated data — variables, data structures, and anything your code allocates at runtime. Both 32-bit and 64-bit Wasm memory are supported. Heap memory is cleared when you upgrade the canister's Wasm module. +**Heap memory** is standard Wasm linear memory. It holds your program's heap-allocated data: variables, data structures, and anything your code allocates at runtime. Both 32-bit and 64-bit Wasm memory are supported. Heap memory is cleared when you upgrade the canister's Wasm module. **Stable memory** is a separate address space accessed through the [system API](../reference/ic-interface-spec.md). It survives upgrades, making it the right place for any data that must persist long-term. Libraries like `StableBTreeMap` (Rust) or the [`core`](https://mops.one/core/docs) persistent data structures (Motoko) let you work with stable memory through familiar abstractions. -After a message executes successfully, the system atomically commits all memory changes. If execution traps (fails), no changes are committed — the canister's state rolls back to what it was before that message. +After a message executes successfully, the system atomically commits all memory changes. If execution traps (fails), no changes are committed. The canister's state rolls back to what it was before that message. For a deeper dive, see [Orthogonal persistence](orthogonal-persistence.md). ## Canister IDs and principals -Every canister gets a globally unique **canister ID** when it is created. This ID is a [principal](https://learn.internetcomputer.org/hc/en-us/articles/34250491785108) — the same type of identifier used for users — and serves as the canister's address on the network. +Every canister gets a globally unique **canister ID** when it is created. This ID is a [principal](https://learn.internetcomputer.org/hc/en-us/articles/34250491785108): the same type of identifier used for users: and serves as the canister's address on the network. To send a message to a canister, you include its canister ID in the message header. The network routes the message to the correct subnet and places it in the canister's input queue for processing. @@ -111,7 +111,7 @@ For step-by-step CLI commands, see [Canister lifecycle management](../guides/can ## Controllers -Controllers are principals (users or other canisters) that have permission to manage a canister — upgrade its code, change its settings, stop it, or delete it. +Controllers are principals (users or other canisters) that have permission to manage a canister: upgrade its code, change its settings, stop it, or delete it. If a canister has **no controllers**, it is immutable: no one can change its code or settings. This is a strong guarantee for users who want to verify that a canister's behavior will never change. @@ -119,17 +119,17 @@ If a canister has **no controllers**, it is immutable: no one can change its cod Under the hood, each canister maintains several components: -- **Input queue** — holds incoming messages waiting to be processed. The canister processes one message at a time. -- **Output queue** — holds outgoing messages to other canisters, dispatched after successful execution. -- **Cycles balance** — the canister's fuel for computation and storage. The system deducts cycles after each message execution, whether it succeeds or fails. -- **Controllers list** — the set of principals authorized to manage the canister. -- **Settings** — configurable parameters like compute allocation, memory allocation, and the freezing threshold (the cycles balance below which the canister stops accepting new messages to avoid running out). +- **Input queue**: holds incoming messages waiting to be processed. The canister processes one message at a time. +- **Output queue**: holds outgoing messages to other canisters, dispatched after successful execution. +- **Cycles balance**: the canister's fuel for computation and storage. The system deducts cycles after each message execution, whether it succeeds or fails. +- **Controllers list**: the set of principals authorized to manage the canister. +- **Settings**: configurable parameters like compute allocation, memory allocation, and the freezing threshold (the cycles balance below which the canister stops accepting new messages to avoid running out). ## Next steps -- [Reverse gas model](reverse-gas-model.md) — how canisters pay for computation -- [App architecture](app-architecture.md) — how canisters fit into application design -- [Canister lifecycle](../guides/canister-management/lifecycle.md) — practical guide to managing canisters -- [Network overview](network-overview.md) — the infrastructure canisters run on +- [Reverse gas model](reverse-gas-model.md): how canisters pay for computation +- [App architecture](app-architecture.md): how canisters fit into application design +- [Canister lifecycle](../guides/canister-management/lifecycle.md): practical guide to managing canisters +- [Network overview](network-overview.md): the infrastructure canisters run on diff --git a/docs/concepts/chain-fusion.md b/docs/concepts/chain-fusion.md index a77484a4..81cec26f 100644 --- a/docs/concepts/chain-fusion.md +++ b/docs/concepts/chain-fusion.md @@ -5,9 +5,9 @@ sidebar: order: 10 --- -Chain Fusion is ICP's approach to cross-chain interoperability. Instead of relying on bridges or oracles, canisters interact with other blockchains directly — they can read state, hold assets, and sign and submit transactions on Bitcoin, Ethereum, Solana, and dozens of other chains. All of this runs onchain with the same trust assumptions as the Internet Computer itself. +Chain Fusion is ICP's approach to cross-chain interoperability. Instead of relying on bridges or oracles, canisters interact with other blockchains directly: they can read state, hold assets, and sign and submit transactions on Bitcoin, Ethereum, Solana, and dozens of other chains. All of this runs onchain with the same trust assumptions as the Internet Computer itself. -The foundation is [chain-key cryptography](chain-key-cryptography.md). Each canister can derive keys for external signature schemes (ECDSA and Schnorr) and request threshold signatures from the protocol. This means a canister can control a Bitcoin address, an Ethereum account, or a Solana wallet — without any single node ever holding the private key. +The foundation is [chain-key cryptography](chain-key-cryptography.md). Each canister can derive keys for external signature schemes (ECDSA and Schnorr) and request threshold signatures from the protocol. This means a canister can control a Bitcoin address, an Ethereum account, or a Solana wallet: without any single node ever holding the private key. ## Why Chain Fusion matters @@ -16,9 +16,9 @@ Most cross-chain solutions introduce a trusted intermediary: a bridge, a multisi A canister interacting with Bitcoin or Ethereum has no external dependency beyond the target chain itself. The signing happens inside the protocol through a threshold cryptographic ceremony distributed across subnet nodes. This gives developers several advantages: - **No bridges.** Canisters hold assets directly on external chains. There is no wrapped token that can depeg, no bridge contract that can be exploited. -- **No oracles.** Canisters can read external chain state themselves — either through a direct protocol integration (Bitcoin) or by querying RPC providers via [HTTPS outcalls](https-outcalls.md). -- **Full autonomy.** Canisters can schedule cross-chain actions using [timers](../guides/backends/timers.md), enabling use cases like automated trading, periodic liquidations, or cronjob services — all without external triggers. -- **Web2-like UX.** Because ICP has low-cost computation and a [reverse gas model](reverse-gas-model.md), users can interact with cross-chain dapps through a standard browser without installing a wallet. +- **No oracles.** Canisters can read external chain state themselves: either through a direct protocol integration (Bitcoin) or by querying RPC providers via [HTTPS outcalls](https-outcalls.md). +- **Full autonomy.** Canisters can schedule cross-chain actions using [timers](../guides/backends/timers.md), enabling use cases like automated trading, periodic liquidations, or cronjob services: all without external triggers. +- **Web2-like UX.** Because ICP has low-cost computation and a [reverse gas model](reverse-gas-model.md), users can interact with cross-chain apps through a standard browser without installing a wallet. ## How it works @@ -42,7 +42,7 @@ See [Chain-key cryptography](chain-key-cryptography.md) for details on the thres A canister needs to read the state of an external chain to verify events, check balances, or monitor smart contracts. ICP supports two models: -- **Direct integration.** The protocol runs a native adapter that connects to the external chain's peer-to-peer network. Bitcoin uses this model — ICP nodes run a Bitcoin adapter that syncs blocks directly, so canisters can query UTXOs and submit transactions through the management canister's Bitcoin API without any intermediary. +- **Direct integration.** The protocol runs a native adapter that connects to the external chain's peer-to-peer network. Bitcoin uses this model: ICP nodes run a Bitcoin adapter that syncs blocks directly, so canisters can query UTXOs and submit transactions through the management canister's Bitcoin API without any intermediary. - **RPC integration.** For chains without a direct integration, canisters use [HTTPS outcalls](https-outcalls.md) to query RPC providers. The EVM RPC canister (`7hfb6-caaaa-aaaar-qadga-cai`) provides a typed Candid interface for Ethereum and EVM-compatible chains. It sends each request to at least three independent RPC providers and returns either a `Consistent` result (all providers agree) or an `Inconsistent` result that the caller can handle. Solana has a similar dedicated canister (SOL RPC). For other chains, canisters can make raw HTTPS outcalls to any JSON-RPC endpoint. @@ -64,11 +64,11 @@ The combination of signing, reading, and submitting creates three integration pa | **Dedicated RPC canister** | Typed canister queries multiple providers | Ethereum, EVM chains, Solana | ICP consensus + RPC provider agreement | | **Raw HTTPS outcalls** | Canister makes HTTP requests to RPC endpoints | Any chain with an RPC API | ICP consensus + RPC provider trust | -Direct integration provides the strongest trust guarantees — the only assumption is that a supermajority of subnet nodes are honest. RPC-based integration adds the assumption that at least one of the queried RPC providers returns correct data, which is mitigated by querying multiple independent providers and comparing results. +Direct integration provides the strongest trust guarantees. The only assumption is that a supermajority of subnet nodes are honest. RPC-based integration adds the assumption that at least one of the queried RPC providers returns correct data, which is mitigated by querying multiple independent providers and comparing results. ## Chain-key tokens -Chain-key tokens are digital twins of native assets from other blockchains (for example, ckBTC for Bitcoin and ckETH for Ethereum). Each token is backed 1:1 by the native asset, which is held in a canister-controlled address on the source chain. Minting and burning happen entirely onchain — no bridge, no custodian. +Chain-key tokens are digital twins of native assets from other blockchains (for example, ckBTC for Bitcoin and ckETH for Ethereum). Each token is backed 1:1 by the native asset, which is held in a canister-controlled address on the source chain. Minting and burning happen entirely onchain. No bridge, no custodian. These tokens implement the [ICRC-2](../guides/defi/token-ledgers.md) token standard, so they can be transferred and traded within the ICP ecosystem with the same speed and cost as any other ICP token. When a user wants to redeem the underlying asset, the minter canister signs and submits a withdrawal transaction on the source chain. @@ -82,18 +82,18 @@ Any blockchain whose transactions use ECDSA (secp256k1), Schnorr (BIP340 over se |-------|-----------------|-------------------|-----------------| | Bitcoin | ECDSA, Schnorr | Direct | ckBTC | | Ethereum | ECDSA | EVM RPC canister | ckETH, ckERC20 | -| EVM chains (Arbitrum, Base, Optimism, etc.) | ECDSA | EVM RPC canister | — | +| EVM chains (Arbitrum, Base, Optimism, etc.) | ECDSA | EVM RPC canister | - | | Solana | Ed25519 | SOL RPC canister | ckSOL | | Dogecoin | ECDSA | Direct | ckDOGE | -| Aptos | ECDSA, Ed25519 | HTTPS outcalls | — | -| Avalanche | ECDSA | HTTPS outcalls | — | -| Cardano | Ed25519 | HTTPS outcalls | — | -| Cosmos | ECDSA | HTTPS outcalls | — | -| NEAR | Ed25519 | HTTPS outcalls | — | -| Polkadot | ECDSA, Ed25519 | HTTPS outcalls | — | -| Stellar | Ed25519 | HTTPS outcalls | — | -| TON | Ed25519 | HTTPS outcalls | — | -| XRP | ECDSA, Ed25519 | HTTPS outcalls | — | +| Aptos | ECDSA, Ed25519 | HTTPS outcalls | - | +| Avalanche | ECDSA | HTTPS outcalls | - | +| Cardano | Ed25519 | HTTPS outcalls | - | +| Cosmos | ECDSA | HTTPS outcalls | - | +| NEAR | Ed25519 | HTTPS outcalls | - | +| Polkadot | ECDSA, Ed25519 | HTTPS outcalls | - | +| Stellar | Ed25519 | HTTPS outcalls | - | +| TON | Ed25519 | HTTPS outcalls | - | +| XRP | ECDSA, Ed25519 | HTTPS outcalls | - | This is not exhaustive. If a chain uses a supported signature scheme and has RPC providers accessible over IPv6, integration is possible. @@ -101,28 +101,28 @@ This is not exhaustive. If a chain uses a supported signature scheme and has RPC Several reusable canisters and protocol APIs are available for building Chain Fusion applications: -- **Bitcoin API.** The management canister exposes `bitcoin_get_utxos`, `bitcoin_get_balance`, and `bitcoin_send_transaction` — a direct protocol-level integration with no intermediary. See [Bitcoin integration](../guides/chain-fusion/bitcoin.md). +- **Bitcoin API.** The management canister exposes `bitcoin_get_utxos`, `bitcoin_get_balance`, and `bitcoin_send_transaction`: a direct protocol-level integration with no intermediary. See [Bitcoin integration](../guides/chain-fusion/bitcoin.md). - **EVM RPC canister** (`7hfb6-caaaa-aaaar-qadga-cai`). A canister providing a typed Candid interface for Ethereum and EVM-compatible chains. Queries multiple RPC providers and returns consensus results. See [Ethereum integration](../guides/chain-fusion/ethereum.md). - **SOL RPC canister.** A similar canister for Solana, providing typed access to Solana's JSON-RPC API. See [Solana integration](../guides/chain-fusion/solana.md). -- **Chain-key tokens.** Minter and ledger canisters that implement ckBTC, ckETH, and ckERC20 — trustless 1:1 representations of external assets on ICP. See [Chain-key tokens](../guides/defi/chain-key-tokens.md). -- **Chain Fusion Signer.** A reusable canister that exposes threshold signature APIs directly to web apps and CLI users, with cycle payments via ICRC-2 approval. [OISY Wallet](https://oisy.com) is a prominent production example — a multichain wallet built on ICP that uses the Chain Fusion Signer to manage keys for Bitcoin, Ethereum, and other chains. See the [chain-fusion-signer repository](https://github.com/dfinity/chain-fusion-signer). +- **Chain-key tokens.** Minter and ledger canisters that implement ckBTC, ckETH, and ckERC20: trustless 1:1 representations of external assets on ICP. See [Chain-key tokens](../guides/defi/chain-key-tokens.md). +- **Chain Fusion Signer.** A reusable canister that exposes threshold signature APIs directly to web apps and CLI users, with cycle payments via ICRC-2 approval. [OISY Wallet](https://oisy.com) is a prominent production example: a multichain wallet built on ICP that uses the Chain Fusion Signer to manage keys for Bitcoin, Ethereum, and other chains. See the [chain-fusion-signer repository](https://github.com/dfinity/chain-fusion-signer). ## Example use cases Chain Fusion enables application patterns that are difficult or impossible with bridge-based approaches: -- **Trustless cronjob service.** A canister monitors an Ethereum DeFi contract via the EVM RPC canister and triggers loan liquidations or batch settlements automatically using timers — no Gelato or Chainlink Keepers needed. +- **Trustless cronjob service.** A canister monitors an Ethereum DeFi contract via the EVM RPC canister and triggers loan liquidations or batch settlements automatically using timers. No Gelato or Chainlink Keepers needed. - **Multichain wallet.** A single canister controls addresses on Bitcoin, Ethereum, and Solana simultaneously. Users interact through a web frontend served from ICP without installing chain-specific wallets. - **Onchain frontend.** An immutable or DAO-governed frontend for an Ethereum smart contract, hosted on ICP as a certified asset. Users interact with the Ethereum contract through the ICP-hosted UI. - **Cross-chain DeFi.** A lending protocol that accepts Bitcoin as collateral (held in a canister-controlled BTC address) and issues stablecoins as ICRC-2 tokens. -- **Decentralized oracle.** A canister fetches real-world data via HTTPS outcalls and posts it to a smart contract on another chain — replacing centralized oracle networks. +- **Trustless oracle.** A canister fetches real-world data via HTTPS outcalls and posts it to a smart contract on another chain: replacing centralized oracle networks. ## Next steps -- [Bitcoin integration](../guides/chain-fusion/bitcoin.md) — build with BTC on ICP -- [Ethereum integration](../guides/chain-fusion/ethereum.md) — interact with Ethereum and EVM chains -- [Chain-key tokens](../guides/defi/chain-key-tokens.md) — ckBTC, ckETH, and ckERC20 -- [Chain-key cryptography](chain-key-cryptography.md) — the threshold signing protocols behind Chain Fusion -- [HTTPS outcalls](https-outcalls.md) — make HTTP requests from canisters +- [Bitcoin integration](../guides/chain-fusion/bitcoin.md): build with BTC on ICP +- [Ethereum integration](../guides/chain-fusion/ethereum.md): interact with Ethereum and EVM chains +- [Chain-key tokens](../guides/defi/chain-key-tokens.md): ckBTC, ckETH, and ckERC20 +- [Chain-key cryptography](chain-key-cryptography.md): the threshold signing protocols behind Chain Fusion +- [HTTPS outcalls](https-outcalls.md): make HTTP requests from canisters diff --git a/docs/concepts/chain-key-cryptography.md b/docs/concepts/chain-key-cryptography.md index 045bba66..a41fceed 100644 --- a/docs/concepts/chain-key-cryptography.md +++ b/docs/concepts/chain-key-cryptography.md @@ -5,17 +5,17 @@ sidebar: order: 9 --- -Chain-key cryptography is a set of threshold cryptographic protocols that underpin the Internet Computer. Instead of any single node holding a private key, keys are split into shares distributed across the nodes of a [subnet](network-overview.md). Nodes collaboratively sign messages without ever reconstructing the full key — and this single capability enables everything from fast response verification to canisters signing transactions on Bitcoin, Ethereum, and dozens of other blockchains. +Chain-key cryptography is a set of threshold cryptographic protocols that underpin the Internet Computer. Instead of any single node holding a private key, keys are split into shares distributed across the nodes of a [subnet](network-overview.md). Nodes collaboratively sign messages without ever reconstructing the full key: and this single capability enables everything from fast response verification to canisters signing transactions on Bitcoin, Ethereum, and dozens of other blockchains. ## Why threshold cryptography matters -On most blockchains, verifying state requires replaying transactions or trusting a full node. On ICP, verifying a response means checking **one signature against one public key** — regardless of how many nodes produced it. This is possible because each subnet holds a threshold BLS key: any sufficiently large subset of nodes can produce a valid signature, but no smaller group can forge one. +On most blockchains, verifying state requires replaying transactions or trusting a full node. On ICP, verifying a response means checking **one signature against one public key**: regardless of how many nodes produced it. This is possible because each subnet holds a threshold BLS key: any sufficiently large subset of nodes can produce a valid signature, but no smaller group can forge one. This design has several consequences for developers: - **Fast verification.** Clients verify subnet responses with a single public key check. There is no need to download block headers or maintain a light client. - **Certified data.** Canisters can set certified variables that the subnet signs at each block. Query responses that include these certificates are cryptographically authenticated, bridging the gap between fast queries and trusted updates. See [Certified variables](../guides/backends/certified-variables.md). -- **Onchain randomness.** The threshold BLS scheme produces unique signatures — for a given message and key, only one valid signature exists. ICP exploits this property to generate unpredictable, unbiased random numbers that canisters can consume. See [Onchain randomness](onchain-randomness.md). +- **Onchain randomness.** The threshold BLS scheme produces unique signatures: for a given message and key, only one valid signature exists. ICP exploits this property to generate unpredictable, unbiased random numbers that canisters can consume. See [Onchain randomness](onchain-randomness.md). - **Cross-chain signing.** Canisters can request threshold ECDSA and Schnorr signatures, giving them the ability to control addresses and sign transactions on external blockchains. This is the foundation of [Chain Fusion](chain-fusion.md). ## Core protocols @@ -24,7 +24,7 @@ Chain-key cryptography is not a single algorithm but a protocol suite. The main ### Distributed key generation (DKG) -Before a subnet can sign anything, its nodes must collectively generate a key whose shares are distributed among them. ICP uses a novel DKG protocol that works over an **asynchronous network** and tolerates up to one-third of nodes being faulty. The same protocol handles **key resharing** — transferring key material to a new set of nodes when subnet membership changes — without ever reconstructing the private key. Resharing also runs periodically within a subnet to defend against adaptive attackers: each resharing invalidates all previously obtained shares, so compromising nodes over time does not help an adversary accumulate enough shares to forge signatures. +Before a subnet can sign anything, its nodes must collectively generate a key whose shares are distributed among them. ICP uses a novel DKG protocol that works over an **asynchronous network** and tolerates up to one-third of nodes being faulty. The same protocol handles **key resharing**: transferring key material to a new set of nodes when subnet membership changes: without ever reconstructing the private key. Resharing also runs periodically within a subnet to defend against adaptive attackers: each resharing invalidates all previously obtained shares, so compromising nodes over time does not help an adversary accumulate enough shares to forge signatures. ### Threshold BLS signatures @@ -33,7 +33,7 @@ BLS is the signature scheme used for ICP's internal operations: consensus, respo BLS was chosen for two properties: 1. **Non-interactive signing.** A node holding a key share can independently produce a signature share. Shares are combined into a full signature with no further communication between nodes. -2. **Unique signatures.** For a given public key and message, exactly one valid BLS signature exists. This uniqueness is what makes onchain randomness unbiasable — no coalition of nodes can influence the output. +2. **Unique signatures.** For a given public key and message, exactly one valid BLS signature exists. This uniqueness is what makes onchain randomness unbiasable. No coalition of nodes can influence the output. ### Chain-key signatures (threshold ECDSA and Schnorr) @@ -49,20 +49,20 @@ Two signature schemes are supported, with the Schnorr API offering two algorithm Each scheme is backed by a pair of management canister methods: -- **Public key retrieval** (`ecdsa_public_key`, `schnorr_public_key`) — returns a canister's public key for a given derivation path. -- **Signing** (`sign_with_ecdsa`, `sign_with_schnorr`) — computes a threshold signature using the canister's derived key. +- **Public key retrieval** (`ecdsa_public_key`, `schnorr_public_key`): returns a canister's public key for a given derivation path. +- **Signing** (`sign_with_ecdsa`, `sign_with_schnorr`): computes a threshold signature using the canister's derived key. See the [Management canister reference](../reference/management-canister.md) for the full API, and the [IC interface specification](../reference/ic-interface-spec.md) for the authoritative protocol-level details. ### Key derivation -A small number of **master keys** are deployed across the network — one per signature scheme. From each master key, the protocol derives a unique **canister root key** for every canister using the canister's principal as input. From the root key, canisters can derive an unlimited number of child keys by providing a `derivation_path` in API calls. +A small number of **master keys** are deployed across the network: one per signature scheme. From each master key, the protocol derives a unique **canister root key** for every canister using the canister's principal as input. From the root key, canisters can derive an unlimited number of child keys by providing a `derivation_path` in API calls. For ECDSA and BIP340, key derivation uses a generalized form of [BIP-32](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki), which means derived keys are compatible with standard Bitcoin and Ethereum HD wallet tooling. Ed25519 uses a custom hierarchical derivation mechanism designed for this use case. -Derivation is transparent — it happens inside the protocol as part of the signing and public-key-retrieval APIs. You provide a derivation path and the protocol handles the rest. +Derivation is transparent: it happens inside the protocol as part of the signing and public-key-retrieval APIs. You provide a derivation path and the protocol handles the rest. -Because the derivation algorithm is deterministic and uses only public parameters (the master public key, the canister principal, and the derivation path), public key derivation can also be performed **offline** — no management canister call or network connection required. This is useful for building explorers, dashboards, or address-derivation tools that need a canister's public key or blockchain address without a live ICP connection. See the [offline key derivation guide](../guides/chain-fusion/offline-key-derivation.md) for TypeScript and Rust libraries. +Because the derivation algorithm is deterministic and uses only public parameters (the master public key, the canister principal, and the derivation path), public key derivation can also be performed **offline**: no management canister call or network connection required. This is useful for building explorers, dashboards, or address-derivation tools that need a canister's public key or blockchain address without a live ICP connection. See the [offline key derivation guide](../guides/chain-fusion/offline-key-derivation.md) for TypeScript and Rust libraries. @@ -85,27 +85,27 @@ The following master keys are deployed at the time of writing. The NNS can add n | `(ed25519, test_key_1)` | Schnorr (Ed25519) | Development and testing | 13-node subnet | | `(ed25519, key_1)` | Schnorr (Ed25519) | Production | High-replication subnet | -Test keys are available for development and run on smaller subnets with lower signing costs. They should not be used for anything of value. Production keys run on high-replication subnets (34+ nodes) for stronger security guarantees. Each key is also reshared to a backup subnet for availability — if the signing subnet fails, the backup can take over without generating a new key. +Test keys are available for development and run on smaller subnets with lower signing costs. They should not be used for anything of value. Production keys run on high-replication subnets (34+ nodes) for stronger security guarantees. Each key is also reshared to a backup subnet for availability: if the signing subnet fails, the backup can take over without generating a new key. For signing costs, see [Cycles costs](../reference/cycles-costs.md). ## Supported chains -Any blockchain whose transaction authentication uses ECDSA (secp256k1) or Schnorr signatures (BIP340 over secp256k1, or Ed25519) can be integrated with ICP through chain-key signatures. For the full list of supported chains with integration methods and chain-key tokens, see [Chain Fusion — Supported chains](chain-fusion.md#supported-chains). +Any blockchain whose transaction authentication uses ECDSA (secp256k1) or Schnorr signatures (BIP340 over secp256k1, or Ed25519) can be integrated with ICP through chain-key signatures. For the full list of supported chains with integration methods and chain-key tokens, see [Chain Fusion: Supported chains](chain-fusion.md#supported-chains). ## Chain evolution -The same threshold cryptographic infrastructure that enables signing also enables ICP to upgrade itself without downtime or forks. When a subnet's membership changes (nodes are added, removed, or replaced), the DKG protocol **reshares** the existing keys to the new set of nodes. The subnet's public key stays the same, but the underlying shares change — meaning old shares held by removed nodes become useless. +The same threshold cryptographic infrastructure that enables signing also enables ICP to upgrade itself without downtime or forks. When a subnet's membership changes (nodes are added, removed, or replaced), the DKG protocol **reshares** the existing keys to the new set of nodes. The subnet's public key stays the same, but the underlying shares change: meaning old shares held by removed nodes become useless. -Combined with the NNS governance system, this enables **autonomous protocol upgrades**: the NNS approves an upgrade, the orchestrator on each node downloads the new replica software, and the subnet transitions at the next epoch boundary — all while preserving canister state and maintaining the same public key. +Combined with the NNS governance system, this enables **autonomous protocol upgrades**: the NNS approves an upgrade, the orchestrator on each node downloads the new replica software, and the subnet transitions at the next epoch boundary: all while preserving canister state and maintaining the same public key. For more on how upgrades work at the protocol level, see the [Chain Evolution](https://learn.internetcomputer.org/hc/en-us/articles/34210120121748) article on the Learn Hub. ## Next steps -- [Chain Fusion](chain-fusion.md) — how canisters use chain-key signatures to interact with other blockchains -- [Ethereum integration](../guides/chain-fusion/ethereum.md) — using threshold ECDSA with Ethereum and EVM chains -- [VetKeys](vetkeys.md) — a related cryptographic primitive for onchain encryption -- [Management canister reference](../reference/management-canister.md) — the threshold signing API +- [Chain Fusion](chain-fusion.md): how canisters use chain-key signatures to interact with other blockchains +- [Ethereum integration](../guides/chain-fusion/ethereum.md): using threshold ECDSA with Ethereum and EVM chains +- [VetKeys](vetkeys.md): a related cryptographic primitive for onchain encryption +- [Management canister reference](../reference/management-canister.md): the threshold signing API diff --git a/docs/concepts/governance.md b/docs/concepts/governance.md index 79acc184..da6ad525 100644 --- a/docs/concepts/governance.md +++ b/docs/concepts/governance.md @@ -1,13 +1,13 @@ --- title: "Governance" -description: "How ICP is governed: the NNS, SNS for dapp governance, neurons, proposals, and tokenomics fundamentals" +description: "How ICP is governed: the NNS, SNS for app governance, neurons, proposals, and tokenomics fundamentals" sidebar: order: 13 --- -The Internet Computer Protocol uses two governance systems: the **Network Nervous System (NNS)** governs the protocol itself, and the **Service Nervous System (SNS)** provides a framework for dapp developers to hand control of their applications to a community-owned DAO. +The Internet Computer Protocol uses two governance systems: the **Network Nervous System (NNS)** governs the protocol itself, and the **Service Nervous System (SNS)** provides a framework for app developers to hand control of their applications to a community-owned DAO. -Understanding both systems is important for developers. NNS proposals can affect canister behavior—for example, proposals that update system canisters or modify subnet configurations. SNS gives developers a standardized path to decentralize their dapp. +Understanding both systems is important for developers. NNS proposals can affect canister behavior (for example, proposals that update system canisters or modify subnet configurations). SNS gives developers a standardized path to decentralize their app. ## The Network Nervous System @@ -19,7 +19,7 @@ Decisions made through the NNS include: - Creating and managing subnets (adding capacity, changing subnet membership) - Setting economic parameters such as the ICP-to-cycles conversion rate - Authorizing new node providers and their hardware -- Creating new SNS DAOs for dapps +- Creating new SNS DAOs for apps The NNS governance canister (`rrkah-fqaaa-aaaaa-aaaaq-cai`) is the entry point for all proposal submissions and voting. See [system canisters](../reference/system-canisters.md) for the full list of NNS canister IDs. @@ -44,7 +44,7 @@ A neuron is a governance participant created by locking ICP tokens in the NNS go - **Age**: How long the neuron has been non-dissolving. Older neurons earn an age bonus on voting power. - **State**: A neuron is either locked (non-dissolving), dissolving, or dissolved (ready to disburse). -**Voting power formula:** A neuron's voting power scales with its stake, dissolve delay bonus (up to 2x at 8 years), and age bonus (up to 1.25x at 4 years). This design incentivizes long-term alignment with the network. +**Voting power formula:** A neuron's voting power scales with its stake, dissolve delay bonus (up to 2x at 8 years), and age bonus (up to 1.25x at 4 years). This design incentivizes long-term alignment with the network. **Liquid democracy (following):** Neurons can delegate their votes to other neurons on specific proposal topics. A neuron that doesn't vote directly inherits the vote of its followed neurons. This allows passive participation while still counting toward quorum. @@ -68,7 +68,7 @@ An NNS proposal is a governance action submitted by a neuron and voted on by the - *UpgradeNnsCanister* and *UpgradeRootCanister*: Update protocol canisters. May change interfaces developers rely on. - *CreateSubnet* / *AddNodeToSubnet*: Affect where canisters run. - *UpdateCanisterSettings* for NNS canisters: Can change the behavior of system canisters. -- *CreateServiceNervousSystem*: Authorizes a new SNS DAO, launching the decentralization process for a dapp. +- *CreateServiceNervousSystem*: Authorizes a new SNS DAO, launching the decentralization process for an app. See [system canisters](../reference/system-canisters.md) for the full list of NNS proposal topics and types. @@ -82,9 +82,9 @@ The reward rate declines over time as the protocol matures, converging toward a ## The Service Nervous System -The SNS is a governance framework that allows dapp developers to create a community-owned DAO for their application. When a dapp is governed by an SNS, token holders vote on proposals to upgrade the dapp's canisters, manage treasury funds, and adjust governance parameters. +The SNS is a governance framework that allows app developers to create a community-owned DAO for their application. When an app is governed by an SNS, token holders vote on proposals to upgrade the app's canisters, manage treasury funds, and adjust governance parameters. -Unlike the NNS, which is a singleton governing the entire protocol, each SNS is a separate set of canisters specific to one dapp. SNSes live on a dedicated SNS subnet. +Unlike the NNS, which is a singleton governing the entire protocol, each SNS is a separate set of canisters specific to one app. SNSes live on a dedicated SNS subnet. ### SNS canisters @@ -94,12 +94,12 @@ An SNS consists of six canisters deployed by SNS-W (the SNS Wasm modules caniste |----------|---------| | **Governance** | Proposal submission, voting, neuron management | | **Ledger** | SNS token transfers (ICRC-1 standard) | -| **Root** | Sole controller of all dapp canisters post-launch | +| **Root** | Sole controller of all app canisters post-launch | | **Swap** | Runs the decentralization swap (ICP for SNS tokens) | | **Index** | Transaction indexing for the SNS ledger | | **Archive** | Historical transaction storage | -Once an SNS is live, the SNS Root canister is the sole controller of the dapp's canisters. Upgrades happen through governance proposals voted on by SNS token holders. +Once an SNS is live, the SNS Root canister is the sole controller of the app's canisters. Upgrades happen through governance proposals voted on by SNS token holders. ### Token economics @@ -115,38 +115,38 @@ The SNS ledger implements the ICRC-1 token standard. SNS neurons work similarly The decentralization swap is the mechanism by which SNS tokens are distributed to the public. Participants send ICP to the SNS Swap canister during the swap window; when the swap closes, the exchange rate is determined and participants receive SNS tokens in a basket of neurons with vesting schedules. -The swap has minimum and maximum ICP participation thresholds. If the minimum is not reached, the swap fails: all ICP is refunded and control of the dapp returns to the original developers (via the fallback controllers defined in the configuration). If the maximum is reached before the end time, the swap closes early. +The swap has minimum and maximum ICP participation thresholds. If the minimum is not reached, the swap fails: all ICP is refunded and control of the app returns to the original developers (via the fallback controllers defined in the configuration). If the maximum is reached before the end time, the swap closes early. The Neurons' Fund (a subset of NNS neurons that commit maturity for ecosystem investment) can optionally participate in the swap, providing a baseline level of participation. ### SNS governance vs NNS governance -SNS governance mirrors the NNS design but is customized per dapp: +SNS governance mirrors the NNS design but is customized per app: | Aspect | NNS | SNS | |--------|-----|-----| -| What it governs | Protocol and network | A specific dapp | +| What it governs | Protocol and network | A specific app | | Token | ICP | Project-specific ICRC-1 token | -| Governance canisters | Singleton on NNS subnet | Per-dapp on SNS subnet | +| Governance canisters | Singleton on NNS subnet | Per-app on SNS subnet | | Launch authority | N/A (pre-existing) | NNS must approve creation | -| Proposal types | Protocol updates, subnet management, economics | Dapp upgrades, treasury transfers, parameter changes | +| Proposal types | Protocol updates, subnet management, economics | App upgrades, treasury transfers, parameter changes | ## What decentralization means for developers -When a dapp is governed by an SNS, the original developers no longer have direct control. Key implications: +When an app is governed by an SNS, the original developers no longer have direct control. Key implications: -- **Upgrades require proposals**: All changes to dapp canisters must go through SNS governance votes. Development slows down compared to centralized control. +- **Upgrades require proposals**: All changes to app canisters must go through SNS governance votes. Development slows down compared to centralized control. - **Treasury spending requires votes**: Any use of DAO funds requires a governance proposal. - **Upgrade path is transparent**: Community members can verify new canister wasm modules before voting. Reproducible builds allow independent verification. - **Responsibility is distributed**: Post-launch, the development team typically continues leading the project but must engage the token-holding community for major decisions. -- **Custom proposals**: Dapps can register custom proposal types (generic functions) that allow the DAO to call specific canister methods, enabling fine-grained governance without unrestricted code upgrades. +- **Custom proposals**: Apps can register custom proposal types (generic functions) that allow the DAO to call specific canister methods, enabling fine-grained governance without unrestricted code upgrades. Developers preparing for an SNS launch should ensure their codebase is stable, open-sourced, and reproducibly buildable before the decentralization swap. The NNS community votes on the creation proposal and expects evidence of product-market fit, sound tokenomics, and a realistic roadmap. ## Next steps -- [Launch an SNS](../guides/governance/launching.md) — step-by-step guide to decentralizing your dapp -- [Manage a live SNS](../guides/governance/managing.md) — proposals, upgrades, and treasury management after launch -- [System canisters reference](../reference/system-canisters.md) — NNS canister IDs and interfaces +- [Launch an SNS](../guides/governance/launching.md): step-by-step guide to decentralizing your app +- [Manage a live SNS](../guides/governance/managing.md): proposals, upgrades, and treasury management after launch +- [System canisters reference](../reference/system-canisters.md): NNS canister IDs and interfaces diff --git a/docs/concepts/https-outcalls.md b/docs/concepts/https-outcalls.md index 4741dd78..d260b910 100644 --- a/docs/concepts/https-outcalls.md +++ b/docs/concepts/https-outcalls.md @@ -5,15 +5,15 @@ sidebar: order: 8 --- -Canisters on the Internet Computer can make HTTP requests to any public web server — fetching API data, posting to webhooks, or querying offchain services — without relying on oracles or other intermediaries. This capability is called **HTTPS outcalls**. +Canisters on the Internet Computer can make HTTP requests to any public web server (fetching API data, posting to webhooks, or querying offchain services) without relying on oracles or other intermediaries. This capability is called **HTTPS outcalls**. -What makes this unusual for a blockchain is that every replica in a subnet executes the same code independently. When a canister makes an HTTPS outcall, all replicas in the subnet send the same request to the external server, each receives its own response, and the subnet must reach consensus on a single response to return to the canister. This mechanism preserves the replicated state machine guarantees that make smart contracts trustworthy while enabling direct communication with the conventional web. +What makes this unusual for a blockchain is that every replica in a subnet executes the same code independently. When a canister makes an HTTPS outcall, all replicas in the subnet send the same request to the external server, each receives its own response, and the subnet must reach consensus on a single response to return to the canister. This mechanism preserves the replicated state machine guarantees that make canisters trustworthy while enabling direct communication with the conventional web. ## Why HTTPS outcalls exist -Traditional blockchains cannot make outbound HTTP requests. Smart contracts are deterministic state machines — if different nodes received different responses from an external server, their state would diverge and consensus would break. The standard workaround is **oracles**: trusted third-party services that fetch offchain data and submit it onchain. +Traditional blockchains cannot make outbound HTTP requests. Smart contracts are deterministic state machines: if different nodes received different responses from an external server, their state would diverge and consensus would break. The standard workaround is **oracles**: trusted third-party services that fetch offchain data and submit it onchain. -Oracles work, but they add complexity, cost, and trust assumptions. You must choose an oracle provider, pay their fees, and trust that they relay data honestly. With HTTPS outcalls, canisters call external APIs directly. The IC protocol handles the consensus problem internally, so you get the same result — reliable offchain data onchain — without the middleman. +Oracles work, but they add complexity, cost, and trust assumptions. You must choose an oracle provider, pay their fees, and trust that they relay data honestly. With HTTPS outcalls, canisters call external APIs directly. The IC protocol handles the consensus problem internally, so you get the same result (reliable offchain data onchain) without the middleman. ## How outcalls reach consensus @@ -23,15 +23,15 @@ When a canister calls the management canister's `http_request` method, the follo 2. **Every replica makes the same HTTP request.** On a 13-node subnet, 13 independent requests go to the target server. The IC first tries a direct IPv6 connection; if that fails (e.g., the server is IPv4-only), it retries through a SOCKS proxy. -3. **Each replica receives its own response.** These responses are often *almost* identical but may differ in non-deterministic fields — timestamps in headers, request IDs, JSON field ordering, or IP-dependent content. +3. **Each replica receives its own response.** These responses are often *almost* identical but may differ in non-deterministic fields: timestamps in headers, request IDs, JSON field ordering, or IP-dependent content. 4. **The transform function normalizes responses.** The canister provides a transform function (a query method) that each replica runs locally on its response. The transform strips or normalizes the non-deterministic parts so that all honest replicas produce the same transformed response. 5. **Consensus agrees on the response.** The IC's consensus protocol requires at least 2/3 of replicas to produce the same transformed response. If enough replicas agree, that response is returned to the canister. If they can't agree, the call fails with a timeout. -The transform function is critical. Without it, even minor differences between responses — a header timestamp off by a millisecond — prevent consensus. If consensus cannot be reached, the call eventually times out — this is the most common failure mode when developing outcalls. +The transform function is critical. Without it, even minor differences between responses (a header timestamp off by a millisecond) prevent consensus. If consensus cannot be reached, the call eventually times out: this is the most common failure mode when developing outcalls. -> **Local testing caveat:** The local replica runs a single node, so all responses pass consensus automatically — even without a transform function. Transform and consensus issues only surface when you deploy to a multi-node subnet. +> **Local testing caveat:** The local replica runs a single node, so all responses pass consensus automatically: even without a transform function. Transform and consensus issues only surface when you deploy to a multi-node subnet. For practical guidance on writing transform functions, see the [HTTPS outcalls guide](../guides/backends/https-outcalls.md). @@ -45,7 +45,7 @@ There are two general strategies: - **Strip the variable parts.** Remove headers and body fields that vary between responses (timestamps, request IDs, ordering differences) while keeping the rest of the structure intact. -The first approach is recommended whenever possible — it produces smaller responses, is simpler to implement, and is less likely to miss a non-deterministic field. +The first approach is recommended whenever possible: it produces smaller responses, is simpler to implement, and is less likely to miss a non-deterministic field. A common pattern is stripping all response headers (they frequently contain timestamps and server-specific metadata) and either keeping the body as-is (if it's already deterministic) or re-serializing JSON to normalize field ordering. @@ -53,9 +53,9 @@ A common pattern is stripping all response headers (they frequently contain time HTTPS outcalls support `GET`, `HEAD`, and `POST` methods. -**GET and HEAD** requests are straightforward — they're inherently idempotent (repeating them doesn't change server state), so having 13 replicas send the same GET is harmless. `HEAD` is particularly useful for determining a resource's response size before making the actual request, which helps you set `max_response_bytes` accurately. +**GET and HEAD** requests are straightforward: they're inherently idempotent (repeating them doesn't change server state), so having 13 replicas send the same GET is harmless. `HEAD` is particularly useful for determining a resource's response size before making the actual request, which helps you set `max_response_bytes` accurately. -**POST requests** require more care. Because all replicas send the request independently, a non-idempotent POST endpoint (like "create order") will be called once per replica — potentially 13 times on a standard subnet. To prevent this: +**POST requests** require more care. Because all replicas send the request independently, a non-idempotent POST endpoint (like "create order") will be called once per replica: potentially 13 times on a standard subnet. To prevent this: - **Use an idempotency key.** Include a unique identifier in the request headers. Well-designed APIs recognize duplicate requests by this key and process only the first one. - **Design for idempotency.** If you control the target API, make the endpoint handle duplicate requests gracefully. @@ -65,14 +65,14 @@ Not all servers support idempotency keys, so evaluate this on a case-by-case bas ## Cycle costs -HTTPS outcalls are not free — the calling canister must attach cycles to cover the cost. Both the Motoko `ic` mops package and the Rust `ic-cdk` provide wrappers that automatically compute and attach the required amount using the `ic0.cost_http_request` system API. +HTTPS outcalls are not free. The calling canister must attach cycles to cover the cost. Both the Motoko `ic` mops package and the Rust `ic-cdk` provide wrappers that automatically compute and attach the required amount using the `ic0.cost_http_request` system API. The cost depends on two factors: -- **Request size** — the combined byte length of the URL, headers, body, transform function name, and transform context. -- **`max_response_bytes`** — the maximum response size you declare. This is what you're charged for, not the actual response size. +- **Request size**: the combined byte length of the URL, headers, body, transform function name, and transform context. +- **`max_response_bytes`**: the maximum response size you declare. This is what you're charged for, not the actual response size. -If you omit `max_response_bytes`, the system assumes the maximum of 2 MB and charges accordingly — roughly 21.5 billion cycles on a 13-node subnet. Always set this to a reasonable upper bound for your expected response to avoid overpaying. Unused cycles are refunded. +If you omit `max_response_bytes`, the system assumes the maximum of 2 MB and charges accordingly: roughly 21.5 billion cycles on a 13-node subnet. Always set this to a reasonable upper bound for your expected response to avoid overpaying. Unused cycles are refunded. For exact pricing formulas, see the [cycles costs reference](../reference/cycles-costs.md). @@ -84,7 +84,7 @@ For exact pricing formulas, see the [cycles costs reference](../reference/cycles - **No streaming or WebSocket.** Outcalls are single request-response pairs. Long-lived connections are not supported. - **~30-second timeout.** If the external server doesn't respond in time, the call fails. - **Rate limiting.** All canisters on a subnet share the same IPv6 prefixes. If many canisters on the same subnet call the same server, they share its rate limit quota. Using API keys with per-key quotas mitigates this. -- **Shared API keys are visible to all replicas.** An API key stored in canister state is readable by every replica. A compromised replica could use the key to make entirely different, unauthorized requests to the external service — not just replay the canister's intended request. [TEE-enabled subnets](https://learn.internetcomputer.org/hc/en-us/articles/46124920595988-Trusted-Execution-Environments) mitigate this by running replicas in hardware-enforced enclaves, preventing node operators from reading canister memory. Consider deploying canisters that store sensitive credentials on a TEE-enabled subnet. +- **Shared API keys are visible to all replicas.** An API key stored in canister state is readable by every replica. A compromised replica could use the key to make entirely different, unauthorized requests to the external service: not just replay the canister's intended request. [TEE-enabled subnets](https://learn.internetcomputer.org/hc/en-us/articles/46124920595988-Trusted-Execution-Environments) mitigate this by running replicas in hardware-enforced enclaves, preventing node operators from reading canister memory. Consider deploying canisters that store sensitive credentials on a TEE-enabled subnet. ## HTTPS outcalls vs. oracles @@ -94,7 +94,7 @@ For exact pricing formulas, see the [cycles costs reference](../reference/cycles | **Cost** | Cycle cost of the outcall only | Oracle fees + ingress message costs | | **Latency** | Single round-trip (seconds) | Multiple hops: canister → oracle contract → oracle service → server → back (higher latency) | | **Setup** | Call the management canister API directly | Deploy or integrate with oracle contract, configure oracle provider | -| **Decentralization** | Built into the subnet — no third parties | Depends on the oracle provider's architecture | +| **Decentralization** | Built into the subnet: no third parties | Depends on the oracle provider's architecture | HTTPS outcalls can replace oracles for most use cases: price feeds, API queries, webhook notifications, and data verification. Oracles may still be useful if you need features like aggregated multi-source data feeds or historical data caching that an oracle provider maintains as a service. @@ -103,13 +103,13 @@ HTTPS outcalls can replace oracles for most use cases: price feeds, API queries, Two extensions are under consideration that may affect architecture decisions: - **Flexible quorum:** A canister could specify that only a single replica (instead of all replicas) makes the request. This would solve the idempotency problem for POST requests and reduce rate-limit pressure on external servers. -- **Multiple responses:** Instead of consensus on a single response, the canister could receive all individual replica responses and resolve differences in application logic — useful for fast-moving data like price feeds. +- **Multiple responses:** Instead of consensus on a single response, the canister could receive all individual replica responses and resolve differences in application logic: useful for fast-moving data like price feeds. ## Next steps -- [HTTPS outcalls guide](../guides/backends/https-outcalls.md) — practical how-to with code examples in Motoko and Rust -- [Chain Fusion: Ethereum integration](../guides/chain-fusion/ethereum.md) — uses HTTPS outcalls via the EVM RPC canister -- [Cycles costs reference](../reference/cycles-costs.md) — detailed pricing formulas -- [Learn Hub: HTTPS Outcalls](https://learn.internetcomputer.org/hc/en-us/articles/34211194553492) — additional learning material +- [HTTPS outcalls guide](../guides/backends/https-outcalls.md): practical how-to with code examples in Motoko and Rust +- [Chain Fusion: Ethereum integration](../guides/chain-fusion/ethereum.md): uses HTTPS outcalls via the EVM RPC canister +- [Cycles costs reference](../reference/cycles-costs.md): detailed pricing formulas +- [Learn Hub: HTTPS Outcalls](https://learn.internetcomputer.org/hc/en-us/articles/34211194553492): additional learning material diff --git a/docs/concepts/index.md b/docs/concepts/index.md index 347249d4..28d08a7b 100644 --- a/docs/concepts/index.md +++ b/docs/concepts/index.md @@ -29,5 +29,5 @@ Understand the ideas behind the Internet Computer before you build on it. These ## Trust and governance -- **[Security Model](security.md)** -- Canister isolation, trust boundaries, and the threat model for dapp developers. -- **[Governance](governance.md)** -- The NNS, SNS for dapp governance, neurons, and proposals. +- **[Security Model](security.md)** -- Canister isolation, trust boundaries, and the threat model for app developers. +- **[Governance](governance.md)** -- The NNS, SNS for app governance, neurons, and proposals. diff --git a/docs/concepts/network-overview.md b/docs/concepts/network-overview.md index 65f8566a..150a33ea 100644 --- a/docs/concepts/network-overview.md +++ b/docs/concepts/network-overview.md @@ -5,20 +5,20 @@ sidebar: order: 1 --- -The Internet Computer (ICP) is a network of independent blockchains called **subnets** that run smart contracts ([canisters](canisters.md)) at web speed. From a developer's perspective, the key things to understand are how your code gets replicated, how fast it runs, and how requests reach it. +The Internet Computer (ICP) is a network of independent blockchains called **subnets** that run [canisters](canisters.md) at web speed. From a developer's perspective, the key things to understand are how your code gets replicated, how fast it runs, and how requests reach it. ## Subnets A subnet is a group of nodes that run their own instance of the ICP consensus protocol. Each subnet maintains its own blockchain, executes canisters, and produces blocks independently of other subnets. -When you deploy a canister, it lands on one subnet and is replicated across every node in that subnet. This replication is what makes canisters tamperproof — a single node cannot unilaterally change a canister's state. +When you deploy a canister, it lands on one subnet and is replicated across every node in that subnet. This replication is what makes canisters tamperproof: a single node cannot unilaterally change a canister's state. ### What subnets mean for developers - **Parallelism.** Subnets run in parallel, so the network scales by adding more subnets. Your canister's performance depends on its subnet's load, not the network's total load. -- **Cross-subnet calls.** Canisters on different subnets can call each other through the network's messaging layer. These calls are slightly slower than calls within the same subnet (they require an extra consensus round), but they work transparently — you don't need to know which subnet a canister lives on. +- **Cross-subnet calls.** Canisters on different subnets can call each other through the network's messaging layer. These calls are slightly slower than calls within the same subnet (they require an extra consensus round), but they work transparently: you don't need to know which subnet a canister lives on. - **Subnet size and cost.** Subnets typically range from 13 to 40 nodes. Larger subnets provide stronger security guarantees (more nodes must collude to compromise state) but cost more cycles to run on. Most application canisters run on 13-node subnets. -- **Finality.** ICP achieves finality in 1–2 seconds. Once your update call returns, the state change is committed and replicated — there are no probabilistic confirmations or reorgs. +- **Finality.** ICP achieves finality in 1–2 seconds. Once your update call returns, the state change is committed and replicated: there are no probabilistic confirmations or reorgs. - **Shared storage budget.** All canisters on a subnet share a common storage budget. Each canister can use up to 500 GiB of stable memory, but the total available depends on the subnet's current utilization. Storage-heavy applications should consider subnet selection. - **Geographic distribution.** Nodes within a subnet are distributed across data centers, operators, and jurisdictions to maximize decentralization. Localized subnets also exist for applications with data residency requirements. @@ -28,9 +28,9 @@ For details on subnet types and how to choose one, see [Subnet types](../referen Each physical machine in the network is a **node**. Nodes run software called the **replica**, which implements the ICP protocol stack: consensus, message routing, execution, and state management. -Nodes are owned by **node providers** — independent entities who operate the hardware. Node providers are voted into the network by the governance system (NNS) and must meet specific hardware requirements. This process, called **deterministic decentralization**, ensures that subnet membership is diverse across operators, geographies, and jurisdictions. +Nodes are owned by **node providers**: independent entities who operate the hardware. Node providers are voted into the network by the governance system (NNS) and must meet specific hardware requirements. This process, called **deterministic decentralization**, ensures that subnet membership is diverse across operators, geographies, and jurisdictions. -As a developer, you don't interact with individual nodes directly. The protocol abstracts them away — you deploy to a subnet, and the network handles replication across its nodes. +As a developer, you don't interact with individual nodes directly. The protocol abstracts them away: you deploy to a subnet, and the network handles replication across its nodes. ## Consensus @@ -54,7 +54,7 @@ Boundary nodes are the entry point for all external traffic to ICP. They serve t 1. **HTTP gateway.** When a user's browser requests `https://.icp0.io`, a boundary node translates that HTTP request into a canister message, routes it to the correct subnet, and returns the response. 2. **API endpoint.** Agent libraries (like [`@icp-sdk/core/agent`](https://js.icp.build) in JavaScript) send ingress messages to boundary nodes, which forward them to the target canister's subnet. -Boundary nodes also cache query responses and provide TLS termination. They are not part of consensus and cannot modify canister state — they are routing infrastructure. +Boundary nodes also cache query responses and provide TLS termination. They are not part of consensus and cannot modify canister state: they are routing infrastructure. From a developer's perspective, boundary nodes are mostly transparent. You interact with them through the standard agent libraries or icp-cli, and they handle the routing. The main thing to be aware of is that query responses pass through a boundary node, which is why [certified variables](../guides/backends/certified-variables.md) exist for applications that need authenticated query results. @@ -64,29 +64,29 @@ Here is the path of a typical request: 1. A user's browser sends an HTTPS request to a boundary node. 2. The boundary node looks up which subnet hosts the target canister and forwards the message. -3. For update calls: the subnet's consensus protocol includes the message in a block, all nodes execute it, and the subnet signs the response. For query calls: a single node executes the call and returns the result — query responses are not threshold-signed by the subnet, so they should be treated as unverified unless the canister uses [certified variables](../guides/backends/certified-variables.md). +3. For update calls: the subnet's consensus protocol includes the message in a block, all nodes execute it, and the subnet signs the response. For query calls: a single node executes the call and returns the result: query responses are not threshold-signed by the subnet, so they should be treated as unverified unless the canister uses [certified variables](../guides/backends/certified-variables.md). 4. The boundary node returns the response to the user. -The entire flow — from user request to signed response — completes within the finality window described above for updates, and under 100 milliseconds for queries. +The entire flow (from user request to signed response) completes within the finality window described above for updates, and under 100 milliseconds for queries. ## Chain-key cryptography Each subnet has a single public key, but no individual node holds the corresponding private key. Instead, the key is split into shares distributed across the subnet's nodes using **threshold cryptography**. Nodes collectively sign responses without ever reconstructing the full key. -This means verifying a response from ICP only requires checking one signature against one public key — regardless of how many nodes are in the subnet. It also enables canisters to sign transactions on other blockchains (Bitcoin, Ethereum, and others) directly, without bridges or oracles. +This means verifying a response from ICP only requires checking one signature against one public key: regardless of how many nodes are in the subnet. It also enables canisters to sign transactions on other blockchains (Bitcoin, Ethereum, and others) directly, without bridges or oracles. For more on this, see [Chain-key cryptography](chain-key-cryptography.md). ## Governance -The network is governed by the **Network Nervous System (NNS)**, a DAO implemented as a set of canisters on ICP itself. All operational changes — protocol upgrades, subnet creation, node onboarding — go through NNS proposals and voting. This eliminates hard forks: approved upgrades are executed automatically. +The network is governed by the **Network Nervous System (NNS)**, a DAO implemented as a set of canisters on ICP itself. All operational changes (protocol upgrades, subnet creation, node onboarding) go through NNS proposals and voting. This eliminates hard forks: approved upgrades are executed automatically. Individual applications can also be governed by a **Service Nervous System (SNS)**, which applies the same DAO model at the application level. See [Governance](governance.md) for details. ## Next steps -- [Canisters](canisters.md) — what runs on the network -- [App architecture](app-architecture.md) — how applications use subnets and canisters -- [Subnet types](../reference/subnet-types.md) — comparing subnet sizes and properties +- [Canisters](canisters.md): what runs on the network +- [App architecture](app-architecture.md): how applications use subnets and canisters +- [Subnet types](../reference/subnet-types.md): comparing subnet sizes and properties diff --git a/docs/concepts/onchain-randomness.md b/docs/concepts/onchain-randomness.md index 1889996a..0e815fcc 100644 --- a/docs/concepts/onchain-randomness.md +++ b/docs/concepts/onchain-randomness.md @@ -5,17 +5,17 @@ sidebar: order: 7 --- -Generating unpredictable random numbers is a fundamental requirement for many applications — lotteries, games, fair selection, cryptographic protocols, and more. On a blockchain, this is harder than it sounds. +Generating unpredictable random numbers is a fundamental requirement for many applications: lotteries, games, fair selection, cryptographic protocols, and more. On a blockchain, this is harder than it sounds. ## Why randomness is hard on blockchains -Traditional blockchains execute every transaction deterministically. Each node replays the same operations and must arrive at the same state. This means randomness sources available to normal programs — such as OS entropy (`/dev/urandom`), hardware timers, or per-process seeds — cannot be used directly: they would produce different values on each replica, breaking consensus. +Traditional blockchains execute every transaction deterministically. Each node replays the same operations and must arrive at the same state. This means randomness sources available to normal programs (such as OS entropy (`/dev/urandom`), hardware timers, or per-process seeds) cannot be used directly: they would produce different values on each replica, breaking consensus. Naive alternatives have well-known weaknesses: - **Block hashes as seeds.** Miners or validators can selectively publish or withhold blocks to influence outcomes. Any actor who produces blocks can bias the result. - **Commit-reveal schemes.** Participants can abort after seeing others' commitments, biasing the outcome in their favor if abstaining is cheaper than losing. -- **Trusted oracles.** External randomness sources reintroduce centralization and single points of failure — contradicting the goal of trustless execution. +- **Trusted oracles.** External randomness sources reintroduce centralization and single points of failure: contradicting the goal of trustless execution. ICP addresses these limitations at the protocol level, so application developers do not need to implement workarounds themselves. @@ -26,7 +26,7 @@ ICP generates randomness using a **Verifiable Random Function (VRF)** executed c The process runs once per execution round: 1. **Round input.** The VRF is seeded with the current round number. Each round has a globally agreed-upon number, so the input is the same on every replica. -2. **Threshold evaluation.** The subnet's nodes collaborate using [chain-key cryptography](chain-key-cryptography.md) to evaluate the VRF. Computing the output requires a threshold of nodes to participate — the same threshold used in the consensus protocol. A minority of malicious nodes cannot bias or predict the result. +2. **Threshold evaluation.** The subnet's nodes collaborate using [chain-key cryptography](chain-key-cryptography.md) to evaluate the VRF. Computing the output requires a threshold of nodes to participate. The same threshold used in the consensus protocol. A minority of malicious nodes cannot bias or predict the result. 3. **Random tape.** The VRF output seeds a per-round pseudorandom number generator called the **random tape**. The random tape is then used to produce individual random values for each canister that requested randomness in the previous round. 4. **Delivery.** The `raw_rand` result is determined in the round after the call arrives (see one-round delay below), not when the canister submits it. The 32-byte blob delivered to the caller is the same on every replica, satisfying consensus. @@ -34,16 +34,16 @@ The process runs once per execution round: The threshold VRF provides three guarantees that address the blockchain randomness problem: -- **Unpredictability.** The output cannot be known before the threshold of nodes collaborates to compute it. Because the computation spans a round boundary, no party — including subnet nodes — can predict the result in advance. -- **Unbiasability.** No individual node can influence the output. A malicious node cannot single-handedly prevent the subnet from producing randomness — a threshold of honest nodes is sufficient — but it cannot steer the result toward a preferred value. This is in contrast to leader-based schemes where the block producer has exclusive influence. -- **Verifiability.** The VRF output includes a proof that any party can verify using the subnet's public key. This means the randomness is not just unpredictable — it is provably correct. External observers can confirm that the subnet followed the protocol. +- **Unpredictability.** The output cannot be known before the threshold of nodes collaborates to compute it. Because the computation spans a round boundary, no party (including subnet nodes) can predict the result in advance. +- **Unbiasability.** No individual node can influence the output. A malicious node cannot single-handedly prevent the subnet from producing randomness (a threshold of honest nodes is sufficient) but it cannot steer the result toward a preferred value. This is in contrast to leader-based schemes where the block producer has exclusive influence. +- **Verifiability.** The VRF output includes a proof that any party can verify using the subnet's public key. This means the randomness is not just unpredictable: it is provably correct. External observers can confirm that the subnet followed the protocol. ## The random tape and `raw_rand` The random tape is the developer-facing interface to ICP's randomness. When a canister calls `raw_rand` on the management canister, it receives 32 bytes derived from the random tape of the next execution round. Crucially: -- **One round of delay.** A `raw_rand` call submitted in round N receives entropy from round N+1. This ensures that the subnet nodes have not yet seen the round N+1 randomness when the call arrives — they cannot bias the output they have not yet computed. -- **Update calls only.** `raw_rand` is an inter-canister call to the management canister. It requires an update call context and cannot be used from a query call. Query calls execute on a single replica and do not participate in the subnet's consensus — there is no random beacon available. +- **One round of delay.** A `raw_rand` call submitted in round N receives entropy from round N+1. This ensures that the subnet nodes have not yet seen the round N+1 randomness when the call arrives: they cannot bias the output they have not yet computed. +- **Update calls only.** `raw_rand` is an inter-canister call to the management canister. It requires an update call context and cannot be used from a query call. Query calls execute on a single replica and do not participate in the subnet's consensus: there is no random beacon available. - **32 bytes per call.** Each invocation returns 32 bytes (256 bits) of entropy. This is sufficient to derive many independent values: four 64-bit integers, 32 single-byte selections, or a seed for a seeded pseudorandom number generator. ## Applications @@ -59,13 +59,13 @@ The threshold VRF is appropriate for a wide range of use cases where unpredictab The subnet's threshold VRF ensures the subnet itself did not bias the output. It does not prevent a canister from learning the randomness and reacting to it before revealing the outcome to users. For applications where users need to independently verify fairness, combine `raw_rand` with a commit-reveal scheme: commit to the parameters before requesting randomness, then reveal both together. This way, even if a canister were somehow compromised, users can audit whether the randomness was used as committed. -For applications that need verifiable randomness tied to a specific user or event identifier — rather than per-round subnet randomness — see the vetKeys VRF functionality described in [vetKeys](vetkeys.md). +For applications that need verifiable randomness tied to a specific user or event identifier (rather than per-round subnet randomness) see the vetKeys VRF functionality described in [vetKeys](vetkeys.md). ## Next steps -- [Onchain randomness guide](../guides/backends/randomness.md) — how to call `raw_rand` and derive typed values in Motoko and Rust -- [Management canister reference](../reference/management-canister.md#raw_rand) — `raw_rand` API specification -- [Chain-key cryptography](chain-key-cryptography.md) — the cryptographic foundation underlying the threshold VRF -- [Security](security.md) — how randomness fits into the broader ICP security model +- [Onchain randomness guide](../guides/backends/randomness.md): how to call `raw_rand` and derive typed values in Motoko and Rust +- [Management canister reference](../reference/management-canister.md#raw_rand): `raw_rand` API specification +- [Chain-key cryptography](chain-key-cryptography.md): the cryptographic foundation underlying the threshold VRF +- [Security](security.md): how randomness fits into the broader ICP security model diff --git a/docs/concepts/reverse-gas-model.md b/docs/concepts/reverse-gas-model.md index ce1d959e..3fb6d26a 100644 --- a/docs/concepts/reverse-gas-model.md +++ b/docs/concepts/reverse-gas-model.md @@ -7,22 +7,22 @@ sidebar: On most blockchains, users pay a gas fee every time they interact with a smart contract. On ICP, the model is flipped: **canisters pay for their own resource consumption using cycles**, and users pay nothing. This is the **reverse gas model**. -The result is a Web2-like user experience. Users can interact with any dapp on ICP without holding tokens, configuring a wallet, or approving every transaction. For developers, it means full control over cost management — and the responsibility that comes with it. +The result is a Web2-like user experience. Users can interact with any app on ICP without holding tokens, configuring a wallet, or approving every transaction. For developers, it means full control over cost management: and the responsibility that comes with it. ## What are cycles? Cycles are the unit of payment for resources on ICP. Every canister operation that consumes resources burns cycles from the canister's balance: -- **Compute** — executing instructions (update calls, timers, heartbeats) -- **Storage** — Wasm heap memory and stable memory, charged per byte per second -- **Messaging** — ingress messages from users, inter-canister calls, responses -- **Special features** — HTTPS outcalls, threshold signatures, Bitcoin integration, EVM RPC +- **Compute**: executing instructions (update calls, timers, heartbeats) +- **Storage**: Wasm heap memory and stable memory, charged per byte per second +- **Messaging**: ingress messages from users, inter-canister calls, responses +- **Special features**: HTTPS outcalls, threshold signatures, Bitcoin integration, EVM RPC -Query calls are free — they run on a single node, do not go through consensus, and are not charged. +Query calls are free: they run on a single node, do not go through consensus, and are not charged. ### Cycles are pegged to XDR -Unlike ICP tokens, whose price fluctuates with markets, cycles are pegged to the [Special Drawing Right (XDR)](https://www.imf.org/external/np/fin/data/rms_sdrv.aspx) — a basket of currencies maintained by the IMF. **1 trillion (T) cycles = 1 XDR** (approximately $1.30–$1.40 USD). This peg makes infrastructure costs predictable for developers regardless of ICP token price movements. +Unlike ICP tokens, whose price fluctuates with markets, cycles are pegged to the [Special Drawing Right (XDR)](https://www.imf.org/external/np/fin/data/rms_sdrv.aspx): a basket of currencies maintained by the IMF. **1 trillion (T) cycles = 1 XDR** (approximately $1.30–$1.40 USD). This peg makes infrastructure costs predictable for developers regardless of ICP token price movements. ## ICP tokens and cycles @@ -34,7 +34,7 @@ Once minted, cycles are held by principals via the **cycles ledger** (`um5iw-rqa ### Compute -By default, canisters are scheduled for execution on a best-effort basis — the subnet schedules them when capacity is available. Canisters that need guaranteed execution can set a `compute_allocation` in their settings, expressed as a percentage of one execution core: +By default, canisters are scheduled for execution on a best-effort basis. The subnet schedules them when capacity is available. Canisters that need guaranteed execution can set a `compute_allocation` in their settings, expressed as a percentage of one execution core: | Allocation | Guarantee | |---|---| @@ -67,11 +67,11 @@ Every canister is replicated across all nodes on its subnet. Costs scale with su The reverse gas model shifts payment from users to developers. This comes with ongoing obligations: -**Topping up** — canisters burn cycles continuously for storage and on every update call. Developers must monitor balances and keep canisters funded. A canister that runs out of cycles freezes immediately and stops responding to all calls. +**Topping up**: canisters burn cycles continuously for storage and on every update call. Developers must monitor balances and keep canisters funded. A canister that runs out of cycles freezes immediately and stops responding to all calls. -**Freezing threshold** — each canister has a configurable freezing threshold (default: 30 days of idle burn). If the cycle balance falls below this threshold, the canister is frozen before it can be deleted, giving developers time to top up. Increase this threshold for production canisters as a safety buffer. +**Freezing threshold**: each canister has a configurable freezing threshold (default: 30 days of idle burn). If the cycle balance falls below this threshold, the canister is frozen before it can be deleted, giving developers time to top up. Increase this threshold for production canisters as a safety buffer. -**Deletion** — a frozen canister that is not topped up within the threshold window is eventually deleted by the network, along with all its data. Deletion is permanent and irreversible. +**Deletion**: a frozen canister that is not topped up within the threshold window is eventually deleted by the network, along with all its data. Deletion is permanent and irreversible. These responsibilities can be automated. Tools like [CycleOps](https://cycleops.dev/) monitor balances and top up canisters automatically. @@ -79,16 +79,16 @@ These responsibilities can be automated. Tools like [CycleOps](https://cycleops. The XDR peg and flat per-resource pricing make ICP costs predictable in a way that transaction-fee blockchains are not: -- **No gas auctions** — there is no bidding for block space. Cycle prices are set by the NNS and change infrequently. -- **No per-transaction fees for users** — apps absorb all costs, like SaaS businesses absorb server bills. -- **Stable unit economics** — because cycles are pegged to XDR (not ICP price), infrastructure costs remain consistent even when ICP token price swings. +- **No gas auctions**: there is no bidding for block space. Cycle prices are set by the NNS and change infrequently. +- **No per-transaction fees for users**: apps absorb all costs, like SaaS businesses absorb server bills. +- **Stable unit economics**: because cycles are pegged to XDR (not ICP price), infrastructure costs remain consistent even when ICP token price swings. The tradeoff is that developers must forecast and fund usage upfront rather than letting users pay as they go. ## Related -- [Cycles Management](../guides/canister-management/cycles-management.md) — how to check balances, top up canisters, and set freezing thresholds -- [Cycles Costs Reference](../reference/cycles-costs.md) — exact cost tables for all operations -- [Canisters](./canisters.md) — canisters as the paying entity in the reverse gas model +- [Cycles Management](../guides/canister-management/cycles-management.md): how to check balances, top up canisters, and set freezing thresholds +- [Cycles Costs Reference](../reference/cycles-costs.md): exact cost tables for all operations +- [Canisters](./canisters.md): canisters as the paying entity in the reverse gas model diff --git a/docs/concepts/security.md b/docs/concepts/security.md index b44220ac..f1d8dbe9 100644 --- a/docs/concepts/security.md +++ b/docs/concepts/security.md @@ -1,11 +1,11 @@ --- title: "Security Model" -description: "The IC security model: canister isolation, trust boundaries, and the threat model for dapp developers" +description: "The IC security model: canister isolation, trust boundaries, and the threat model for app developers" sidebar: order: 12 --- -The Internet Computer provides strong security guarantees at the protocol level — replicated execution, threshold cryptography, and deterministic state machines. But the protocol cannot prevent bugs in your code. Understanding where the platform's guarantees end and your responsibilities begin is essential for building secure dapps. +The Internet Computer provides strong security guarantees at the protocol level: replicated execution, threshold cryptography, and deterministic state machines. But the protocol cannot prevent bugs in your code. Understanding where the platform's guarantees end and your responsibilities begin is essential for building secure apps. This page explains the IC security model from a developer's perspective: what the platform protects, what it does not, and what you need to handle yourself. @@ -13,21 +13,21 @@ This page explains the IC security model from a developer's perspective: what th Canisters execute in two modes, each with different trust properties: -**Update calls** go through consensus. Every node on the subnet executes the same code against the same state and must agree on the result. This makes update calls tamper-proof — a single malicious node cannot alter the outcome. The tradeoff is latency (~2 seconds). +**Update calls** go through consensus. Every node on the subnet executes the same code against the same state and must agree on the result. This makes update calls tamper-proof: a single malicious node cannot alter the outcome. The tradeoff is latency (~2 seconds). -**Query calls** run on a single replica. They are fast (~200ms) but the responding replica can return incorrect or fabricated results. Replica-signed queries provide partial mitigation (the replica signs its response), but for data that must be trustworthy, use [certified variables](../guides/backends/certified-variables.md) or update calls. Certified variables work by letting the canister set data that the subnet signs as part of the state tree — clients then verify the subnet's signature to confirm the response hasn't been tampered with. +**Query calls** run on a single replica. They are fast (~200ms) but the responding replica can return incorrect or fabricated results. Replica-signed queries provide partial mitigation (the replica signs its response), but for data that must be trustworthy, use [certified variables](../guides/backends/certified-variables.md) or update calls. Certified variables work by letting the canister set data that the subnet signs as part of the state tree: clients then verify the subnet's signature to confirm the response hasn't been tampered with. This distinction is the most important security boundary on the IC. Any data returned by a query call that is not backed by a certificate should be treated as unverified. ## Canister isolation -Each canister runs in its own WebAssembly sandbox with its own memory. Canisters cannot read or write each other's state — they can only communicate through explicit async messages. This isolation is enforced by the protocol, not by the canister code. +Each canister runs in its own WebAssembly sandbox with its own memory. Canisters cannot read or write each other's state: they can only communicate through explicit async messages. This isolation is enforced by the protocol, not by the canister code. -However, isolation does not mean independence. When a canister makes an inter-canister call, the call is an asynchronous message. Between sending the request and receiving the response, the canister can process other messages, and its state may change. This creates a class of vulnerabilities known as TOCTOU (time-of-check-time-of-use) — where a condition verified before an `await` is no longer true after it. See [Inter-canister call safety](../guides/security/inter-canister-calls.md) for patterns that mitigate this. +However, isolation does not mean independence. When a canister makes an inter-canister call, the call is an asynchronous message. Between sending the request and receiving the response, the canister can process other messages, and its state may change. This creates a class of vulnerabilities known as TOCTOU (time-of-check-time-of-use): where a condition verified before an `await` is no longer true after it. See [Inter-canister call safety](../guides/security/inter-canister-calls.md) for patterns that mitigate this. ## Trust boundaries -As a dapp developer, you should understand who trusts whom in the IC stack: +As an app developer, you should understand who trusts whom in the IC stack: ### What the protocol guarantees @@ -45,21 +45,21 @@ As a dapp developer, you should understand who trusts whom in the IC stack: ### Boundary nodes -Boundary nodes are the HTTP entry point to the IC. They route requests to the correct subnet but are not part of the trust model for update calls — the response is verified by the client against the subnet's public key regardless of which boundary node served it. +Boundary nodes are the HTTP entry point to the IC. They route requests to the correct subnet but are not part of the trust model for update calls. The response is verified by the client against the subnet's public key regardless of which boundary node served it. For query calls, the situation is different. A malicious boundary node could return a fabricated response to a query call. This is another reason to use certified data for any query response that users depend on for security-critical decisions. ### canister_inspect_message -`canister_inspect_message` is a hook that runs on a **single replica** before an update call enters consensus. It can reject messages early to save cycles (for example, dropping calls from the anonymous principal before Candid decoding). But it is not a security boundary — a malicious boundary node can bypass it, and it is never called for inter-canister calls, query calls, or management canister calls. Always enforce access control inside each method. +`canister_inspect_message` is a hook that runs on a **single replica** before an update call enters consensus. It can reject messages early to save cycles (for example, dropping calls from the anonymous principal before Candid decoding). But it is not a security boundary: a malicious boundary node can bypass it, and it is never called for inter-canister calls, query calls, or management canister calls. Always enforce access control inside each method. -## Threat model for dapp developers +## Threat model for app developers The following threats are your responsibility to mitigate: ### Missing access control -Every update method is publicly callable. If you do not check the caller, anyone can invoke admin functions, drain funds, or corrupt state. The anonymous principal (`2vxsx-fae`) is a particularly common gap — it must be explicitly rejected in any authenticated endpoint, because otherwise it acts as a shared identity that anyone can use. +Every update method is publicly callable. If you do not check the caller, anyone can invoke admin functions, drain funds, or corrupt state. The anonymous principal (`2vxsx-fae`) is a particularly common gap: it must be explicitly rejected in any authenticated endpoint, because otherwise it acts as a shared identity that anyone can use. See [Access management](../guides/security/access-management.md) for implementation patterns. @@ -71,11 +71,11 @@ See [Inter-canister call safety](../guides/security/inter-canister-calls.md) for ### Callback traps and partial rollback -If a message execution traps, all its state changes are rolled back. But for inter-canister calls, the first execution (before `await`) and the callback (after `await`) are separate messages. A trap in the callback only rolls back the callback's changes — mutations from the first execution persist. This means cleanup logic (like releasing locks or completing state transitions) must go in a cleanup context (`finally` in Motoko, `Drop` in Rust), not in regular callback code. +If a message execution traps, all its state changes are rolled back. But for inter-canister calls, the first execution (before `await`) and the callback (after `await`) are separate messages. A trap in the callback only rolls back the callback's changes: mutations from the first execution persist. This means cleanup logic (like releasing locks or completing state transitions) must go in a cleanup context (`finally` in Motoko, `Drop` in Rust), not in regular callback code. ### Cycle drain attacks -Anyone on the internet can send update calls to your canister, and each call consumes cycles for Candid decoding and execution — even if the call is ultimately rejected by your code. An attacker can drain your cycles by flooding the canister with messages. Mitigations include using `canister_inspect_message` as a first-pass filter (cheap rejection before decoding), monitoring cycle balances, and setting a conservative freezing threshold. +Anyone on the internet can send update calls to your canister, and each call consumes cycles for Candid decoding and execution: even if the call is ultimately rejected by your code. An attacker can drain your cycles by flooding the canister with messages. Mitigations include using `canister_inspect_message` as a first-pass filter (cheap rejection before decoding), monitoring cycle balances, and setting a conservative freezing threshold. See [DoS prevention](../guides/security/dos-prevention.md) for mitigation strategies. @@ -97,11 +97,11 @@ Users have no way to verify that a canister's running code matches its published ## What's next -- [Access management](../guides/security/access-management.md) — caller checks, guards, and role-based access control -- [Upgrade safety](../guides/security/canister-upgrades.md) — safe upgrade patterns -- [Inter-canister call safety](../guides/security/inter-canister-calls.md) — async pitfalls and mitigations -- [DoS prevention](../guides/security/dos-prevention.md) — cycle drain protection -- [Data integrity](../guides/security/data-integrity.md) — input validation and storage safety -- [Response certification](../guides/frontends/certification.md) — certified variables for query responses +- [Access management](../guides/security/access-management.md): caller checks, guards, and role-based access control +- [Upgrade safety](../guides/security/canister-upgrades.md): safe upgrade patterns +- [Inter-canister call safety](../guides/security/inter-canister-calls.md): async pitfalls and mitigations +- [DoS prevention](../guides/security/dos-prevention.md): cycle drain protection +- [Data integrity](../guides/security/data-integrity.md): input validation and storage safety +- [Response certification](../guides/frontends/certification.md): certified variables for query responses diff --git a/docs/concepts/timers.md b/docs/concepts/timers.md index a6b662db..2afd3f7c 100644 --- a/docs/concepts/timers.md +++ b/docs/concepts/timers.md @@ -5,11 +5,11 @@ sidebar: order: 6 --- -Canisters on the Internet Computer can schedule work to run automatically — after a delay or on a repeating interval — without any external trigger. This capability is built into the protocol itself, not bolted on with an offchain scheduler. +Canisters on the Internet Computer can schedule work to run automatically (after a delay or on a repeating interval) without any external trigger. This capability is built into the protocol itself, not bolted on with an offchain scheduler. ## The global timer -At the protocol level, each canister has a single **global timer**: a nanosecond timestamp stored alongside the canister's state. When the IC's execution environment reaches that timestamp, it delivers a `canister_global_timer` message to the canister. The canister handles this message the same way it handles any update call — the message is queued, subject to instruction limits, and executed on a single thread. +At the protocol level, each canister has a single **global timer**: a nanosecond timestamp stored alongside the canister's state. When the IC's execution environment reaches that timestamp, it delivers a `canister_global_timer` message to the canister. The canister handles this message the same way it handles any update call. The message is queued, subject to instruction limits, and executed on a single thread. Setting the timer is done through the `ic0.global_timer_set()` system API call, which takes an absolute timestamp in nanoseconds since the Unix epoch. This is the only mechanism the protocol provides directly. It is intentionally minimal: one timer, one callback, absolute time only. @@ -19,8 +19,8 @@ The IC interface specification defines this behavior in the [timer section](../r Most developers do not use the raw system API. Instead, they use the CDK timers libraries: -- **Rust:** [`ic-cdk-timers`](https://docs.rs/ic-cdk-timers/latest/ic_cdk_timers/) — provides `set_timer`, `set_timer_interval`, `set_timer_interval_serial`, and `clear_timer` -- **Motoko:** [`mo:core/Timer`](https://mops.one/core/docs/Timer) — provides `Timer.setTimer`, `Timer.recurringTimer`, and `Timer.cancelTimer` +- **Rust:** [`ic-cdk-timers`](https://docs.rs/ic-cdk-timers/latest/ic_cdk_timers/): provides `set_timer`, `set_timer_interval`, `set_timer_interval_serial`, and `clear_timer` +- **Motoko:** [`mo:core/Timer`](https://mops.one/core/docs/Timer): provides `Timer.setTimer`, `Timer.recurringTimer`, and `Timer.cancelTimer` These libraries build **multiple and periodic timers** on top of the single protocol global timer by: @@ -29,7 +29,7 @@ These libraries build **multiple and periodic timers** on top of the single prot 3. Implementing `canister_global_timer` to run each expired task and reschedule recurring tasks 4. Resetting the global timer to the next upcoming task after each execution -Each task fires as a **self-canister call** — the library invokes the canister itself to execute the task. This isolates tasks from each other and from the scheduling logic. Normal inter-canister call costs apply to each invocation. +Each task fires as a **self-canister call**: the library invokes the canister itself to execute the task. This isolates tasks from each other and from the scheduling logic. Normal inter-canister call costs apply to each invocation. ## One-shot vs. recurring timers @@ -37,12 +37,12 @@ There are two timer variants: **One-shot timers** fire once after a specified delay. The timer is deactivated after it fires. To repeat the work, you register another one-shot timer, or use a recurring timer instead. -**Recurring timers** fire repeatedly at a fixed interval. The library reschedules them when the self-call is dispatched — the next interval is measured from the originally-scheduled fire time, not from when the callback finishes. This means a 5-second recurring timer with a 2-second callback fires at 5s, 10s, 15s rather than 5s, 12s, 19s. A recurring timer keeps running until you explicitly cancel it or the canister is upgraded. +**Recurring timers** fire repeatedly at a fixed interval. The library reschedules them when the self-call is dispatched. The next interval is measured from the originally-scheduled fire time, not from when the callback finishes. This means a 5-second recurring timer with a 2-second callback fires at 5s, 10s, 15s rather than 5s, 12s, 19s. A recurring timer keeps running until you explicitly cancel it or the canister is upgraded. The Rust CDK offers two recurring timer variants: -- **`set_timer_interval`** — allows up to 5 concurrent invocations. If a new invocation is due while previous ones are still running (up to that limit), the new one runs alongside them. -- **`set_timer_interval_serial`** — enforces strict serial execution. If the previous invocation is still running when the next one is due, the new invocation is **silently skipped** (not delayed or queued). The next interval is measured from the originally-scheduled fire time, preserving the cadence. +- **`set_timer_interval`**: allows up to 5 concurrent invocations. If a new invocation is due while previous ones are still running (up to that limit), the new one runs alongside them. +- **`set_timer_interval_serial`**: enforces strict serial execution. If the previous invocation is still running when the next one is due, the new invocation is **silently skipped** (not delayed or queued). The next interval is measured from the originally-scheduled fire time, preserving the cadence. Use `set_timer_interval_serial` when your callback must not run concurrently with itself, and design it to be idempotent in case occasional invocations are skipped. @@ -64,22 +64,22 @@ For recurring interval tasks, treat the interval as an approximate target, not a **Timers are not persisted across canister upgrades.** The CDK timers libraries store the task queue in the canister's heap memory. When a canister is upgraded, the Wasm state is replaced and the timer queue is cleared. All pending timers are silently dropped. -If your canister needs timers to resume after an upgrade, you must re-register them explicitly — typically in the `postupgrade` hook (Motoko) or the `#[post_upgrade]` function (Rust). Read any needed configuration from stable memory or stable variables, then call the same registration logic as on initial install. +If your canister needs timers to resume after an upgrade, you must re-register them explicitly: typically in the `postupgrade` hook (Motoko) or the `#[post_upgrade]` function (Rust). Read any needed configuration from stable memory or stable variables, then call the same registration logic as on initial install. This means timer-dependent workflows must be designed with upgrade events in mind. A canister that runs an auction with a deadline stored in a timer must persist that deadline in stable storage and restore the timer on upgrade, or the deadline will be lost. ## Timers vs. heartbeats -Before timers, canisters could use **heartbeats** for periodic execution. A canister that exports `canister_heartbeat` receives a callback at approximately every subnet finalization round — roughly once per second. Heartbeats are still supported for backward compatibility, but have significant drawbacks compared to timers: +Before timers, canisters could use **heartbeats** for periodic execution. A canister that exports `canister_heartbeat` receives a callback at approximately every subnet finalization round: roughly once per second. Heartbeats are still supported for backward compatibility, but have significant drawbacks compared to timers: | | Timers | Heartbeats | |---|---|---| | Interval | Configurable, any duration | Fixed (~1s block rate) | -| Multiple tasks | Yes — a single canister can have many timers | No — one heartbeat per round | -| Cost when idle | Zero — timers only fire when needed | Always burns cycles, even if no work is done | +| Multiple tasks | Yes: a single canister can have many timers | No: one heartbeat per round | +| Cost when idle | Zero: timers only fire when needed | Always burns cycles, even if no work is done | | Disabling | Cancel the timer ID | Must upgrade the canister to remove the export | -**Use timers for all new canisters.** Heartbeats are only appropriate in the rare case where you need to respond to every single consensus round unconditionally — for example, sampling some state on every block regardless of whether there is work to do. Both timers and heartbeats operate at approximately the block rate (~1 second), so heartbeats do not provide finer time resolution than timers. +**Use timers for all new canisters.** Heartbeats are only appropriate in the rare case where you need to respond to every single consensus round unconditionally: for example, sampling some state on every block regardless of whether there is work to do. Both timers and heartbeats operate at approximately the block rate (~1 second), so heartbeats do not provide finer time resolution than timers. ## Security considerations @@ -87,12 +87,12 @@ Timers introduce two security-relevant properties developers should understand: **Vanishing on upgrades.** Any access control or security invariant implemented using timers will disappear silently during an upgrade. Do not rely on a timer to enforce time-bounded access, revoke permissions, or expire secrets. Use stable storage for security-critical state. -**Reentrancy.** Because each timer task executes as an inter-canister call, the canister can be re-entered at any await point — a new message, another timer callback, or a heartbeat can begin before the current timer handler finishes. If a timer handler awaits an inter-canister call and then reads or writes shared state, that state may have changed by the time execution resumes. Use `set_timer_interval_serial` (Rust) to enforce serial execution of recurring timers (at the cost of silently skipping invocations when the previous one is still running), and audit any state mutations that straddle await points. +**Reentrancy.** Because each timer task executes as an inter-canister call, the canister can be re-entered at any await point: a new message, another timer callback, or a heartbeat can begin before the current timer handler finishes. If a timer handler awaits an inter-canister call and then reads or writes shared state, that state may have changed by the time execution resumes. Use `set_timer_interval_serial` (Rust) to enforce serial execution of recurring timers (at the cost of silently skipping invocations when the previous one is still running), and audit any state mutations that straddle await points. ## Next steps -- [Timers guide](../guides/backends/timers.md) — practical API usage for Rust and Motoko -- [Canisters](canisters.md) — the canister execution model -- [IC interface specification](../reference/ic-interface-spec.md) — the protocol-level timer definition +- [Timers guide](../guides/backends/timers.md): practical API usage for Rust and Motoko +- [Canisters](canisters.md): the canister execution model +- [IC interface specification](../reference/ic-interface-spec.md): the protocol-level timer definition diff --git a/docs/getting-started/choose-your-path.md b/docs/getting-started/choose-your-path.md index 5660d732..34522d54 100644 --- a/docs/getting-started/choose-your-path.md +++ b/docs/getting-started/choose-your-path.md @@ -36,7 +36,7 @@ This is where most developers start after the quickstart. The backend guides cov **You want to:** Build a web application with a frontend that talks to your canister. -ICP can serve web assets directly from the blockchain through an asset canister, giving you a fully decentralized application with no external hosting required. +ICP can serve web assets directly from canisters, giving you a tamperproof application with no external hosting required. **Start with:** [Asset canister](../guides/frontends/asset-canister.md) -- deploy a frontend alongside your backend canister. @@ -91,7 +91,7 @@ The Service Nervous System (SNS) lets you tokenize your application and create a **You want to:** Use AI coding agents to build on ICP. -ICP has a set of [ICP skills](https://skills.internetcomputer.org) — structured knowledge files that AI agents can load to write canister code, debug deployments, and navigate the platform. If you work with tools like Claude Code, Cursor, or Copilot, ICP skills give them the context they need. +ICP has a set of [ICP skills](https://skills.internetcomputer.org): structured knowledge files that AI agents can load to write canister code, debug deployments, and navigate the platform. If you work with tools like Claude Code, Cursor, or Copilot, ICP skills give them the context they need. **Learn more:** [AI coding agents](../guides/tools/ai-coding-agents.md) diff --git a/docs/getting-started/project-structure.mdx b/docs/getting-started/project-structure.mdx index ad6322eb..d8a5a65d 100644 --- a/docs/getting-started/project-structure.mdx +++ b/docs/getting-started/project-structure.mdx @@ -21,8 +21,8 @@ A typical project generated by `icp new my-project` looks like this after the fi my-project/ ├── icp.yaml # Project configuration (the project root) ├── .icp/ # Generated files (canister IDs, build artifacts) ← created by icp deploy -│ ├── cache/ # Ephemeral — safe to delete, rebuilt automatically -│ └── data/ # Persistent — mainnet canister ID mappings +│ ├── cache/ # Ephemeral: safe to delete, rebuilt automatically +│ └── data/ # Persistent: mainnet canister ID mappings ├── backend/ │ ├── canister.yaml # Canister-specific configuration │ ├── Cargo.toml # Rust package manifest @@ -174,7 +174,7 @@ The hello-world template's `.gitignore` already excludes `.icp/cache/` and track ## Canister discovery -Canister IDs are assigned at deployment time and differ between environments. Hardcoding them creates problems when switching between local development and mainnet. `icp` solves this with automatic canister ID injection — triggered by `icp deploy`. +Canister IDs are assigned at deployment time and differ between environments. Hardcoding them creates problems when switching between local development and mainnet. `icp` solves this with automatic canister ID injection, triggered by `icp deploy`. During deployment: diff --git a/docs/getting-started/quickstart.md b/docs/getting-started/quickstart.md index f9c7e340..11b3192d 100644 --- a/docs/getting-started/quickstart.md +++ b/docs/getting-started/quickstart.md @@ -21,8 +21,8 @@ npm install -g @icp-sdk/icp-cli @icp-sdk/ic-wasm This installs: -- **icp-cli** — builds and deploys canisters on the Internet Computer -- **ic-wasm** — optimizes WebAssembly modules for onchain deployment +- **icp-cli**: builds and deploys canisters on the Internet Computer +- **ic-wasm**: optimizes WebAssembly modules for onchain deployment For Motoko projects, also install the Motoko package manager: @@ -48,7 +48,7 @@ icp new my-project --subfolder hello-world \ --define network_type=Default && cd my-project ``` -This creates a full-stack project from the `hello-world` template with a Motoko backend and React frontend. The `--define` flags skip interactive prompts — without them, `icp new` asks you to choose a template, language, and network type. +This creates a full-stack project from the `hello-world` template with a Motoko backend and React frontend. The `--define` flags skip interactive prompts. Without them, `icp new` asks you to choose a template, language, and network type. > **Prefer Rust?** Use `--define backend_type=rust` instead. You'll need Rust installed with the WASM target: `rustup target add wasm32-unknown-unknown`. @@ -58,7 +58,7 @@ Your new project contains: | Path | Description | |------|-------------| -| `icp.yaml` | Project configuration — lists your canisters | +| `icp.yaml` | Project configuration: lists your canisters | | `backend/` | Motoko source code with a `greet` function | | `frontend/` | React app that calls the backend | @@ -68,7 +68,7 @@ Your new project contains: icp network start -d ``` -This starts a local Internet Computer replica in the background. The local network comes pre-funded — you can deploy immediately without setting up a wallet or acquiring cycles. +This starts a local Internet Computer replica in the background. The local network comes pre-funded. You can deploy immediately without setting up a wallet or acquiring cycles. ## Deploy @@ -84,7 +84,7 @@ Deployed canisters: frontend: http://...localhost:8000 ``` -Open the **frontend URL** in your browser to see your app running. The **Candid UI URL** opens a web interface where you can test backend methods directly — try calling `greet` with your name. +Open the **frontend URL** in your browser to see your app running. The **Candid UI URL** opens a web interface where you can test backend methods directly. Try calling `greet` with your name. ## Call your canister @@ -96,7 +96,7 @@ icp canister call backend greet '("World")' Output: `("Hello, World!")` -The argument `'("World")'` uses [Candid](../reference/candid-spec.md) syntax — the interface description language for the Internet Computer. The outer single quotes are shell quoting; the Candid value itself is `("World")`. You can also omit the argument and `icp canister call` will prompt you interactively. +The argument `'("World")'` uses [Candid](../reference/candid-spec.md) syntax (the interface description language for the Internet Computer). The outer single quotes are shell quoting; the Candid value itself is `("World")`. You can also omit the argument and `icp canister call` will prompt you interactively. ## Stop the network @@ -108,11 +108,11 @@ icp network stop ## What's happening under the hood -The hello-world template deploys two [canisters](../concepts/canisters.md) — smart contracts that run on the Internet Computer: +The hello-world template deploys two [canisters](../concepts/canisters.md) that run on the Internet Computer: -1. **Backend canister** — Your Motoko code compiled to WebAssembly. It exposes a `greet` function through a [Candid](../reference/candid-spec.md) interface, making it callable from any client. +1. **Backend canister**: Your Motoko code compiled to WebAssembly. It exposes a `greet` function through a [Candid](../reference/candid-spec.md) interface, making it callable from any client. -2. **Frontend canister** — An asset canister that serves your React app. It automatically provides the backend's canister ID to your frontend code via a cookie, so the two canisters can communicate without manual configuration. +2. **Frontend canister**: An asset canister that serves your React app. It automatically provides the backend's canister ID to your frontend code via a cookie, so the two canisters can communicate without manual configuration. The `icp.yaml` file ties everything together: @@ -122,14 +122,14 @@ canisters: - frontend ``` -Each canister name maps to a directory containing its own `canister.yaml` with build configuration (recipe, source files, etc.). icp-cli handles the rest — compiling, optimizing, deploying, and wiring up canister-to-canister discovery. +Each canister name maps to a directory containing its own `canister.yaml` with build configuration (recipe, source files, etc.). icp-cli handles the rest: compiling, optimizing, deploying, and wiring up canister-to-canister discovery. ## Next steps -- [Project structure](project-structure.md) — Understand how icp-cli projects are organized -- [Choose your path](choose-your-path.md) — pick a development path based on what you want to build -- [Concepts: Canisters](../concepts/canisters.md) — Learn what canisters are and how they work -- [AI coding agents](../guides/tools/ai-coding-agents.md) — Use ICP skills to build on the Internet Computer with AI -- [icp-cli documentation](https://cli.internetcomputer.org/) — Full CLI reference and guides +- [Project structure](project-structure.md): understand how icp-cli projects are organized +- [Choose your path](choose-your-path.md): pick a development path based on what you want to build +- [Concepts: Canisters](../concepts/canisters.md): learn what canisters are and how they work +- [AI coding agents](../guides/tools/ai-coding-agents.md): use ICP skills to build on the Internet Computer with AI +- [icp-cli documentation](https://cli.internetcomputer.org/): full CLI reference and guides diff --git a/docs/guides/authentication/internet-identity.mdx b/docs/guides/authentication/internet-identity.mdx index b492d54d..10cfad99 100644 --- a/docs/guides/authentication/internet-identity.mdx +++ b/docs/guides/authentication/internet-identity.mdx @@ -17,7 +17,7 @@ When a user authenticates through Internet Identity, the following happens: 1. Your frontend opens an II popup window. 2. The user authenticates with a passkey or OpenID provider. -3. II creates a **delegation identity** — a temporary key pair that can sign messages on behalf of the user's master key. +3. II creates a **delegation identity**: a temporary key pair that can sign messages on behalf of the user's master key. 4. Your frontend receives this delegation and uses it to sign canister calls. 5. The backend canister sees the user's **principal** (derived from the delegation chain) as `msg.caller`. @@ -58,7 +58,7 @@ import { HttpAgent, Actor } from "@icp-sdk/core/agent"; import { safeGetCanisterEnv } from "@icp-sdk/core/agent/canister-env"; // Read the ic_env cookie set by the asset canister or Vite dev server. -// Contains IC_ROOT_KEY and canister IDs — works in both local and production without +// Contains IC_ROOT_KEY and canister IDs: works in both local and production without // environment branching. Available in browser contexts only; see note below for Node.js. const canisterEnv = safeGetCanisterEnv(); @@ -89,7 +89,7 @@ const authClient = await AuthClient.create(); const isAuthenticated = await authClient.isAuthenticated(); if (isAuthenticated) { const identity = authClient.getIdentity(); - // Restore session — create agent and actor with this identity + // Restore session: create agent and actor with this identity } // Login @@ -135,12 +135,12 @@ async function createAuthenticatedActor(identity, canisterId, idlFactory) { ``` :::note[Node.js environments] -`safeGetCanisterEnv()` reads the `ic_env` cookie set by the asset canister or Vite dev server — it only works in browser contexts. For Node.js scripts or tests connecting to a **local** replica, create the agent normally and call `await agent.fetchRootKey()` explicitly after creation. Never call `fetchRootKey()` against a mainnet endpoint — on mainnet the root key is pre-trusted, and fetching it at runtime exposes a man-in-the-middle risk. +`safeGetCanisterEnv()` reads the `ic_env` cookie set by the asset canister or Vite dev server (it only works in browser contexts. For Node.js scripts or tests connecting to a **local** replica, create the agent normally and call `await agent.fetchRootKey()` explicitly after creation. Never call `fetchRootKey()` against a mainnet endpoint) on mainnet the root key is pre-trusted, and fetching it at runtime exposes a man-in-the-middle risk. ::: ## Backend authentication -Your backend canister receives the caller's principal automatically through the IC protocol. You do not pass the principal as a function argument — use `msg.caller` (Motoko) or `ic_cdk::api::msg_caller()` (Rust) to read it. +Your backend canister receives the caller's principal automatically through the IC protocol. You do not pass the principal as a function argument: use `msg.caller` (Motoko) or `ic_cdk::api::msg_caller()` (Rust) to read it. ### Reject anonymous callers @@ -246,7 +246,7 @@ icp canister call backend protectedAction --identity anonymous # Expected: Error containing "Anonymous principal not allowed" ``` -For mainnet deployment, Internet Identity is already running — backend canister `rdmx6-jaaaa-aaaaa-aaadq-cai` and frontend canister `uqzsh-gqaaa-aaaaq-qaada-cai` (served at `https://id.ai`). Both IDs are identical on local replicas when `ii: true` is configured. Deploy only your own canisters: +For mainnet deployment, Internet Identity is already running: backend canister `rdmx6-jaaaa-aaaaa-aaadq-cai` and frontend canister `uqzsh-gqaaa-aaaaq-qaada-cai` (served at `https://id.ai`). Both IDs are identical on local replicas when `ii: true` is configured. Deploy only your own canisters: ```bash icp deploy -e ic @@ -257,7 +257,7 @@ icp deploy -e ic By default, each frontend origin produces a different user principal. If you serve your app from multiple domains (for example, migrating from `.icp0.io` to a custom domain), users would get different principals on each domain. :::note -II now automatically handles the `icp0.io` vs `ic0.app` domain difference — you do **not** need to use `derivationOrigin` or `ii-alternative-origins` for that case. Use alternative origins only when you have two genuinely distinct custom domains that should share the same user principal. +II now automatically handles the `icp0.io` vs `ic0.app` domain difference: you do **not** need to use `derivationOrigin` or `ii-alternative-origins` for that case. Use alternative origins only when you have two genuinely distinct custom domains that should share the same user principal. ::: To keep principals consistent across your own custom domains, configure **alternative origins**: @@ -301,19 +301,19 @@ To keep principals consistent across your own custom domains, configure **altern }); ``` - The primary origin (A) does not need `derivationOrigin` — it is only required on alternative origins. + The primary origin (A) does not need `derivationOrigin`: it is only required on alternative origins. For full details, see the [Internet Identity specification](../../reference/internet-identity-spec.md). ## Common mistakes -- **Using the wrong II URL per environment** — local development must point to `http://id.ai.localhost:8000`, mainnet to `https://id.ai`. Use the `getIdentityProviderUrl` helper (shown above) to switch based on hostname. -- **`fetch` "Illegal invocation" in bundled builds** — always pass `fetch: window.fetch.bind(window)` to `HttpAgent.create()`. Without explicit binding, bundlers (Vite, webpack) extract `fetch` from `window` and call it without the correct `this` context. -- **Missing `onSuccess`/`onError` callbacks** — `authClient.login()` requires both. Without them, login failures are silently swallowed. -- **Delegation expiry too long** — the maximum is 30 days. Values above this are silently clamped, causing confusing session behavior. Use 8 hours for typical apps. -- **Passing principal as a string argument** — the backend reads the caller automatically from the IC protocol. Do not pass it as a function parameter. -- **Using `shouldFetchRootKey: true` in browser code** — pass `rootKey: canisterEnv?.IC_ROOT_KEY` from `safeGetCanisterEnv()` instead. `shouldFetchRootKey: true` fetches the root key from the replica at runtime, which lets a man-in-the-middle substitute a fake key on mainnet. For Node.js scripts targeting a local replica only, `await agent.fetchRootKey()` is acceptable — but never on mainnet. -- **Creating multiple `AuthClient` instances** — create one on page load and reuse it. Multiple instances cause race conditions with session storage. +- **Using the wrong II URL per environment**: local development must point to `http://id.ai.localhost:8000`, mainnet to `https://id.ai`. Use the `getIdentityProviderUrl` helper (shown above) to switch based on hostname. +- **`fetch` "Illegal invocation" in bundled builds**: always pass `fetch: window.fetch.bind(window)` to `HttpAgent.create()`. Without explicit binding, bundlers (Vite, webpack) extract `fetch` from `window` and call it without the correct `this` context. +- **Missing `onSuccess`/`onError` callbacks**: `authClient.login()` requires both. Without them, login failures are silently swallowed. +- **Delegation expiry too long**: the maximum is 30 days. Values above this are silently clamped, causing confusing session behavior. Use 8 hours for typical apps. +- **Passing principal as a string argument**: the backend reads the caller automatically from the IC protocol. Do not pass it as a function parameter. +- **Using `shouldFetchRootKey: true` in browser code**: pass `rootKey: canisterEnv?.IC_ROOT_KEY` from `safeGetCanisterEnv()` instead. `shouldFetchRootKey: true` fetches the root key from the replica at runtime, which lets a man-in-the-middle substitute a fake key on mainnet. For Node.js scripts targeting a local replica only, `await agent.fetchRootKey()` is acceptable: but never on mainnet. +- **Creating multiple `AuthClient` instances**: create one on page load and reuse it. Multiple instances cause race conditions with session storage. ## Next steps @@ -323,6 +323,6 @@ For full details, see the [Internet Identity specification](../../reference/inte - [Security best practices](../../concepts/security.md) for identity and trust fundamentals - [AuthClient API reference](https://js.icp.build) for the full `@icp-sdk/auth` API -{/* TODO: Add Unity native app integration via deep links — see portal native-apps/unity_ii_* */} +{/* TODO: Add Unity native app integration via deep links: see portal native-apps/unity_ii_* */} -{/* Upstream: informed by dfinity/portal — docs/building-apps/authentication/overview.mdx, docs/building-apps/authentication/integrate-internet-identity.mdx, docs/building-apps/authentication/alternative-origins.mdx; dfinity/icskills — skills/internet-identity/SKILL.md */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/authentication/overview.mdx, docs/building-apps/authentication/integrate-internet-identity.mdx, docs/building-apps/authentication/alternative-origins.mdx; dfinity/icskills) skills/internet-identity/SKILL.md */} diff --git a/docs/guides/authentication/verifiable-credentials.md b/docs/guides/authentication/verifiable-credentials.md index 328f9bb0..8fc7a591 100644 --- a/docs/guides/authentication/verifiable-credentials.md +++ b/docs/guides/authentication/verifiable-credentials.md @@ -1,13 +1,13 @@ --- title: "Verifiable Credentials" -description: "Issue and verify credentials on ICP using Internet Identity and the VC protocol — covers issuer and relying party integration patterns." +description: "Issue and verify credentials on ICP using Internet Identity and the VC protocol: covers issuer and relying party integration patterns." sidebar: order: 2 --- -A verifiable credential (VC) is a cryptographically signed digital attestation about a user — for example, that they are over 18, passed KYC, or are a member of an organization. On ICP, verifiable credentials are issued by canister-based issuers, mediated by Internet Identity, and consumed by relying party applications. +A verifiable credential (VC) is a cryptographically signed digital attestation about a user: for example, that they are over 18, passed KYC, or are a member of an organization. On ICP, verifiable credentials are issued by canister-based issuers, mediated by Internet Identity, and consumed by relying party applications. -This guide covers the VC architecture on ICP, how the protocol works, and how to implement both sides of the flow — issuer and relying party. +This guide covers the VC architecture on ICP, how the protocol works, and how to implement both sides of the flow: issuer and relying party. **Choose your path:** If you are building a service that attests claims about users (age verification, KYC, membership), go to [Implementing an issuer](#implementing-an-issuer). If you are building an app that requests credentials from an issuer to gate access, go to [Implementing a relying party](#implementing-a-relying-party). @@ -15,12 +15,12 @@ This guide covers the VC architecture on ICP, how the protocol works, and how to The VC protocol on ICP involves four actors: -- **User** — the person who holds the credential and consents to share it. -- **Issuer** — a canister (or service) that verifies claims about a user and issues credentials. Examples: an age verification service, an employer, a KYC provider. -- **Relying party** — a canister or application that requests credentials from an issuer to gate access or provide personalized experiences. -- **Identity provider** — Internet Identity, which acts as the communication bridge between the relying party and the issuer. Critically, II creates a temporary `id_alias` identifier so the issuer and relying party never learn each other's user principal — preserving unlinkability. +- **User**: the person who holds the credential and consents to share it. +- **Issuer**: a canister (or service) that verifies claims about a user and issues credentials. Examples: an age verification service, an employer, a KYC provider. +- **Relying party**: a canister or application that requests credentials from an issuer to gate access or provide personalized experiences. +- **Identity provider**: Internet Identity, which acts as the communication bridge between the relying party and the issuer. Critically, II creates a temporary `id_alias` identifier so the issuer and relying party never learn each other's user principal: preserving unlinkability. -The flow always runs through Internet Identity: the relying party requests a credential, II prompts the user for consent, II contacts the issuer, and the resulting signed credential is returned to the relying party. The issuer and relying party communicate only through II — they never exchange data directly. +The flow always runs through Internet Identity: the relying party requests a credential, II prompts the user for consent, II contacts the issuer, and the resulting signed credential is returned to the relying party. The issuer and relying party communicate only through II: they never exchange data directly. ## How the protocol works @@ -29,7 +29,7 @@ The flow always runs through Internet Identity: the relying party requests a cre 1. The user visits the relying party and triggers a credential request (for example, by trying to access a members-only feature). 2. The relying party opens an Internet Identity window at the `/vc-flow` path. 3. II shows the user a consent dialog that identifies the relying party, the issuer, and the requested credential type. -4. If the user approves, II creates an `id_alias` — an opaque temporary identifier unique to this RP/issuer pair. +4. If the user approves, II creates an `id_alias`: an opaque temporary identifier unique to this RP/issuer pair. 5. II calls the issuer's `prepare_credential` and `get_credential` endpoints. The issuer returns a signed JWT credential bound to the `id_alias`. 6. II returns a verifiable presentation (VP) to the relying party. The VP contains two nested JWTs: - An **id-alias credential** signed by II, proving that the relying party's user principal maps to the `id_alias`. @@ -53,7 +53,7 @@ The relying party then sends a `request_credential` message (see [Relying party] ## Implementing an issuer -An issuer is a canister that exposes four API endpoints. Internet Identity calls these endpoints on behalf of users during the VC flow. The issuer never opens connections itself — it responds to calls from II. +An issuer is a canister that exposes four API endpoints. Internet Identity calls these endpoints on behalf of users during the VC flow. The issuer never opens connections itself: it responds to calls from II. ### Issuer API endpoints @@ -97,7 +97,7 @@ Issues the signed credential. This endpoint: - Verifies that `prepared_context` is consistent with the earlier preparation step. - Returns the signed credential as a JWT. -The credential is signed using a [canister signature](../../reference/ic-interface-spec.md) — a signature produced by the canister's key, not an ECDSA or Ed25519 key. This means the canister must update `certified_data` in `prepare_credential` before the signature becomes available in `get_credential`. +The credential is signed using a [canister signature](../../reference/ic-interface-spec.md): a signature produced by the canister's key, not an ECDSA or Ed25519 key. This means the canister must update `certified_data` in `prepare_credential` before the signature becomes available in `get_credential`. ### Credential format convention @@ -132,7 +132,7 @@ The `credentialType` value is used as the key in `credentialSubject`, and the ar A compliant issuer for age verification would implement `prepare_credential` to check whether the user has a verified date of birth on record, and `get_credential` to return a signed JWT attesting `VerifiedAdult` with `minAge: 18`. -For complete Rust implementations of all four API endpoints, see the [vc-playground issuer example](https://github.com/dfinity/vc-playground/blob/main/issuer/src/main.rs). This is the primary reference implementation — the four endpoints above require careful handling of canister signatures and certified data, and the reference implementation shows the complete pattern including error handling and Candid interface definitions. +For complete Rust implementations of all four API endpoints, see the [vc-playground issuer example](https://github.com/dfinity/vc-playground/blob/main/issuer/src/main.rs). This is the primary reference implementation. The four endpoints above require careful handling of canister signatures and certified data, and the reference implementation shows the complete pattern including error handling and Candid interface definitions. @@ -146,14 +146,14 @@ A relying party requests credentials from issuers through Internet Identity. The ### Using the JavaScript SDK -The [@dfinity/verifiable-credentials](https://www.npmjs.com/package/@dfinity/verifiable-credentials) package handles the window messaging protocol for you. This is a dedicated VC package — it is separate from the `@icp-sdk/*` family used for general authentication. +The [@dfinity/verifiable-credentials](https://www.npmjs.com/package/@dfinity/verifiable-credentials) package handles the window messaging protocol for you. This is a dedicated VC package: it is separate from the `@icp-sdk/*` family used for general authentication. ```javascript import { requestVerifiablePresentation } from "@dfinity/verifiable-credentials/request-verifiable-presentation"; requestVerifiablePresentation({ onSuccess: async (verifiablePresentation) => { - // verifiablePresentation is a JWT string — validate it before trusting it + // verifiablePresentation is a JWT string: validate it before trusting it console.log("Received VP:", verifiablePresentation); }, onError(err) { @@ -184,13 +184,13 @@ The SDK: - Sends the `request_credential` JSON-RPC call. - Calls `onSuccess` with the VP JWT on success, or `onError` if the user cancels or an error occurs. -**Note:** `onSuccess` fires when the VP is received — it does not mean the credential is valid. You must verify the VP before acting on it. +**Note:** `onSuccess` fires when the VP is received: it does not mean the credential is valid. You must verify the VP before acting on it. ### Manual integration If you prefer to implement the window message protocol yourself, the three steps are: -**Step 1 — Open the II window** +**Step 1: Open the II window** Open a window to the identity provider's `/vc-flow` path: @@ -200,7 +200,7 @@ const iiWindow = window.open("https://id.ai/vc-flow"); Wait for the `vc-flow-ready` postMessage from II before sending a request. -**Step 2 — Send the credential request** +**Step 2: Send the credential request** Send a JSON-RPC `request_credential` message: @@ -236,7 +236,7 @@ Parameters: | `credentialSubject` | Yes | The user's principal at the relying party | | `derivationOrigin` | No | Alternative derivation origin for the RP's principal | -**Step 3 — Receive and handle the response** +**Step 3: Receive and handle the response** On success, II returns: @@ -289,8 +289,8 @@ The VP is a JWT with no signature in the outer layer. Decoded, it contains: The `verifiableCredential` array always contains exactly two JWTs in this order: -1. **id-alias credential** — signed by Internet Identity. Proves that the relying party's user principal maps to the `id_alias`. -2. **Issued credential** — signed by the issuer. The subject is the `id_alias`. +1. **id-alias credential**: signed by Internet Identity. Proves that the relying party's user principal maps to the `id_alias`. +2. **Issued credential**: signed by the issuer. The subject is the `id_alias`. **id-alias credential decoded:** @@ -354,7 +354,7 @@ From the **issued credential**: - The arguments in `vc.credentialSubject.` match what was requested. Cross-credential check: -- The `sub` of the issued credential matches `vc.credentialSubject.InternetIdentityIdAlias.hasIdAlias` from the id-alias credential. Note that this `sub` value uses the `did:` URI scheme (for example, `did:ic:...`) — it is not a bare principal text. Compare the full DID string, not just the principal portion. +- The `sub` of the issued credential matches `vc.credentialSubject.InternetIdentityIdAlias.hasIdAlias` from the id-alias credential. Note that this `sub` value uses the `did:` URI scheme (for example, `did:ic:...`): it is not a bare principal text. Compare the full DID string, not just the principal portion. This chain confirms that the issuer attested the claim for the same `id_alias` that II linked to your user's principal. @@ -389,9 +389,9 @@ The II frontend will be available at `http://id.ai.localhost:8000`. Point your ` The VC protocol provides the following privacy guarantees: -- **Unlinkability** — The issuer learns the user's `id_alias`, not their principal at the relying party. The relying party learns the `id_alias`, not the user's principal at the issuer. Neither party can correlate the user's identity across both services. -- **User consent** — No credential is issued without the user explicitly approving the consent dialog shown by Internet Identity. -- **Opaque errors** — Error responses from II do not reveal why a credential request failed, preventing information leakage about the user's status at the issuer. +- **Unlinkability**: The issuer learns the user's `id_alias`, not their principal at the relying party. The relying party learns the `id_alias`, not the user's principal at the issuer. Neither party can correlate the user's identity across both services. +- **User consent**: No credential is issued without the user explicitly approving the consent dialog shown by Internet Identity. +- **Opaque errors**: Error responses from II do not reveal why a credential request failed, preventing information leakage about the user's status at the issuer. ## Next steps diff --git a/docs/guides/backends/certified-variables.md b/docs/guides/backends/certified-variables.md index 02bb1f45..c1d489dd 100644 --- a/docs/guides/backends/certified-variables.md +++ b/docs/guides/backends/certified-variables.md @@ -13,11 +13,11 @@ For a conceptual overview of why query integrity matters, see [Security concepts The mechanism relies on three coordinated steps: -1. **Update call** — the canister modifies data, builds or updates a Merkle tree over that data, and calls `certified_data_set` (Rust) or `CertifiedData.set` (Motoko) with the tree's 32-byte root hash. The subnet includes this hash in its certified state tree each consensus round. +1. **Update call**: the canister modifies data, builds or updates a Merkle tree over that data, and calls `certified_data_set` (Rust) or `CertifiedData.set` (Motoko) with the tree's 32-byte root hash. The subnet includes this hash in its certified state tree each consensus round. -2. **Query call** — the canister calls `data_certificate()` / `CertifiedData.getCertificate()` to retrieve the subnet BLS certificate, builds a witness (Merkle proof) for the requested key, and returns `(data, certificate, witness)` to the caller. +2. **Query call**: the canister calls `data_certificate()` / `CertifiedData.getCertificate()` to retrieve the subnet BLS certificate, builds a witness (Merkle proof) for the requested key, and returns `(data, certificate, witness)` to the caller. -3. **Client verification** — the client verifies the certificate signature against the IC root public key, extracts the root hash from the certificate's state tree, then confirms the witness proves the data is included under that root hash. +3. **Client verification**: the client verifies the certificate signature against the IC root public key, extracts the root hash from the certificate's state tree, then confirms the witness proves the data is included under that root hash. ``` UPDATE CALL (goes through consensus): @@ -41,7 +41,7 @@ CLIENT: - `certified_data_set` accepts **at most 32 bytes**. You cannot certify arbitrary data directly. Build a Merkle tree over your data and certify only the 32-byte root hash. The tree provides proofs for individual values. - `certified_data_set` **must be called in update calls only**. Calling it in a query call traps. -- `data_certificate()` returns `None` in update calls — certificates are only available during query calls. +- `data_certificate()` returns `None` in update calls: certificates are only available during query calls. - After a canister upgrade, the certified data is cleared. Re-establish certification in both `#[init]` and `#[post_upgrade]` (Rust), or in `system func postupgrade` (Motoko). ## Rust implementation @@ -86,7 +86,7 @@ fn init() { #[post_upgrade] fn post_upgrade() { - // Certified data is cleared on upgrade — must be re-established. + // Certified data is cleared on upgrade: must be re-established. // Assumes tree data has already been loaded from stable memory. update_certified_data(); } @@ -209,7 +209,7 @@ import Text "mo:core/Text"; persistent actor { - // CertTree.Store is stable — persists across upgrades. + // CertTree.Store is stable: persists across upgrades. let certStore : CertTree.Store = CertTree.newStore(); let ct = CertTree.Ops(certStore); @@ -254,7 +254,7 @@ persistent actor { The client must verify the certificate before trusting the data. The `@dfinity/certificate-verification` package handles the full verification flow: 1. Verify the certificate BLS signature against the IC root public key -2. Check certificate freshness — the `/time` field must be within an acceptable window (recommended: 5 minutes) +2. Check certificate freshness. The `/time` field must be within an acceptable window (recommended: 5 minutes) 3. CBOR-decode the witness into a hash tree 4. Reconstruct the witness root hash 5. Compare it with the `certified_data` path in the certificate @@ -304,7 +304,7 @@ async function getVerifiedValue( // Confirm the canister-returned value matches what the witness proves. if (response.value !== null && response.value !== verifiedValue) { throw new Error( - "Response value does not match witness — canister returned tampered data" + "Response value does not match witness: canister returned tampered data" ); } @@ -320,7 +320,7 @@ The JS SDK documentation covers the full `verifyCertification` API at [js.icp.bu # Deploy the canister icp deploy backend -# Set a certified value (update call — goes through consensus) +# Set a certified value (update call: goes through consensus) icp canister call backend set '("greeting", "hello world")' # Query the certified value @@ -339,17 +339,17 @@ icp canister call backend get '("key")' ## Common mistakes -**Calling `certified_data_set` in a query call** — this traps immediately. The pattern is: set the hash during update calls, retrieve the certificate during query calls. +**Calling `certified_data_set` in a query call**: this traps immediately. The pattern is: set the hash during update calls, retrieve the certificate during query calls. -**Not updating the hash after data changes** — if you modify the tree but forget to call `certified_data_set`, query responses will fail client verification because the certificate proves a stale hash. +**Not updating the hash after data changes**: if you modify the tree but forget to call `certified_data_set`, query responses will fail client verification because the certificate proves a stale hash. -**Forgetting to re-certify after upgrade** — certified data is cleared on upgrade. Both `#[init]` and `#[post_upgrade]` (Rust) or `system func postupgrade` (Motoko) must call the certification function. +**Forgetting to re-certify after upgrade**: certified data is cleared on upgrade. Both `#[init]` and `#[post_upgrade]` (Rust) or `system func postupgrade` (Motoko) must call the certification function. -**Building the witness for the wrong key** — the Merkle proof must correspond to the exact key being queried. A witness for `users/alice` will not verify `users/bob`. +**Building the witness for the wrong key**: the Merkle proof must correspond to the exact key being queried. A witness for `users/alice` will not verify `users/bob`. -**Skipping certificate freshness checks on the client** — the certificate's `/time` field contains the subnet timestamp. Without a freshness check, an attacker could replay a stale certificate with outdated data. Always check that `certificate_time` is within an acceptable delta (5 minutes is recommended). +**Skipping certificate freshness checks on the client**: the certificate's `/time` field contains the subnet timestamp. Without a freshness check, an attacker could replay a stale certificate with outdated data. Always check that `certificate_time` is within an acceptable delta (5 minutes is recommended). -**Assuming `data_certificate()` is available in update calls** — it returns `None` / `null` in update calls. Only query calls can access the certificate. +**Assuming `data_certificate()` is available in update calls**: it returns `None` / `null` in update calls. Only query calls can access the certificate. ## HTTP asset certification @@ -359,8 +359,8 @@ See [Frontend certification](../../guides/frontends/certification.md) for the as ## Next steps -- [Security concepts](../../concepts/security.md) — why query integrity matters and when to use certified variables vs replicated queries -- [Frontend certification](../../guides/frontends/certification.md) — HTTP asset certification for the asset canister -- [IC Interface Specification](../../reference/ic-interface-spec.md) — the certified data system API and certificate format +- [Security concepts](../../concepts/security.md): why query integrity matters and when to use certified variables vs replicated queries +- [Frontend certification](../../guides/frontends/certification.md): HTTP asset certification for the asset canister +- [IC Interface Specification](../../reference/ic-interface-spec.md): the certified data system API and certificate format diff --git a/docs/guides/backends/data-persistence.mdx b/docs/guides/backends/data-persistence.mdx index f3680254..c3547c84 100644 --- a/docs/guides/backends/data-persistence.mdx +++ b/docs/guides/backends/data-persistence.mdx @@ -7,7 +7,7 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -Canister state lives in two places: **heap memory** and **stable memory** (persistent, survives upgrades). In Rust and most languages, heap memory is wiped on upgrade — any data you care about must be stored in stable memory. In Motoko, the `persistent actor` pattern automatically preserves all actor state across upgrades without any additional work. +Canister state lives in two places: **heap memory** and **stable memory** (persistent, survives upgrades). In Rust and most languages, heap memory is wiped on upgrade: any data you care about must be stored in stable memory. In Motoko, the `persistent actor` pattern automatically preserves all actor state across upgrades without any additional work. This guide shows how to store data durably in both Motoko and Rust. For a conceptual explanation of why stable memory works this way, see [Orthogonal Persistence](../../concepts/orthogonal-persistence.md). @@ -16,7 +16,7 @@ This guide shows how to store data durably in both Motoko and Rust. For a concep -Use `persistent actor`. All `let` and `var` declarations inside the actor body are automatically persisted across upgrades — no `stable` keyword, no upgrade hooks. +Use `persistent actor`. All `let` and `var` declarations inside the actor body are automatically persisted across upgrades. No `stable` keyword, no upgrade hooks. ```motoko import Map "mo:core/Map"; @@ -26,18 +26,18 @@ import Time "mo:core/Time"; persistent actor { - // Custom type — defined inside the actor body + // Custom type: defined inside the actor body type User = { id : Nat; name : Text; created : Int; }; - // Automatically persisted across upgrades — no "stable" keyword needed + // Automatically persisted across upgrades: no "stable" keyword needed let users = Map.empty(); var userCounter : Nat = 0; - // Transient data — resets to 0 on every upgrade + // Transient data: resets to 0 on every upgrade transient var requestCount : Nat = 0; public func addUser(name : Text) : async Nat { @@ -60,7 +60,7 @@ persistent actor { Map.size(users) }; - // Resets to 0 after every upgrade — use transient for ephemeral state + // Resets to 0 after every upgrade: use transient for ephemeral state public query func getRequestCount() : async Nat { requestCount }; @@ -69,11 +69,11 @@ persistent actor { **Key rules:** -- `let` for collections (`Map`, `List`, `Set`) — auto-persisted, no serialization needed -- `var` for simple values (`Nat`, `Text`, `Bool`) — auto-persisted +- `let` for collections (`Map`, `List`, `Set`): auto-persisted, no serialization needed +- `var` for simple values (`Nat`, `Text`, `Bool`): auto-persisted - `transient var` for caches or counters that should reset on upgrade -- No `pre_upgrade` / `post_upgrade` hooks needed — the runtime handles persistence -- Do not write `stable let` or `stable var` — redundant in `persistent actor` and produces compiler warnings +- No `pre_upgrade` / `post_upgrade` hooks needed. The runtime handles persistence +- Do not write `stable let` or `stable var`: redundant in `persistent actor` and produces compiler warnings **mops.toml:** @@ -89,7 +89,7 @@ core = "2.0.0" -Rust canisters use [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) for persistent storage. The `MemoryManager` partitions stable memory into virtual memories, each backing a separate data structure. Data lives in stable memory from the start — no serialization on upgrade. +Rust canisters use [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) for persistent storage. The `MemoryManager` partitions stable memory into virtual memories, each backing a separate data structure. Data lives in stable memory from the start. No serialization on upgrade. **Cargo.toml:** @@ -128,7 +128,7 @@ struct User { } impl Storable for User { - // Prefer Unbounded — avoids breakage when adding new fields. + // Prefer Unbounded: avoids breakage when adding new fields. // Bounded requires a fixed max_size; if the encoded size of a value // exceeds max_size after a schema change, writes will trap. // Existing stored data is unaffected, but no new or updated records @@ -163,7 +163,7 @@ use std::cell::RefCell; type Memory = VirtualMemory; -// Each structure gets its own MemoryId — NEVER reuse IDs across structures +// Each structure gets its own MemoryId: NEVER reuse IDs across structures const USERS_MEM_ID: MemoryId = MemoryId::new(0); const COUNTER_MEM_ID: MemoryId = MemoryId::new(1); @@ -186,12 +186,12 @@ thread_local! { #[init] fn init() { - // One-time initialization — stable structures auto-initialize from above + // One-time initialization: stable structures auto-initialize from above } #[post_upgrade] fn post_upgrade() { - // Stable structures auto-restore — no deserialization needed here. + // Stable structures auto-restore: no deserialization needed here. // Re-initialize timers or other transient state if needed. } @@ -230,12 +230,12 @@ ic_cdk::export_candid!(); **Key rules:** -- Each structure gets a unique `MemoryId` — reusing IDs corrupts both structures +- Each structure gets a unique `MemoryId`: reusing IDs corrupts both structures - `StableBTreeMap` for keyed collections; keys need `Storable + Ord` - `StableCell` for single values (counters, config flags) -- `StableLog` for append-only logs — requires two `MemoryId`s (index + data) -- `thread_local! { RefCell> }` is the correct pattern — `RefCell` wraps the stable structure, not a heap `HashMap` -- No `pre_upgrade`/`post_upgrade` serialization needed — data is already in stable memory +- `StableLog` for append-only logs: requires two `MemoryId`s (index + data) +- `thread_local! { RefCell> }` is the correct pattern: `RefCell` wraps the stable structure, not a heap `HashMap` +- No `pre_upgrade`/`post_upgrade` serialization needed: data is already in stable memory @@ -245,7 +245,7 @@ ic_cdk::export_candid!(); -When upgrading a Motoko canister, the type of every persistent field must be compatible with its stored value. Violating this causes the upgrade to trap — the canister continues running on the old Wasm with its data intact, but cannot be upgraded until the type conflict is resolved. +When upgrading a Motoko canister, the type of every persistent field must be compatible with its stored value. Violating this causes the upgrade to trap. The canister continues running on the old Wasm with its data intact, but cannot be upgraded until the type conflict is resolved. **Safe changes (always OK):** - Add new `let` or `var` fields with initial values @@ -261,7 +261,7 @@ When upgrading a Motoko canister, the type of every persistent field must be com When using more than one stable structure, give each a unique `MemoryId`. `StableLog` requires two memory regions (index + data). -This example extends the snippet above — it reuses the same `Memory` type alias, `MemoryManager`, `DefaultMemoryImpl`, `RefCell`, and `User` struct, and adds `Post` and `AUDIT_LOG`: +This example extends the snippet above: it reuses the same `Memory` type alias, `MemoryManager`, `DefaultMemoryImpl`, `RefCell`, and `User` struct, and adds `Post` and `AUDIT_LOG`: ```rust use ic_stable_structures::{ @@ -280,7 +280,7 @@ struct Post { content: String, } -// Assign one MemoryId per structure — never reuse +// Assign one MemoryId per structure: never reuse const USERS_MEM_ID: MemoryId = MemoryId::new(0); const POSTS_MEM_ID: MemoryId = MemoryId::new(1); const COUNTER_MEM_ID: MemoryId = MemoryId::new(2); @@ -324,7 +324,7 @@ Avoid serializing heap data to stable memory in `pre_upgrade` hooks. This patter #[pre_upgrade] fn pre_upgrade() { // If STATE is large, this hits the instruction limit and traps. - // A trapped pre_upgrade prevents the upgrade from completing — + // A trapped pre_upgrade prevents the upgrade from completing: // the canister is stuck on the old code. Recovery is possible via // the skip_pre_upgrade flag (which bypasses the hook at the cost of // losing any state it would have serialized), but it's an emergency @@ -500,7 +500,7 @@ fn transfer_with_id(amount: u64, idempotency_key: String) -> bool { -Supports higher throughput and concurrent callers. Requires bounded storage — expire entries after the deduplication window. +Supports higher throughput and concurrent callers. Requires bounded storage: expire entries after the deduplication window. ## Verify persistence across upgrades @@ -535,7 +535,7 @@ icp canister call backend getUser '(0)' # Transient state resets icp canister call backend getRequestCount '()' -# Returns: (0 : nat) — expected, transient var resets on upgrade +# Returns: (0 : nat): expected, transient var resets on upgrade ``` @@ -573,9 +573,9 @@ If the count drops to 0 after upgrade, the data is not in stable memory. Review ## Related -- [Orthogonal Persistence](../../concepts/orthogonal-persistence.md) — conceptual explanation of heap vs. stable memory -- [Canister Lifecycle](../canister-management/lifecycle.md) — upgrade hooks and canister lifecycle -- [Stable Structures (Rust)](../../languages/rust/stable-structures.md) — deep dive into `ic-stable-structures` -- [Motoko](../../languages/motoko/index.md) — Motoko language overview and persistence model +- [Orthogonal Persistence](../../concepts/orthogonal-persistence.md): conceptual explanation of heap vs. stable memory +- [Canister Lifecycle](../canister-management/lifecycle.md): upgrade hooks and canister lifecycle +- [Stable Structures (Rust)](../../languages/rust/stable-structures.md): deep dive into `ic-stable-structures` +- [Motoko](../../languages/motoko/index.md): Motoko language overview and persistence model {/* Upstream: informed by dfinity/portal docs/building-apps/canister-management/storage.mdx, docs/building-apps/best-practices/storage.mdx, docs/building-apps/best-practices/idempotency.mdx */} diff --git a/docs/guides/backends/https-outcalls.mdx b/docs/guides/backends/https-outcalls.mdx index 9372bc22..161cf494 100644 --- a/docs/guides/backends/https-outcalls.mdx +++ b/docs/guides/backends/https-outcalls.mdx @@ -8,20 +8,20 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; import CodeExample from '../../../src/components/CodeExample.astro'; -Canisters can make HTTP requests to external web services using HTTPS outcalls. This lets your canister fetch offchain data, call REST APIs, or send notifications — all from onchain code. +Canisters can make HTTP requests to external web services using HTTPS outcalls. This lets your canister fetch offchain data, call REST APIs, or send notifications: all from onchain code. -HTTPS outcalls are available through the [IC management canister](../../reference/management-canister.md) (`aaaaa-aa`) via the `http_request` method. The `GET`, `HEAD`, and `POST` methods are supported. `HEAD` works identically to `GET` but returns only headers — useful for checking resource availability without downloading the body. Only HTTPS (not plain HTTP) is supported. +HTTPS outcalls are available through the [IC management canister](../../reference/management-canister.md) (`aaaaa-aa`) via the `http_request` method. The `GET`, `HEAD`, and `POST` methods are supported. `HEAD` works identically to `GET` but returns only headers: useful for checking resource availability without downloading the body. Only HTTPS (not plain HTTP) is supported. For how the consensus mechanism works for outcalls, see [Concepts: HTTPS Outcalls](../../concepts/https-outcalls.md). ## How HTTPS outcalls work -By default, every replica node in the subnet independently makes the same HTTP request — called **replicated mode**. All nodes must agree on the response before execution continues. Two constraints apply regardless of mode: +By default, every replica node in the subnet independently makes the same HTTP request: called **replicated mode**. All nodes must agree on the response before execution continues. Two constraints apply regardless of mode: - Cycles to cover the request cost **must be attached** at call time. In Rust, `ic_cdk::management_canister::http_request` auto-calculates and attaches cycles. In Motoko, cycles must be attached explicitly with `await (with cycles = ...)`. -- The **maximum response body is 2MB** (2,097,152 bytes). Requests exceeding this limit fail. Always set `max_response_bytes` to a tight upper bound — omitting it defaults to 2MB and charges cycles accordingly. +- The **maximum response body is 2MB** (2,097,152 bytes). Requests exceeding this limit fail. Always set `max_response_bytes` to a tight upper bound: omitting it defaults to 2MB and charges cycles accordingly. -In replicated mode, a transform function is strongly recommended — without one, responses across nodes will likely differ and consensus will fail. In non-replicated mode (`is_replicated = false`), a transform is unnecessary because only one node makes the request. See [Replicated vs non-replicated mode](#replicated-vs-non-replicated-mode) below. +In replicated mode, a transform function is strongly recommended: without one, responses across nodes will likely differ and consensus will fail. In non-replicated mode (`is_replicated = false`), a transform is unnecessary because only one node makes the request. See [Replicated vs non-replicated mode](#replicated-vs-non-replicated-mode) below. ## Replicated vs non-replicated mode @@ -36,7 +36,7 @@ HTTPS outcalls have two modes, controlled by the `is_replicated` field: **Rate limit risk in replicated mode:** On a 13-node subnet, 13 identical requests hit the external API within milliseconds. Many APIs enforce per-second or per-IP rate limits that this will trigger. If the API you're calling has rate limits, prefer `is_replicated = false`. -**Use replicated mode** when you need a strong integrity guarantee that the response was not tampered with by a single node — for example, fetching price data used in financial logic. +**Use replicated mode** when you need a strong integrity guarantee that the response was not tampered with by a single node: for example, fetching price data used in financial logic. **Use non-replicated mode** when calling APIs with rate limits, when the endpoint is idempotent and you trust the result, or for POST requests where duplicate submission is undesirable. @@ -98,7 +98,7 @@ Because these examples use replicated mode, they include a transform function to POST requests work the same way, with two additional considerations: -- **Idempotency:** In replicated mode, all replicas independently send the same request — typically 13 times on a 13-node subnet. Add an `Idempotency-Key` header so the server can deduplicate. Alternatively, use non-replicated mode (`is_replicated = false`) where only one replica sends the request. +- **Idempotency:** In replicated mode, all replicas independently send the same request: typically 13 times on a 13-node subnet. Add an `Idempotency-Key` header so the server can deduplicate. Alternatively, use non-replicated mode (`is_replicated = false`) where only one replica sends the request. - **Non-replicated mode:** For POST requests where you don't need consensus on the response, non-replicated mode avoids duplicate requests entirely. @@ -126,7 +126,7 @@ POST requests work the same way, with two additional considerations: ## Transform functions -In replicated mode, a transform function is strongly recommended — without one, responses across nodes will likely differ and consensus will fail. In non-replicated mode it is unnecessary. The transform runs on each replica before consensus and must be a `query` method. At minimum, strip all HTTP response headers — they contain non-deterministic fields like `Date`, `Set-Cookie`, and tracking IDs: +In replicated mode, a transform function is strongly recommended (without one, responses across nodes will likely differ and consensus will fail. In non-replicated mode it is unnecessary. The transform runs on each replica before consensus and must be a `query` method. At minimum, strip all HTTP response headers) they contain non-deterministic fields like `Date`, `Set-Cookie`, and tracking IDs: - In Motoko: `{ response with headers = [] }` - In Rust: `HttpRequestResult { headers: vec![], ..raw.response }` @@ -137,7 +137,7 @@ If the response body also contains dynamic fields (timestamps, per-request IDs, ## Cycle costs -HTTPS outcall costs are based on `max_response_bytes`, not the actual response size. If you omit `max_response_bytes`, the system assumes 2MB and charges approximately **21.5 billion cycles** — even for a 1KB response. Always set a tight upper bound. Unused cycles are refunded, but you still pay for the declared maximum. +HTTPS outcall costs are based on `max_response_bytes`, not the actual response size. If you omit `max_response_bytes`, the system assumes 2MB and charges approximately **21.5 billion cycles**: even for a 1KB response. Always set a tight upper bound. Unused cycles are refunded, but you still pay for the declared maximum. In Rust, `ic_cdk::management_canister::http_request` computes and attaches the exact cost automatically using the `ic0.cost_http_request` system API. In Motoko, cycles must be attached explicitly with `await (with cycles = ...)`. @@ -151,21 +151,21 @@ See [Cycles Costs](../../reference/cycles-costs.md) for the full pricing table. ## Limitations and pitfalls - **Public endpoints only.** HTTPS outcalls can only reach public internet endpoints. Localhost (`127.0.0.1`), private IP ranges (`10.x.x.x`, `192.168.x.x`), and other non-routable addresses are blocked. -- **`Host` header may be required.** Some API endpoints require the `Host` header to be explicitly set. The IC does not automatically set it from the URL — add it to your headers if the server requires it. +- **`Host` header may be required.** Some API endpoints require the `Host` header to be explicitly set. The IC does not automatically set it from the URL: add it to your headers if the server requires it. - **~30-second timeout.** If the external server does not respond within the timeout, the call traps. Design for failure and handle errors gracefully. ## Testing locally Use the "Full example in ICP Ninja" links above to deploy and test directly in the browser. To test locally with icp-cli, clone the example and run `icp network start -d && icp deploy`. -> **Note:** The local replica runs a single node, so all responses reach consensus automatically — even without a transform function. Verify your transform produces identical output for varying inputs (different headers, timestamps) before deploying to a multi-node subnet, where mismatches cause "no consensus" errors. +> **Note:** The local replica runs a single node, so all responses reach consensus automatically: even without a transform function. Verify your transform produces identical output for varying inputs (different headers, timestamps) before deploying to a multi-node subnet, where mismatches cause "no consensus" errors. ## Next steps -- [Concepts: HTTPS Outcalls](../../concepts/https-outcalls.md) — how consensus works for outcalls -- [Management canister reference](../../reference/management-canister.md#http_request) — full `http_request` parameter reference including all fields -- [Exchange Rate Canister (XRC)](https://github.com/dfinity/exchange-rate-canister) — a production service powered by HTTPS outcalls that fetches cryptocurrency and fiat exchange rates -- [Chain Fusion: Ethereum](../chain-fusion/ethereum.md) — the EVM RPC canister uses HTTPS outcalls under the hood -- [Cycles Costs](../../reference/cycles-costs.md) — outcall pricing details +- [Concepts: HTTPS Outcalls](../../concepts/https-outcalls.md): how consensus works for outcalls +- [Management canister reference](../../reference/management-canister.md#http_request): full `http_request` parameter reference including all fields +- [Exchange Rate Canister (XRC)](https://github.com/dfinity/exchange-rate-canister): a production service powered by HTTPS outcalls that fetches cryptocurrency and fiat exchange rates +- [Chain Fusion: Ethereum](../chain-fusion/ethereum.md): the EVM RPC canister uses HTTPS outcalls under the hood +- [Cycles Costs](../../reference/cycles-costs.md): outcall pricing details {/* Upstream: informed by dfinity/portal docs/building-apps/network-features/using-http/https-outcalls/; dfinity/examples send_http_get, send_http_post */} diff --git a/docs/guides/backends/onchain-ai.mdx b/docs/guides/backends/onchain-ai.mdx index 56ea4600..075ba2d0 100644 --- a/docs/guides/backends/onchain-ai.mdx +++ b/docs/guides/backends/onchain-ai.mdx @@ -7,14 +7,14 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -The LLM canister is an onchain service that gives ICP canisters access to large language models without relying on HTTPS outcalls to external AI APIs. Your canister calls a shared system canister, which routes inference requests to nodes running model weights onchain. No API keys, no off-chain dependencies — AI inference becomes a native part of your canister logic. +The LLM canister is an onchain service that gives ICP canisters access to large language models without relying on HTTPS outcalls to external AI APIs. Your canister calls a shared system canister, which routes inference requests to nodes running model weights onchain. No API keys, no off-chain dependencies: AI inference becomes a native part of your canister logic. ## What the LLM canister provides The LLM canister (canister ID: `w36hm-eqaaa-aaaal-qr76a-cai`) exposes two APIs: -- **Prompt API** — send a single text prompt and receive a text response. Best for one-shot interactions. -- **Chat API** — send a sequence of messages with roles (`system`, `user`, `assistant`) and receive the next assistant turn. Best for multi-turn conversations. +- **Prompt API**: send a single text prompt and receive a text response. Best for one-shot interactions. +- **Chat API**: send a sequence of messages with roles (`system`, `user`, `assistant`) and receive the next assistant turn. Best for multi-turn conversations. Currently supported models: @@ -245,9 +245,9 @@ During the initial rollout, the LLM canister enforces the following limits: | Max output tokens | 1000 | | Streaming | Not supported | -Requests that exceed these limits return an error. Design your application to stay within these bounds — for example, by trimming old messages from conversation history before each call. +Requests that exceed these limits return an error. Design your application to stay within these bounds: for example, by trimming old messages from conversation history before each call. -Streaming is not currently supported — the LLM canister returns the complete response when inference finishes. +Streaming is not currently supported. The LLM canister returns the complete response when inference finishes. ## Deploy and test @@ -256,7 +256,7 @@ Streaming is not currently supported — the LLM canister returns the complete r The LLM canister is not available in a local replica. To develop locally, mock the LLM canister behind a canister interface: ```motoko -// mock_llm.mo — local test stub +// mock_llm.mo: local test stub import LLM "mo:llm"; persistent actor { @@ -282,7 +282,7 @@ icp canister call -e ic prompt '("What is the Internet Comput ## Full example -The complete chatbot example — with frontend — is available in the `dfinity/examples` repository: +The complete chatbot example (with frontend) is available in the `dfinity/examples` repository: - [Rust LLM chatbot](https://github.com/dfinity/examples/tree/master/rust/llm_chatbot) - [Motoko LLM chatbot](https://github.com/dfinity/examples/tree/master/motoko/llm_chatbot) @@ -291,8 +291,8 @@ Both examples include a browser UI and can be deployed to mainnet in a single co ## Next steps -- [HTTPS outcalls](https-outcalls.md) — call external AI APIs when you need more model options or larger context windows -- [Data persistence](data-persistence.md) — persist conversation history across canister upgrades using stable memory -- [App architecture](../../concepts/app-architecture.md) — understand where AI inference fits in a multi-canister application +- [HTTPS outcalls](https-outcalls.md): call external AI APIs when you need more model options or larger context windows +- [Data persistence](data-persistence.md): persist conversation history across canister upgrades using stable memory +- [App architecture](../../concepts/app-architecture.md): understand where AI inference fits in a multi-canister application -{/* Upstream: informed by dfinity/examples — rust/llm_chatbot, motoko/llm_chatbot; limits verified against dfinity/llm */} +{/* Upstream: informed by dfinity/examples: rust/llm_chatbot, motoko/llm_chatbot; limits verified against dfinity/llm */} diff --git a/docs/guides/backends/randomness.md b/docs/guides/backends/randomness.md index 93b5d197..1a91b6a6 100644 --- a/docs/guides/backends/randomness.md +++ b/docs/guides/backends/randomness.md @@ -11,9 +11,9 @@ For how ICP produces unpredictable randomness without any trusted party, see [On ## Why blockchain randomness is different -Most blockchains execute transactions deterministically — every node replays the same operations and must reach the same state. This means you cannot use typical randomness sources like `Math.random()` or `/dev/urandom`: they would produce different values on each replica, breaking consensus. +Most blockchains execute transactions deterministically: every node replays the same operations and must reach the same state. This means you cannot use typical randomness sources like `Math.random()` or `/dev/urandom`: they would produce different values on each replica, breaking consensus. -ICP solves this with a threshold Verifiable Random Function (VRF). The result of `raw_rand` is produced collaboratively by the subnet's nodes using a random beacon that no single node can predict or bias. Every node independently verifies the output is correct, and the same 32 bytes are delivered to all replicas — satisfying both unpredictability and consensus. +ICP solves this with a threshold Verifiable Random Function (VRF). The result of `raw_rand` is produced collaboratively by the subnet's nodes using a random beacon that no single node can predict or bias. Every node independently verifies the output is correct, and the same 32 bytes are delivered to all replicas: satisfying both unpredictability and consensus. ## The `raw_rand` API @@ -21,9 +21,9 @@ The management canister (`aaaaa-aa`) exposes `raw_rand`, which returns 32 bytes - **Caller:** Canisters only (not callable via ingress messages / external clients) - **Parameters:** None -- **Returns:** `blob` — 32 bytes +- **Returns:** `blob`: 32 bytes -Because `raw_rand` is an update call to the management canister, it can only be invoked from an update context in your canister. **Randomness is not available in query calls** — a query executes on a single replica and cannot access the subnet-level random beacon. Attempting to call `raw_rand` from a query will trap. +Because `raw_rand` is an update call to the management canister, it can only be invoked from an update context in your canister. **Randomness is not available in query calls**: a query executes on a single replica and cannot access the subnet-level random beacon. Attempting to call `raw_rand` from a query will trap. See the [Management Canister reference](../../reference/management-canister.md#raw_rand) for the full API specification. @@ -58,7 +58,7 @@ async fn get_random_bytes() -> Vec { } ``` -`raw_rand` is an async call — it must be awaited from an `async` function marked `#[ic_cdk::update]`. +`raw_rand` is an async call: it must be awaited from an `async` function marked `#[ic_cdk::update]`. ## Generating a random number in a range @@ -78,7 +78,7 @@ public shared func rollDie(sides : Nat) : async Nat { }; ``` -For multiple random values in a single call, convert the 32-byte blob to an array and index directly — no additional `raw_rand` calls needed: +For multiple random values in a single call, convert the 32-byte blob to an array and index directly. No additional `raw_rand` calls needed: ```motoko import Random "mo:core/Random"; @@ -137,7 +137,7 @@ async fn roll_multiple_dice(count: usize, sides: u64) -> Vec { ## Choosing winners from a list -A common use case is selecting one or more random elements from a list — for example, choosing a lottery winner or assigning roles in a game. +A common use case is selecting one or more random elements from a list: for example, choosing a lottery winner or assigning roles in a game. **Motoko** @@ -207,9 +207,9 @@ The `std_rng` feature compiles `StdRng` without requiring OS entropy, which is c **Always use randomness in update calls, never in queries.** Query calls execute on a single replica and cannot access the random beacon. The `raw_rand` API will trap if called from a query context. -**One call per decision round.** Each call to `raw_rand` costs cycles and involves an inter-canister call to the management canister. Batch your entropy needs: a single 32-byte blob provides 256 bits of entropy — enough for 4 independent `u64` values, 32 independent byte selections, or one `StdRng` seed for unlimited draws. +**One call per decision round.** Each call to `raw_rand` costs cycles and involves an inter-canister call to the management canister. Batch your entropy needs: a single 32-byte blob provides 256 bits of entropy: enough for 4 independent `u64` values, 32 independent byte selections, or one `StdRng` seed for unlimited draws. -**Understand the timing guarantee.** The value returned by `raw_rand` is determined during the round in which the management canister processes the call, not when your canister submits it. Subnet nodes collaborate to produce the value under the consensus protocol — no individual node can predict or bias the output. This is appropriate for games, lotteries, and fair selection. For use cases requiring verifiable fairness to external observers who do not trust the subnet operator, combine `raw_rand` with a commit-reveal scheme. +**Understand the timing guarantee.** The value returned by `raw_rand` is determined during the round in which the management canister processes the call, not when your canister submits it. Subnet nodes collaborate to produce the value under the consensus protocol. No individual node can predict or bias the output. This is appropriate for games, lotteries, and fair selection. For use cases requiring verifiable fairness to external observers who do not trust the subnet operator, combine `raw_rand` with a commit-reveal scheme. **Reentrancy caution.** Because `raw_rand` is an async call, your canister's execution can be interleaved with other messages at the `await` point. If you check state before the `await` and rely on that state after, another message may have modified it in between. See [Canister security](../security/inter-canister-calls.md) for reentrancy patterns. @@ -223,9 +223,9 @@ Note: this example predates `mo:core` and uses `mo:base/Random.Finite`. The patt ## Next steps -- [Onchain Randomness (concept)](../../concepts/onchain-randomness.md) — how the IC's threshold VRF works -- [Management Canister](../../reference/management-canister.md) — `raw_rand` API reference -- [Data Integrity](../security/data-integrity.md) — using randomness in a secure application design -- [Inter-canister calls](../canister-calls/onchain-calls.md) — async patterns and reentrancy +- [Onchain Randomness (concept)](../../concepts/onchain-randomness.md): how the IC's threshold VRF works +- [Management Canister](../../reference/management-canister.md): `raw_rand` API reference +- [Data Integrity](../security/data-integrity.md): using randomness in a secure application design +- [Inter-canister calls](../canister-calls/onchain-calls.md): async patterns and reentrancy diff --git a/docs/guides/backends/timers.mdx b/docs/guides/backends/timers.mdx index 817d48a8..2d6a54c9 100644 --- a/docs/guides/backends/timers.mdx +++ b/docs/guides/backends/timers.mdx @@ -7,7 +7,7 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -Canisters can schedule code to run automatically after a delay or on a repeating interval — no external cron job required. This guide covers the timer APIs for Rust and Motoko, how system time works, upgrade handling, and when to use heartbeats instead. +Canisters can schedule code to run automatically after a delay or on a repeating interval. No external cron job required. This guide covers the timer APIs for Rust and Motoko, how system time works, upgrade handling, and when to use heartbeats instead. ## System time @@ -32,7 +32,7 @@ let now_ns: u64 = ic_cdk::api::time(); -System time is constant within a single message execution — it does not advance mid-call. Different messages in the same round may observe different timestamps. +System time is constant within a single message execution: it does not advance mid-call. Different messages in the same round may observe different timestamps. ## One-shot timers @@ -66,7 +66,7 @@ let timer_id: TimerId = ic_cdk_timers::set_timer( ); ``` -`set_timer` takes a future directly — no closure or `ic_cdk::spawn` wrapper needed. +`set_timer` takes a future directly. No closure or `ic_cdk::spawn` wrapper needed. @@ -108,13 +108,13 @@ let timer_id: TimerId = ic_cdk_timers::set_timer_interval( -For recurring tasks that mutate state, use `set_timer_interval_serial` in Rust to prevent concurrent invocations — if the interval fires while the previous invocation is still running, the new one is skipped: +For recurring tasks that mutate state, use `set_timer_interval_serial` in Rust to prevent concurrent invocations: if the interval fires while the previous invocation is still running, the new one is skipped: ```rust ic_cdk_timers::set_timer_interval_serial( Duration::from_secs(3600), async || { - // safe to mutate state — only one invocation runs at a time + // safe to mutate state: only one invocation runs at a time }, ); ``` @@ -142,10 +142,10 @@ ic_cdk_timers::clear_timer(timer_id); ## Common patterns -- **Periodic cleanup** — purge expired cache entries, remove stale sessions, or compact data structures on a fixed schedule. -- **Scheduled data aggregation** — periodically fetch exchange rates, collect metrics, or roll up statistics from child canisters. -- **Timed state transitions** — expire auctions, unlock funds after a vesting period, or transition a proposal from "voting" to "decided" after a deadline. -- **Heartbeat-to-timer migration** — replace a `canister_heartbeat` export with a recurring timer at the desired interval (see [Heartbeats](#heartbeats-legacy) below). +- **Periodic cleanup**: purge expired cache entries, remove stale sessions, or compact data structures on a fixed schedule. +- **Scheduled data aggregation**: periodically fetch exchange rates, collect metrics, or roll up statistics from child canisters. +- **Timed state transitions**: expire auctions, unlock funds after a vesting period, or transition a proposal from "voting" to "decided" after a deadline. +- **Heartbeat-to-timer migration**: replace a `canister_heartbeat` export with a recurring timer at the desired interval (see [Heartbeats](#heartbeats-legacy) below). ## Starting timers on canister init @@ -221,8 +221,8 @@ See [Cycles and costs](../../reference/cycles-costs.md) for current pricing. Heartbeats call `canister_heartbeat` at intervals close to the blockchain finalization rate (~1s). They predate timers and have significant drawbacks: -- Fixed interval close to block rate — cannot be adjusted -- Run every block regardless of whether work is needed — burns cycles continuously +- Fixed interval close to block rate: cannot be adjusted +- Run every block regardless of whether work is needed: burns cycles continuously - Cannot be disabled without upgrading to remove the export **Prefer timers for all new code.** Heartbeats are only appropriate when you need sub-second execution or must respond to every block unconditionally. @@ -251,7 +251,7 @@ For protocol internals, see [Timers](../../concepts/timers.md) and the [IC inter Yes. Each timer executes as a self-canister call, so normal update message instruction limits apply with DTS enabled. **What happens if a timer handler awaits an inter-canister call?** -Normal await point rules apply — any new execution can start at the await point (a new message, another timer, or a heartbeat). The current timer handler resumes after the new execution finishes or reaches its own await point. +Normal await point rules apply: any new execution can start at the await point (a new message, another timer, or a heartbeat). The current timer handler resumes after the new execution finishes or reaches its own await point. **What happens if a periodic timer takes longer than its interval?** With `set_timer_interval`, multiple invocations can run concurrently. With `set_timer_interval_serial`, the new invocation is skipped if the previous one is still running. If there are no await points, the timer is rescheduled after execution completes. @@ -265,7 +265,7 @@ System time is returned in nanoseconds. For DateTime conversions, use these pack ## Limitations -- Timer resolution is similar to the block rate — choose durations well above ~1s. +- Timer resolution is similar to the block rate: choose durations well above ~1s. - The CDK timers library uses **relative time** only. To schedule at an absolute time, calculate the duration from `now` to the target time manually. - Using timers for security (e.g., access control) is strongly discouraged. Timers vanish on upgrades and reinstalls, and reentrancy can undermine access checks. @@ -277,8 +277,8 @@ For a complete working example with cycle tracking and multiple timers: ## Next steps -- [Canister lifecycle](../canister-management/lifecycle.md) — init, pre/post-upgrade hooks -- [Timers (concept)](../../concepts/timers.md) — how the IC protocol timer works -- [Cycles and costs](../../reference/cycles-costs.md) — current pricing +- [Canister lifecycle](../canister-management/lifecycle.md): init, pre/post-upgrade hooks +- [Timers (concept)](../../concepts/timers.md): how the IC protocol timer works +- [Cycles and costs](../../reference/cycles-costs.md): current pricing {/* Upstream: informed by dfinity/portal docs/building-apps/network-features/periodic-tasks-timers.mdx, docs/building-apps/network-features/time-and-timestamps.mdx, dfinity/cdk-rs ic-cdk-timers/src/lib.rs, and caffeinelabs/motoko-core src/Timer.mo */} diff --git a/docs/guides/canister-calls/candid.mdx b/docs/guides/canister-calls/candid.mdx index 9accd593..26411ab0 100644 --- a/docs/guides/canister-calls/candid.mdx +++ b/docs/guides/canister-calls/candid.mdx @@ -9,7 +9,7 @@ import { Tabs, TabItem } from '@astrojs/starlight/components'; Candid is the interface description language for the Internet Computer. Every canister exposes its public API through a Candid `.did` file that describes which methods it offers, what arguments they accept, and what they return. Because Candid is language-agnostic, a Motoko canister, a Rust canister, and a JavaScript frontend can all communicate through the same interface without any manual serialization code. -Candid handles the binary encoding and decoding transparently. You work with native types in your language — `String` in Rust, `Text` in Motoko, `string` in JavaScript — and Candid maps them to a common type system for transport. +Candid handles the binary encoding and decoding transparently. You work with native types in your language (`String` in Rust, `Text` in Motoko, `string` in JavaScript) and Candid maps them to a common type system for transport. ## The `.did` file @@ -21,7 +21,7 @@ service : { } ``` -This declares a service with one method, `greet`, that takes a `text` argument, returns `text`, and can be called as a query (no consensus required). The `query` annotation tells the network this method only reads state — see [Canisters: Query calls](../../concepts/canisters.md#query-calls) for details. +This declares a service with one method, `greet`, that takes a `text` argument, returns `text`, and can be called as a query (no consensus required). The `query` annotation tells the network this method only reads state: see [Canisters: Query calls](../../concepts/canisters.md#query-calls) for details. A more complete example with multiple methods: @@ -51,7 +51,7 @@ service address_book : { } ``` -Candid uses **structural typing** — two type definitions with different names but the same structure are interchangeable. The named alias is purely for readability. +Candid uses **structural typing**: two type definitions with different names but the same structure are interchangeable. The named alias is purely for readability. ### Init arguments @@ -101,7 +101,7 @@ For the complete type reference, including subtyping rules, see the [Candid spec -The Motoko compiler generates Candid descriptions automatically from your actor's type signature. When you build with icp-cli, the `.did` file is placed in the build output directory — no manual authoring needed. +The Motoko compiler generates Candid descriptions automatically from your actor's type signature. When you build with icp-cli, the `.did` file is placed in the build output directory. No manual authoring needed. You can also provide a hand-written `.did` file by setting the `candid` field in `icp.yaml`. This is useful when you want an explicit API contract that is versioned independently of the implementation: @@ -112,7 +112,7 @@ canisters: type: "@dfinity/motoko@v4.1.0" configuration: main: backend/app.mo - candid: backend/backend.did # Optional — overrides auto-generation + candid: backend/backend.did # Optional: overrides auto-generation ``` If `candid` is omitted, the Motoko recipe auto-generates the interface from the source. @@ -275,7 +275,7 @@ icp canister call my_canister set_address '("Alice", record { street = "123 Main ### From JavaScript -The [JS SDK](https://js.icp.build) (`@icp-sdk/core`) translates Candid types into native JavaScript values. To call a canister from JavaScript, you need typed declarations generated from the `.did` file — see [Binding generation](#binding-generation) below for how to set this up. +The [JS SDK](https://js.icp.build) (`@icp-sdk/core`) translates Candid types into native JavaScript values. To call a canister from JavaScript, you need typed declarations generated from the `.did` file: see [Binding generation](#binding-generation) below for how to set this up. The generated declarations export a `createActor` function and an `idlFactory` that describes the interface: @@ -284,7 +284,7 @@ import { createActor } from "./declarations/my_canister"; const canister = createActor(canisterId, { agentOptions: { host } }); -// Call a method — arguments and return values are native JS types +// Call a method: arguments and return values are native JS types const greeting = await canister.greet("World"); console.log(greeting); // "Hello, World!" ``` @@ -298,9 +298,9 @@ When one canister calls another, Candid handles the argument encoding and respon Candid defines subtyping rules that let you evolve a service's interface without breaking existing clients. The safe changes are: - **Add new methods.** Existing clients simply don't call them. -- **Add return values.** Extend the result sequence — old clients ignore the extra values. -- **Remove trailing parameters.** Shorten the parameter list — old clients still send the extra arguments, which are silently ignored. -- **Add optional parameters.** Extend the parameter list with `opt` types — old clients that don't send them get `null` by default. +- **Add return values.** Extend the result sequence: old clients ignore the extra values. +- **Remove trailing parameters.** Shorten the parameter list: old clients still send the extra arguments, which are silently ignored. +- **Add optional parameters.** Extend the parameter list with `opt` types: old clients that don't send them get `null` by default. - **Widen parameter types.** Change a parameter to a supertype of its previous type (for example, `nat` to `int`). - **Narrow return types.** Change a result to a subtype of its previous type (for example, `int` to `nat`). @@ -343,7 +343,7 @@ To deprecate a record field without breaking existing clients, change its type t ```candid type UserProfile = record { name : text; - middle_name : reserved; // Deprecated — ignored by current code + middle_name : reserved; // Deprecated: ignored by current code email : text; }; ``` @@ -420,7 +420,7 @@ async fn invoke_callee() { } ``` -The `.dynamic_callee("PUBLIC_CANISTER_ID:callee")` mode reads the canister ID from a canister environment variable at runtime — the same `PUBLIC_CANISTER_ID:` variables that `icp deploy` injects (see [canister discovery](onchain-calls.md#canister-discovery)). For canisters with fixed IDs, use `.static_callee(principal)` instead. +The `.dynamic_callee("PUBLIC_CANISTER_ID:callee")` mode reads the canister ID from a canister environment variable at runtime. The same `PUBLIC_CANISTER_ID:` variables that `icp deploy` injects (see [canister discovery](onchain-calls.md#canister-discovery)). For canisters with fixed IDs, use `.static_callee(principal)` instead. {/* Needs human verification: the upstream ic-cdk-bindgen README uses ICP_CANISTER_ID: in its example, but icp-cli sets PUBLIC_CANISTER_ID:. We use PUBLIC_CANISTER_ID: here to match icp-cli. An issue has been filed on dfinity/cdk-rs to align the upstream README. */} For type selector configuration and advanced options, see the [`ic-cdk-bindgen` documentation](https://crates.io/crates/ic-cdk-bindgen). @@ -430,7 +430,7 @@ For type selector configuration and advanced options, see the [`ic-cdk-bindgen` ## Candid tools -**`didc`** — the Candid CLI for checking `.did` files, encoding/decoding values, and testing subtype compatibility. Download from the [Candid releases page](https://github.com/dfinity/candid/releases). +**`didc`**: the Candid CLI for checking `.did` files, encoding/decoding values, and testing subtype compatibility. Download from the [Candid releases page](https://github.com/dfinity/candid/releases). | Command | What it does | |---------|-------------| @@ -439,12 +439,12 @@ For type selector configuration and advanced options, see the [`ic-cdk-bindgen` | `didc decode ` | Decode binary Candid back to text | | `didc subtype new.did old.did` | Check that `new` is a safe upgrade from `old` | -**Candid UI** — a web interface for calling canister methods directly from a browser, generated automatically for every deployed canister. Useful for testing and debugging without writing frontend code. Access it at `https://a4gq6-oaaaa-aaaab-qaa4q-cai.icp0.io/?id=` for mainnet canisters. +**Candid UI**: a web interface for calling canister methods directly from a browser, generated automatically for every deployed canister. Useful for testing and debugging without writing frontend code. Access it at `https://a4gq6-oaaaa-aaaab-qaa4q-cai.icp0.io/?id=` for mainnet canisters. ## Next steps -- [Onchain calls](onchain-calls.md) — make inter-canister calls using Candid interfaces -- [Offchain calls](offchain-calls.md) — call canisters from JavaScript frontends and agents -- [Candid specification](../../reference/candid-spec.md) — full type reference and subtyping rules +- [Onchain calls](onchain-calls.md): make inter-canister calls using Candid interfaces +- [Offchain calls](offchain-calls.md): call canisters from JavaScript frontends and agents +- [Candid specification](../../reference/candid-spec.md): full type reference and subtyping rules {/* Upstream: informed by dfinity/portal docs/building-apps/interact-with-canisters/candid/ (3 files), docs/building-apps/developer-tools/cdks/rust/generating-candid.mdx, icp-cli concepts/binding-generation.md, ic-cdk-bindgen README, and @icp-sdk/bindgen. Type mappings verified against .sources/candid (spec), .sources/motoko (IDL-Motoko.md), and .sources/cdk-rs. */} diff --git a/docs/guides/canister-calls/offchain-calls.md b/docs/guides/canister-calls/offchain-calls.md index 05b5b3aa..43d11421 100644 --- a/docs/guides/canister-calls/offchain-calls.md +++ b/docs/guides/canister-calls/offchain-calls.md @@ -5,7 +5,7 @@ sidebar: order: 3 --- -An **agent** is a client-side library that constructs ingress messages, signs them with a cryptographic identity, and sends them to ICP boundary nodes. Agents handle the protocol details — CBOR encoding, request IDs, certificate verification — so your application code works with native language types. +An **agent** is a client-side library that constructs ingress messages, signs them with a cryptographic identity, and sends them to ICP boundary nodes. Agents handle the protocol details (CBOR encoding, request IDs, certificate verification) so your application code works with native language types. ## How agents work @@ -25,11 +25,11 @@ The IC has two call types that agents route differently: | | Query | Update | |---|---|---| | State changes | Not allowed | Allowed | -| Routing | Single replica — fast (~200ms) | Goes through consensus (~2–4 seconds) | +| Routing | Single replica: fast (~200ms) | Goes through consensus (~2–4 seconds) | | Response verification | Node key signatures verified by default; certified data provides app-layer guarantees | Full certificate from consensus | | Candid annotation | `query` | (default) | -The Candid interface definition tells the agent which call type to use. When you generate typed bindings from a `.did` file, the generated code routes each method correctly — you do not need to decide manually. +The Candid interface definition tells the agent which call type to use. When you generate typed bindings from a `.did` file, the generated code routes each method correctly: you do not need to decide manually. ## Available agents @@ -37,7 +37,7 @@ DFINITY maintains official agents for JavaScript/TypeScript and Rust. Several co ### Official agents -**JavaScript / TypeScript — `@icp-sdk/core`** +**JavaScript / TypeScript: `@icp-sdk/core`** The primary agent for browser and Node.js applications. Install from npm: @@ -49,7 +49,7 @@ Import path: `@icp-sdk/core/agent` Full documentation: [js.icp.build](https://js.icp.build) -**Rust — `ic-agent`** +**Rust: `ic-agent`** A low-level Rust library for building applications that interact with ICP. Add to your project: @@ -104,7 +104,7 @@ const canisterEnv = getCanisterEnv(); const canisterId = canisterEnv["PUBLIC_CANISTER_ID:backend"]; // Pass rootKey only on non-standard networks. On mainnet the IC root key is -// embedded in the agent — omit rootKey there. +// embedded in the agent: omit rootKey there. // In local development, let the agent fetch the root key from the local replica. const actor = createActor(canisterId, { agentOptions: { @@ -128,7 +128,7 @@ const agent = await HttpAgent.create({ host: "https://icp-api.io", // Omit identity to use the anonymous identity. // Pass an identity here for authenticated calls. - // IC root key is embedded in the agent for mainnet — do not set shouldFetchRootKey. + // IC root key is embedded in the agent for mainnet: do not set shouldFetchRootKey. }); const actor = createActor("", { agent }); @@ -148,7 +148,7 @@ const agent = await HttpAgent.create({ Once you have an actor, call methods as regular async functions. The generated bindings handle Candid encoding and routing: ```typescript -// Query call — fast, read-only +// Query call: fast, read-only const greeting = await actor.greet("Ada"); console.log(greeting); // "Hello, Ada!" ``` @@ -269,7 +269,7 @@ interface CanisterEnv { const env = getCanisterEnv(); const backendId = env["PUBLIC_CANISTER_ID:backend"]; -const rootKey = env.IC_ROOT_KEY; // Uint8Array — use for certificate verification +const rootKey = env.IC_ROOT_KEY; // Uint8Array: use for certificate verification ``` This works identically on local networks and mainnet without code changes. @@ -280,7 +280,7 @@ During development, your dev server runs outside the asset canister and the `ic_ ```typescript // vite.config.ts -const IC_ROOT_KEY_HEX = "308182..."; // placeholder — replace with your local replica root key +const IC_ROOT_KEY_HEX = "308182..."; // placeholder: replace with your local replica root key const BACKEND_CANISTER_ID = "bkyz2-fmaaa-aaaaa-qaaaq-cai"; // from `icp canister list` export default defineConfig({ @@ -322,9 +322,9 @@ const agent = await HttpAgent.create({ ## Next steps -- [Candid and binding generation](candid.md) — generate typed clients from `.did` files -- [Onchain calls](onchain-calls.md) — canister-to-canister calls from within the IC -- [Internet Identity](../authentication/internet-identity.md) — adding user authentication to offchain calls -- [Asset canister](../frontends/asset-canister.md) — deploying the frontend that makes these calls +- [Candid and binding generation](candid.md): generate typed clients from `.did` files +- [Onchain calls](onchain-calls.md): canister-to-canister calls from within the IC +- [Internet Identity](../authentication/internet-identity.md): adding user authentication to offchain calls +- [Asset canister](../frontends/asset-canister.md): deploying the frontend that makes these calls diff --git a/docs/guides/canister-calls/onchain-calls.mdx b/docs/guides/canister-calls/onchain-calls.mdx index 36b10294..47057bed 100644 --- a/docs/guides/canister-calls/onchain-calls.mdx +++ b/docs/guides/canister-calls/onchain-calls.mdx @@ -106,7 +106,7 @@ In Motoko, `public shared ({ caller })` binds the original caller at method entr **Cleanup with `finally`** -Use `try/finally` (with or without `catch`) to guarantee cleanup code runs — even if code after an `await` traps. This is useful for releasing locks or rolling back temporary state: +Use `try/finally` (with or without `catch`) to guarantee cleanup code runs: even if code after an `await` traps. This is useful for releasing locks or rolling back temporary state: ```motoko var locked = false; @@ -123,7 +123,7 @@ public shared func guarded() : async () { }; ``` -The `finally` block must be effect-free: no `await`, no `throw`, no async calls. It must return `()` and should not trap — a trapping `finally` block can prevent future upgrades. +The `finally` block must be effect-free: no `await`, no `throw`, no async calls. It must return `()` and should not trap: a trapping `finally` block can prevent future upgrades. @@ -192,9 +192,9 @@ let counter_id = Principal::from_text( -Deployment order does not matter — `icp deploy` creates all canisters first, then injects variables, then installs code. Variables are only updated for the canisters being deployed, so run `icp deploy` (without arguments) when adding new canisters to update all of them. +Deployment order does not matter: `icp deploy` creates all canisters first, then injects variables, then installs code. Variables are only updated for the canisters being deployed, so run `icp deploy` (without arguments) when adding new canisters to update all of them. -> **Tip:** For Rust canisters that make inter-canister calls, [`ic-cdk-bindgen`](candid.md#binding-generation) can generate type-safe call stubs from `.did` files — so you call typed functions instead of manually constructing `Call::unbounded_wait` with string method names. See [Binding generation](candid.md#binding-generation) for details. +> **Tip:** For Rust canisters that make inter-canister calls, [`ic-cdk-bindgen`](candid.md#binding-generation) can generate type-safe call stubs from `.did` files: so you call typed functions instead of manually constructing `Call::unbounded_wait` with string method names. See [Binding generation](candid.md#binding-generation) for details. ### Alternative approaches @@ -212,7 +212,7 @@ icp deploy my_canister --argument "(principal \"$TARGET_ID\")" ## Bounded vs unbounded wait -Every inter-canister call must choose a wait strategy. By default, calls use **unbounded wait** — the caller waits indefinitely until the callee responds. **Bounded wait** (also called best-effort messaging) adds a timeout: if the callee hasn't responded by the deadline, the system returns a `SYS_UNKNOWN` response. +Every inter-canister call must choose a wait strategy. By default, calls use **unbounded wait**: the caller waits indefinitely until the callee responds. **Bounded wait** (also called best-effort messaging) adds a timeout: if the callee hasn't responded by the deadline, the system returns a `SYS_UNKNOWN` response. @@ -220,10 +220,10 @@ Every inter-canister call must choose a wait strategy. By default, calls use **u By default, `await` uses unbounded wait. Add a `timeout` parenthetical (in seconds) to use bounded wait: ```motoko -// Unbounded wait (default) — guaranteed response +// Unbounded wait (default): guaranteed response let result = await Counter.get(); -// Bounded wait — best-effort response with 25-second deadline +// Bounded wait: best-effort response with 25-second deadline let result = await (with timeout = 25) Counter.get(); // Reusable timeout configuration @@ -242,11 +242,11 @@ The Rust CDK provides separate constructors for each strategy: ```rust use ic_cdk::call::Call; -// Unbounded wait — guaranteed response +// Unbounded wait: guaranteed response Call::unbounded_wait(callee, "method") .await -// Bounded wait — best-effort response with 5-second timeout +// Bounded wait: best-effort response with 5-second timeout Call::bounded_wait(callee, "method") .change_timeout(5) // timeout in seconds .await @@ -257,8 +257,8 @@ Call::bounded_wait(callee, "method") **When to use each:** -- **Unbounded wait** — the callee is guaranteed to respond (including rejects). Use for calls to canisters you control and trust to respond promptly. -- **Bounded wait** — the caller may receive `SYS_UNKNOWN` after the timeout or if the subnet runs low on resources. Use for calls to third-party or untrusted canisters. +- **Unbounded wait**: the callee is guaranteed to respond (including rejects). Use for calls to canisters you control and trust to respond promptly. +- **Bounded wait**: the caller may receive `SYS_UNKNOWN` after the timeout or if the subnet runs low on resources. Use for calls to third-party or untrusted canisters. **Upgrade safety:** unbounded wait calls may prevent your canister from upgrading until the callee responds. If the callee is unresponsive or malicious, your canister could be stuck indefinitely. Prefer bounded wait when calling canisters you do not control. @@ -340,7 +340,7 @@ The key mechanism is passing a **shared function reference** (`callback`) across The same pattern works in Rust using Candid's `Func` type to pass callback references between canisters. The publisher stores `candid::Func` values and invokes them with `Call::unbounded_wait`; the subscriber registers its own method as a callback. -A Rust pub/sub example is not yet available in the [examples repo](https://github.com/dfinity/examples). See the Motoko tab for the full pattern — the architecture is identical. +A Rust pub/sub example is not yet available in the [examples repo](https://github.com/dfinity/examples). See the Motoko tab for the full pattern. The architecture is identical. {/* TODO: Create a Rust pub/sub example in dfinity/examples */} diff --git a/docs/guides/canister-calls/parallel-calls.mdx b/docs/guides/canister-calls/parallel-calls.mdx index fb3a6491..265a9345 100644 --- a/docs/guides/canister-calls/parallel-calls.mdx +++ b/docs/guides/canister-calls/parallel-calls.mdx @@ -30,7 +30,7 @@ Parallel calls are most beneficial when the caller and callee are on **different ## How parallel calls work -In Motoko, futures are first-class values. You can start a call by evaluating `c.method()` without immediately awaiting it — this sends the request message and returns an `async T` handle. Collecting all handles before awaiting lets all calls run concurrently. +In Motoko, futures are first-class values. You can start a call by evaluating `c.method()` without immediately awaiting it: this sends the request message and returns an `async T` handle. Collecting all handles before awaiting lets all calls run concurrently. In Rust, you can collect calls into a `Vec` by calling `.into_future()` on each [`Call::bounded_wait(...)`](https://docs.rs/ic-cdk/latest/ic_cdk/call/struct.Call.html) expression (since `Call` implements `IntoFuture`), then pass them to [`futures::future::join_all`](https://docs.rs/futures/latest/futures/future/fn.join_all.html), which awaits all of them together. @@ -38,7 +38,7 @@ In Rust, you can collect calls into a `Vec` by calling `.into_future()` on each The following example shows a `caller` canister that issues `n` calls to a `callee` canister's `ping` method, either sequentially or in parallel. -**Sequential version** — each call is awaited before the next is sent: +**Sequential version**: each call is awaited before the next is sent: @@ -110,7 +110,7 @@ pub async fn sequential_calls(n: u64) -> u64 { -**Parallel version** — all requests are dispatched before any response is awaited: +**Parallel version**: all requests are dispatched before any response is awaited: @@ -137,7 +137,7 @@ persistent actor { case (?c) { c }; }; - // Evaluate c.ping() without awaiting — sends the request and returns a + // Evaluate c.ping() without awaiting: sends the request and returns a // future. Collecting futures before any await dispatches all requests // concurrently. var futures = List.empty(); @@ -181,7 +181,7 @@ pub async fn parallel_calls(n: u64) -> u64 { let callee = CALLEE.with(|c| c.borrow().unwrap()); // Build all futures before awaiting any of them. All requests are - // dispatched when join_all polls each future — all fire before any + // dispatched when join_all polls each future: all fire before any // response is awaited. // Box::pin erases the lifetime parameters so futures can be collected // into a homogeneous Vec. @@ -204,9 +204,9 @@ The full working example is available in [`dfinity/examples`](https://github.com ## In-flight call limit -The IC enforces a limit on the number of in-flight calls a canister can have outstanding to any other single canister — approximately 500 per canister pair. Dispatching more calls than this limit causes the excess to be rejected immediately. Sequential calls stay within the limit because only one call is in-flight at a time. Parallel calls can exceed it when `n` is large. +The IC enforces a limit on the number of in-flight calls a canister can have outstanding to any other single canister: approximately 500 per canister pair. Dispatching more calls than this limit causes the excess to be rejected immediately. Sequential calls stay within the limit because only one call is in-flight at a time. Parallel calls can exceed it when `n` is large. -If calls fail due to the in-flight limit, do not retry immediately — the limit will still be full right after the failure. Instead, retry from a [timer](../backends/timers.md) or heartbeat after a delay. +If calls fail due to the in-flight limit, do not retry immediately. The limit will still be full right after the failure. Instead, retry from a [timer](../backends/timers.md) or heartbeat after a delay. ## Handling partial failures @@ -245,7 +245,7 @@ Because each inter-canister call is a separate async boundary, a failure in one ## Composite queries -A **composite query** is a query method that can call other query and composite query methods. Unlike update calls, composite queries are read-only, run without consensus, and complete without going through the full consensus pipeline — making them far lower latency than update-based parallel calls. +A **composite query** is a query method that can call other query and composite query methods. Unlike update calls, composite queries are read-only, run without consensus, and complete without going through the full consensus pipeline: making them far lower latency than update-based parallel calls. Use composite queries when all the data you need can be read from query endpoints and you do not need to modify state. @@ -269,7 +269,7 @@ Use composite queries when all the data you need can be read from query endpoint ```motoko import Array "mo:core/Array"; -// Bucket canister — regular query +// Bucket canister: regular query persistent actor class Bucket(n : Nat, i : Nat) { // ...state omitted... @@ -280,7 +280,7 @@ persistent actor class Bucket(n : Nat, i : Nat) { }; }; -// Map canister — composite query calling into Bucket +// Map canister: composite query calling into Bucket persistent actor Map { let n = 4; type Bucket = actor { get : Nat -> async ?Text }; @@ -355,8 +355,8 @@ Parallel and composite calls carry the same atomicity properties as any inter-ca ## Next steps -- [Onchain calls](onchain-calls.md) — making basic inter-canister calls -- [Canister optimization](../canister-management/optimization.md) — profiling and improving throughput -- [Inter-canister call security](../security/inter-canister-calls.md) — atomicity, reentrancy, and call safety +- [Onchain calls](onchain-calls.md): making basic inter-canister calls +- [Canister optimization](../canister-management/optimization.md): profiling and improving throughput +- [Inter-canister call security](../security/inter-canister-calls.md): atomicity, reentrancy, and call safety -{/* Upstream: informed by dfinity/portal — docs/building-apps/interact-with-canisters/advanced-calls.mdx, docs/building-apps/interact-with-canisters/query-calls.mdx; dfinity/examples — motoko/parallel_calls, rust/parallel_calls, motoko/composite_query, rust/composite_query; dfinity/cdk-rs — ic-cdk/src/call.rs, ic-cdk/src/futures.rs */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/interact-with-canisters/advanced-calls.mdx, docs/building-apps/interact-with-canisters/query-calls.mdx; dfinity/examples) motoko/parallel_calls, rust/parallel_calls, motoko/composite_query, rust/composite_query; dfinity/cdk-rs: ic-cdk/src/call.rs, ic-cdk/src/futures.rs */} diff --git a/docs/guides/canister-management/cycles-management.mdx b/docs/guides/canister-management/cycles-management.mdx index ae97e824..bea6275a 100644 --- a/docs/guides/canister-management/cycles-management.mdx +++ b/docs/guides/canister-management/cycles-management.mdx @@ -7,7 +7,7 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -Canisters on ICP pay for compute and storage using **cycles**. Unlike gas on Ethereum, cycles are paid by the canister — not the caller. This is ICP's [reverse gas model](../../concepts/reverse-gas-model.md): developers fund their own canisters, and users interact for free. +Canisters on ICP pay for compute and storage using **cycles**. Unlike gas on Ethereum, cycles are paid by the canister: not the caller. This is ICP's [reverse gas model](../../concepts/reverse-gas-model.md): developers fund their own canisters, and users interact for free. This guide covers everything you need to manage cycles in production: acquiring them, monitoring balances, setting thresholds, and deploying to mainnet. @@ -16,16 +16,16 @@ This guide covers everything you need to manage cycles in production: acquiring Cycles are priced in XDR (Special Drawing Rights), a stable international reserve asset. **1 trillion cycles (1T) costs approximately 1 XDR**, which is roughly 1.3 USD as of 2025. This pricing is enforced by the Cycles Minting Canister (CMC) and updated via NNS governance proposals, so the cycle cost of a given operation stays stable even as ICP's price fluctuates. Canisters are charged continuously for: -- **Storage** — bytes held in stable memory and heap -- **Compute** — CPU cycles consumed by update and query calls -- **Messages** — ingress and inter-canister calls -- **Special operations** — HTTPS outcalls, threshold signatures, Bitcoin API calls +- **Storage**: bytes held in stable memory and heap +- **Compute**: CPU cycles consumed by update and query calls +- **Messages**: ingress and inter-canister calls +- **Special operations**: HTTPS outcalls, threshold signatures, Bitcoin API calls See [Cycles costs reference](../../reference/cycles-costs.md) for exact cost tables by subnet size. ### Local vs mainnet cycles -Local development uses fabricated cycles — canisters on a local network start with a large balance and never actually run out. Code that works locally can fail on mainnet if the canister is underfunded. Always test with realistic cycle amounts before deploying. +Local development uses fabricated cycles: canisters on a local network start with a large balance and never actually run out. Code that works locally can fail on mainnet if the canister is underfunded. Always test with realistic cycle amounts before deploying. ## Acquiring cycles @@ -40,7 +40,7 @@ icp identity principal # Output: xxxxx-xxxxx-xxxxx-xxxxx-xxx ``` -Save your seed phrase — it is shown only once. Without it, you permanently lose access to the identity and any funds it controls. +Save your seed phrase: it is shown only once. Without it, you permanently lose access to the identity and any funds it controls. ### Step 2: Get ICP tokens @@ -133,7 +133,7 @@ fn get_balance() -> Nat { ## Topping up canisters -Anyone can top up any canister — you do not need to be its controller. +Anyone can top up any canister: you do not need to be its controller. ```bash # Top up by canister name (in your project environment) @@ -225,7 +225,7 @@ See [Canister settings](settings.md) for all available settings and their syntax **When a canister is frozen:** - Update calls return an error immediately - Query calls still succeed (read-only) -- The canister is not deleted yet — top it up to unfreeze +- The canister is not deleted yet: top it up to unfreeze **When a frozen canister runs out of cycles entirely:** - The canister is deleted along with all its state @@ -366,66 +366,66 @@ Each environment maintains separate canister IDs. Mainnet IDs are stored in `.ic Before deploying to mainnet, verify each of the following: -- **Fund canisters** — Top up all canisters with at least 2–5T cycles each before deploying -- **Set a freezing threshold** — Use 90 days (`7776000` seconds) or more for production -- **Add a backup controller** — Without a backup, losing your identity means losing the canister permanently: +- **Fund canisters**: Top up all canisters with at least 2–5T cycles each before deploying +- **Set a freezing threshold**: Use 90 days (`7776000` seconds) or more for production +- **Add a backup controller**: Without a backup, losing your identity means losing the canister permanently: ```bash icp canister settings update backend --add-controller BACKUP_PRINCIPAL -e ic ``` -- **Verify cycle balance after deploy** — Check immediately after `icp deploy -e ic`: +- **Verify cycle balance after deploy**: Check immediately after `icp deploy -e ic`: ```bash icp canister status backend -e ic ``` -- **Enable reproducible builds** — See [Reproducible builds](reproducible-builds.md) to ensure your WASM is verifiable -- **Review canister settings** — See [Canister settings](settings.md) for memory allocation, compute allocation, and access controls -- **Review security** — See [Canister upgrades security](../security/canister-upgrades.md) for safe upgrade patterns +- **Enable reproducible builds**: See [Reproducible builds](reproducible-builds.md) to ensure your WASM is verifiable +- **Review canister settings**: See [Canister settings](settings.md) for memory allocation, compute allocation, and access controls +- **Review security**: See [Canister upgrades security](../security/canister-upgrades.md) for safe upgrade patterns ## Monitoring cycle balances -There is no built-in alerting for low balances — monitoring is your responsibility. Options: +There is no built-in alerting for low balances: monitoring is your responsibility. Options: -**Manual monitoring** — Check regularly via icp-cli: +**Manual monitoring**: Check regularly via icp-cli: ```bash # Check all canisters in an environment at once icp canister status -e ic ``` -**Automated monitoring services** — Third-party services can monitor balances and alert or auto-top-up: -- [CycleOps](https://cycleops.dev) — Onchain monitoring with automated top-ups and email notifications -- [Canistergeek](https://cusyh-iyaaa-aaaah-qcpba-cai.raw.icp0.io/) — Cycles, memory, and log monitoring in one place +**Automated monitoring services**: Third-party services can monitor balances and alert or auto-top-up: +- [CycleOps](https://cycleops.dev): Onchain monitoring with automated top-ups and email notifications +- [Canistergeek](https://cusyh-iyaaa-aaaah-qcpba-cai.raw.icp0.io/): Cycles, memory, and log monitoring in one place **Automated top-up libraries:** -- Rust: [canfund](https://github.com/dfinity/canfund) — DFINITY-maintained library for automated canister funding -- Motoko: [cycles-manager](https://github.com/CycleOperators/cycles-manager) — Permissioned multi-canister cycles management +- Rust: [canfund](https://github.com/dfinity/canfund): DFINITY-maintained library for automated canister funding +- Motoko: [cycles-manager](https://github.com/CycleOperators/cycles-manager): Permissioned multi-canister cycles management ## Common mistakes -**Sending cycles to the wrong canister** — Cycles transferred to the wrong principal cannot be recovered. Double-check canister IDs before topping up. +**Sending cycles to the wrong canister**: Cycles transferred to the wrong principal cannot be recovered. Double-check canister IDs before topping up. -**Using the wrong flag (`-n` vs `-e`)** — Use `-e ic` for canister operations by name; use `-n ic` for token/cycles operations and canister IDs: +**Using the wrong flag (`-n` vs `-e`)**: Use `-e ic` for canister operations by name; use `-n ic` for token/cycles operations and canister IDs: ```bash # Correct icp canister top-up backend --amount 1T -e ic icp cycles balance -n ic -# Incorrect (fails — canister name requires -e) +# Incorrect (fails: canister name requires -e) icp canister top-up backend --amount 1T -n ic ``` -**Forgetting to add a backup controller** — Your identity is the only controller by default. If you lose access to it, the canister cannot be managed, upgraded, or deleted. +**Forgetting to add a backup controller**: Your identity is the only controller by default. If you lose access to it, the canister cannot be managed, upgraded, or deleted. -**Confusing local and mainnet cycles** — Local deployments use fabricated cycles and never freeze. Test with realistic amounts on a staging environment before going to production. +**Confusing local and mainnet cycles**: Local deployments use fabricated cycles and never freeze. Test with realistic amounts on a staging environment before going to production. -**Using `ExperimentalCycles` in Motoko** — In `mo:core`, the module is `Cycles`, not `ExperimentalCycles`. `import ExperimentalCycles "mo:base/ExperimentalCycles"` will fail with `mo:core`. Use `import Cycles "mo:core/Cycles"`. +**Using `ExperimentalCycles` in Motoko**: In `mo:core`, the module is `Cycles`, not `ExperimentalCycles`. `import ExperimentalCycles "mo:base/ExperimentalCycles"` will fail with `mo:core`. Use `import Cycles "mo:core/Cycles"`. ## Next steps -- [Canister settings](settings.md) — Freezing threshold, memory allocation, compute allocation -- [Canister lifecycle](lifecycle.md) — Create, install, upgrade, and delete canisters -- [Cycles costs reference](../../reference/cycles-costs.md) — Exact cost tables per operation -- [Reverse gas model](../../concepts/reverse-gas-model.md) — Why canisters pay for execution -- [Reproducible builds](reproducible-builds.md) — Verify your WASM is trustworthy before deploying -- [icp-cli docs](https://cli.internetcomputer.org/) — Full command reference +- [Canister settings](settings.md): Freezing threshold, memory allocation, compute allocation +- [Canister lifecycle](lifecycle.md): Create, install, upgrade, and delete canisters +- [Cycles costs reference](../../reference/cycles-costs.md): Exact cost tables per operation +- [Reverse gas model](../../concepts/reverse-gas-model.md): Why canisters pay for execution +- [Reproducible builds](reproducible-builds.md): Verify your WASM is trustworthy before deploying +- [icp-cli docs](https://cli.internetcomputer.org/): Full command reference -{/* Upstream: informed by dfinity/portal — docs/building-apps/canister-management/topping-up.mdx, docs/building-apps/getting-started/tokens-and-cycles.mdx; dfinity/icp-cli — docs/guides/deploying-to-mainnet.md, docs/guides/tokens-and-cycles.md, docs/guides/managing-environments.md; dfinity/icskills — skills/cycles-management/SKILL.md */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/canister-management/topping-up.mdx, docs/building-apps/getting-started/tokens-and-cycles.mdx; dfinity/icp-cli) docs/guides/deploying-to-mainnet.md, docs/guides/tokens-and-cycles.md, docs/guides/managing-environments.md; dfinity/icskills: skills/cycles-management/SKILL.md */} diff --git a/docs/guides/canister-management/large-wasm.md b/docs/guides/canister-management/large-wasm.md index 93326576..70d1314e 100644 --- a/docs/guides/canister-management/large-wasm.md +++ b/docs/guides/canister-management/large-wasm.md @@ -13,10 +13,10 @@ This guide covers both approaches, explains Wasm64 for canisters that need exten A compiled Wasm binary grows for several reasons: -- **Dense dependency trees** — Rust canisters that pull in many crates accumulate dead code that the compiler cannot always eliminate. -- **Embedded data** — ML model weights, large lookup tables, or static assets compiled into the binary. -- **Complex business logic** — feature-rich canisters with many update and query methods. -- **Debug symbols** — by default, Rust release builds include name sections and other debug metadata. +- **Dense dependency trees**: Rust canisters that pull in many crates accumulate dead code that the compiler cannot always eliminate. +- **Embedded data**: ML model weights, large lookup tables, or static assets compiled into the binary. +- **Complex business logic**: feature-rich canisters with many update and query methods. +- **Debug symbols**: by default, Rust release builds include name sections and other debug metadata. Before reaching for the chunk store, consider whether [canister optimization](optimization.md) can reduce the binary enough to fit under 2 MiB. @@ -69,14 +69,14 @@ When compression alone is not enough, the Wasm chunk store lets you upload modul ### How the chunk store works -1. **Upload chunks** — Call `upload_chunk` on the management canister to store up to 1 MiB chunks in the target canister's chunk store. Each call returns the SHA-256 hash of the stored chunk. -2. **Assemble and install** — Call `install_chunked_code` with the ordered list of chunk hashes. The system concatenates the chunks, verifies the aggregate hash matches `wasm_module_hash`, and installs the result as if you had called `install_code` directly. +1. **Upload chunks**: Call `upload_chunk` on the management canister to store up to 1 MiB chunks in the target canister's chunk store. Each call returns the SHA-256 hash of the stored chunk. +2. **Assemble and install**: Call `install_chunked_code` with the ordered list of chunk hashes. The system concatenates the chunks, verifies the aggregate hash matches `wasm_module_hash`, and installs the result as if you had called `install_code` directly. -The chunk store is bounded: each chunk is at most 1 MiB, and there is a maximum number of chunks per store (`CHUNK_STORE_SIZE`, defined in the IC interface spec — see the [management canister reference](../../reference/management-canister.md) for the exact value). You can inspect stored chunks with `stored_chunks` and clear the store with `clear_chunk_store`. +The chunk store is bounded: each chunk is at most 1 MiB, and there is a maximum number of chunks per store (`CHUNK_STORE_SIZE`, defined in the IC interface spec: see the [management canister reference](../../reference/management-canister.md) for the exact value). You can inspect stored chunks with `stored_chunks` and clear the store with `clear_chunk_store`. ### icp-cli handles this automatically -When you run `icp deploy` or `icp canister install` with a Wasm module larger than 2 MiB, icp-cli automatically uses the chunk store — no configuration required. The tool splits the module, uploads each chunk, and calls `install_chunked_code` behind the scenes. +When you run `icp deploy` or `icp canister install` with a Wasm module larger than 2 MiB, icp-cli automatically uses the chunk store. No configuration required. The tool splits the module, uploads each chunk, and calls `install_chunked_code` behind the scenes. ```bash icp deploy @@ -84,7 +84,7 @@ icp deploy ### Combining compression with the chunk store -You can combine gzip compression with the chunk store. A compressed module that is still larger than 2 MiB will still be split into chunks, but fewer chunks are needed — which means fewer upload calls and lower cycle costs. Enable both `shrink` and `compress` in your recipe, and let icp-cli decide whether chunking is needed. +You can combine gzip compression with the chunk store. A compressed module that is still larger than 2 MiB will still be split into chunks, but fewer chunks are needed: which means fewer upload calls and lower cycle costs. Enable both `shrink` and `compress` in your recipe, and let icp-cli decide whether chunking is needed. ### Cycle costs @@ -92,7 +92,7 @@ Storing each chunk costs cycles proportional to 1 MiB of storage (even if the ch ## Wasm64: 64-bit memory addressing -Standard ICP canisters use the `wasm32-unknown-unknown` target, which limits addressable memory to 4 GiB. For canisters that need more — for example, those holding large in-memory datasets or running inference on large models — ICP supports the `wasm64-unknown-unknown` target with up to 6 GiB of addressable heap memory (an ICP platform limit). +Standard ICP canisters use the `wasm32-unknown-unknown` target, which limits addressable memory to 4 GiB. For canisters that need more (for example, those holding large in-memory datasets or running inference on large models) ICP supports the `wasm64-unknown-unknown` target with up to 6 GiB of addressable heap memory (an ICP platform limit). Wasm64 is a separate concern from the chunk store. You might use one, the other, or both: the chunk store addresses the 2 MiB upload limit, while Wasm64 addresses the runtime memory limit. @@ -134,7 +134,7 @@ canisters: - ic-wasm "$ICP_WASM_OUTPUT_PATH" -o "${ICP_WASM_OUTPUT_PATH}" metadata "candid:service" -f 'backend/backend.did' -v public --keep-name-section ``` -The canister code itself does not require changes — the same Rust CDK code works on both `wasm32` and `wasm64`: +The canister code itself does not require changes. The same Rust CDK code works on both `wasm32` and `wasm64`: ```rust #[ic_cdk::query] @@ -174,12 +174,12 @@ SIMD is available on every ICP node and does not require any special canister co SIMD provides the largest gains for workloads with regular, data-parallel structure: -- **AI/ML inference** — matrix multiplications, activation functions, convolutions -- **Image processing** — pixel transforms, filtering, encoding/decoding -- **Cryptographic operations** — hash computation, field arithmetic -- **Scientific computing** — numerical simulations, signal processing +- **AI/ML inference**: matrix multiplications, activation functions, convolutions +- **Image processing**: pixel transforms, filtering, encoding/decoding +- **Cryptographic operations**: hash computation, field arithmetic +- **Scientific computing**: numerical simulations, signal processing -For "classical" canister operations — reward distribution, token accounting, query logic — the gains are smaller but still measurable. +For "classical" canister operations: reward distribution, token accounting, query logic. The gains are smaller but still measurable. ### Loop auto-vectorization @@ -232,23 +232,23 @@ Compare instruction counts with and without SIMD to measure the speedup. Lower i ## Troubleshooting -**"Wasm module too large" error during install** — The module exceeds 2 MiB. Verify that icp-cli is up to date (automatic chunk store support was added in v0.2.x). If using a manual install flow, switch to the `install_chunked_code` management canister API. +**"Wasm module too large" error during install**: The module exceeds 2 MiB. Verify that icp-cli is up to date (automatic chunk store support was added in v0.2.x). If using a manual install flow, switch to the `install_chunked_code` management canister API. -**"Wasm chunk store error" during install** — The canister may lack sufficient cycles to store chunks (each 1 MiB chunk incurs a storage cost). Top up the canister's cycles balance before retrying. If chunks from a previous failed attempt are occupying the store, call `clear_chunk_store` first. +**"Wasm chunk store error" during install**: The canister may lack sufficient cycles to store chunks (each 1 MiB chunk incurs a storage cost). Top up the canister's cycles balance before retrying. If chunks from a previous failed attempt are occupying the store, call `clear_chunk_store` first. -**Wasm64 build fails with missing target** — The `nightly` toolchain and `rust-src` component must both be installed. Run: +**Wasm64 build fails with missing target**: The `nightly` toolchain and `rust-src` component must both be installed. Run: ```bash rustup toolchain install nightly rustup component add rust-src --toolchain nightly ``` -**SIMD instructions have no measurable effect** — Some loops cannot be auto-vectorized. Check that the loop body is tight, operates on a contiguous slice, and does not contain branches or function calls that prevent vectorization. Profile with `ic_cdk::api::instruction_counter` to confirm the function is a bottleneck before investing in SIMD intrinsics. +**SIMD instructions have no measurable effect**: Some loops cannot be auto-vectorized. Check that the loop body is tight, operates on a contiguous slice, and does not contain branches or function calls that prevent vectorization. Profile with `ic_cdk::api::instruction_counter` to confirm the function is a bottleneck before investing in SIMD intrinsics. ## Next steps -- [Canister optimization](optimization.md) — reduce Wasm size before reaching for the chunk store -- [Execution errors reference](../../reference/execution-errors.md) — Wasm size and chunk store error codes -- [Canister lifecycle](lifecycle.md) — deployment modes and install options +- [Canister optimization](optimization.md): reduce Wasm size before reaching for the chunk store +- [Execution errors reference](../../reference/execution-errors.md): Wasm size and chunk store error codes +- [Canister lifecycle](lifecycle.md): deployment modes and install options diff --git a/docs/guides/canister-management/lifecycle.mdx b/docs/guides/canister-management/lifecycle.mdx index e471b335..04cb2841 100644 --- a/docs/guides/canister-management/lifecycle.mdx +++ b/docs/guides/canister-management/lifecycle.mdx @@ -15,12 +15,12 @@ This guide walks through each phase with practical icp-cli commands and explains A canister progresses through these phases: -1. **Create** — register an empty canister on the network, receiving a unique canister ID -2. **Install** — load compiled WebAssembly code into the canister -3. **Run** — the canister processes messages and serves requests -4. **Upgrade** — replace the code while preserving stable state -5. **Stop** — pause message processing (required before deletion) -6. **Delete** — permanently remove the canister and reclaim cycles +1. **Create**: register an empty canister on the network, receiving a unique canister ID +2. **Install**: load compiled WebAssembly code into the canister +3. **Run**: the canister processes messages and serves requests +4. **Upgrade**: replace the code while preserving stable state +5. **Stop**: pause message processing (required before deletion) +6. **Delete**: permanently remove the canister and reclaim cycles In practice, `icp deploy` handles steps 1–3 automatically. You interact with individual steps when you need finer control. @@ -46,7 +46,7 @@ When you run `icp deploy`, canister creation happens automatically for any canis ### Build -Building compiles your source code to a WebAssembly (Wasm) module. icp-cli delegates to the language toolchain — Cargo for Rust, moc for Motoko: +Building compiles your source code to a WebAssembly (Wasm) module. icp-cli delegates to the language toolchain: Cargo for Rust, moc for Motoko: ```bash icp build @@ -86,10 +86,10 @@ icp deploy -e ic # deploy to mainnet What `icp deploy` does: -1. **Build** — compile all target canisters to Wasm -2. **Create** — create canisters on the network (if they don't already exist) -3. **Install or upgrade** — install code on new canisters, upgrade existing ones -4. **Sync** — run post-deployment steps (such as uploading frontend assets) +1. **Build**: compile all target canisters to Wasm +2. **Create**: create canisters on the network (if they don't already exist) +3. **Install or upgrade**: install code on new canisters, upgrade existing ones +4. **Sync**: run post-deployment steps (such as uploading frontend assets) ## Canister states @@ -160,7 +160,7 @@ When you run `icp deploy` on an existing canister, icp-cli automatically: Stopping before the upgrade prevents data inconsistencies from messages being processed during the code swap. -> **Note:** `--mode upgrade` is rarely needed explicitly — `auto` mode (the default) already upgrades existing canisters. Use `--mode upgrade` in CI pipelines where you want the command to fail if the canister doesn't already exist. +> **Note:** `--mode upgrade` is rarely needed explicitly: `auto` mode (the default) already upgrades existing canisters. Use `--mode upgrade` in CI pipelines where you want the command to fail if the canister doesn't already exist. ### Preserving state across upgrades @@ -184,7 +184,7 @@ persistent actor Counter { }; ``` -All `var` declarations in a `persistent actor` are automatically stable — they survive upgrades without any additional code. Use `transient var` for values that should reset on each upgrade (such as caches): +All `var` declarations in a `persistent actor` are automatically stable: they survive upgrades without any additional code. Use `transient var` for values that should reset on each upgrade (such as caches): ```motoko import Map "mo:core/Map"; @@ -195,7 +195,7 @@ persistent actor Cache { }; ``` -> **Tip:** `persistent actor` is the recommended pattern. Avoid `pre_upgrade`/`post_upgrade` hooks in Motoko when possible — if `pre_upgrade` traps, the canister becomes permanently non-upgradeable. +> **Tip:** `persistent actor` is the recommended pattern. Avoid `pre_upgrade`/`post_upgrade` hooks in Motoko when possible: if `pre_upgrade` traps, the canister becomes permanently non-upgradeable. @@ -282,23 +282,23 @@ Remaining cycles are refunded to the controller who made the delete request. Sometimes you need to move a canister to a different subnet. Common reasons include: -- **Wrong subnet** — the canister was deployed to an unintended subnet -- **Geographic requirements** — data residency rules require a specific region -- **Replication needs** — moving to a larger subnet for higher fault tolerance -- **Colocation** — consolidating canisters onto the same subnet for efficient inter-canister calls +- **Wrong subnet**: the canister was deployed to an unintended subnet +- **Geographic requirements**: data residency rules require a specific region +- **Replication needs**: moving to a larger subnet for higher fault tolerance +- **Colocation**: consolidating canisters onto the same subnet for efficient inter-canister calls There are two approaches, depending on whether you need to keep the canister ID: | Approach | State | Canister ID | When to use | |----------|-------|-------------|-------------| -| **Snapshot transfer** | Preserved | New ID | Default — simpler and safer | +| **Snapshot transfer** | Preserved | New ID | Default: simpler and safer | | **Full migration** | Preserved | Preserved | When the canister ID is load-bearing | Preserving the canister ID matters when: -- **Threshold signatures (tECDSA/tSchnorr)** — signing keys are cryptographically bound to the canister's principal. A new ID means losing access to derived keys and any assets they control on other blockchains. -- **VetKeys** — decryption keys are derived from the canister ID. A new ID makes previously encrypted data inaccessible. -- **External references** — other canisters, frontends, or off-chain systems reference the canister by ID. This includes Internet Identity sessions tied to a canister-ID-based domain. +- **Threshold signatures (tECDSA/tSchnorr)**: signing keys are cryptographically bound to the canister's principal. A new ID means losing access to derived keys and any assets they control on other blockchains. +- **VetKeys**: decryption keys are derived from the canister ID. A new ID makes previously encrypted data inaccessible. +- **External references**: other canisters, frontends, or off-chain systems reference the canister by ID. This includes Internet Identity sessions tied to a canister-ID-based domain. Both approaches use [canister snapshots](snapshots.md) to transfer state. For the complete step-by-step procedure, see the [icp-cli canister migration guide](https://github.com/dfinity/icp-cli/blob/main/docs/guides/canister-migration.md). @@ -370,7 +370,7 @@ For a complete canister factory example, see the [canister factory example](http ## Canister history -Every canister maintains a history of at least its most recent 20 changes — including creation, code installations, upgrades, reinstalls, and controller changes. Older entries may be dropped, but the 20 most recent are always retained. This is useful for security audits and verifying code integrity. +Every canister maintains a history of at least its most recent 20 changes: including creation, code installations, upgrades, reinstalls, and controller changes. Older entries may be dropped, but the 20 most recent are always retained. This is useful for security audits and verifying code integrity. ### Query history from Rust @@ -402,11 +402,11 @@ The status output includes the module hash and controller list. For full change ## Trapping and error handling -A **trap** is an unrecoverable error during WebAssembly execution — caused by panics, division by zero, out-of-bounds memory access, or explicit trap calls. When a canister traps: +A **trap** is an unrecoverable error during WebAssembly execution: caused by panics, division by zero, out-of-bounds memory access, or explicit trap calls. When a canister traps: - The current message execution ends with an error - All state changes from the current message are rolled back -- For inter-canister calls, only the callback's state changes roll back — state changes made before the `await` persist +- For inter-canister calls, only the callback's state changes roll back: state changes made before the `await` persist ### Traps during upgrades @@ -434,11 +434,11 @@ The IC decompresses the module automatically during installation. For strategies ## Next steps -- [Canister settings](settings.md) — configure controllers, memory allocation, and freezing thresholds -- [Cycles management](cycles-management.md) — fund canisters and monitor cycle consumption -- [Data persistence](../backends/data-persistence.md) — deep dive into stable memory and persistence strategies -- [Canister snapshots](snapshots.md) — create backups before risky upgrades -- [Upgrade safety](../security/canister-upgrades.md) — security considerations for safe upgrades -- [Testing strategies](../testing/strategies.md) — test lifecycle operations locally +- [Canister settings](settings.md): configure controllers, memory allocation, and freezing thresholds +- [Cycles management](cycles-management.md): fund canisters and monitor cycle consumption +- [Data persistence](../backends/data-persistence.md): deep dive into stable memory and persistence strategies +- [Canister snapshots](snapshots.md): create backups before risky upgrades +- [Upgrade safety](../security/canister-upgrades.md): security considerations for safe upgrades +- [Testing strategies](../testing/strategies.md): test lifecycle operations locally -{/* Upstream: informed by dfinity/portal — docs/building-apps/canister-management/, docs/building-apps/developing-canisters/ — dfinity/icp-cli — docs/concepts/build-deploy-sync.md, docs/guides/canister-migration.md */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/canister-management/, docs/building-apps/developing-canisters/) dfinity/icp-cli: docs/concepts/build-deploy-sync.md, docs/guides/canister-migration.md */} diff --git a/docs/guides/canister-management/logs.md b/docs/guides/canister-management/logs.md index fbf3b1c1..3f98b826 100644 --- a/docs/guides/canister-management/logs.md +++ b/docs/guides/canister-management/logs.md @@ -5,13 +5,13 @@ sidebar: order: 3 --- -Canister logs help you understand what your canister is doing at runtime, including during traps. The Internet Computer captures log output from update calls, timers, heartbeats, and lifecycle hooks — even when the canister traps mid-execution. Logs are retrievable by canister controllers and optionally by other principals. +Canister logs help you understand what your canister is doing at runtime, including during traps. The Internet Computer captures log output from update calls, timers, heartbeats, and lifecycle hooks: even when the canister traps mid-execution. Logs are retrievable by canister controllers and optionally by other principals. ## Writing log messages Both Rust and Motoko support printing messages to the canister log. -**Rust** — use `ic_cdk::println!`: +**Rust**: use `ic_cdk::println!`: ```rust use ic_cdk::{init, update}; @@ -30,7 +30,7 @@ fn process(value: u64) -> u64 { The `ic_cdk::println!` macro formats a string and writes it to the canister log on the IC. Outside of Wasm (for example in unit tests), it falls back to `std::println!`. -**Motoko** — use `Debug.print` from `mo:core/Debug`: +**Motoko**: use `Debug.print` from `mo:core/Debug`: ```motoko import Debug "mo:core/Debug"; @@ -189,7 +189,7 @@ Supported suffixes: `kb` (1,000 bytes), `kib` (1,024 bytes), `mb` (1,000,000 byt ## Backtrace debugging -When a canister traps, ICP records a **backtrace** — the function call stack at the point of the trap — and appends it to the canister logs. If the caller has [log access](#log-visibility), the backtrace also appears in the error response they receive. +When a canister traps, ICP records a **backtrace**: the function call stack at the point of the trap: and appends it to the canister logs. If the caller has [log access](#log-visibility), the backtrace also appears in the error response they receive. For example, if a Rust canister performs an out-of-bounds stable memory write: @@ -258,7 +258,7 @@ The statistics are cumulative since the canister was created. They are updated a -**Rust** — read query stats from `canister_status`: +**Rust**: read query stats from `canister_status`: ```rust use ic_cdk::{management_canister, update}; @@ -283,7 +283,7 @@ async fn print_query_stats() -> String { } ``` -**Motoko** — call `canister_status` on the management canister: +**Motoko**: call `canister_status` on the management canister: ```motoko import Principal "mo:core/Principal"; @@ -371,7 +371,7 @@ API BNs expose access logs over WebSocket. The URL format is: wss://{api_bn_domain}/logs/canister/{canister_id} ``` -For full coverage, connect to **all** API BNs — each node only streams the requests it handles, and traffic is distributed across nodes. +For full coverage, connect to **all** API BNs: each node only streams the requests it handles, and traffic is distributed across nodes. To discover the current list of API BN domains, fetch them from the IC's certified state using `agent-rs`: @@ -403,8 +403,8 @@ async fn main() -> Result<()> { ## Next steps -- [Canister lifecycle](lifecycle.md) — configure log visibility and memory limits when creating or deploying a canister -- [Testing strategies](../testing/strategies.md) — use canister logs as part of your debugging workflow -- [CLI reference](https://cli.internetcomputer.org/) — full documentation for `icp canister logs` and `icp canister settings update` +- [Canister lifecycle](lifecycle.md): configure log visibility and memory limits when creating or deploying a canister +- [Testing strategies](../testing/strategies.md): use canister logs as part of your debugging workflow +- [CLI reference](https://cli.internetcomputer.org/): full documentation for `icp canister logs` and `icp canister settings update` diff --git a/docs/guides/canister-management/optimization.md b/docs/guides/canister-management/optimization.md index 62d6efb1..5f889f26 100644 --- a/docs/guides/canister-management/optimization.md +++ b/docs/guides/canister-management/optimization.md @@ -9,18 +9,18 @@ Canister Wasm binaries compiled from Rust or Motoko are often larger than necess This guide covers the main tools and techniques available: -- **`ic-wasm shrink`** — strip unused functions and debug info from the compiled Wasm -- **Rust `Cargo.toml` profile settings** — link-time optimization and compiler tuning -- **Motoko GC configuration** — selecting the right garbage collector for your workload -- **WebAssembly SIMD** — accelerate compute-heavy workloads (Rust only) -- **Performance counters** — measure actual instruction usage to find bottlenecks -- **Low Wasm memory hook** — react before the canister runs out of Wasm memory +- **`ic-wasm shrink`**: strip unused functions and debug info from the compiled Wasm +- **Rust `Cargo.toml` profile settings**: link-time optimization and compiler tuning +- **Motoko GC configuration**: selecting the right garbage collector for your workload +- **WebAssembly SIMD**: accelerate compute-heavy workloads (Rust only) +- **Performance counters**: measure actual instruction usage to find bottlenecks +- **Low Wasm memory hook**: react before the canister runs out of Wasm memory ## Reducing binary size with `ic-wasm shrink` `ic-wasm` is included when you install `icp-cli`. Its `shrink` command removes unreachable functions, dead code, and debug sections from your compiled Wasm module. -**Using the official Rust recipe** — enable the `shrink` option in `icp.yaml`: +**Using the official Rust recipe**: enable the `shrink` option in `icp.yaml`: ```yaml canisters: @@ -44,7 +44,7 @@ canisters: shrink: true ``` -**Running `ic-wasm` directly** — if you have a custom build pipeline: +**Running `ic-wasm` directly**: if you have a custom build pipeline: ```bash ic-wasm backend.wasm -o backend.wasm shrink --keep-name-section @@ -60,7 +60,7 @@ In addition to `ic-wasm`, Rust offers compiler-level optimizations through the ` ```toml [profile.release] -lto = true # Link-time optimization — merges crates for better dead code removal +lto = true # Link-time optimization: merges crates for better dead code removal opt-level = 3 # Maximum optimization (default is 3 for release) codegen-units = 1 # Single codegen unit enables more aggressive cross-function optimization ``` @@ -71,7 +71,7 @@ For binary size over speed, use `opt-level = "z"` (optimize for size, disabling ## Motoko: Garbage collector options -The Motoko compiler uses the **incremental GC** by default starting with Motoko 0.15 and enhanced orthogonal persistence. You cannot choose a different GC when enhanced orthogonal persistence is active — the GC is fixed. +The Motoko compiler uses the **incremental GC** by default starting with Motoko 0.15 and enhanced orthogonal persistence. You cannot choose a different GC when enhanced orthogonal persistence is active. The GC is fixed. For projects using legacy persistence (without enhanced orthogonal persistence), you can select an alternative GC by passing compiler arguments through the Motoko recipe: @@ -85,7 +85,7 @@ canisters: args: --incremental-gc ``` -> **New projects:** If you are using enhanced orthogonal persistence (the current default), no `args` configuration is needed — the incremental GC is already selected automatically. The `args` field only becomes relevant when selecting an alternative GC under `--legacy-persistence`. +> **New projects:** If you are using enhanced orthogonal persistence (the current default), no `args` configuration is needed. The incremental GC is already selected automatically. The `args` field only becomes relevant when selecting an alternative GC under `--legacy-persistence`. The incremental GC is designed to scale for large heap sizes and is more efficient on average than the older copying or compacting collectors. It is the recommended choice for most workloads. @@ -95,7 +95,7 @@ For legacy-persistence projects: if `--legacy-persistence` is specified, you can ICP supports WebAssembly SIMD (Single Instruction, Multiple Data) instructions, which allow a single instruction to operate on multiple data values simultaneously. This is useful for numeric-heavy workloads like image processing, matrix multiplication, and machine learning inference. -SIMD is a **Rust-only feature** — the Motoko compiler does not expose SIMD controls. +SIMD is a **Rust-only feature**: the Motoko compiler does not expose SIMD controls. ### Enabling SIMD globally @@ -129,9 +129,9 @@ The `dfinity/examples` repository contains a complete SIMD benchmarking example Before optimizing, measure where cycles are actually spent. ICP exposes two performance counters via `ic_cdk::api`: -**`instruction_counter()`** — instructions executed since the last entry point. Resets at each `await` point (each `await` creates a new entry point). +**`instruction_counter()`**: instructions executed since the last entry point. Resets at each `await` point (each `await` creates a new entry point). -**`call_context_instruction_counter()`** — cumulative instructions across the entire call context, including across `await` points. Use this to measure the total cost of an async flow. +**`call_context_instruction_counter()`**: cumulative instructions across the entire call context, including across `await` points. Use this to measure the total cost of an async flow. ```rust use ic_cdk::api::{instruction_counter, call_context_instruction_counter}; @@ -192,7 +192,7 @@ use ic_cdk::on_low_wasm_memory; fn handle_low_memory() { // Shed cached state, emit a log entry, or set a flag // to reject new requests until memory is reclaimed - ic_cdk::println!("Low Wasm memory — shedding cache"); + ic_cdk::println!("Low Wasm memory: shedding cache"); with_state_mut(|s| { s.cache.clear(); s.low_memory_triggered = true; @@ -217,22 +217,22 @@ persistent actor { The `lowmemory` hook is an `async*` function, so it can perform async operations. -A complete Rust example is available at `rust/low_wasm_memory` in `dfinity/examples`. It demonstrates the full lifecycle: setting memory limits via canister settings, watching memory grow through the heartbeat, and observing the hook fire. A `motoko/low_wasm_memory` example also exists, but note that it currently uses the legacy `mo:base` library — use the inline snippet above as the reference for `mo:core`-compatible code. +A complete Rust example is available at `rust/low_wasm_memory` in `dfinity/examples`. It demonstrates the full lifecycle: setting memory limits via canister settings, watching memory grow through the heartbeat, and observing the hook fire. A `motoko/low_wasm_memory` example also exists, but note that it currently uses the legacy `mo:base` library: use the inline snippet above as the reference for `mo:core`-compatible code. ## Combining techniques Most production canisters benefit from combining several techniques: -1. **Always enable `shrink`** in your recipe — it is low-effort and typically reduces binary size by removing dead code. Pairs well with `lto = true` in Rust. +1. **Always enable `shrink`** in your recipe: it is low-effort and typically reduces binary size by removing dead code. Pairs well with `lto = true` in Rust. 2. **Set `wasm_memory_limit` and `wasm_memory_threshold`** on any canister that holds large amounts of heap data, and implement the low memory hook. -3. **Profile before optimizing** — use `instruction_counter()` in a staging environment to identify which endpoints are expensive before spending time on SIMD or algorithmic changes. -4. **Consider SIMD for ML/compute workloads** — if you are running inference, image processing, or signal processing in Rust, enabling `simd128` globally is often worth the build-time cost. +3. **Profile before optimizing**: use `instruction_counter()` in a staging environment to identify which endpoints are expensive before spending time on SIMD or algorithmic changes. +4. **Consider SIMD for ML/compute workloads**: if you are running inference, image processing, or signal processing in Rust, enabling `simd128` globally is often worth the build-time cost. ## Next steps -- [Large Wasm](large-wasm.md) — when binary size exceeds the upload limit -- [Cycles costs](../../reference/cycles-costs.md) — how Wasm size and instruction count map to cycle charges -- [Canister lifecycle](lifecycle.md) — how optimized builds integrate with the icp-cli deploy workflow +- [Large Wasm](large-wasm.md): when binary size exceeds the upload limit +- [Cycles costs](../../reference/cycles-costs.md): how Wasm size and instruction count map to cycle charges +- [Canister lifecycle](lifecycle.md): how optimized builds integrate with the icp-cli deploy workflow diff --git a/docs/guides/canister-management/reproducible-builds.md b/docs/guides/canister-management/reproducible-builds.md index 3153ba6d..9052d044 100644 --- a/docs/guides/canister-management/reproducible-builds.md +++ b/docs/guides/canister-management/reproducible-builds.md @@ -5,13 +5,13 @@ sidebar: order: 6 --- -A reproducible build produces the same WebAssembly module byte-for-byte whenever anyone compiles the same source code in the same documented environment. For canisters, this matters because ICP lets anyone query a canister's Wasm hash — but only a reproducible build makes that hash meaningful. Without it, a published hash cannot be linked to readable source code. +A reproducible build produces the same WebAssembly module byte-for-byte whenever anyone compiles the same source code in the same documented environment. For canisters, this matters because ICP lets anyone query a canister's Wasm hash: but only a reproducible build makes that hash meaningful. Without it, a published hash cannot be linked to readable source code. This guide explains how to structure your canister project for reproducibility, how to use Docker to standardize build environments, and how users can verify a deployed canister using `icp canister status`. ## Why reproducibility matters -ICP does not expose a canister's Wasm module directly — only its SHA-256 hash. This is a deliberate privacy measure: developers may want to keep source code private. However, if you do publish your source code, a reproducible build lets users confirm that the hash matches what they compiled themselves. +ICP does not expose a canister's Wasm module directly. Only its SHA-256 hash. This is a deliberate privacy measure: developers may want to keep source code private. However, if you do publish your source code, a reproducible build lets users confirm that the hash matches what they compiled themselves. This is most important for canisters that hold other users' funds or execute critical operations. Before interacting with such a canister, a cautious user can: @@ -31,7 +31,7 @@ Use `icp canister status` with the canister ID to retrieve the current module ha icp canister status rdmx6-jaaaa-aaaaa-aaadq-cai -n ic ``` -The output includes the module hash alongside cycle balance, controller list, and other status fields. Anyone can query this hash — no controller access is required. Use `-p` / `--public` to explicitly read only public information from the state tree: +The output includes the module hash alongside cycle balance, controller list, and other status fields. Anyone can query this hash. No controller access is required. Use `-p` / `--public` to explicitly read only public information from the state tree: ```bash icp canister status rdmx6-jaaaa-aaaaa-aaadq-cai -n ic --public @@ -51,9 +51,9 @@ If the canister's controller list is empty, or the only controller is a blackhol To allow users to reproduce your build, you must publish: -1. **The exact source code** used to build the deployed Wasm — typically a tagged commit in a public repository, or an archived source package -2. **A complete description of the build environment** — operating system, compiler versions, toolchain versions, and any relevant environment variables -3. **Deterministic build instructions** — a script or `Dockerfile` that produces the same output when run in the described environment +1. **The exact source code** used to build the deployed Wasm: typically a tagged commit in a public repository, or an archived source package +2. **A complete description of the build environment**: operating system, compiler versions, toolchain versions, and any relevant environment variables +3. **Deterministic build instructions**: a script or `Dockerfile` that produces the same output when run in the described environment ### Pinning dependencies @@ -103,7 +103,7 @@ cargo build --locked --target wasm32-unknown-unknown --release Docker is the standard approach for distributing reproducible build environments. A `Dockerfile` pins the operating system and toolchain versions so anyone building your canister works in an identical environment. :::caution -Pin your Docker builds to `x86_64`. Builds are generally not reproducible across CPU architectures. If you develop on Apple Silicon (M-series), use [lima](https://github.com/lima-vm/lima) to run an x86_64 Linux VM — lima is more stable than Docker Desktop or Docker Machine for this use case on macOS. +Pin your Docker builds to `x86_64`. Builds are generally not reproducible across CPU architectures. If you develop on Apple Silicon (M-series), use [lima](https://github.com/lima-vm/lima) to run an x86_64 Linux VM: lima is more stable than Docker Desktop or Docker Machine for this use case on macOS. ::: ### Example Dockerfile for a Rust canister @@ -147,9 +147,9 @@ WORKDIR /canister Key design choices in this `Dockerfile`: -- **Official base image** — starting from `ubuntu:22.04` gives users a trusted, unmodified foundation -- **Direct installation, not package managers** — package managers do not pin transitive dependencies reliably; installing tools directly with fixed version numbers ensures everyone gets the same binary -- **`ic-wasm` included** — required for Wasm shrinking, which strips debug info and reduces file size +- **Official base image**: starting from `ubuntu:22.04` gives users a trusted, unmodified foundation +- **Direct installation, not package managers**: package managers do not pin transitive dependencies reliably; installing tools directly with fixed version numbers ensures everyone gets the same binary +- **`ic-wasm` included**: required for Wasm shrinking, which strips debug info and reduces file size Place this `Dockerfile` in your canister project directory. Build the container image: @@ -199,14 +199,14 @@ Compute the hash for your Wasm file with `sha256sum`: sha256sum dist/my-canister.wasm ``` -The recipe will fail with a hash mismatch error if the Wasm file does not match the declared `sha256`. This makes it safe to check the hash into version control alongside the path — users and CI pipelines can reproduce the deployment exactly. +The recipe will fail with a hash mismatch error if the Wasm file does not match the declared `sha256`. This makes it safe to check the hash into version control alongside the path: users and CI pipelines can reproduce the deployment exactly. Optional recipe parameters: | Parameter | Type | Default | Description | |-----------|------|---------|-------------| | `path` | string | required | Local path to the prebuilt Wasm | -| `sha256` | string | — | SHA-256 hash for integrity verification | +| `sha256` | string | - | SHA-256 hash for integrity verification | | `shrink` | boolean | `false` | Remove unused functions and debug info | | `compress` | boolean | `false` | Gzip compress the Wasm | | `metadata` | array | `[]` | Custom metadata key-value pairs to inject | @@ -261,8 +261,8 @@ Maintaining a reproducible build over years requires more than getting it workin ## Next steps -- [Canister lifecycle](lifecycle.md) — deploy and upgrade workflow -- [Canister settings](settings.md) — configure controllers and make canisters immutable -- [Cycles management](cycles-management.md) — top up canisters before long-term deployment +- [Canister lifecycle](lifecycle.md): deploy and upgrade workflow +- [Canister settings](settings.md): configure controllers and make canisters immutable +- [Cycles management](cycles-management.md): top up canisters before long-term deployment diff --git a/docs/guides/canister-management/settings.mdx b/docs/guides/canister-management/settings.mdx index 23d61ebe..79a06e97 100644 --- a/docs/guides/canister-management/settings.mdx +++ b/docs/guides/canister-management/settings.mdx @@ -94,7 +94,7 @@ Like compute allocation, memory allocation incurs a rental fee based on time and ### Freezing threshold -The minimum time the canister should be able to survive on its current cycle balance. Survival is estimated based on the canister's memory usage and the subnet's current storage cost — a canister with large stable memory freezes sooner than execution rate alone would suggest. If the balance drops below what is needed to sustain this duration, the canister freezes. +The minimum time the canister should be able to survive on its current cycle balance. Survival is estimated based on the canister's memory usage and the subnet's current storage cost: a canister with large stable memory freezes sooner than execution rate alone would suggest. If the balance drops below what is needed to sustain this duration, the canister freezes. | Property | Value | |----------|-------| @@ -197,7 +197,7 @@ settings: ### Log memory limit -{/* Needs human verification: log_memory_limit is exposed by icp-cli but is absent from the canonical ic.did — verify whether this is a management canister setting or an icp-cli layer setting */} +{/* Needs human verification: log_memory_limit is exposed by icp-cli but is absent from the canonical ic.did: verify whether this is a management canister setting or an icp-cli layer setting */} Maximum memory for storing canister logs. Oldest logs are purged when usage exceeds this value. @@ -229,7 +229,7 @@ Controls who can list and read canister snapshots through the management caniste | `allowed_viewers` | Specific principals can list and read snapshots | :::note -Configuring `snapshot_visibility` via `icp.yaml` or CLI flags is not yet supported in icp-cli. Set it programmatically via the management canister — see [Updating settings programmatically](#updating-settings-programmatically). +Configuring `snapshot_visibility` via `icp.yaml` or CLI flags is not yet supported in icp-cli. Set it programmatically via the management canister: see [Updating settings programmatically](#updating-settings-programmatically). ::: ### Environment variables @@ -346,7 +346,7 @@ import Principal "mo:core/Principal"; persistent actor Self { - // All fields are optional — only include those you want to change + // All fields are optional: only include those you want to change type CanisterSettings = { controllers : ?[Principal]; compute_allocation : ?Nat; @@ -457,4 +457,4 @@ How you configure controllers depends on the trust model for your canister: - [Cycles costs reference](../../reference/cycles-costs.md) -- Pricing for compute and memory allocation. - [Management canister reference](../../reference/management-canister.md) -- Full interface specification. -{/* Upstream: informed by dfinity/portal — docs/building-apps/canister-management/settings.mdx, docs/building-apps/canister-management/control.mdx, docs/references/_attachments/ic.did (snapshot_visibility field); dfinity/icp-cli — docs/reference/canister-settings.md, docs/reference/cli.md; dfinity/icskills — skills/cycles-management/SKILL.md */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/canister-management/settings.mdx, docs/building-apps/canister-management/control.mdx, docs/references/_attachments/ic.did (snapshot_visibility field); dfinity/icp-cli) docs/reference/canister-settings.md, docs/reference/cli.md; dfinity/icskills: skills/cycles-management/SKILL.md */} diff --git a/docs/guides/canister-management/snapshots.md b/docs/guides/canister-management/snapshots.md index 824c3e2d..1af0dfde 100644 --- a/docs/guides/canister-management/snapshots.md +++ b/docs/guides/canister-management/snapshots.md @@ -5,7 +5,7 @@ sidebar: order: 5 --- -Canister snapshots capture the full state of a canister — its compiled Wasm module, Wasm heap memory, stable memory, certified variables, and chunk store — at a specific point in time. You can restore a canister to a snapshot to roll back after a failed upgrade, recover from data corruption, or transfer state to another canister. +Canister snapshots capture the full state of a canister (its compiled Wasm module, Wasm heap memory, stable memory, certified variables, and chunk store) at a specific point in time. You can restore a canister to a snapshot to roll back after a failed upgrade, recover from data corruption, or transfer state to another canister. Only controllers of a canister can create or restore snapshots. Up to 10 snapshots per canister can be stored on-chain at a time. @@ -13,9 +13,9 @@ Only controllers of a canister can create or restore snapshots. Up to 10 snapsho Snapshots are useful in three situations: -- **Pre-upgrade backup** — Take a snapshot before deploying an upgrade. If the upgrade introduces a bug or breaks state, restore the snapshot to roll back instantly. -- **Disaster recovery** — If a canister traps with an unrecoverable error and you have a snapshot, you can restore the canister to its last known-good state. -- **State transfer** — Download a snapshot to disk, then upload it to another canister. This is the foundation of canister migration between subnets. +- **Pre-upgrade backup**: Take a snapshot before deploying an upgrade. If the upgrade introduces a bug or breaks state, restore the snapshot to roll back instantly. +- **Disaster recovery**: If a canister traps with an unrecoverable error and you have a snapshot, you can restore the canister to its last known-good state. +- **State transfer**: Download a snapshot to disk, then upload it to another canister. This is the foundation of canister migration between subnets. ## Creating a snapshot @@ -147,7 +147,7 @@ icp canister start my-canister -e ic ## Example: transferring state between canisters -Download a snapshot from a source canister and upload it to a target canister. This download-then-upload workflow is the foundation of canister migration between subnets — direct restore (`load_canister_snapshot`) only works within the same subnet, so cross-subnet transfer requires downloading the snapshot locally first and uploading it to the target. +Download a snapshot from a source canister and upload it to a target canister. This download-then-upload workflow is the foundation of canister migration between subnets: direct restore (`load_canister_snapshot`) only works within the same subnet, so cross-subnet transfer requires downloading the snapshot locally first and uploading it to the target. All snapshot commands accept either canister names (with `-e`) or canister IDs (with `-n`). Use `-n ic` when the target canister is not part of your project. @@ -178,8 +178,8 @@ icp canister status my-canister -e ic ## Next steps -- [Canister lifecycle](lifecycle.md) — Understand how snapshots fit into the upgrade workflow -- [Canister upgrades security](../security/canister-upgrades.md) — Security considerations when using snapshot-based rollbacks -- [icp-cli canister snapshot reference](https://cli.internetcomputer.org/) — Full command reference for all snapshot subcommands +- [Canister lifecycle](lifecycle.md): Understand how snapshots fit into the upgrade workflow +- [Canister upgrades security](../security/canister-upgrades.md): Security considerations when using snapshot-based rollbacks +- [icp-cli canister snapshot reference](https://cli.internetcomputer.org/): Full command reference for all snapshot subcommands diff --git a/docs/guides/canister-management/subnet-selection.md b/docs/guides/canister-management/subnet-selection.md index d50a5ff2..2e1dbfc3 100644 --- a/docs/guides/canister-management/subnet-selection.md +++ b/docs/guides/canister-management/subnet-selection.md @@ -5,16 +5,16 @@ sidebar: order: 8 --- -The Internet Computer is composed of independent subnets — each a blockchain that hosts canisters and runs its own consensus. By default, icp-cli selects a subnet automatically when you deploy. This guide explains when and how to target a specific subnet. +The Internet Computer is composed of independent subnets: each a blockchain that hosts canisters and runs its own consensus. By default, icp-cli selects a subnet automatically when you deploy. This guide explains when and how to target a specific subnet. ## When to choose a subnet Default subnet selection works for most projects. Consider targeting a specific subnet when you have: -- **Data residency requirements** — The European subnet ensures all nodes are located within Europe, which can support GDPR-aligned infrastructure for applications with regional data sovereignty requirements. -- **Higher security needs** — The fiduciary subnet has 34 nodes instead of 13, providing stronger fault tolerance and Byzantine fault resistance for DeFi and high-value applications. -- **Colocation goals** — Placing canisters on the same subnet eliminates cross-subnet message overhead and reduces inter-canister call latency. -- **Storage constraints** — Subnets share a storage budget across all their canisters. A subnet near capacity imposes extra reservation costs. Storage-heavy canisters benefit from deploying to subnets with more available headroom. +- **Data residency requirements**: The European subnet ensures all nodes are located within Europe, which can support GDPR-aligned infrastructure for applications with regional data sovereignty requirements. +- **Higher security needs**: The fiduciary subnet has 34 nodes instead of 13, providing stronger fault tolerance and Byzantine fault resistance for DeFi and high-value applications. +- **Colocation goals**: Placing canisters on the same subnet eliminates cross-subnet message overhead and reduces inter-canister call latency. +- **Storage constraints**: Subnets share a storage budget across all their canisters. A subnet near capacity imposes extra reservation costs. Storage-heavy canisters benefit from deploying to subnets with more available headroom. ## Subnet types @@ -26,7 +26,7 @@ The [ICP Dashboard](https://dashboard.internetcomputer.org/subnets) shows curren ### Fiduciary subnet -The fiduciary subnet (`pzp6e`) has 34 nodes instead of 13, providing higher security through a larger replication factor. Canisters on this subnet pay approximately 2.6× the cycle costs of a 13-node subnet — costs scale linearly with node count. The fiduciary subnet is designed for DeFi applications that require stronger guarantees than a standard application subnet provides. +The fiduciary subnet (`pzp6e`) has 34 nodes instead of 13, providing higher security through a larger replication factor. Canisters on this subnet pay approximately 2.6× the cycle costs of a 13-node subnet: costs scale linearly with node count. The fiduciary subnet is designed for DeFi applications that require stronger guarantees than a standard application subnet provides. The fiduciary subnet also hosts the threshold signature signing keys (t-ECDSA and t-Schnorr) and the EVM RPC canister. @@ -34,7 +34,7 @@ The fiduciary subnet also hosts the threshold signature signing keys (t-ECDSA an The European subnet (`bkfrj`) restricts all node machines to the European geographic region. This allows developers and enterprises to build applications that combine blockchain decentralization with regional data residency. The European subnet is one option for applications targeting GDPR-aligned infrastructure. -Note that deploying to the European subnet is a necessary but not sufficient condition for GDPR compliance — developers must evaluate their full application architecture against applicable requirements. +Note that deploying to the European subnet is a necessary but not sufficient condition for GDPR compliance: developers must evaluate their full application architecture against applicable requirements. ### System subnets @@ -42,15 +42,15 @@ System subnets host canisters that provide core ICP functionality (NNS, Internet The three system subnets are: -- `tdb26` — NNS canisters -- `uzr34` — Internet Identity, cycles ledger, exchange rate canister, ICP dashboard, and threshold signature key backup -- `w4rem` — Bitcoin integration canisters +- `tdb26`: NNS canisters +- `uzr34`: Internet Identity, cycles ledger, exchange rate canister, ICP dashboard, and threshold signature key backup +- `w4rem`: Bitcoin integration canisters ## Default subnet behavior When you run `icp deploy` without specifying a subnet, icp-cli uses the following logic: -1. If canisters in this environment already exist on mainnet, new canisters are created on the same subnet — keeping your project colocated automatically. +1. If canisters in this environment already exist on mainnet, new canisters are created on the same subnet: keeping your project colocated automatically. 2. If no canisters exist yet, icp-cli selects a random application subnet. This default keeps related canisters together and works correctly for most projects. @@ -60,10 +60,10 @@ This default keeps related canisters together and works correctly for most proje Use the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets) to browse available subnets: 1. Browse the subnet list or filter by type (Application, Fiduciary, etc.) or node location. -2. Click on a subnet to view details — node count, geographic distribution, current canister load, and block rate. +2. Click on a subnet to view details: node count, geographic distribution, current canister load, and block rate. 3. Copy the subnet principal (a text ID like `pzp6e-ekpqk-3c5x7-2h6so-njoeq-mt45d-h3h6c-q3mxf-vpeez-fez7a-iae`). -To find which subnet an existing canister is on, search for the canister ID on the [ICP Dashboard](https://dashboard.internetcomputer.org) — the canister detail page shows its subnet. +To find which subnet an existing canister is on, search for the canister ID on the [ICP Dashboard](https://dashboard.internetcomputer.org): the canister detail page shows its subnet. ## Deploying to a specific subnet @@ -80,7 +80,7 @@ icp deploy my_canister -e ic --subnet pzp6e-ekpqk-3c5x7-2h6so-njoeq-mt45d-h3h6c- icp canister create my_canister -e ic --subnet pzp6e-ekpqk-3c5x7-2h6so-njoeq-mt45d-h3h6c-q3mxf-vpeez-fez7a-iae ``` -The `--subnet` flag only affects canister creation. If a canister already exists, it stays on its current subnet — the flag has no effect on existing canisters. +The `--subnet` flag only affects canister creation. If a canister already exists, it stays on its current subnet. The flag has no effect on existing canisters. > **Tip:** Subnet principal IDs can change over time. Always verify the current ID for a named subnet on the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets) before using it in production scripts. @@ -96,7 +96,7 @@ This is useful when you already have a deployed canister and want new canisters ## Storage capacity considerations -Subnets enforce a storage reservation policy above 750 GiB of total utilization. When a subnet's total storage usage exceeds that threshold, reservation costs scale linearly — canisters must reserve cycles for future storage payments up to 10 years of projected costs at full subnet capacity. +Subnets enforce a storage reservation policy above 750 GiB of total utilization. When a subnet's total storage usage exceeds that threshold, reservation costs scale linearly: canisters must reserve cycles for future storage payments up to 10 years of projected costs at full subnet capacity. If you expect your canister to use significant storage, check the current utilization of candidate subnets on the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets) before deploying. Choosing a subnet with available headroom avoids unexpected reservation costs as your canister grows. @@ -106,22 +106,22 @@ For details on storage costs and the reservation formula, see [Cycles costs](../ ### "Subnet not found" or canister creation fails -Verify the subnet ID is correct. Some subnets — including all system subnets — do not accept arbitrary canister creation. Confirm the subnet accepts new canisters on the ICP Dashboard before deploying. +Verify the subnet ID is correct. Some subnets (including all system subnets) do not accept arbitrary canister creation. Confirm the subnet accepts new canisters on the ICP Dashboard before deploying. ### Canister is on the wrong subnet Canisters cannot be moved between subnets while keeping the same canister ID by default. Your options depend on whether you can accept a new ID: -- **New canister ID is acceptable** — Transfer state via [canister snapshots](snapshots.md) to a new canister on the correct subnet. -- **Canister ID must be preserved** — Transfer state via snapshots, copy settings, then use `icp canister migrate-id` to move the ID to the new canister. +- **New canister ID is acceptable**: Transfer state via [canister snapshots](snapshots.md) to a new canister on the correct subnet. +- **Canister ID must be preserved**: Transfer state via snapshots, copy settings, then use `icp canister migrate-id` to move the ID to the new canister. -Note that any canister ID change means losing access to any threshold signature keys (tECDSA, tSchnorr) and vetKeys derived by the original canister — these are cryptographically bound to the canister ID. Any assets or encrypted data tied to those keys become permanently inaccessible under the new ID. +Note that any canister ID change means losing access to any threshold signature keys (tECDSA, tSchnorr) and vetKeys derived by the original canister: these are cryptographically bound to the canister ID. Any assets or encrypted data tied to those keys become permanently inaccessible under the new ID. ## Next steps -- [Cycles costs](../../reference/cycles-costs.md) — Cost tables and the subnet multiplier formula -- [Subnet types reference](../../reference/subnet-types.md) — Full reference for all subnet types with node counts and properties -- [Canister snapshots](snapshots.md) — Transfer state between canisters when migrating subnets -- [Network overview](../../concepts/network-overview.md) — How subnets fit into the ICP architecture +- [Cycles costs](../../reference/cycles-costs.md): Cost tables and the subnet multiplier formula +- [Subnet types reference](../../reference/subnet-types.md): Full reference for all subnet types with node counts and properties +- [Canister snapshots](snapshots.md): Transfer state between canisters when migrating subnets +- [Network overview](../../concepts/network-overview.md): How subnets fit into the ICP architecture diff --git a/docs/guides/chain-fusion/bitcoin.mdx b/docs/guides/chain-fusion/bitcoin.mdx index c78bd9b5..128b10c7 100644 --- a/docs/guides/chain-fusion/bitcoin.mdx +++ b/docs/guides/chain-fusion/bitcoin.mdx @@ -7,12 +7,12 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -ICP provides a protocol-level integration with the Bitcoin network. Canisters can hold BTC, generate Bitcoin addresses, build transactions, sign them with threshold ECDSA or Schnorr signatures, and submit them to the Bitcoin network — all without bridges or oracles. +ICP provides a protocol-level integration with the Bitcoin network. Canisters can hold BTC, generate Bitcoin addresses, build transactions, sign them with threshold ECDSA or Schnorr signatures, and submit them to the Bitcoin network: all without bridges or oracles. There are two approaches to working with Bitcoin on ICP: -- **ckBTC (chain-key Bitcoin)** — a 1:1 BTC-backed token native to ICP. Transfers settle in 1-2 seconds with a 10 satoshi fee. Best for most applications that need to accept, hold, or transfer Bitcoin value. -- **Direct Bitcoin API** — call the management canister to read UTXOs, get balances, and submit raw Bitcoin transactions. Best for advanced use cases that need full control over Bitcoin transactions (custom scripts, Ordinals, Runes, BRC-20). +- **ckBTC (chain-key Bitcoin)**: a 1:1 BTC-backed token native to ICP. Transfers settle in 1-2 seconds with a 10 satoshi fee. Best for most applications that need to accept, hold, or transfer Bitcoin value. +- **Direct Bitcoin API**: call the management canister to read UTXOs, get balances, and submit raw Bitcoin transactions. Best for advanced use cases that need full control over Bitcoin transactions (custom scripts, Ordinals, Runes, BRC-20). This guide covers both approaches. @@ -27,7 +27,7 @@ ckBTC is the recommended path for most developers. The ckBTC minter canister hol | ckBTC Ledger | `mxzaz-hqaaa-aaaar-qaada-cai` | `mc6ru-gyaaa-aaaar-qaaaq-cai` | | ckBTC Minter | `mqygn-kiaaa-aaaar-qaadq-cai` | `ml52i-qqaaa-aaaar-qaaba-cai` | | ckBTC Index | `n5wcd-faaaa-aaaar-qaaea-cai` | `mm444-5iaaa-aaaar-qaabq-cai` | -| ckBTC Checker | `oltsj-fqaaa-aaaar-qal5q-cai` | — | +| ckBTC Checker | `oltsj-fqaaa-aaaar-qal5q-cai` | - | ### Deposit flow (BTC to ckBTC) @@ -447,11 +447,11 @@ For use cases that require full control over Bitcoin transactions (custom script The Bitcoin canister exposes these methods: -- `bitcoin_get_balance` — returns the balance of a Bitcoin address in satoshis -- `bitcoin_get_utxos` — returns unspent transaction outputs for an address -- `bitcoin_get_current_fee_percentiles` — returns fee percentiles from recent transactions -- `bitcoin_get_block_headers` — returns raw block headers for a height range -- `bitcoin_send_transaction` — submits a signed transaction to the Bitcoin network +- `bitcoin_get_balance`: returns the balance of a Bitcoin address in satoshis +- `bitcoin_get_utxos`: returns unspent transaction outputs for an address +- `bitcoin_get_current_fee_percentiles`: returns fee percentiles from recent transactions +- `bitcoin_get_block_headers`: returns raw block headers for a height range +- `bitcoin_send_transaction`: submits a signed transaction to the Bitcoin network The `ic-cdk-bitcoin-canister` crate provides `get_blockchain_info()` for querying blockchain state (current height, chain tip hash, timestamp, etc.): @@ -461,9 +461,9 @@ use ic_cdk_bitcoin_canister::get_blockchain_info; let info = get_blockchain_info(get_network()) .await .expect("Failed to get blockchain info"); -// info.height — current chain height -// info.block_hash — chain tip block hash (hex) -// info.timestamp — tip block timestamp +// info.height : current chain height +// info.block_hash: chain tip block hash (hex) +// info.timestamp : tip block timestamp ``` Add to `Cargo.toml` alongside `ic-cdk`: @@ -474,8 +474,8 @@ ic-cdk-bitcoin-canister = "0.2" The threshold signature system provides: -- `ecdsa_public_key` / `sign_with_ecdsa` — for standard Bitcoin (P2PKH, P2SH) addresses -- `schnorr_public_key` / `sign_with_schnorr` — for Taproot (P2TR) addresses +- `ecdsa_public_key` / `sign_with_ecdsa`: for standard Bitcoin (P2PKH, P2SH) addresses +- `schnorr_public_key` / `sign_with_schnorr`: for Taproot (P2TR) addresses All Bitcoin API calls require cycles. The `ic-cdk-bitcoin-canister` crate handles cycle calculation and attachment automatically. @@ -591,9 +591,9 @@ Building a full Bitcoin transaction flow involves these steps: The complete implementation for all steps (address generation, transaction construction, signing, and submission) is more than 30 lines per language. See these working examples: -- [basic_bitcoin (Motoko)](https://github.com/dfinity/examples/tree/master/motoko/basic_bitcoin) — full send/receive with ECDSA and Schnorr -- [basic_bitcoin (Rust)](https://github.com/dfinity/examples/tree/master/rust/basic_bitcoin) — full send/receive with ECDSA and Schnorr -- [threshold-ecdsa (Motoko)](https://github.com/dfinity/examples/tree/master/motoko/threshold-ecdsa) — ECDSA signing +- [basic_bitcoin (Motoko)](https://github.com/dfinity/examples/tree/master/motoko/basic_bitcoin): full send/receive with ECDSA and Schnorr +- [basic_bitcoin (Rust)](https://github.com/dfinity/examples/tree/master/rust/basic_bitcoin): full send/receive with ECDSA and Schnorr +- [threshold-ecdsa (Motoko)](https://github.com/dfinity/examples/tree/master/motoko/threshold-ecdsa): ECDSA signing ### Cycle costs @@ -727,12 +727,12 @@ docker stop bitcoind && docker rm bitcoind ## Next steps -- [Chain fusion overview](../../concepts/chain-fusion.md) — understand how ICP integrates with external blockchains -- [Chain-key cryptography](../../concepts/chain-key-cryptography.md) — learn how threshold ECDSA and Schnorr signatures work -- [Chain-key tokens](../defi/chain-key-tokens.md) — explore ckBTC, ckETH, and other chain-key tokens -- [Ethereum integration](ethereum.md) — apply similar patterns for Ethereum -- [Management canister reference](../../reference/management-canister.md) — full API reference for `bitcoin_get_utxos`, `sign_with_ecdsa`, and other management canister methods -- [Bitcoin canister API specification](https://github.com/dfinity/bitcoin-canister/blob/master/INTERFACE_SPECIFICATION.md) — detailed API documentation -- [Bitcoin integration (Learn Hub)](https://learn.internetcomputer.org/hc/en-us/articles/34211154520084) — protocol-level details of how ICP connects to Bitcoin +- [Chain fusion overview](../../concepts/chain-fusion.md): understand how ICP integrates with external blockchains +- [Chain-key cryptography](../../concepts/chain-key-cryptography.md): learn how threshold ECDSA and Schnorr signatures work +- [Chain-key tokens](../defi/chain-key-tokens.md): explore ckBTC, ckETH, and other chain-key tokens +- [Ethereum integration](ethereum.md): apply similar patterns for Ethereum +- [Management canister reference](../../reference/management-canister.md): full API reference for `bitcoin_get_utxos`, `sign_with_ecdsa`, and other management canister methods +- [Bitcoin canister API specification](https://github.com/dfinity/bitcoin-canister/blob/master/INTERFACE_SPECIFICATION.md): detailed API documentation +- [Bitcoin integration (Learn Hub)](https://learn.internetcomputer.org/hc/en-us/articles/34211154520084): protocol-level details of how ICP connects to Bitcoin -{/* Upstream: informed by dfinity/portal — docs/build-on-btc/*, docs/references/bitcoin-how-it-works.mdx, docs/references/cycles-cost-formulas.mdx; dfinity/icskills — skills/ckbtc/SKILL.md; dfinity/icp-cli-templates — bitcoin-starter/; dfinity/cdk-rs — ic-cdk-bitcoin-canister 0.2 */} +{/* Upstream: informed by dfinity/portal (docs/build-on-btc/*, docs/references/bitcoin-how-it-works.mdx, docs/references/cycles-cost-formulas.mdx; dfinity/icskills) skills/ckbtc/SKILL.md; dfinity/icp-cli-templates (bitcoin-starter/; dfinity/cdk-rs) ic-cdk-bitcoin-canister 0.2 */} diff --git a/docs/guides/chain-fusion/chain-fusion-signer.md b/docs/guides/chain-fusion/chain-fusion-signer.md index 0ee3b10b..15a022d0 100644 --- a/docs/guides/chain-fusion/chain-fusion-signer.md +++ b/docs/guides/chain-fusion/chain-fusion-signer.md @@ -1,6 +1,6 @@ --- title: "Chain Fusion Signer" -description: "Use the Chain Fusion Signer canister to sign transactions for Bitcoin, Ethereum, and other chains from web apps and the command line — no backend canister required." +description: "Use the Chain Fusion Signer canister to sign transactions for Bitcoin, Ethereum, and other chains from web apps and the command line. No backend canister required." sidebar: order: 5 --- @@ -25,7 +25,7 @@ Every signer API call deducts cycles from your cycles ledger account. Before cal SIGNER="grghe-syaaa-aaaar-qabyq-cai" CYCLES_LEDGER="um5iw-rqaaa-aaaaq-qaaba-cai" -# Approve 1 trillion cycles — enough for ~27 signing operations +# Approve 1 trillion cycles: enough for ~27 signing operations icp canister call "$CYCLES_LEDGER" icrc2_approve \ "(record { amount = 1_000_000_000_000 : nat; @@ -229,7 +229,7 @@ async function approveAndSign(identity: Identity, messageHash: string) { -OISY Wallet uses the Chain Fusion Signer as its production signing backend and serves as a reference implementation. OISY uses `PatronPaysIcrc2Cycles` — the OISY backend canister pre-approves cycles on each user's behalf, so individual users pay no cycles directly. +OISY Wallet uses the Chain Fusion Signer as its production signing backend and serves as a reference implementation. OISY uses `PatronPaysIcrc2Cycles`: the OISY backend canister pre-approves cycles on each user's behalf, so individual users pay no cycles directly. ## API fees @@ -261,16 +261,16 @@ The `opt PaymentType` argument accepts these variants: Pass `null` instead of a payment type to use the canister's default, which is `CallerPaysIcrc2Cycles`. -**Note on token variants:** `CallerPaysIcrc2Tokens` and `PatronPaysIcrc2Tokens` are supported by this canister, but the ledger is hardcoded to the Cycles Ledger — they do not accept arbitrary tokens such as ckBTC or ckETH. All five variants settle in cycles. +**Note on token variants:** `CallerPaysIcrc2Tokens` and `PatronPaysIcrc2Tokens` are supported by this canister, but the ledger is hardcoded to the Cycles Ledger: they do not accept arbitrary tokens such as ckBTC or ckETH. All five variants settle in cycles. -These variants are defined by [papi](https://github.com/dfinity/papi), an open-source Rust library for adding payment gateways to ICP canisters. The Chain Fusion Signer uses papi internally to handle fee collection. If you want to charge callers in your own canister — using the same `CallerPaysIcrc2Cycles` or `PatronPaysIcrc2Cycles` patterns — papi provides the implementation. +These variants are defined by [papi](https://github.com/dfinity/papi), an open-source Rust library for adding payment gateways to ICP canisters. The Chain Fusion Signer uses papi internally to handle fee collection. If you want to charge callers in your own canister (using the same `CallerPaysIcrc2Cycles` or `PatronPaysIcrc2Cycles` patterns) papi provides the implementation. ## Next steps -- [Bitcoin integration guide](bitcoin.md) — build a full Bitcoin dapp with your own signing backend -- [Ethereum integration guide](ethereum.md) — EVM RPC canister for reading Ethereum state -- [Cycles Ledger](../../reference/system-canisters.md#cycles-ledger) — fund your account with cycles -- [Offline key derivation](offline-key-derivation.md) — derive ETH/BTC addresses for any canister principal without a management canister call -- [papi](https://github.com/dfinity/papi) — add the same `CallerPaysIcrc2Cycles` / `PatronPaysIcrc2Cycles` payment pattern to your own canister +- [Bitcoin integration guide](bitcoin.md): build a full Bitcoin app with your own signing backend +- [Ethereum integration guide](ethereum.md): EVM RPC canister for reading Ethereum state +- [Cycles Ledger](../../reference/system-canisters.md#cycles-ledger): fund your account with cycles +- [Offline key derivation](offline-key-derivation.md): derive ETH/BTC addresses for any canister principal without a management canister call +- [papi](https://github.com/dfinity/papi): add the same `CallerPaysIcrc2Cycles` / `PatronPaysIcrc2Cycles` payment pattern to your own canister diff --git a/docs/guides/chain-fusion/dogecoin.md b/docs/guides/chain-fusion/dogecoin.md index 95707e66..b452a906 100644 --- a/docs/guides/chain-fusion/dogecoin.md +++ b/docs/guides/chain-fusion/dogecoin.md @@ -11,8 +11,8 @@ The Dogecoin integration is currently in **beta**. No major API changes are expe ICP canisters can interact directly with the Dogecoin network without bridges or oracles. The integration works through two components: -- **Dogecoin canister** — a system canister controlled by the NNS that exposes an API for querying Dogecoin network state (UTXOs, balances, block information) and submitting signed transactions. -- **Threshold ECDSA** — canisters request threshold ECDSA signatures from the management canister to sign Dogecoin transactions. The private key is never reconstructed; it exists only as secret shares distributed across subnet nodes. +- **Dogecoin canister**: a system canister controlled by the NNS that exposes an API for querying Dogecoin network state (UTXOs, balances, block information) and submitting signed transactions. +- **Threshold ECDSA**: canisters request threshold ECDSA signatures from the management canister to sign Dogecoin transactions. The private key is never reconstructed; it exists only as secret shares distributed across subnet nodes. This is the same model as the [Bitcoin integration](bitcoin.md), using a UTXO-based transaction model and secp256k1 ECDSA signatures. The main difference is that Dogecoin transactions are submitted through the Dogecoin canister rather than the Bitcoin management canister API. @@ -20,12 +20,12 @@ This is the same model as the [Bitcoin integration](bitcoin.md), using a UTXO-ba When a canister wants to send DOGE, it follows these steps: -1. **Get a public key** — call `ecdsa_public_key` on the management canister with a derivation path unique to the user or context. -2. **Derive a Dogecoin address** — compute a P2PKH address from the public key using Dogecoin's address format. -3. **Read UTXOs** — call `dogecoin_get_utxos` on the Dogecoin canister to list unspent outputs for the address. -4. **Build the transaction** — select UTXOs as inputs, set outputs (recipient and change address), and compute the transaction hash. -5. **Sign each input** — call `sign_with_ecdsa` on the management canister to sign the transaction hash for each input. -6. **Submit the transaction** — call `dogecoin_send_transaction` on the Dogecoin canister to broadcast the signed transaction. +1. **Get a public key**: call `ecdsa_public_key` on the management canister with a derivation path unique to the user or context. +2. **Derive a Dogecoin address**: compute a P2PKH address from the public key using Dogecoin's address format. +3. **Read UTXOs**: call `dogecoin_get_utxos` on the Dogecoin canister to list unspent outputs for the address. +4. **Build the transaction**: select UTXOs as inputs, set outputs (recipient and change address), and compute the transaction hash. +5. **Sign each input**: call `sign_with_ecdsa` on the management canister to sign the transaction hash for each input. +6. **Submit the transaction**: call `dogecoin_send_transaction` on the Dogecoin canister to broadcast the signed transaction. For reading balances and UTXO state without sending a transaction, only steps 1–3 are needed. @@ -33,10 +33,10 @@ For reading balances and UTXO state without sending a transaction, only steps 1 The Dogecoin canister exposes these methods: -- `dogecoin_get_utxos` — returns unspent transaction outputs for a Dogecoin address -- `dogecoin_get_balance` — returns the balance of a Dogecoin address in koinus (1 DOGE = 100,000,000 koinus) -- `dogecoin_get_current_fee_percentiles` — returns fee percentiles from recent Dogecoin transactions -- `dogecoin_send_transaction` — submits a signed transaction to the Dogecoin network +- `dogecoin_get_utxos`: returns unspent transaction outputs for a Dogecoin address +- `dogecoin_get_balance`: returns the balance of a Dogecoin address in koinus (1 DOGE = 100,000,000 koinus) +- `dogecoin_get_current_fee_percentiles`: returns fee percentiles from recent Dogecoin transactions +- `dogecoin_send_transaction`: submits a signed transaction to the Dogecoin network The Dogecoin canister is an NNS-controlled system canister. Your canisters can call it directly without additional setup or trust assumptions beyond the NNS governance process. For the current canister ID and complete interface specification, see the [Dogecoin canister repository](https://github.com/dfinity/dogecoin-canister). @@ -63,7 +63,7 @@ use ic_cdk::update; use ic_cdk::call::Call; // Replace with the canister ID from https://github.com/dfinity/dogecoin-canister -// Using this placeholder will panic at runtime — replace before deploying. +// Using this placeholder will panic at runtime: replace before deploying. const DOGECOIN_CANISTER: &str = "xxxxxxxxx-xxxxx-xxxxx-xxxxx-xxx"; #[derive(CandidType, Deserialize, Clone, Debug)] @@ -105,27 +105,27 @@ async fn get_dogecoin_balance(address: String, network: DogecoinNetwork) -> u64 -Motoko canisters can call the Dogecoin canister using actor-based inter-canister calls with `(with cycles = amount)` syntax — the same pattern used for the Bitcoin integration. +Motoko canisters can call the Dogecoin canister using actor-based inter-canister calls with `(with cycles = amount)` syntax. The same pattern used for the Bitcoin integration. ## Transaction flow Sending DOGE from a canister involves address derivation, UTXO selection, transaction construction, threshold signing, and submission. This multi-step process closely mirrors the Bitcoin direct API workflow. -For a complete, working implementation covering all steps — including deriving a Dogecoin address from a threshold ECDSA public key, constructing a transaction with proper input/output structure, signing each input, and broadcasting — see: +For a complete, working implementation covering all steps (including deriving a Dogecoin address from a threshold ECDSA public key, constructing a transaction with proper input/output structure, signing each input, and broadcasting) see: -- [Build on Dogecoin book](https://dfinity.github.io/dogecoin-canister) — step-by-step guide with complete examples -- [basic_dogecoin example](https://github.com/dfinity/dogecoin-canister/tree/master/examples/basic_dogecoin) — complete Rust example for the full send flow +- [Build on Dogecoin book](https://dfinity.github.io/dogecoin-canister): step-by-step guide with complete examples +- [basic_dogecoin example](https://github.com/dfinity/dogecoin-canister/tree/master/examples/basic_dogecoin): complete Rust example for the full send flow The [Bitcoin integration guide](bitcoin.md) covers the same conceptual steps with complete inline code. Because Dogecoin is a fork of Bitcoin and shares the same UTXO model and secp256k1 ECDSA signatures, the patterns translate directly with these differences: - Use the Dogecoin canister for UTXO queries and transaction submission (not the management canister's `bitcoin_*` API) - Use Dogecoin's P2PKH address format (mainnet addresses start with `D`) - Dogecoin uses koinus instead of satoshis (1 DOGE = 100,000,000 koinus) -- Dogecoin uses a different fee rate — use `dogecoin_get_current_fee_percentiles` to get current rates +- Dogecoin uses a different fee rate: use `dogecoin_get_current_fee_percentiles` to get current rates ## Relationship to Bitcoin integration -Dogecoin is a fork of Bitcoin and shares its fundamental transaction model: UTXO-based, secp256k1 ECDSA signatures, and similar transaction structure. Both integrations on ICP are direct protocol-level integrations — no bridges or external oracles. +Dogecoin is a fork of Bitcoin and shares its fundamental transaction model: UTXO-based, secp256k1 ECDSA signatures, and similar transaction structure. Both integrations on ICP are direct protocol-level integrations. No bridges or external oracles. The key differences in implementation: @@ -145,10 +145,10 @@ The Dogecoin canister is controlled by the [Network Nervous System](../../concep ## Next steps -- [Chain fusion overview](../../concepts/chain-fusion.md) — understand how ICP integrates with external blockchains -- [Bitcoin integration](bitcoin.md) — the same UTXO-based integration with complete code examples -- [Chain-key cryptography](../../concepts/chain-key-cryptography.md) — how threshold ECDSA signatures work -- [Chain-key tokens](../defi/chain-key-tokens.md) — ckBTC, ckETH, and upcoming ckDOGE -- [Build on Dogecoin book](https://dfinity.github.io/dogecoin-canister) — full tutorial for building Dogecoin smart contracts on ICP +- [Chain fusion overview](../../concepts/chain-fusion.md): understand how ICP integrates with external blockchains +- [Bitcoin integration](bitcoin.md): the same UTXO-based integration with complete code examples +- [Chain-key cryptography](../../concepts/chain-key-cryptography.md): how threshold ECDSA signatures work +- [Chain-key tokens](../defi/chain-key-tokens.md): ckBTC, ckETH, and upcoming ckDOGE +- [Build on Dogecoin book](https://dfinity.github.io/dogecoin-canister): full tutorial for building Dogecoin apps on ICP diff --git a/docs/guides/chain-fusion/ethereum.mdx b/docs/guides/chain-fusion/ethereum.mdx index 4bb92d61..b515628f 100644 --- a/docs/guides/chain-fusion/ethereum.mdx +++ b/docs/guides/chain-fusion/ethereum.mdx @@ -7,7 +7,7 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -ICP canisters can read data from Ethereum and other EVM-compatible chains, sign transactions with threshold ECDSA, and submit them onchain — all without bridges, oracles, or external signers. This guide covers the EVM RPC canister, which handles JSON-RPC calls to Ethereum nodes on your behalf. +ICP canisters can read data from Ethereum and other EVM-compatible chains, sign transactions with threshold ECDSA, and submit them onchain: all without bridges, oracles, or external signers. This guide covers the EVM RPC canister, which handles JSON-RPC calls to Ethereum nodes on your behalf. For a conceptual overview of how ICP connects to other blockchains, see [Chain Fusion](../../concepts/chain-fusion.md). @@ -41,10 +41,10 @@ The EVM RPC canister supports Ethereum and several L2 networks out of the box. Y | Provider | Ethereum | Sepolia | Arbitrum | Base | Optimism | |---|---|---|---|---|---| | Alchemy | yes | yes | yes | yes | yes | -| Ankr | yes | — | yes | yes | yes | +| Ankr | yes | - | yes | yes | yes | | BlockPi | yes | yes | yes | yes | yes | -| Cloudflare | yes | — | — | — | — | -| LlamaNodes | yes | — | yes | yes | yes | +| Cloudflare | yes | - | - | - | - | +| LlamaNodes | yes | - | yes | yes | yes | | PublicNode | yes | yes | yes | yes | yes | Pass `null` (Motoko) or `None` (Rust) for the provider list to use all available defaults. To use a specific provider, pass it explicitly (e.g., `#EthMainnet(#PublicNode)` in Motoko, `RpcService::EthMainnet(EthMainnetService::PublicNode)` in Rust). @@ -53,10 +53,10 @@ Pass `null` (Motoko) or `None` (Rust) for the provider list to use all available The EVM RPC canister offers two styles of API: -- **Typed Candid-RPC methods** like `eth_getBlockByNumber` and `eth_getTransactionReceipt` — these query multiple providers by default and return a `MultiRpcResult` with built-in consensus. -- **Raw JSON-RPC** via the `request` method — sends a single JSON-RPC request to one provider. More flexible, but you handle parsing and consensus yourself. +- **Typed Candid-RPC methods** like `eth_getBlockByNumber` and `eth_getTransactionReceipt`: these query multiple providers by default and return a `MultiRpcResult` with built-in consensus. +- **Raw JSON-RPC** via the `request` method: sends a single JSON-RPC request to one provider. More flexible, but you handle parsing and consensus yourself. -{/* Needs human verification: `canister:name` import syntax (e.g., `import EvmRpc "canister:evm_rpc"`) may not work with icp-cli; the Motoko team is redesigning canister discovery. The Motoko examples below use this syntax — verify the correct import approach with icp-cli before shipping. */} +{/* Needs human verification: `canister:name` import syntax (e.g., `import EvmRpc "canister:evm_rpc"`) may not work with icp-cli; the Motoko team is redesigning canister discovery. The Motoko examples below use this syntax: verify the correct import approach with icp-cli before shipping. */} ### Get the latest block (typed API) @@ -644,11 +644,11 @@ icp network start -d icp deploy -e local ``` -On mainnet, only the backend is deployed — the EVM RPC canister is already available at `7hfb6-caaaa-aaaar-qadga-cai`. +On mainnet, only the backend is deployed. The EVM RPC canister is already available at `7hfb6-caaaa-aaaar-qadga-cai`. ### Testing via icp-cli -Query methods (`requestCost`, `getProviders`) work directly from the CLI. Update calls require cycles — the CLI cannot attach cycles to a direct canister call. Test those through your backend canister's wrapper functions instead, since the backend attaches cycles to the inter-canister call internally: +Query methods (`requestCost`, `getProviders`) work directly from the CLI. Update calls require cycles. The CLI cannot attach cycles to a direct canister call. Test those through your backend canister's wrapper functions instead, since the backend attaches cycles to the inter-canister call internally: ```bash # Query: estimate cost (no cycles needed) @@ -699,11 +699,11 @@ ic-cdk = "0.20" ## Next steps -- [Bitcoin integration](bitcoin.md) — similar patterns for BTC using the Bitcoin API -- [Chain-key tokens](../defi/chain-key-tokens.md) — learn about ckETH and other chain-key tokens backed 1:1 by native assets -- [Chain Fusion concepts](../../concepts/chain-fusion.md) — understand how ICP connects to external blockchains -- [HTTPS outcalls](../backends/https-outcalls.md) — the underlying mechanism the EVM RPC canister uses -- [basic_ethereum example](https://github.com/dfinity/examples/tree/master/rust/basic_ethereum) — complete end-to-end Rust example with address generation, signing, and transaction submission -- [EVM RPC canister source](https://github.com/dfinity/evm-rpc-canister) — canister source code and Candid interface +- [Bitcoin integration](bitcoin.md): similar patterns for BTC using the Bitcoin API +- [Chain-key tokens](../defi/chain-key-tokens.md): learn about ckETH and other chain-key tokens backed 1:1 by native assets +- [Chain Fusion concepts](../../concepts/chain-fusion.md): understand how ICP connects to external blockchains +- [HTTPS outcalls](../backends/https-outcalls.md): the underlying mechanism the EVM RPC canister uses +- [basic_ethereum example](https://github.com/dfinity/examples/tree/master/rust/basic_ethereum): complete end-to-end Rust example with address generation, signing, and transaction submission +- [EVM RPC canister source](https://github.com/dfinity/evm-rpc-canister): canister source code and Candid interface -{/* Upstream: informed by dfinity/portal — docs/building-apps/chain-fusion/ethereum/*; dfinity/icskills — skills/evm-rpc/SKILL.md; dfinity/cdk-rs — ic-cdk/src/management_canister.rs */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/chain-fusion/ethereum/*; dfinity/icskills) skills/evm-rpc/SKILL.md; dfinity/cdk-rs: ic-cdk/src/management_canister.rs */} diff --git a/docs/guides/chain-fusion/offline-key-derivation.md b/docs/guides/chain-fusion/offline-key-derivation.md index 151b7e8d..af09233d 100644 --- a/docs/guides/chain-fusion/offline-key-derivation.md +++ b/docs/guides/chain-fusion/offline-key-derivation.md @@ -1,6 +1,6 @@ --- title: "Offline public key derivation" -description: "Derive canister threshold public keys and blockchain addresses offline — no management canister call or cycles required." +description: "Derive canister threshold public keys and blockchain addresses offline. No management canister call or cycles required." sidebar: order: 6 --- @@ -75,18 +75,18 @@ npx @dfinity/ic-pub-key derive ecdsa secp256k1 \ --chaincode \ --derivationpath -# Schnorr Ed25519 (mainnet key_1 is the default — no flags needed for the master key) +# Schnorr Ed25519 (mainnet key_1 is the default: no flags needed for the master key) npx @dfinity/ic-pub-key derive schnorr ed25519 \ --derivationpath ``` -For deriving Chain Fusion Signer addresses specifically (ETH/BTC for a given principal), use the `signer` commands instead — see the [Chain Fusion Signer guide](chain-fusion-signer.md#derive-offline-no-cycles). +For deriving Chain Fusion Signer addresses specifically (ETH/BTC for a given principal), use the `signer` commands instead: see the [Chain Fusion Signer guide](chain-fusion-signer.md#derive-offline-no-cycles). ## Next steps -- [Chain Fusion Signer](chain-fusion-signer.md) — sign transactions for Bitcoin and Ethereum from web apps and CLI -- [Management canister reference](../../reference/management-canister.md#chain-key-signing) — the on-chain `ecdsa_public_key` and `schnorr_public_key` methods -- [Chain-key cryptography](../../concepts/chain-key-cryptography.md) — how threshold key derivation works +- [Chain Fusion Signer](chain-fusion-signer.md): sign transactions for Bitcoin and Ethereum from web apps and CLI +- [Management canister reference](../../reference/management-canister.md#chain-key-signing): the on-chain `ecdsa_public_key` and `schnorr_public_key` methods +- [Chain-key cryptography](../../concepts/chain-key-cryptography.md): how threshold key derivation works diff --git a/docs/guides/chain-fusion/solana.mdx b/docs/guides/chain-fusion/solana.mdx index 775d72c4..fa20a4cf 100644 --- a/docs/guides/chain-fusion/solana.mdx +++ b/docs/guides/chain-fusion/solana.mdx @@ -7,7 +7,7 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -ICP canisters can interact directly with the Solana network: read account balances, query transaction history, and sign and submit transactions — all without bridges, oracles, or external signers. This guide covers the SOL RPC canister for querying Solana and threshold Ed25519 signatures for signing Solana transactions. +ICP canisters can interact directly with the Solana network: read account balances, query transaction history, and sign and submit transactions: all without bridges, oracles, or external signers. This guide covers the SOL RPC canister for querying Solana and threshold Ed25519 signatures for signing Solana transactions. For a conceptual overview of how ICP connects to other blockchains, see [Chain Fusion](../../concepts/chain-fusion.md). @@ -15,8 +15,8 @@ For a conceptual overview of how ICP connects to other blockchains, see [Chain F Two ICP features enable Solana integration: -- **[HTTPS outcalls](../backends/https-outcalls.md)** — canisters can make HTTP requests to external services. The SOL RPC canister uses HTTPS outcalls to reach Solana JSON-RPC providers and aggregates their responses for consensus. -- **Threshold Ed25519** — Solana uses Ed25519 signatures for authorizing transactions. ICP provides a threshold signature scheme where a canister can sign messages using a key that no single node holds outright. This lets canisters sign valid Solana transactions without ever exposing a private key. +- **[HTTPS outcalls](../backends/https-outcalls.md)**: canisters can make HTTP requests to external services. The SOL RPC canister uses HTTPS outcalls to reach Solana JSON-RPC providers and aggregates their responses for consensus. +- **Threshold Ed25519**: Solana uses Ed25519 signatures for authorizing transactions. ICP provides a threshold signature scheme where a canister can sign messages using a key that no single node holds outright. This lets canisters sign valid Solana transactions without ever exposing a private key. ## SOL RPC canister @@ -24,7 +24,7 @@ The SOL RPC canister (`2xib7-jqaaa-aaaar-qai6q-cai`) is deployed on ICP mainnet 1. Your canister sends a JSON-RPC request with cycles attached. 2. The SOL RPC canister fans the request out to multiple Solana RPC providers via HTTPS outcalls. -3. Responses are aggregated — the canister returns the result once providers agree. +3. Responses are aggregated. The canister returns the result once providers agree. 4. Unused cycles are refunded. No API keys are required. The SOL RPC canister is controlled by the [Network Nervous System](../../concepts/governance.md), so any change to it requires an NNS proposal. @@ -119,7 +119,7 @@ The response is the raw JSON-RPC response string. The `getBalance` result contai ### Other common queries -Any Solana JSON-RPC method works the same way — pass the JSON payload as the first argument to `request` and set the second argument (`max_response_bytes`) to the expected response size. Larger values cost more cycles; set it to the minimum needed: +Any Solana JSON-RPC method works the same way: pass the JSON payload as the first argument to `request` and set the second argument (`max_response_bytes`) to the expected response size. Larger values cost more cycles; set it to the minimum needed: ```rust // Get latest slot @@ -144,7 +144,7 @@ For the full list of supported methods, see the [Solana JSON-RPC documentation]( ## Signing Solana transactions -Solana uses Ed25519 signatures for all transactions. ICP supports threshold Ed25519 via the management canister's `sign_with_schnorr` method (using the `ed25519` algorithm variant). The key is distributed across ICP subnet nodes — no single node ever holds the full private key. +Solana uses Ed25519 signatures for all transactions. ICP supports threshold Ed25519 via the management canister's `sign_with_schnorr` method (using the `ed25519` algorithm variant). The key is distributed across ICP subnet nodes. No single node ever holds the full private key. The signing flow for a Solana transaction: 1. Get your canister's Ed25519 public key from the management canister. @@ -223,7 +223,7 @@ The returned `public_key` is the raw 32-byte Ed25519 public key. To use it as a ### Sign a transaction message -`sign_with_schnorr` takes the full message bytes — not a hash. For Solana transactions, pass the serialized transaction message bytes directly. +`sign_with_schnorr` takes the full message bytes: not a hash. For Solana transactions, pass the serialized transaction message bytes directly. @@ -301,10 +301,10 @@ The returned 64-byte signature is a valid Ed25519 signature that Solana accepts | Key ID | Environment | |---|---| -| `test_key_1` | ICP mainnet — test key, reduced security. Use for development and testing only. | -| `key_1` | ICP mainnet — production key. Use for production deployments. | +| `test_key_1` | ICP mainnet: test key, reduced security. Use for development and testing only. | +| `key_1` | ICP mainnet: production key. Use for production deployments. | -Ed25519 does not have a local development key — unlike ECDSA (which has `dfx_test_key` for local replica testing), there is no Ed25519 equivalent. All Ed25519 signing must be tested on ICP mainnet using `test_key_1`. Plan your test workflow accordingly: local replica development is not possible for the signing steps. +Ed25519 does not have a local development key: unlike ECDSA (which has `dfx_test_key` for local replica testing), there is no Ed25519 equivalent. All Ed25519 signing must be tested on ICP mainnet using `test_key_1`. Plan your test workflow accordingly: local replica development is not possible for the signing steps. ## Complete transaction example @@ -326,26 +326,26 @@ Every SOL RPC call requires cycles to cover HTTPS outcall costs. The `sign_with_ | SOL RPC `request` (small response, 1–2 providers) | ~1–5B cycles | | `sign_with_schnorr` (Ed25519, Rust cdk auto-attached) | ~26.15B cycles | -Send 10B cycles per RPC call as a starting budget — unused cycles are refunded. Set `max_response_bytes` to the minimum needed; smaller values reduce costs. +Send 10B cycles per RPC call as a starting budget: unused cycles are refunded. Set `max_response_bytes` to the minimum needed; smaller values reduce costs. ## Current status and limitations The Solana integration is newer than the Bitcoin and Ethereum integrations: -- **SOL RPC canister is live on mainnet** — deployed and functional, with the API surface still evolving. -- **Threshold Ed25519 is available** — both test (`test_key_1`) and production (`key_1`) keys are live on ICP mainnet. -- **No SPL token helpers** — SPL token operations (reading token accounts, transferring tokens) require constructing JSON-RPC calls and transaction instructions manually. -- **No ckSOL token** — unlike Bitcoin (ckBTC) and Ethereum (ckETH), there is no chain-key SOL token yet. -- **Transaction construction is manual** — there is no official ICP library for building Solana transactions. See the [basic_solana example](https://github.com/dfinity/sol-rpc-canister/tree/main/examples/basic_solana) for a reference implementation. +- **SOL RPC canister is live on mainnet**: deployed and functional, with the API surface still evolving. +- **Threshold Ed25519 is available**: both test (`test_key_1`) and production (`key_1`) keys are live on ICP mainnet. +- **No SPL token helpers**: SPL token operations (reading token accounts, transferring tokens) require constructing JSON-RPC calls and transaction instructions manually. +- **No ckSOL token**: unlike Bitcoin (ckBTC) and Ethereum (ckETH), there is no chain-key SOL token yet. +- **Transaction construction is manual**: there is no official ICP library for building Solana transactions. See the [basic_solana example](https://github.com/dfinity/sol-rpc-canister/tree/main/examples/basic_solana) for a reference implementation. Follow the [SOL RPC canister repository](https://github.com/dfinity/sol-rpc-canister/blob/main/README.md) for the latest updates. ## Next steps -- [SOL RPC canister README](https://github.com/dfinity/sol-rpc-canister/blob/main/README.md) — full documentation and the `basic_solana` end-to-end example -- [Bitcoin integration](bitcoin.md) — direct protocol-level BTC integration -- [Ethereum integration](ethereum.md) — EVM RPC canister, similar JSON-RPC pattern -- [HTTPS outcalls](../backends/https-outcalls.md) — the mechanism underlying the SOL RPC canister -- [Chain Fusion concepts](../../concepts/chain-fusion.md) — how ICP connects to other blockchains +- [SOL RPC canister README](https://github.com/dfinity/sol-rpc-canister/blob/main/README.md): full documentation and the `basic_solana` end-to-end example +- [Bitcoin integration](bitcoin.md): direct protocol-level BTC integration +- [Ethereum integration](ethereum.md): EVM RPC canister, similar JSON-RPC pattern +- [HTTPS outcalls](../backends/https-outcalls.md): the mechanism underlying the SOL RPC canister +- [Chain Fusion concepts](../../concepts/chain-fusion.md): how ICP connects to other blockchains -{/* Upstream: informed by dfinity/portal — docs/building-apps/chain-fusion/solana/overview.mdx; dfinity/cdk-rs — ic-cdk/src/management_canister.rs, ic-management-canister-types/src/lib.rs; dfinity/examples — rust/basic_solana/README.md */} +{/* Upstream: informed by dfinity/portal (docs/building-apps/chain-fusion/solana/overview.mdx; dfinity/cdk-rs) ic-cdk/src/management_canister.rs, ic-management-canister-types/src/lib.rs; dfinity/examples: rust/basic_solana/README.md */} diff --git a/docs/guides/defi/chain-key-tokens.mdx b/docs/guides/defi/chain-key-tokens.mdx index 513de9e6..6034e42c 100644 --- a/docs/guides/defi/chain-key-tokens.mdx +++ b/docs/guides/defi/chain-key-tokens.mdx @@ -1,15 +1,15 @@ --- title: "Chain-Key Tokens" -description: "Work with ckBTC and ckETH — ICP-native representations of Bitcoin and Ether with 1-2 second finality and no custodians" +description: "Work with ckBTC and ckETH: ICP-native representations of Bitcoin and Ether with 1-2 second finality and no custodians" sidebar: order: 2 --- import { Tabs, TabItem } from '@astrojs/starlight/components'; -Chain-key tokens are ICP-native tokens that represent assets from other blockchains. Each one is backed 1:1 by the original asset and is controlled entirely by ICP smart contracts — no bridges, no wrapped tokens, no third-party custodians. +Chain-key tokens are ICP-native tokens that represent assets from other blockchains. Each one is backed 1:1 by the original asset and is controlled entirely by ICP canisters. No bridges, no wrapped tokens, no third-party custodians. -**ckBTC** (chain-key Bitcoin) is backed by real BTC held by the ckBTC minter canister. **ckETH** (chain-key Ether) is backed by real ETH held by the ckETH minter canister. Both are ICRC-1 tokens, so any code that works with the ICP ledger also works with ckBTC and ckETH — you only swap the canister ID. +**ckBTC** (chain-key Bitcoin) is backed by real BTC held by the ckBTC minter canister. **ckETH** (chain-key Ether) is backed by real ETH held by the ckETH minter canister. Both are ICRC-1 tokens, so any code that works with the ICP ledger also works with ckBTC and ckETH: you only swap the canister ID. This guide covers: the minting and redemption flows, how to call the minter and ledger from a canister, subaccount derivation for per-user deposit addresses, and the trust model that keeps the peg. @@ -17,7 +17,7 @@ For plain ICRC-1/ICRC-2 transfers without the minting/withdrawal flows, see [Tok ## How chain-key tokens maintain their peg -The ckBTC and ckETH minter canisters are ICP smart contracts that hold real BTC and ETH in addresses they control through [chain-key cryptography](../../concepts/chain-key-cryptography.md). The minters use threshold signatures to sign Bitcoin and Ethereum transactions — no private key exists anywhere; signing requires cooperation from the subnet's nodes. +The ckBTC and ckETH minter canisters hold real BTC and ETH in addresses they control through [chain-key cryptography](../../concepts/chain-key-cryptography.md). The minters use threshold signatures to sign Bitcoin and Ethereum transactions. No private key exists anywhere; signing requires cooperation from the subnet's nodes. When a user deposits BTC, the minter mints exactly the same amount of ckBTC. When a user withdraws ckBTC, the minter burns the tokens and sends BTC on-chain. The peg holds by design: every ckBTC in circulation corresponds to exactly one satoshi of BTC held by the minter. The ckBTC checker canister publishes a public audit of reserves. @@ -56,10 +56,10 @@ This means ckBTC and ckETH are not wrapped tokens in the traditional sense. They The deposit flow has two steps: -1. **Get a deposit address** — call `get_btc_address` on the ckBTC minter with the user's principal and an optional subaccount. The minter returns a unique Bitcoin address. -2. **Mint ckBTC** — after the user sends BTC to that address, call `update_balance` on the minter. The minter checks for new UTXOs and mints ckBTC to the corresponding ICRC-1 account. +1. **Get a deposit address**: call `get_btc_address` on the ckBTC minter with the user's principal and an optional subaccount. The minter returns a unique Bitcoin address. +2. **Mint ckBTC**: after the user sends BTC to that address, call `update_balance` on the minter. The minter checks for new UTXOs and mints ckBTC to the corresponding ICRC-1 account. -The minter requires a minimum number of Bitcoin confirmations before minting (currently 6 on mainnet). `update_balance` returns `NoNewUtxos` if confirmations have not yet been reached — your app should poll or prompt the user to wait. +The minter requires a minimum number of Bitcoin confirmations before minting (currently 6 on mainnet). `update_balance` returns `NoNewUtxos` if confirmations have not yet been reached: your app should poll or prompt the user to wait. @@ -103,7 +103,7 @@ persistent actor Self { #GenericError : { error_code : Nat64; error_message : Text }; }; - // ckBTC minter — mainnet + // ckBTC minter: mainnet transient let ckbtcMinter : actor { get_btc_address : shared ({ owner : ?Principal; subaccount : ?Blob }) -> async Text; update_balance : shared ({ owner : ?Principal; subaccount : ?Blob }) -> async UpdateBalanceResult; @@ -208,8 +208,8 @@ async fn get_deposit_address() -> String { To convert ckBTC back to BTC, your canister must: -1. **Approve the minter** — call `icrc2_approve` on the ckBTC ledger, granting the minter canister an allowance to burn ckBTC from the user's account. The amount must include the transfer fee. -2. **Request withdrawal** — call `retrieve_btc_with_approval` on the minter with the destination Bitcoin address and the amount in satoshis. The minimum withdrawal amount is 50,000 satoshis (0.0005 BTC). +1. **Approve the minter**: call `icrc2_approve` on the ckBTC ledger, granting the minter canister an allowance to burn ckBTC from the user's account. The amount must include the transfer fee. +2. **Request withdrawal**: call `retrieve_btc_with_approval` on the minter with the destination Bitcoin address and the amount in satoshis. The minimum withdrawal amount is 50,000 satoshis (0.0005 BTC). The minter burns the ckBTC and submits a Bitcoin transaction. BTC arrives at the destination address after Bitcoin confirmations (typically 1-2 hours on mainnet). @@ -434,7 +434,7 @@ async fn withdraw_to_btc(btc_address: String, amount: u64) -> RetrieveBtcResult ## ckETH: deposit and withdrawal -The ckETH minter works similarly to ckBTC but targets Ethereum. Deposits are detected via HTTPS outcalls to Ethereum RPC nodes — the minter monitors a helper contract for ETH transfers and mints ckETH when it detects them. +The ckETH minter works similarly to ckBTC but targets Ethereum. Deposits are detected via HTTPS outcalls to Ethereum RPC nodes. The minter monitors a helper contract for ETH transfers and mints ckETH when it detects them. ### Depositing ETH to get ckETH @@ -468,11 +468,11 @@ icp canister call ss2fx-dyaaa-aaaar-qacoq-cai icrc1_transfer \ })' -e ic ``` -> Query `icrc1_fee` on the ckETH ledger before transferring — the fee is denominated in wei and can change. +> Query `icrc1_fee` on the ckETH ledger before transferring. The fee is denominated in wei and can change. ## Transferring chain-key tokens -ckBTC and ckETH are ICRC-1 tokens. Transfers work the same as any ICRC-1 transfer — call `icrc1_transfer` on the respective ledger. The only difference is the canister ID and the fee. +ckBTC and ckETH are ICRC-1 tokens. Transfers work the same as any ICRC-1 transfer: call `icrc1_transfer` on the respective ledger. The only difference is the canister ID and the fee. ```bash # Check ckBTC balance (amount in satoshis) @@ -495,7 +495,7 @@ icp canister call mxzaz-hqaaa-aaaar-qaada-cai icrc1_transfer \ })' -e ic ``` -For Motoko and Rust transfer examples, see [Token ledgers](token-ledgers.md) — the code is identical to ICRC-1 transfers, just with the ckBTC or ckETH ledger canister ID and the correct fee. +For Motoko and Rust transfer examples, see [Token ledgers](token-ledgers.md): the code is identical to ICRC-1 transfers, just with the ckBTC or ckETH ledger canister ID and the correct fee. ## Subaccount derivation for deposit flows @@ -613,17 +613,17 @@ Chain-key tokens and native chain integration serve different use cases: |-|-------|----------------| | Settlement | 1–2 seconds | Minutes (Bitcoin confirmations) | | Use case | Token transfers, DeFi, payments | Direct UTXO access, custom signing | -| Custody | Minter canister (ICP smart contract) | Your canister directly | +| Custody | Minter canister | Your canister directly | | Fee | 10 satoshis per ckBTC transfer | Bitcoin network fees | -If you need direct control over Bitcoin UTXOs or want to construct custom Bitcoin transactions, see [Bitcoin integration](../chain-fusion/bitcoin.md). If you need fast, low-fee token transfers within ICP dapps, ckBTC is the simpler choice. +If you need direct control over Bitcoin UTXOs or want to construct custom Bitcoin transactions, see [Bitcoin integration](../chain-fusion/bitcoin.md). If you need fast, low-fee token transfers within ICP apps, ckBTC is the simpler choice. ## Next steps -- [Token ledgers](token-ledgers.md) — ICRC-1/ICRC-2 transfer patterns for all tokens, including ckBTC and ckETH -- [Bitcoin integration](../chain-fusion/bitcoin.md) — native BTC UTXO access and threshold signing -- [Ethereum integration](../chain-fusion/ethereum.md) — calling Ethereum contracts from ICP canisters -- [Wallet integration](wallet-integration.md) — connecting wallets for token flows -- [Token standards](../../reference/token-standards.md) — ICRC-1 and ICRC-2 formal specifications +- [Token ledgers](token-ledgers.md): ICRC-1/ICRC-2 transfer patterns for all tokens, including ckBTC and ckETH +- [Bitcoin integration](../chain-fusion/bitcoin.md): native BTC UTXO access and threshold signing +- [Ethereum integration](../chain-fusion/ethereum.md): calling Ethereum contracts from ICP canisters +- [Wallet integration](wallet-integration.md): connecting wallets for token flows +- [Token standards](../../reference/token-standards.md): ICRC-1 and ICRC-2 formal specifications {/* Upstream: informed by dfinity/icskills skills/ckbtc/SKILL.md; dfinity/icskills skills/icrc-ledger/SKILL.md; dfinity/portal docs/defi/chain-key-tokens/cketh/overview.mdx */} diff --git a/docs/guides/defi/rosetta.md b/docs/guides/defi/rosetta.md index b0ef6a8a..0b881dba 100644 --- a/docs/guides/defi/rosetta.md +++ b/docs/guides/defi/rosetta.md @@ -11,13 +11,13 @@ This guide covers both implementations, focusing on what exchange operators and ## What is Rosetta? -Rosetta defines a uniform HTTP API for blockchain integrations. Clients — exchanges, custody platforms, analytics tools — interact with a Rosetta node rather than directly with chain-specific APIs. This lowers integration cost for operators already supporting other chains. +Rosetta defines a uniform HTTP API for blockchain integrations. Clients (exchanges, custody platforms, analytics tools) interact with a Rosetta node rather than directly with chain-specific APIs. This lowers integration cost for operators already supporting other chains. ICP Rosetta exposes the standard Rosetta endpoints: -- **Data API** — query balances, blocks, and transactions -- **Construction API** — create and sign transactions offline, then submit them -- **Network API** — network status and configuration +- **Data API**: query balances, blocks, and transactions +- **Construction API**: create and sign transactions offline, then submit them +- **Network API**: network status and configuration Both ICP Rosetta and ICRC Rosetta implement the full Rosetta specification and pass all `rosetta-cli` tests. @@ -43,7 +43,7 @@ Pull the official image: docker pull dfinity/rosetta-api ``` -**Test environment** — uses TESTICP tokens with no real value. Ideal for learning and development. +**Test environment**: uses TESTICP tokens with no real value. Ideal for learning and development. ```bash docker run \ @@ -55,7 +55,7 @@ docker run \ Get free TESTICP tokens from the [faucet](https://faucet.internetcomputer.org/). The test ICP ledger canister ID is `xafvr-biaaa-aaaai-aql5q-cai`. -**Production with data persistence** — mount `/data` so the node does not re-sync from scratch on restart: +**Production with data persistence**: mount `/data` so the node does not re-sync from scratch on restart: ```bash docker volume create rosetta @@ -70,7 +70,7 @@ docker run \ Use a specific version tag in production. Check available versions on [DockerHub](https://hub.docker.com/r/dfinity/rosetta-api/tags). -**Custom canister** — connect to a specific test ledger: +**Custom canister**: connect to a specific test ledger: ```bash docker run \ @@ -129,7 +129,7 @@ Pull the official image: docker pull dfinity/ic-icrc-rosetta-api ``` -**Quick start** — connects to the TICRC1 test token (`3jkp5-oyaaa-aaaaj-azwqa-cai`): +**Quick start**: connects to the TICRC1 test token (`3jkp5-oyaaa-aaaaj-azwqa-cai`): ```bash docker run \ @@ -143,7 +143,7 @@ docker run \ Get free TICRC1 test tokens from the [faucet](https://faucet.internetcomputer.org/). -**Single-token production** — connects to ckBTC with data persistence: +**Single-token production**: connects to ckBTC with data persistence: ```bash docker volume create ic-icrc-rosetta @@ -159,7 +159,7 @@ docker run \ --multi-tokens-store-dir /data ``` -**Multi-token deployment** — track ckBTC and ckETH simultaneously: +**Multi-token deployment**: track ckBTC and ckETH simultaneously: ```bash docker run \ @@ -228,7 +228,7 @@ The Data API provides read-only access to chain data. This section covers: netwo ### Fetch network information -Retrieve the network identifier — use this as a health check and to confirm the correct `network_identifier` for subsequent calls: +Retrieve the network identifier: use this as a health check and to confirm the correct `network_identifier` for subsequent calls: ```bash curl --location 'localhost:8081/network/list' \ @@ -410,16 +410,16 @@ The Construction API enables offline transaction signing: you prepare and sign t The construction flow consists of these endpoints, called in order: -1. **`construction/derive`** — derive an account identifier from a public key -2. **`construction/preprocess`** — get parameters needed for metadata fetch -3. **`construction/metadata`** — fetch transaction-specific metadata (e.g., nonce, fee) -4. **`construction/payloads`** — get signable hex payloads for the requested operations -5. **`construction/combine`** — combine signatures with the unsigned transaction -6. **`construction/submit`** — broadcast the signed transaction +1. **`construction/derive`**: derive an account identifier from a public key +2. **`construction/preprocess`**: get parameters needed for metadata fetch +3. **`construction/metadata`**: fetch transaction-specific metadata (e.g., nonce, fee) +4. **`construction/payloads`**: get signable hex payloads for the requested operations +5. **`construction/combine`**: combine signatures with the unsigned transaction +6. **`construction/submit`**: broadcast the signed transaction Two additional optional endpoints are supported and used by some integrators: -- **`construction/parse`** — parse a signed or unsigned transaction back into operations, useful for verifying intent before broadcast -- **`construction/hash`** — compute the transaction hash from a signed transaction, useful for tracking before submission +- **`construction/parse`**: parse a signed or unsigned transaction back into operations, useful for verifying intent before broadcast +- **`construction/hash`**: compute the transaction hash from a signed transaction, useful for tracking before submission ### Key generation @@ -446,25 +446,25 @@ openssl ec -in my_secp256k1_key.pem -pubout -conv_form compressed -outform DER | ICP Rosetta supports these operation types. The full list is returned by the `network/options` endpoint at runtime: **Token operations:** -- `TRANSACTION` — token transfer -- `MINT` — mint new tokens (minting account only) -- `BURN` — burn tokens -- `APPROVE` — approve a spender (ICRC-2) -- `FEE` — explicit fee debit (used internally and in transaction representation) +- `TRANSACTION`: token transfer +- `MINT`: mint new tokens (minting account only) +- `BURN`: burn tokens +- `APPROVE`: approve a spender (ICRC-2) +- `FEE`: explicit fee debit (used internally and in transaction representation) **Neuron and governance operations:** -- `STAKE` — stake ICP to create a neuron -- `START_DISSOLVING` / `STOP_DISSOLVING` — change neuron dissolve state -- `SET_DISSOLVE_TIMESTAMP` — set a neuron's dissolve deadline -- `CHANGE_AUTO_STAKE_MATURITY` — toggle automatic maturity restaking -- `DISBURSE` — disburse matured neuron funds -- `ADD_HOTKEY` / `REMOVE_HOTKEY` — manage neuron hotkeys -- `SPAWN` — spawn a new neuron from maturity -- `MERGE_MATURITY` / `STAKE_MATURITY` — handle accumulated maturity -- `REGISTER_VOTE` — vote on NNS proposals -- `FOLLOW` — configure neuron following -- `NEURON_INFO` — retrieve neuron metadata -- `LIST_NEURONS` — list neurons controlled by a principal +- `STAKE`: stake ICP to create a neuron +- `START_DISSOLVING` / `STOP_DISSOLVING`: change neuron dissolve state +- `SET_DISSOLVE_TIMESTAMP`: set a neuron's dissolve deadline +- `CHANGE_AUTO_STAKE_MATURITY`: toggle automatic maturity restaking +- `DISBURSE`: disburse matured neuron funds +- `ADD_HOTKEY` / `REMOVE_HOTKEY`: manage neuron hotkeys +- `SPAWN`: spawn a new neuron from maturity +- `MERGE_MATURITY` / `STAKE_MATURITY`: handle accumulated maturity +- `REGISTER_VOTE`: vote on NNS proposals +- `FOLLOW`: configure neuron following +- `NEURON_INFO`: retrieve neuron metadata +- `LIST_NEURONS`: list neurons controlled by a principal For a complete reference of the construction flow with request/response examples for each operation type, see the [ICP Rosetta construction API](https://github.com/dfinity/ic/tree/master/rs/rosetta-api) in the IC repository. @@ -472,10 +472,10 @@ For a complete reference of the construction flow with request/response examples ICRC Rosetta supports two categories of construction operations: -- **`TRANSFER`** — direct token transfer between accounts (ICRC-1). Two operations per request: one debit (`TRANSFER` with negative amount) and one credit (`TRANSFER` with positive amount). -- **`APPROVE` + `SPENDER`** — authorize a spender to transfer tokens on your behalf (ICRC-2). The `APPROVE` operation sets the allowance amount; the `SPENDER` operation identifies the authorized principal. +- **`TRANSFER`**: direct token transfer between accounts (ICRC-1). Two operations per request: one debit (`TRANSFER` with negative amount) and one credit (`TRANSFER` with positive amount). +- **`APPROVE` + `SPENDER`**: authorize a spender to transfer tokens on your behalf (ICRC-2). The `APPROVE` operation sets the allowance amount; the `SPENDER` operation identifies the authorized principal. -The construction flow is the same as for ICP. The network identifier is the ledger canister ID and the port is 8082. You do not need to include a `FEE` operation — ICRC Rosetta deducts the fee automatically, though you may include it to make the debit explicit. +The construction flow is the same as for ICP. The network identifier is the ledger canister ID and the port is 8082. You do not need to include a `FEE` operation: ICRC Rosetta deducts the fee automatically, though you may include it to make the debit explicit. ## Requirements and limitations @@ -496,20 +496,20 @@ Both implementations: - Pass all `rosetta-cli` tests - Accept any valid Rosetta request -Neither implementation supports UTXO features — no UTXO messages appear in responses. +Neither implementation supports UTXO features. No UTXO messages appear in responses. ## Example scripts The DFINITY IC repository contains Python example scripts for both implementations: -- **ICP Rosetta examples**: [`rs/rosetta-api/examples/icp/python`](https://github.com/dfinity/ic/tree/master/rs/rosetta-api/examples/icp/python) — balance queries, transfers, block reading, NNS governance interactions -- **ICRC Rosetta examples**: [`rs/rosetta-api/examples/icrc1/python`](https://github.com/dfinity/ic/tree/master/rs/rosetta-api/examples/icrc1/python) — ICRC-1 token operations with a `RosettaClient` library supporting automatic token discovery +- **ICP Rosetta examples**: [`rs/rosetta-api/examples/icp/python`](https://github.com/dfinity/ic/tree/master/rs/rosetta-api/examples/icp/python): balance queries, transfers, block reading, NNS governance interactions +- **ICRC Rosetta examples**: [`rs/rosetta-api/examples/icrc1/python`](https://github.com/dfinity/ic/tree/master/rs/rosetta-api/examples/icrc1/python): ICRC-1 token operations with a `RosettaClient` library supporting automatic token discovery Each directory includes a `requirements.txt` and a `run_tests.sh` script for isolated test environments. ## Next steps -- [Token ledgers](token-ledgers.mdx) — interact directly with ICP and ICRC-1 ledgers from canister code -- [Token standards](../../reference/token-standards.md) — ICRC-1 and ICRC-2 specifications +- [Token ledgers](token-ledgers.mdx): interact directly with ICP and ICRC-1 ledgers from canister code +- [Token standards](../../reference/token-standards.md): ICRC-1 and ICRC-2 specifications diff --git a/docs/guides/defi/token-ledgers.mdx b/docs/guides/defi/token-ledgers.mdx index c71e3175..c26c31a5 100644 --- a/docs/guides/defi/token-ledgers.mdx +++ b/docs/guides/defi/token-ledgers.mdx @@ -7,13 +7,13 @@ sidebar: import { Tabs, TabItem } from '@astrojs/starlight/components'; -Every token on ICP — ICP, ckBTC, ckETH, and custom tokens — is managed by a **ledger canister** that implements the ICRC token standards. Because all ledgers share the same interface, code that works with the ICP ledger also works with ckBTC, ckETH, or any ICRC-1 compliant token. You only need to swap the canister ID and fee. +Every token on ICP (ICP, ckBTC, ckETH, and custom tokens) is managed by a **ledger canister** that implements the ICRC token standards. Because all ledgers share the same interface, code that works with the ICP ledger also works with ckBTC, ckETH, or any ICRC-1 compliant token. You only need to swap the canister ID and fee. This guide covers the most common token operations: transfers, approvals, subaccounts, and local test ledger setup. For the formal standard specifications, see [Token standards](../../reference/token-standards.md). ## Well-known token ledgers -The table below lists a few well-known ledgers used throughout this guide. Many more tokens exist on ICP — see the [ICP Dashboard token list](https://dashboard.internetcomputer.org/tokens) for a broader overview. Anyone can deploy an ICRC-1 compliant ledger. +The table below lists a few well-known ledgers used throughout this guide. Many more tokens exist on ICP: see the [ICP Dashboard token list](https://dashboard.internetcomputer.org/tokens) for a broader overview. Anyone can deploy an ICRC-1 compliant ledger. | Token | Ledger canister ID | |-------|-------------------| @@ -186,7 +186,7 @@ Always set `created_at_time` to enable deduplication. Without it, two identical ### Checking balances -Query an account's balance with `icrc1_balance_of`. This is a query call — fast and free. +Query an account's balance with `icrc1_balance_of`. This is a query call: fast and free. @@ -234,7 +234,7 @@ async fn get_balance(ledger: Principal, owner: Principal) -> Result ## Approve and transfer-from (ICRC-2) -ICRC-2 adds an approve/transferFrom pattern, similar to ERC-20 on Ethereum. The token owner first approves a spender for a certain amount, then the spender calls `icrc2_transfer_from` to move tokens. This is a two-step flow — calling `transfer_from` without a prior approval fails with `InsufficientAllowance`. +ICRC-2 adds an approve/transferFrom pattern, similar to ERC-20 on Ethereum. The token owner first approves a spender for a certain amount, then the spender calls `icrc2_transfer_from` to move tokens. This is a two-step flow: calling `transfer_from` without a prior approval fails with `InsufficientAllowance`. **When to use:** DEX swaps, payment processors, subscription services, or any case where a canister needs to pull tokens from a user's account. @@ -417,7 +417,7 @@ async fn transfer_from( ## Working with subaccounts -An ICRC-1 account is a principal plus an optional 32-byte subaccount. Subaccounts let a single canister manage many logical accounts — useful for deposit flows where each user gets a unique deposit address. +An ICRC-1 account is a principal plus an optional 32-byte subaccount. Subaccounts let a single canister manage many logical accounts: useful for deposit flows where each user gets a unique deposit address. To derive a subaccount from a principal (a common pattern for deposit accounts): @@ -483,12 +483,12 @@ ICRC-37 extends ICRC-7 with an approval workflow (similar to how ICRC-2 extends Key operations: -- **`icrc7_transfer`** — transfer one or more NFTs by token ID -- **`icrc7_balance_of`** — count how many NFTs an account owns -- **`icrc7_owner_of`** — look up the owner of specific token IDs -- **`icrc7_tokens_of`** — list token IDs owned by an account -- **`icrc37_approve_tokens`** — approve a spender for specific NFTs (ICRC-37) -- **`icrc37_transfer_from`** — transfer NFTs using a prior approval (ICRC-37) +- **`icrc7_transfer`**: transfer one or more NFTs by token ID +- **`icrc7_balance_of`**: count how many NFTs an account owns +- **`icrc7_owner_of`**: look up the owner of specific token IDs +- **`icrc7_tokens_of`**: list token IDs owned by an account +- **`icrc37_approve_tokens`**: approve a spender for specific NFTs (ICRC-37) +- **`icrc37_transfer_from`**: transfer NFTs using a prior approval (ICRC-37) For a complete working example with minting, transferring, and a frontend, see the [nft-creator example](https://github.com/dfinity/examples/tree/master/motoko/nft-creator). For the full standard specifications, see [ICRC-7](https://github.com/dfinity/ICRC/blob/main/ICRCs/ICRC-7/ICRC-7.md) and [ICRC-37](https://github.com/dfinity/ICRC/blob/main/ICRCs/ICRC-37/ICRC-37.md). @@ -562,9 +562,9 @@ icp canister call icrc1_ledger icrc1_transfer \ ## Next steps -- [Token standards](../../reference/token-standards.md) — formal ICRC-1, ICRC-2, ICRC-7, and ICRC-37 specifications -- [Chain-key tokens](chain-key-tokens.md) — working with ckBTC and ckETH (minting, deposits, withdrawals) -- [Wallet integration](wallet-integration.md) — connecting wallets to your dapp -- [Onchain calls](../canister-calls/onchain-calls.md) — how inter-canister calls work (ledger calls are inter-canister calls) +- [Token standards](../../reference/token-standards.md): formal ICRC-1, ICRC-2, ICRC-7, and ICRC-37 specifications +- [Chain-key tokens](chain-key-tokens.md): working with ckBTC and ckETH (minting, deposits, withdrawals) +- [Wallet integration](wallet-integration.md): connecting wallets to your app +- [Onchain calls](../canister-calls/onchain-calls.md): how inter-canister calls work (ledger calls are inter-canister calls) -{/* Upstream: informed by dfinity/portal docs/defi/token-standards/, docs/defi/token-integrations/ — icrc-1.mdx, icrc-2.mdx, icrc-7.mdx, icrc-37.mdx; dfinity/icskills skills/icrc-ledger/SKILL.md; dfinity/examples motoko/nft-creator */} +{/* Upstream: informed by dfinity/portal docs/defi/token-standards/, docs/defi/token-integrations/: icrc-1.mdx, icrc-2.mdx, icrc-7.mdx, icrc-37.mdx; dfinity/icskills skills/icrc-ledger/SKILL.md; dfinity/examples motoko/nft-creator */} diff --git a/docs/guides/defi/wallet-integration.md b/docs/guides/defi/wallet-integration.md index b270ff5c..ebf589ff 100644 --- a/docs/guides/defi/wallet-integration.md +++ b/docs/guides/defi/wallet-integration.md @@ -1,11 +1,11 @@ --- title: "Wallet Integration" -description: "Integrate ICRC signer-standard wallets with your dapp using explicit per-action user approval." +description: "Integrate ICRC signer-standard wallets with your app using explicit per-action user approval." sidebar: order: 4 --- -Wallet integration on the Internet Computer uses a popup-based signer model where every meaningful action requires explicit user approval. The dapp opens a wallet popup, requests permission, and the wallet shows a human-readable consent message before executing each canister call. +Wallet integration on the Internet Computer uses a popup-based signer model where every meaningful action requires explicit user approval. The app opens a wallet popup, requests permission, and the wallet shows a human-readable consent message before executing each canister call. This guide covers integration using `@icp-sdk/signer`, the signer library in the ICP JavaScript SDK. @@ -20,7 +20,7 @@ Internet Identity and wallet signers serve different purposes: | **After approval** | Session delegation (sign-once, act-many) | Single call executed | | **Use when** | Read data, frequent writes, session-based UX | Token transfers, approvals, high-value one-off actions | -Use Internet Identity for login. Use a wallet signer when your dapp needs users to explicitly approve individual transactions — token transfers, NFT operations, or any action where a per-operation confirmation dialog is appropriate. +Use Internet Identity for login. Use a wallet signer when your app needs users to explicitly approve individual transactions: token transfers, NFT operations, or any action where a per-operation confirmation dialog is appropriate. ## ICRC signer standards @@ -28,11 +28,11 @@ The signer model is defined by a set of ICRC standards: | Standard | What it covers | |---|---| -| ICRC-21 | Canister call consent messages — human-readable summaries | -| ICRC-25 | Signer interaction standard — permission lifecycle | -| ICRC-27 | Accounts — requesting the user's principal | -| ICRC-29 | Window PostMessage transport — popup communication | -| ICRC-49 | Call canister — routing calls through the signer | +| ICRC-21 | Canister call consent messages: human-readable summaries | +| ICRC-25 | Signer interaction standard: permission lifecycle | +| ICRC-27 | Accounts: requesting the user's principal | +| ICRC-29 | Window PostMessage transport: popup communication | +| ICRC-49 | Call canister: routing calls through the signer | A compliant wallet (such as [OISY](https://oisy.com)) implements all five standards. @@ -40,10 +40,10 @@ A compliant wallet (such as [OISY](https://oisy.com)) implements all five standa The lifecycle of a wallet-initiated call: -1. Your dapp creates a `Signer` pointing to the wallet's signer URL -2. Call `getAccounts()` — the wallet popup opens and prompts the user to share their account +1. Your app creates a `Signer` pointing to the wallet's signer URL +2. Call `getAccounts()`: the wallet popup opens and prompts the user to share their account 3. Construct a `SignerAgent` using the returned principal -4. Use the agent with any canister actor — the wallet intercepts every call, fetches an ICRC-21 consent message from the target canister, shows it to the user, and only executes if the user approves +4. Use the agent with any canister actor. The wallet intercepts every call, fetches an ICRC-21 consent message from the target canister, shows it to the user, and only executes if the user approves The key insight: a `SignerAgent` is a drop-in replacement for `HttpAgent`. Code that creates actors with `HttpAgent` can switch to `SignerAgent` to add wallet approval to every call. @@ -86,7 +86,7 @@ await signer.requestPermissions([{ method: 'icrc27_accounts' }]); const accounts = await signer.getAccounts(); ``` -If you skip this step, the signer handles permissions per-method — the user sees a permissions prompt the first time each method is called. +If you skip this step, the signer handles permissions per-method. The user sees a permissions prompt the first time each method is called. ## Create a SignerAgent @@ -154,7 +154,7 @@ await signer.closeChannel(); ## Session persistence -The signer session is tied to the browser tab. After a page reload, the user's principal is no longer available from the signer. To avoid opening the popup again immediately, store the principal in `sessionStorage` and restore it on mount — then re-establish the signer session lazily when the user initiates a transfer: +The signer session is tied to the browser tab. After a page reload, the user's principal is no longer available from the signer. To avoid opening the popup again immediately, store the principal in `sessionStorage` and restore it on mount: then re-establish the signer session lazily when the user initiates a transfer: ```javascript import { Principal } from '@icp-sdk/core/principal'; @@ -186,9 +186,9 @@ try { } catch (err) { if (err instanceof SignerError) { switch (err.code) { - case 3001: // ACTION_ABORTED — user closed the popup or rejected the prompt + case 3001: // ACTION_ABORTED: user closed the popup or rejected the prompt break; - case 3000: // PERMISSION_NOT_GRANTED — permission was denied + case 3000: // PERMISSION_NOT_GRANTED: permission was denied break; default: console.error('Signer error', err.code, err.message); @@ -202,8 +202,8 @@ Common `err.code` values from the ICRC-25 standard: | Code | Meaning | |------|---------| | `3000` | Permission not granted | -| `3001` | Action aborted — user closed the popup or rejected | -| `4000` | Network error — IC call failed | +| `3001` | Action aborted: user closed the popup or rejected | +| `4000` | Network error: IC call failed | ## Local development @@ -217,13 +217,13 @@ const signer = new Signer({ const readAgent = await HttpAgent.create({ host: 'http://localhost:8000' }); ``` -For a test signer target, you can use any ICRC-25-compliant wallet running locally that exposes a `/sign` endpoint — for example, a local instance of [OISY](https://github.com/dfinity/oisy-wallet) or a custom signer built with `@icp-sdk/signer`. +For a test signer target, you can use any ICRC-25-compliant wallet running locally that exposes a `/sign` endpoint: for example, a local instance of [OISY](https://github.com/dfinity/oisy-wallet) or a custom signer built with `@icp-sdk/signer`. -On mainnet, omit `host` from `HttpAgent.create()` — it defaults to `https://icp0.io`. +On mainnet, omit `host` from `HttpAgent.create()`: it defaults to `https://icp0.io`. ## Working example -The [oisy-signer-demo](https://github.com/dfinity/examples/tree/master/hosting/oisy-signer-demo) example shows a complete dapp that: +The [oisy-signer-demo](https://github.com/dfinity/examples/tree/master/hosting/oisy-signer-demo) example shows a complete app that: 1. Connects to OISY and fetches the user's accounts 2. Queries ICRC-1 token balances using a read-only agent @@ -243,15 +243,15 @@ icp deploy Two additional libraries are available for more advanced wallet integration scenarios: -- [`@dfinity/ledger-wallet-identity`](https://www.npmjs.com/package/@dfinity/ledger-wallet-identity) — hardware wallet identity support -- [`@dfinity/icrc21-agent`](https://www.npmjs.com/package/@dfinity/icrc21-agent) — standalone ICRC-21 consent message agent +- [`@dfinity/ledger-wallet-identity`](https://www.npmjs.com/package/@dfinity/ledger-wallet-identity): hardware wallet identity support +- [`@dfinity/icrc21-agent`](https://www.npmjs.com/package/@dfinity/icrc21-agent): standalone ICRC-21 consent message agent Both libraries are expected to move to the `@icp-sdk` namespace on npm and will likely be covered in the wallet-integration skill going forward. They are not documented in detail here. ## Next steps -- [Internet Identity integration](../authentication/internet-identity.md) — add authentication alongside wallet signing -- [Token ledgers](token-ledgers.md) — work with ICRC-1 and ICRC-2 token standards -- [Token standards reference](../../reference/token-standards.md) — ICRC-1, ICRC-2, and related standards +- [Internet Identity integration](../authentication/internet-identity.md): add authentication alongside wallet signing +- [Token ledgers](token-ledgers.md): work with ICRC-1 and ICRC-2 token standards +- [Token standards reference](../../reference/token-standards.md): ICRC-1, ICRC-2, and related standards diff --git a/docs/guides/frontends/asset-canister.md b/docs/guides/frontends/asset-canister.md index 4ecde41b..93fc1804 100644 --- a/docs/guides/frontends/asset-canister.md +++ b/docs/guides/frontends/asset-canister.md @@ -154,7 +154,7 @@ npm run build icp deploy frontend ``` -If only static assets changed (no WASM update needed), use `icp sync` instead of a full redeploy — it skips canister reinstallation and only uploads changed files: +If only static assets changed (no WASM update needed), use `icp sync` instead of a full redeploy: it skips canister reinstallation and only uploads changed files: ```bash icp sync frontend diff --git a/docs/guides/frontends/certification.md b/docs/guides/frontends/certification.md index da2f7df1..8e79f8d8 100644 --- a/docs/guides/frontends/certification.md +++ b/docs/guides/frontends/certification.md @@ -5,7 +5,7 @@ sidebar: order: 3 --- -Query responses on ICP are answered by a single replica without going through consensus. A malicious or faulty replica could return fabricated data. **Response certification** solves this: canisters commit a cryptographic hash to the subnet's certified state, and query responses include a certificate signed by the subnet's threshold BLS key. HTTP gateways (boundary nodes) verify every response automatically, so users are protected without any extra client-side code — as long as the canister certifies its responses. +Query responses on ICP are answered by a single replica without going through consensus. A malicious or faulty replica could return fabricated data. **Response certification** solves this: canisters commit a cryptographic hash to the subnet's certified state, and query responses include a certificate signed by the subnet's threshold BLS key. HTTP gateways (boundary nodes) verify every response automatically, so users are protected without any extra client-side code: as long as the canister certifies its responses. This guide explains how certification works at the HTTP layer, what the asset canister does automatically, when you need custom certification, and how to verify certificates client-side. @@ -13,11 +13,11 @@ This guide explains how certification works at the HTTP layer, what the asset ca The asset canister implements **HTTP certification v2**, a protocol on top of certified data: -1. **Certification setup (update call)** — when an asset is uploaded, the canister inserts its path, response headers, and body hash into a Merkle tree and commits the tree's root hash via `certified_data_set`. The subnet includes this root hash in its certified state each consensus round. +1. **Certification setup (update call)**: when an asset is uploaded, the canister inserts its path, response headers, and body hash into a Merkle tree and commits the tree's root hash via `certified_data_set`. The subnet includes this root hash in its certified state each consensus round. -2. **HTTP query call** — when a browser requests an asset, the canister retrieves the subnet BLS certificate via `data_certificate()`, generates a Merkle proof (witness) for the requested path, and returns the response with `IC-Certificate` and `IC-Certificate-Expression` headers containing the certificate and witness. +2. **HTTP query call**: when a browser requests an asset, the canister retrieves the subnet BLS certificate via `data_certificate()`, generates a Merkle proof (witness) for the requested path, and returns the response with `IC-Certificate` and `IC-Certificate-Expression` headers containing the certificate and witness. -3. **Boundary node verification** — the HTTP gateway (boundary node) verifies the BLS signature on the certificate, extracts the certified root hash, and confirms the witness proves the response body and headers are included under that root hash. If verification fails, the gateway returns an error. +3. **Boundary node verification**: the HTTP gateway (boundary node) verifies the BLS signature on the certificate, extracts the certified root hash, and confirms the witness proves the response body and headers are included under that root hash. If verification fails, the gateway returns an error. ``` UPLOAD (update call, goes through consensus): @@ -47,7 +47,7 @@ The asset canister supports two serving modes: | Domain | Certification | Notes | |--------|--------------|-------| | `.icp0.io` | Verified | Boundary node checks every response | -| `.raw.icp0.io` | None | Responses not verified — use only when necessary | +| `.raw.icp0.io` | None | Responses not verified: use only when necessary | Raw access is enabled by default. Disable it in `.ic-assets.json5` for any assets that must not be served unverified: @@ -80,10 +80,10 @@ The asset canister certifies the full response: path, response body, status code Always certify headers that affect browser behavior. In particular: -- `Content-Type` — if uncertified, a malicious replica could serve HTML with `Content-Type: application/javascript`, causing the browser to execute it in a different context -- Security headers (`Content-Security-Policy`, `X-Frame-Options`, etc.) — if uncertified, a malicious replica could strip them +- `Content-Type`: if uncertified, a malicious replica could serve HTML with `Content-Type: application/javascript`, causing the browser to execute it in a different context +- Security headers (`Content-Security-Policy`, `X-Frame-Options`, etc.): if uncertified, a malicious replica could strip them -The `security_policy: "standard"` option in `.ic-assets.json5` certifies a baseline set of security headers. For custom headers, list them explicitly in `headers` — the asset canister certifies everything in that object. +The `security_policy: "standard"` option in `.ic-assets.json5` certifies a baseline set of security headers. For custom headers, list them explicitly in `headers`: the asset canister certifies everything in that object. ## Custom HTTP canisters @@ -97,7 +97,7 @@ Use custom HTTP certification when: - You need to certify dynamic responses (generated per request, not pre-uploaded assets) - You are building a canister that functions as its own frontend without using the standard asset canister -For static assets (HTML, CSS, JS, images), use the standard asset canister instead — it handles all certification automatically and is more efficient. +For static assets (HTML, CSS, JS, images), use the standard asset canister instead: it handles all certification automatically and is more efficient. ### Using ic-asset-certification @@ -167,7 +167,7 @@ fn init() { #[post_upgrade] fn post_upgrade() { - // Certified data is cleared on upgrade — must be re-established. + // Certified data is cleared on upgrade: must be re-established. certify_assets(); } @@ -196,13 +196,13 @@ For the full pattern including streaming, 404 fallbacks, and compressed encoding ### Using ic-http-certification -For more control — certifying dynamic responses, certifying only specific headers, or building a custom CEL expression — use the lower-level `ic-http-certification` crate directly. See the [ic-http-certification documentation](https://docs.rs/ic-http-certification) for details. +For more control (certifying dynamic responses, certifying only specific headers, or building a custom CEL expression) use the lower-level `ic-http-certification` crate directly. See the [ic-http-certification documentation](https://docs.rs/ic-http-certification) for details. ## Client-side certificate verification For standard asset serving via the asset canister, verification is transparent: the boundary node verifies every response before forwarding it to the browser, and you do not need any JavaScript verification code. -For custom canisters returning certified data over the Candid interface (not HTTP), you may need to verify the certificate in JavaScript. This is the pattern covered in [Certified variables](../backends/certified-variables.md) — the canister returns `(data, certificate, witness)` as Candid values, and the frontend verifies them with `@dfinity/certificate-verification`. +For custom canisters returning certified data over the Candid interface (not HTTP), you may need to verify the certificate in JavaScript. This is the pattern covered in [Certified variables](../backends/certified-variables.md): the canister returns `(data, certificate, witness)` as Candid values, and the frontend verifies them with `@dfinity/certificate-verification`. ### When client-side verification is needed @@ -221,7 +221,7 @@ npm install @dfinity/certificate-verification The `verifyCertification` function performs the full six-step verification: 1. Verify the certificate BLS signature against the IC root public key -2. Check certificate freshness — `/time` must be within `maxCertificateTimeOffsetMs` of the current time +2. Check certificate freshness: `/time` must be within `maxCertificateTimeOffsetMs` of the current time 3. CBOR-decode the witness into a hash tree 4. Reconstruct the witness root hash 5. Compare with `certified_data` in the certificate @@ -269,7 +269,7 @@ async function getVerifiedValue( // Confirm the canister-returned value matches what the witness proves. if (response.value !== null && response.value !== verifiedValue) { throw new Error( - "Response value does not match witness — canister returned tampered data" + "Response value does not match witness: canister returned tampered data" ); } @@ -288,14 +288,14 @@ const agent = await HttpAgent.create({ host: IS_LOCAL ? "http://localhost:8000" : "https://icp-api.io", // Only fetch root key on local networks. // On mainnet, the root key is hardcoded in the JS SDK. - // Fetching it on mainnet is a security risk — never do this in production. + // Fetching it on mainnet is a security risk: never do this in production. shouldFetchRootKey: IS_LOCAL, }); // Use agent.rootKey in verifyCertification calls ``` -> **Never call `fetchRootKey()` or set `shouldFetchRootKey: true` against mainnet.** These options let the agent fetch the root key from the replica over an unauthenticated connection — a man-in-the-middle could supply a fake root key and make forged certificates appear valid. On mainnet, the root key is hardcoded in the JS SDK. +> **Never call `fetchRootKey()` or set `shouldFetchRootKey: true` against mainnet.** These options let the agent fetch the root key from the replica over an unauthenticated connection: a man-in-the-middle could supply a fake root key and make forged certificates appear valid. On mainnet, the root key is hardcoded in the JS SDK. For the full working example including a backend canister, see the [certified-counter example](https://github.com/dfinity/response-verification/tree/main/examples/certification/certified-counter). @@ -315,9 +315,9 @@ For the full working example including a backend canister, see the [certified-co ## Next steps -- [Asset canister](asset-canister.md) — deploy and configure the standard asset canister with automatic certification -- [Certified variables](../backends/certified-variables.md) — certify Candid query responses from backend canisters -- [Security concepts](../../concepts/security.md) — why query integrity matters -- [HTTP Gateway specification](../../reference/http-gateway-spec.md) — how boundary nodes verify responses +- [Asset canister](asset-canister.md): deploy and configure the standard asset canister with automatic certification +- [Certified variables](../backends/certified-variables.md): certify Candid query responses from backend canisters +- [Security concepts](../../concepts/security.md): why query integrity matters +- [HTTP Gateway specification](../../reference/http-gateway-spec.md): how boundary nodes verify responses diff --git a/docs/guides/frontends/custom-domains.md b/docs/guides/frontends/custom-domains.md index 81e143a1..f0b9299f 100644 --- a/docs/guides/frontends/custom-domains.md +++ b/docs/guides/frontends/custom-domains.md @@ -43,7 +43,7 @@ Some registrars omit the main domain suffix when entering records. For `app.exam - `_canister-id.app` instead of `_canister-id.app.example.com` - `_acme-challenge.app` instead of `_acme-challenge.app.example.com` -**Apex domains:** Many registrars do not allow a `CNAME` on the apex (e.g., `example.com` without a subdomain). Use your provider's `ANAME` or `ALIAS` record type if available — these work like CNAME flattening and point to `CUSTOM_DOMAIN.icp1.io`. For GoDaddy apex domains, use Cloudflare or another provider that supports apex CNAME flattening. +**Apex domains:** Many registrars do not allow a `CNAME` on the apex (e.g., `example.com` without a subdomain). Use your provider's `ANAME` or `ALIAS` record type if available: these work like CNAME flattening and point to `CUSTOM_DOMAIN.icp1.io`. For GoDaddy apex domains, use Cloudflare or another provider that supports apex CNAME flattening. **Cloudflare users (if you already use Cloudflare as your DNS provider):** Disable Universal SSL under SSL/TLS > Edge Certificates before registering. Cloudflare's Universal SSL interferes with the ACME certificate challenge used by ICP. Also set DNS mode to "DNS only" (not proxied). If you are on Namecheap, GoDaddy, or Route 53 without Cloudflare, this note does not apply to you. @@ -129,7 +129,7 @@ If validation fails, the response indicates what is wrong: | Missing DNS CNAME record | Add the `_acme-challenge` CNAME pointing to `_acme-challenge.CUSTOM_DOMAIN.icp2.io` | | Missing DNS TXT record | Add the `_canister-id` TXT record with your canister ID | | Invalid DNS TXT record | Ensure the TXT value is a valid canister ID (no extra spaces or quotes) | -| More than one DNS TXT record | Remove duplicate `_canister-id` TXT records — keep exactly one | +| More than one DNS TXT record | Remove duplicate `_canister-id` TXT records: keep exactly one | | Failed to retrieve known domains | Ensure `.well-known/ic-domains` is deployed and served (`ignore: false` in `.ic-assets.json5`) | | Domain missing from list | Add the domain to the `ic-domains` file and redeploy | @@ -154,9 +154,9 @@ A successful response: Common registration errors: -- **bad_request** — Invalid domain format, missing DNS records, or validation errors. Run the validate endpoint first. -- **conflict** — A certificate already exists for this domain, or another registration task is in progress. Retry after a few minutes. -- **internal_server_error** — An unexpected error occurred. Retry later. +- **bad_request**: Invalid domain format, missing DNS records, or validation errors. Run the validate endpoint first. +- **conflict**: A certificate already exists for this domain, or another registration task is in progress. Retry after a few minutes. +- **internal_server_error**: An unexpected error occurred. Retry later. ## Step 6: Wait for certificate provisioning @@ -172,8 +172,8 @@ The `registration_status` field progresses from `registering` → `registered`: |---|---| | `registering` | Request accepted, certificate provisioning in progress | | `registered` | Certificate issued, domain is live | -| `expired` | Certificate has expired — re-register with a `POST` request to trigger a new provisioning cycle | -| `failed` | Registration failed — check the error message in the response | +| `expired` | Certificate has expired: re-register with a `POST` request to trigger a new provisioning cycle | +| `failed` | Registration failed: check the error message in the response | Once `registered`, wait a few more minutes for propagation to all HTTP gateways before testing in a browser. @@ -220,7 +220,7 @@ const host = isProduction ? "https://icp-api.io" : undefined; const agent = await HttpAgent.create({ host }); ``` -Without this, `HttpAgent` falls back to using the page origin as the API host — which will fail on custom domains since they do not proxy IC API traffic. +Without this, `HttpAgent` falls back to using the page origin as the API host: which will fail on custom domains since they do not proxy IC API traffic. For local development, you also need to pass `shouldFetchRootKey: true` so the agent can fetch the replica's root key. See [Asset canister](asset-canister.md) for a complete local + mainnet agent setup example. @@ -252,7 +252,7 @@ To point an existing custom domain at a different canister: curl -sL -X DELETE "https://icp0.io/custom-domains/v1/CUSTOM_DOMAIN" | jq ``` -3. Confirm deletion — the status endpoint should return 404: +3. Confirm deletion. The status endpoint should return 404: ```bash curl -sL -X GET "https://icp0.io/custom-domains/v1/CUSTOM_DOMAIN" | jq @@ -359,8 +359,8 @@ Remove any duplicates and keep exactly one record containing your canister ID. ## Next steps -- [Certification](certification.md) — Enable certified asset responses for your custom domain -- [Cycles management](../canister-management/cycles-management.md) — Ensure your canister has sufficient cycles for production traffic -- [Internet Identity](../authentication/internet-identity.md) — Configure alternative origins if your users authenticate with II +- [Certification](certification.md): Enable certified asset responses for your custom domain +- [Cycles management](../canister-management/cycles-management.md): Ensure your canister has sufficient cycles for production traffic +- [Internet Identity](../authentication/internet-identity.md): Configure alternative origins if your users authenticate with II diff --git a/docs/guides/frontends/frameworks.md b/docs/guides/frontends/frameworks.md index cebca6c0..44039d6b 100644 --- a/docs/guides/frontends/frameworks.md +++ b/docs/guides/frontends/frameworks.md @@ -5,7 +5,7 @@ sidebar: order: 4 --- -ICP hosts frontend applications as asset canisters — static files (HTML, CSS, JavaScript) deployed onchain and served with certified responses. Any framework that can produce a static build output works: React, Vue, Svelte, Next.js, and even game engines like Unity WebGL and Godot. +ICP hosts frontend applications as asset canisters: static files (HTML, CSS, JavaScript) deployed onchain and served with certified responses. Any framework that can produce a static build output works: React, Vue, Svelte, Next.js, and even game engines like Unity WebGL and Godot. This guide shows you how to configure your framework's build pipeline, wire up the ICP JavaScript SDK, and deploy to an asset canister. @@ -24,7 +24,7 @@ Every frontend framework integration follows the same pattern: 3. Use `@icp-sdk/core` in your app to read canister IDs and the root key at runtime from the `ic_env` cookie served by the asset canister 4. Deploy with `icp deploy` -The asset canister injects an `ic_env` cookie into every HTML response. This cookie carries the root key and any `PUBLIC_CANISTER_ID:` environment variables you set — so your frontend never needs canister IDs baked into the build artifact. +The asset canister injects an `ic_env` cookie into every HTML response. This cookie carries the root key and any `PUBLIC_CANISTER_ID:` environment variables you set: so your frontend never needs canister IDs baked into the build artifact. ## React with Vite @@ -87,7 +87,7 @@ export default defineConfig({ The `icpBindgen` Vite plugin regenerates TypeScript bindings whenever the `.did` file changes during development. -The `server.headers` block simulates the `ic_env` cookie during `vite dev`. In production, the asset canister injects this cookie automatically — your code reads it without any build-time environment variables. +The `server.headers` block simulates the `ic_env` cookie during `vite dev`. In production, the asset canister injects this cookie automatically: your code reads it without any build-time environment variables. Install the required packages: @@ -178,15 +178,15 @@ export default defineConfig({ }); ``` -If your Vue app calls `getCanisterEnv()` to read canister IDs, add the same `server.headers` block from the React section to simulate the `ic_env` cookie during local development — otherwise `getCanisterEnv()` will throw because the cookie is absent. The `icp.yaml` configuration is the same as the React example — point `dir` at `dist`. +If your Vue app calls `getCanisterEnv()` to read canister IDs, add the same `server.headers` block from the React section to simulate the `ic_env` cookie during local development (otherwise `getCanisterEnv()` will throw because the cookie is absent. The `icp.yaml` configuration is the same as the React example) point `dir` at `dist`. ## Authentication -Authentication with Internet Identity is framework-agnostic — the `@icp-sdk/auth` package works the same way in React, Vue, Svelte, and Next.js static export mode. See the [Internet Identity guide](../authentication/internet-identity.md) for integration steps. +Authentication with Internet Identity is framework-agnostic. The `@icp-sdk/auth` package works the same way in React, Vue, Svelte, and Next.js static export mode. See the [Internet Identity guide](../authentication/internet-identity.md) for integration steps. ## Svelte and SvelteKit -For SvelteKit, you must configure static export mode before deploying — the asset canister serves static files and does not support server-side rendering. +For SvelteKit, you must configure static export mode before deploying. The asset canister serves static files and does not support server-side rendering. ### SvelteKit with static adapter @@ -222,11 +222,11 @@ canisters: dir: build ``` -For Svelte (without SvelteKit), Vite is the standard build tool. The `vite.config.js` setup is the same as Vue — swap `@vitejs/plugin-vue` for `@sveltejs/vite-plugin-svelte`. +For Svelte (without SvelteKit), Vite is the standard build tool. The `vite.config.js` setup is the same as Vue: swap `@vitejs/plugin-vue` for `@sveltejs/vite-plugin-svelte`. ## Next.js -Next.js requires static export mode. Server components, API routes, and `getServerSideProps` are not supported in an asset canister — the canister only serves static files. +Next.js requires static export mode. Server components, API routes, and `getServerSideProps` are not supported in an asset canister. The canister only serves static files. Enable static export in your Next.js config: @@ -260,7 +260,7 @@ Only Next.js pages that can be statically generated are compatible with ICP. Any ## Game engines -Game engines that export HTML5 or WebGL builds can be deployed as asset canisters without a backend canister. The build output is pre-generated in the export step of the engine — `icp.yaml` just copies the files into place. +Game engines that export HTML5 or WebGL builds can be deployed as asset canisters without a backend canister. The build output is pre-generated in the export step of the engine: `icp.yaml` just copies the files into place. ### Unity WebGL @@ -319,7 +319,7 @@ icp deploy # http://.localhost:8000 ``` -No Vite plugin or JS SDK integration is needed for game builds — the asset canister serves the pre-built HTML and JavaScript files directly. +No Vite plugin or JS SDK integration is needed for game builds. The asset canister serves the pre-built HTML and JavaScript files directly. ## Static sites @@ -370,8 +370,8 @@ icp canister settings show frontend -i ## Next steps -- [Asset canister](asset-canister.md) — configure headers, caching, and SPA routing in `.ic-assets.json5` -- [Internet Identity](../authentication/internet-identity.md) — add authentication to your frontend -- [Project structure](../../getting-started/project-structure.md) — explore the hello-world template with React, Vite, and a Motoko backend +- [Asset canister](asset-canister.md): configure headers, caching, and SPA routing in `.ic-assets.json5` +- [Internet Identity](../authentication/internet-identity.md): add authentication to your frontend +- [Project structure](../../getting-started/project-structure.md): explore the hello-world template with React, Vite, and a Motoko backend diff --git a/docs/guides/governance/launching.md b/docs/guides/governance/launching.md index cbf4bf71..477a64cd 100644 --- a/docs/guides/governance/launching.md +++ b/docs/guides/governance/launching.md @@ -1,21 +1,21 @@ --- title: "Launching an SNS" -description: "Decentralize your dapp with an SNS: token economics, governance setup, and NNS proposal submission" +description: "Decentralize your app with an SNS: token economics, governance setup, and NNS proposal submission" sidebar: order: 1 --- -A Service Nervous System (SNS) is a DAO framework that transfers control of your dapp from your team to a community of token holders. After launch, canister upgrades, treasury spending, and governance parameters all require token holder votes — your team no longer has unilateral control. +A Service Nervous System (SNS) is a DAO framework that transfers control of your app from your team to a community of token holders. After launch, canister upgrades, treasury spending, and governance parameters all require token holder votes: your team no longer has unilateral control. This guide walks through the complete launch process: from designing your tokenomics and configuring `sns_init.yaml`, to adding NNS root as a co-controller and submitting the NNS proposal that triggers the swap. ## Before you start -SNS launch is irreversible. Once the NNS proposal is adopted and the swap succeeds, your dapp canisters are fully controlled by SNS root. Review these prerequisites before proceeding: +SNS launch is irreversible. Once the NNS proposal is adopted and the swap succeeds, your app canisters are fully controlled by SNS root. Review these prerequisites before proceeding: -- Your dapp canisters are deployed and working on mainnet +- Your app canisters are deployed and working on mainnet - You hold an NNS neuron with sufficient stake to submit proposals (8 ICP minimum stake, plus dissolve delay) -- You have done a security review and open-sourced the dapp code +- You have done a security review and open-sourced the app code - Your tokenomics design is finalized and community-vetted See [concepts/governance.md](../../concepts/governance.md) for background on how SNS DAOs work. @@ -26,14 +26,14 @@ See [concepts/governance.md](../../concepts/governance.md) for background on how Before writing a single line of configuration, define these parameters clearly: -**Token utility** — explain what the token is used for within your ecosystem. Common utilities include governance voting, access to premium features, and in-app payments. +**Token utility**: explain what the token is used for within your ecosystem. Common utilities include governance voting, access to premium features, and in-app payments. -**Initial token allocation** — decide how tokens are split across these buckets: -- **Developer neurons** — tokens for founders and seed investors, with vesting schedules. Best practice is 12–48 months of vesting. -- **Treasury** — tokens controlled by the DAO governance canister for future spending proposals. -- **Swap** — tokens sold during the decentralization swap in exchange for ICP. +**Initial token allocation**: decide how tokens are split across these buckets: +- **Developer neurons**: tokens for founders and seed investors, with vesting schedules. Best practice is 12–48 months of vesting. +- **Treasury**: tokens controlled by the DAO governance canister for future spending proposals. +- **Swap**: tokens sold during the decentralization swap in exchange for ICP. -**Swap parameters** — set realistic minimums. If you require 500 participants but only 200 show up, the entire swap fails and all ICP is refunded. Most successful SNS launches use 100–200 minimum participants. +**Swap parameters**: set realistic minimums. If you require 500 participants but only 200 show up, the entire swap fails and all ICP is refunded. Most successful SNS launches use 100–200 minimum participants. Use the [SNS Tokenomics Analyzer](https://dashboard.internetcomputer.org/sns/tokenomics) to evaluate your configuration and model voting power distribution before committing to parameters. @@ -56,12 +56,12 @@ Start a discussion thread in the [SNS Launch Proposals](https://forum.dfinity.or Before submitting the NNS proposal: -- [ ] Dapp canisters are deployed and stable on mainnet +- [ ] App canisters are deployed and stable on mainnet - [ ] Source code is open-sourced with reproducible build instructions - [ ] Security review completed, critical findings addressed - [ ] SNS launch tested locally using [dfinity/sns-testing](https://github.com/dfinity/sns-testing) - [ ] SNS testflight deployed on mainnet to verify governance and upgrade flows -- [ ] All dapp operations (canister upgrades, asset updates) tested via SNS proposals +- [ ] All app operations (canister upgrades, asset updates) tested via SNS proposals - [ ] Cycles management strategy in place so canisters never run out after launch - [ ] Frontend SNS integration tested (swap UI, proposal voting) - [ ] `sns_init.yaml` parameters validated locally @@ -72,15 +72,15 @@ All launch parameters are defined in a single YAML file. Copy the [template](htt The file has five sections: -**Project metadata** — name, description, logo, and URL displayed to NNS voters and swap participants. +**Project metadata**: name, description, logo, and URL displayed to NNS voters and swap participants. -**Governance parameters** — proposal rejection fees, voting periods, minimum neuron stake, and voting power bonuses for dissolve delay and neuron age. +**Governance parameters**: proposal rejection fees, voting periods, minimum neuron stake, and voting power bonuses for dissolve delay and neuron age. -**Token configuration** — token name, ticker symbol, and ledger transaction fee. +**Token configuration**: token name, ticker symbol, and ledger transaction fee. -**Token distribution** — developer neuron allocations (with dissolve delays and vesting periods), treasury balance, and swap allocation. +**Token distribution**: developer neuron allocations (with dissolve delays and vesting periods), treasury balance, and swap allocation. -**Swap parameters** — minimum and maximum ICP participation, minimum participant count, swap duration (3–7 days is standard), Neurons' Fund participation, and any geo-restrictions or confirmation text. +**Swap parameters**: minimum and maximum ICP participation, minimum participant count, swap duration (3–7 days is standard), Neurons' Fund participation, and any geo-restrictions or confirmation text. Example configuration skeleton: @@ -165,25 +165,25 @@ Swap: - CN ``` -Add comments throughout the file explaining your parameter choices — the NNS community will read this file when evaluating the proposal. +Add comments throughout the file explaining your parameter choices. The NNS community will read this file when evaluating the proposal. **Important constraints:** -- `fallback_controller_principals` must be set. If the swap fails, these principals regain control of the dapp canisters. Without this, your dapp becomes uncontrollable if the swap fails. +- `fallback_controller_principals` must be set. If the swap fails, these principals regain control of the app canisters. Without this, your app becomes uncontrollable if the swap fails. - Developer neuron `total` must equal the sum of all neuron stakes, treasury, and swap allocations exactly. - Only six proposal types are blocked during the swap window: `ManageNervousSystemParameters`, `TransferSnsTreasuryFunds`, `MintSnsTokens`, `UpgradeSnsControlledCanister`, `RegisterDappCanisters`, and `DeregisterDappCanisters`. Do not plan operations requiring these during the swap. ## Launch stages -The SNS launch proceeds through 11 stages. Only the first three require action from your team — the rest are automatic. +The SNS launch proceeds through 11 stages. Only the first three require action from your team. The rest are automatic. ### Stage 1: Define parameters (manual) -Finalize `sns_init.yaml` with the parameters you have designed. These parameters become locked into the NNS proposal — you cannot change them after submission. +Finalize `sns_init.yaml` with the parameters you have designed. These parameters become locked into the NNS proposal: you cannot change them after submission. ### Stage 2: Add NNS root as co-controller (manual) -Add the NNS root canister (`r7inp-6aaaa-aaaaa-aaabq-cai`) as a co-controller of each dapp canister. This is required for the automated stages to proceed — it gives NNS the authority to transfer canister control to SNS root after the proposal is adopted. +Add the NNS root canister (`r7inp-6aaaa-aaaaa-aaabq-cai`) as a co-controller of each app canister. This is required for the automated stages to proceed: it gives NNS the authority to transfer canister control to SNS root after the proposal is adopted. ```bash icp canister settings update BACKEND_CANISTER_ID \ @@ -195,7 +195,7 @@ icp canister settings update FRONTEND_CANISTER_ID \ -e ic ``` -Also revoke any special permissions your team held. For example, if developers had direct commit access to asset canisters, revoke that now — after launch, asset updates must go through SNS proposals: +Also revoke any special permissions your team held. For example, if developers had direct commit access to asset canisters, revoke that now: after launch, asset updates must go through SNS proposals: ```bash # If using the asset canister, revoke direct commit permission from developer principals @@ -219,7 +219,7 @@ dfx sns propose --network ic --neuron $NEURON_ID sns_init.yaml There can only be one SNS creation proposal active in the NNS at a time. If another project's proposal is currently being voted on, you must wait for it to resolve before submitting yours. -After submitting, monitor your proposal's status on the [NNS dapp](https://nns.ic0.app) or by querying NNS governance directly. +After submitting, monitor your proposal's status on the [NNS app](https://nns.ic0.app) or by querying NNS governance directly. ### Stages 4–11: Automatic @@ -229,18 +229,18 @@ After the NNS community votes to adopt the proposal, the remaining stages execut |-------|-------------| | 4 | NNS community votes; if adopted, remaining stages are triggered | | 5 | SNS-W deploys uninitialized SNS canisters on an SNS subnet | -| 6 | SNS root becomes sole controller of dapp canisters | +| 6 | SNS root becomes sole controller of app canisters | | 7 | SNS canisters are initialized in pre-decentralization-swap mode | | 8 | 24-hour minimum wait before swap opens (timing protocol applied) | | 9 | Decentralization swap opens; users send ICP and receive SNS neurons | | 10 | Swap closes (duration expires or maximum ICP reached) | | 11 | Finalization: exchange rate set, SNS neurons distributed, normal mode activated | -If the swap reaches the minimum participation requirements, it succeeds: SNS governance enters normal mode, token holders become the DAO, and your dapp is fully decentralized. If the swap fails (not enough participants or ICP), everything reverts: your dapp's control returns to the `fallback_controller_principals`, and all ICP contributions are refunded. +If the swap reaches the minimum participation requirements, it succeeds: SNS governance enters normal mode, token holders become the DAO, and your app is fully decentralized. If the swap fails (not enough participants or ICP), everything reverts: your app's control returns to the `fallback_controller_principals`, and all ICP contributions are refunded. ## Prepare your canister for SNS governance -Your canister code does not need to change for basic SNS compatibility — SNS governance controls upgrades through the standard canister management API. However, if your canister has admin functions that were previously protected by principal checks, transition them to accept calls from the SNS governance canister: +Your canister code does not need to change for basic SNS compatibility: SNS governance controls upgrades through the standard canister management API. However, if your canister has admin functions that were previously protected by principal checks, transition them to accept calls from the SNS governance canister: **Motoko:** @@ -287,7 +287,7 @@ use ic_cdk::update; use std::cell::RefCell; thread_local! { - // ⚠ STATE LOSS: thread_local! RefCell is heap storage — it is wiped on upgrade. + // ⚠ STATE LOSS: thread_local! RefCell is heap storage: it is wiped on upgrade. // Use ic-stable-structures in production to persist across upgrades. // See: https://docs.rs/ic-stable-structures/latest/ic_stable_structures/ for StableCell. static SNS_GOVERNANCE: RefCell> = RefCell::new(None); @@ -348,7 +348,7 @@ icp canister call sns_governance get_nervous_system_parameters '()' # Verify total token supply matches your configuration icp canister call sns_ledger icrc1_total_supply '()' -# Confirm dapp canister controller is SNS root (not your principal) +# Confirm app canister controller is SNS root (not your principal) icp canister status BACKEND_CANISTER_ID ``` @@ -361,23 +361,23 @@ icp canister call SNS_SWAP_CANISTER_ID get_state '()' -e ic ## Common mistakes -**Setting `min_participants` too high.** If the minimum is not reached, the swap fails and all ICP is refunded. Start conservative — 100–200 is typical for a first launch. +**Setting `min_participants` too high.** If the minimum is not reached, the swap fails and all ICP is refunded. Start conservative: 100–200 is typical for a first launch. **Forgetting to add NNS root as co-controller.** The launch will fail at stage 6 if NNS root was not added before the proposal was submitted. -**Not doing a testflight first.** The SNS testflight deploys a mock SNS on mainnet without doing a real swap — it lets you test governance flows and canister upgrade proposals before committing to the real launch. +**Not doing a testflight first.** The SNS testflight deploys a mock SNS on mainnet without doing a real swap: it lets you test governance flows and canister upgrade proposals before committing to the real launch. **Developer neurons with no vesting or short dissolve delays.** These are separate but related concerns: a *vesting period* prevents a neuron from being dissolved during the vesting window; a *dissolve delay* sets the cooldown before a stopped neuron becomes liquid. Developer neurons with no vesting period and zero dissolve delay allow the team to immediately sell tokens post-launch. Set both a vesting period and a dissolve delay (12–48 months is standard for each) to demonstrate long-term commitment to the NNS community. **Unreasonable tokenomics.** The NNS community votes on your proposal. Excessive developer allocation, zero vesting, or swap parameters outside reasonable bounds will lead to rejection. Review past successful SNS launches (OpenChat, Hot or Not, Kinic) for parameter ranges the community accepts. -**Not defining fallback controllers.** Without `fallback_controller_principals`, a failed swap leaves your dapp without any controllers — permanently unupgradeable. +**Not defining fallback controllers.** Without `fallback_controller_principals`, a failed swap leaves your app without any controllers: permanently unupgradeable. **Swap duration too short.** Less than 24 hours is risky given global time zones. Three to seven days is standard. ## Next steps -- [Testing an SNS](testing.md) — test your SNS configuration locally and with a mainnet testflight before submitting the NNS proposal -- [Managing an SNS](managing.md) — post-launch operations: submitting proposals, managing the treasury, upgrading canisters +- [Testing an SNS](testing.md): test your SNS configuration locally and with a mainnet testflight before submitting the NNS proposal +- [Managing an SNS](managing.md): post-launch operations: submitting proposals, managing the treasury, upgrading canisters diff --git a/docs/guides/governance/managing.md b/docs/guides/governance/managing.md index 14120210..b134b5d9 100644 --- a/docs/guides/governance/managing.md +++ b/docs/guides/governance/managing.md @@ -5,7 +5,7 @@ sidebar: order: 3 --- -After an SNS launch succeeds, no single entity controls the dapp or its governance canisters — the community does. Every upgrade, parameter change, treasury transfer, and asset update must go through an SNS proposal and be approved by token holder vote. This guide covers the day-to-day operations of a live SNS: submitting and understanding proposals, keeping canisters funded with cycles, updating asset canisters via governance, and participating as a neuron holder. +After an SNS launch succeeds, no single entity controls the app or its governance canisters. The community does. Every upgrade, parameter change, treasury transfer, and asset update must go through an SNS proposal and be approved by token holder vote. This guide covers the day-to-day operations of a live SNS: submitting and understanding proposals, keeping canisters funded with cycles, updating asset canisters via governance, and participating as a neuron holder. For background on how SNS DAOs work, see [SNS governance concepts](../../concepts/governance.md). For the launch process itself, see [Launching an SNS](launching.md). @@ -14,7 +14,7 @@ For background on how SNS DAOs work, see [SNS governance concepts](../../concept An SNS proposal is a call to a method on a specific canister, executed fully onchain if the DAO adopts the proposal. Any eligible neuron (one meeting the minimum stake and dissolve delay requirements set in the nervous system parameters) can submit a proposal. The submitter pays a rejection fee if the proposal is rejected. Proposals are adopted or rejected based on these rules: -- A proposal is **adopted immediately** if more than half of all available voting power votes yes — the result cannot be reversed, so waiting is pointless. +- A proposal is **adopted immediately** if more than half of all available voting power votes yes. The result cannot be reversed, so waiting is pointless. - A proposal is **rejected immediately** if at least half of all available voting power votes no. - If the voting deadline is reached, the proposal is adopted if there are more yes votes than no votes and the used voting power exceeds the minimum threshold (currently set to 3% of total available voting power). @@ -58,7 +58,7 @@ quill sns \ quill send message.json ``` -The `sns_canister_ids.json` file lists all canister IDs for your SNS — see the [quill example file](https://github.com/dfinity/quill/blob/master/e2e/assets/sns_canister_ids.json) for the format. +The `sns_canister_ids.json` file lists all canister IDs for your SNS: see the [quill example file](https://github.com/dfinity/quill/blob/master/e2e/assets/sns_canister_ids.json) for the format. Community-built tools like [ic-toolkit.app/sns-management](https://ic-toolkit.app/sns-management) provide a web interface for submitting proposals without using the CLI directly. @@ -68,7 +68,7 @@ SNS governance comes with built-in proposal types. Below are the most common one ### Motion -A motion proposal has no on-chain effect — it does not call any method. Use it for opinion polls, governance signaling, or gathering community consensus before a technical proposal. +A motion proposal has no on-chain effect: it does not call any method. Use it for opinion polls, governance signaling, or gathering community consensus before a technical proposal. ```bash quill sns \ @@ -93,7 +93,7 @@ quill send message.json ### ManageNervousSystemParameters -Each SNS can be customized through its nervous system parameters, which configure voting mechanics, staking requirements, reward rates, and more. Any parameter can be updated by proposal — set `null` for fields you do not want to change. +Each SNS can be customized through its nervous system parameters, which configure voting mechanics, staking requirements, reward rates, and more. Any parameter can be updated by proposal: set `null` for fields you do not want to change. ```candid type NervousSystemParameters = record { @@ -180,7 +180,7 @@ quill send message.json ### UpgradeSnsControlledCanister -Upgrades a dapp canister controlled by the SNS to a new Wasm. Because Wasm binaries are large and awkward to pass as CLI arguments, use `quill sns make-upgrade-canister-proposal` instead of `make-proposal`: +Upgrades an app canister controlled by the SNS to a new Wasm. Because Wasm binaries are large and awkward to pass as CLI arguments, use `quill sns make-upgrade-canister-proposal` instead of `make-proposal`: ```bash export WASM_PATH="/home/user/my_backend.wasm.gz" @@ -317,7 +317,7 @@ quill send message.json ### MintSnsTokens -Mints new SNS tokens to a specific account. Use sparingly — unexpected minting dilutes existing token holders and can erode community trust. +Mints new SNS tokens to a specific account. Use sparingly: unexpected minting dilutes existing token holders and can erode community trust. ```bash quill sns \ @@ -344,19 +344,19 @@ quill send message.json ## Custom proposals (generic nervous system functions) -Custom proposals let SNS communities define their own governance-gated operations beyond what the native proposal types provide. A custom proposal calls a specific method on a specific canister when adopted — any behavior your dapp needs can be made governable this way. +Custom proposals let SNS communities define their own governance-gated operations beyond what the native proposal types provide. A custom proposal calls a specific method on a specific canister when adopted: any behavior your app needs can be made governable this way. Each custom proposal has two parts: - **Target**: the canister and method that execute the action when the proposal is adopted -- **Validator**: the canister and method that validate the payload when the proposal is submitted (not at execution time — validate again in the target method) +- **Validator**: the canister and method that validate the payload when the proposal is submitted (not at execution time: validate again in the target method) ### Security considerations Before registering a custom proposal: - The target and validator canisters should be controlled by the SNS DAO, not by individual principals - The target method must verify that only the SNS governance canister is the caller -- Both methods must always return a response — if the governance canister has an open call context it cannot be stopped, which blocks urgent upgrades -- Validate inputs again in the target method at execution time, not just in the validator — conditions can change during the multi-day voting period +- Both methods must always return a response: if the governance canister has an open call context it cannot be stopped, which blocks urgent upgrades +- Validate inputs again in the target method at execution time, not just in the validator: conditions can change during the multi-day voting period - Avoid inter-canister calls in both methods to minimize re-entrancy risk ### AddGenericNervousSystemFunction @@ -396,7 +396,7 @@ quill send message.json IDs 0–999 are reserved for native proposal types. Use IDs 1000+ for custom proposals. -The SNS governance interface also accepts an optional `topic` field in `GenericNervousSystemFunction` to categorize the proposal under a governance topic. The `topic` field is `opt Topic` — omitting it is valid, but setting an appropriate topic helps token holders filter and follow proposals by category. +The SNS governance interface also accepts an optional `topic` field in `GenericNervousSystemFunction` to categorize the proposal under a governance topic. The `topic` field is `opt Topic`: omitting it is valid, but setting an appropriate topic helps token holders filter and follow proposals by category. ### ExecuteGenericNervousSystemFunction @@ -451,10 +451,10 @@ quill send message.json ## Cycles management -SNS communities are fully responsible for keeping all SNS canisters and governed dapp canisters funded with cycles. The NNS maintains the code but not the cycle balances. If any canister runs out of cycles and is deleted, it is gone permanently. +SNS communities are fully responsible for keeping all SNS canisters and governed app canisters funded with cycles. The NNS maintains the code but not the cycle balances. If any canister runs out of cycles and is deleted, it is gone permanently. :::caution[Archive canisters] -The SNS ledger automatically spawns archive canisters as blocks accumulate. When a new archive is spawned, the ledger transfers a portion of its cycles to fund the archive. **Monitor archive canisters separately** — if an archive runs out of cycles, ledger block history is lost. SNS canisters start with 180T cycles; the ledger starts with 60T (allocated as 30T for itself and 30T per archive). +The SNS ledger automatically spawns archive canisters as blocks accumulate. When a new archive is spawned, the ledger transfers a portion of its cycles to fund the archive. **Monitor archive canisters separately**: if an archive runs out of cycles, ledger block history is lost. SNS canisters start with 180T cycles; the ledger starts with 60T (allocated as 30T for itself and 30T per archive). ::: ### Find all canisters and their cycle balances @@ -467,7 +467,7 @@ icp canister call SNS_ROOT_CANISTER_ID get_sns_canisters_summary '(record { upda -This returns a list of all SNS framework canisters and registered dapp canisters with their current cycle balances. +This returns a list of all SNS framework canisters and registered app canisters with their current cycle balances. ### Top up a canister @@ -489,7 +489,7 @@ For a broader guide on cycles management strategies, see [Cycles management](../ ## Asset canister updates -A dapp controlled by an SNS often includes an asset canister that serves the frontend. Once the SNS launches, the governance canister holds `Commit` permissions on the asset canister — no one can update assets without a successful governance vote. +An app controlled by an SNS often includes an asset canister that serves the frontend. Once the SNS launches, the governance canister holds `Commit` permissions on the asset canister. No one can update assets without a successful governance vote. The update process uses a custom proposal (generic nervous system function): 1. A principal with `Prepare` permissions stages the new assets @@ -552,7 +552,7 @@ icp canister call YOUR_ASSET_CANISTER_ID create_asset \ icp canister call YOUR_ASSET_CANISTER_ID set_asset_content \ '(record { key = "/index.html"; sha256 = null; chunk_ids = vec { 1 : nat }; content_encoding = "identity" })' -e ic -# 4. Propose committing the batch — this locks the batch for proposal and returns the evidence hash +# 4. Propose committing the batch: this locks the batch for proposal and returns the evidence hash icp canister call YOUR_ASSET_CANISTER_ID propose_commit_batch \ '(record { batch_id = 2 : nat; operations = vec {} })' -e ic # Returns: (record { evidence = blob "..." }) @@ -613,7 +613,7 @@ icp canister call YOUR_ASSET_CANISTER_ID delete_batch \ ## Neuron management -Neurons are the staking units that give token holders voting power and a share of governance rewards. To create an SNS neuron, stake SNS tokens to the SNS governance canister using the NNS dapp or a compatible wallet. The SNS governance canister derives your neuron's subaccount from your principal and a nonce using a domain-separated hash. +Neurons are the staking units that give token holders voting power and a share of governance rewards. To create an SNS neuron, stake SNS tokens to the SNS governance canister using the NNS app or a compatible wallet. The SNS governance canister derives your neuron's subaccount from your principal and a nonce using a domain-separated hash. The SNS neuron staking flow is a two-step process: first transfer SNS tokens to the governance canister using the derived subaccount, then call `claim_or_refresh_neuron_from_account` on the SNS governance canister to claim the neuron. Note that this is distinct from NNS neuron staking, which uses NNS governance and the ICP ledger. @@ -637,17 +637,17 @@ icp canister status SNS_GOVERNANCE_CANISTER_ID -e ic **Not monitoring archive canister cycles.** The SNS ledger spawns archive canisters automatically. These are easy to miss since they are not in the original canister list. If an archive runs out of cycles, the ledger's transaction history is permanently lost. -**Submitting a proposal before community discussion.** The rejection fee is paid even if the proposal passes — but a surprise proposal with no prior discussion often gets rejected, wasting the fee and damaging community trust. Always post in the DAO forum before submitting a proposal. +**Submitting a proposal before community discussion.** The rejection fee is paid even if the proposal passes: but a surprise proposal with no prior discussion often gets rejected, wasting the fee and damaging community trust. Always post in the DAO forum before submitting a proposal. **Forgetting to validate at execution time in custom proposals.** The validator method runs when the proposal is submitted; conditions can change over the voting period (often multiple days). Your target method must re-validate any invariants it relies on. -**Allowing asset canister permissions to remain with individual principals.** After SNS launch, developers should hold only `Prepare` permissions on the asset canister — not `Commit`. If a developer retains `Commit` permission, they can bypass governance and update the frontend unilaterally. +**Allowing asset canister permissions to remain with individual principals.** After SNS launch, developers should hold only `Prepare` permissions on the asset canister: not `Commit`. If a developer retains `Commit` permission, they can bypass governance and update the frontend unilaterally. **Treasury transfers without a clear spending plan.** `TransferSnsTreasuryFunds` proposals without a detailed explanation of how funds will be used frequently get rejected. Include the full budget breakdown, expected deliverables, and timeline in the proposal summary. ## Next steps -- [Testing an SNS](testing.md) — test SNS governance flows locally before committing to mainnet changes -- [Launching an SNS](launching.md) — the complete launch process reference +- [Testing an SNS](testing.md): test SNS governance flows locally before committing to mainnet changes +- [Launching an SNS](launching.md): the complete launch process reference diff --git a/docs/guides/governance/testing.md b/docs/guides/governance/testing.md index 77ed308d..7af00c73 100644 --- a/docs/guides/governance/testing.md +++ b/docs/guides/governance/testing.md @@ -9,49 +9,49 @@ Testing your SNS before launch catches configuration mistakes that are impossibl These stages address different questions: -- **Local testing** — Does the SNS launch process work? Can proposals be submitted and voted on? Do upgrade flows work as designed? -- **Mainnet testflight** — Does your dapp operate correctly *after* decentralization? Does your team have the right tooling and workflows for day-to-day governance operations? +- **Local testing**: Does the SNS launch process work? Can proposals be submitted and voted on? Do upgrade flows work as designed? +- **Mainnet testflight**: Does your app operate correctly *after* decentralization? Does your team have the right tooling and workflows for day-to-day governance operations? -Run both stages before submitting your NNS proposal. Skipping testflight is one of the most common mistakes teams make — the post-decentralization operational experience is very different from what local testing reveals. +Run both stages before submitting your NNS proposal. Skipping testflight is one of the most common mistakes teams make. The post-decentralization operational experience is very different from what local testing reveals. ## Before you start You should already have: - A working `sns_init.yaml` with parameters defined (see [Launching an SNS](launching.md)) -- Dapp canisters deployed on mainnet +- App canisters deployed on mainnet - Reviewed the SNS launch stages and what each one does ## Stage 1: Local testing with sns-testing -The [dfinity/sns-testing](https://github.com/dfinity/sns-testing) repository contains scripts that simulate the full SNS launch flow on a local replica. The main goal is to confirm that the launch process itself — from proposal submission through swap finalization — works with your configuration. +The [dfinity/sns-testing](https://github.com/dfinity/sns-testing) repository contains scripts that simulate the full SNS launch flow on a local replica. The main goal is to confirm that the launch process itself (from proposal submission through swap finalization) works with your configuration. Using `sns-testing` you can: - Initiate proposals - Pass proposals - Start decentralization swaps -- Upgrade a dapp via DAO voting +- Upgrade an app via DAO voting ### What sns-testing covers -`sns-testing` is designed around a single-canister dapp and a standard local IC environment. It works best when your dapp matches that setup. If you have a multi-canister dapp or custom governance flows, you may need to fork or adapt it. +`sns-testing` is designed around a single-canister app and a standard local IC environment. It works best when your app matches that setup. If you have a multi-canister app or custom governance flows, you may need to fork or adapt it. -This is intentional — `sns-testing` is one example of how to test the SNS process, not a universal test harness. Adapt it for your dapp or use your own tooling. +This is intentional: `sns-testing` is one example of how to test the SNS process, not a universal test harness. Adapt it for your app or use your own tooling. ### Steps The following maps each SNS launch stage to what you do (or observe) locally: -**Step 0: Deploy your dapp locally** +**Step 0: Deploy your app locally** -For a test dapp bundled with sns-testing: +For a test app bundled with sns-testing: ```bash ./deploy_test_canister.sh ``` -For your own dapp, deploy using your normal setup. For a multi-canister dapp, use whatever scripts or configuration you use to deploy locally. +For your own app, deploy using your normal setup. For a multi-canister app, use whatever scripts or configuration you use to deploy locally. **Step 1: Add NNS root as co-controller** @@ -88,7 +88,7 @@ Stages 4 through 10 run automatically after the proposal is adopted: |-------|-------------| | 4 | NNS votes on and adopts the proposal | | 5 | SNS-W deploys SNS canisters | -| 6 | SNS root becomes sole controller of your dapp | +| 6 | SNS root becomes sole controller of your app | | 7 | SNS canisters are initialized in pre-swap mode | | 8 | Swap opens; participate: `./participate_in_sns_swap.sh` | | 9 | Swap closes | @@ -117,18 +117,18 @@ use candid::Principal; // pocket-ic = "9" #[test] fn test_canister_under_sns_governance() { - // Build an instance with NNS and SNS subnets — matching mainnet topology + // Build an instance with NNS and SNS subnets: matching mainnet topology let pic = PocketIcBuilder::new() .with_nns_subnet() .with_sns_subnet() // requires human verification: check pocket-ic 9.x API .with_application_subnet() .build(); - // Get the application subnet for your dapp canisters + // Get the application subnet for your app canisters let app_subnets = pic.topology().get_app_subnets(); let app_subnet = app_subnets[0]; - // Create and install your dapp canister on the application subnet + // Create and install your app canister on the application subnet let canister_id = pic.create_canister_on_subnet(None, None, app_subnet); pic.add_cycles(canister_id, 2_000_000_000_000); @@ -143,16 +143,16 @@ See [PocketIC](../testing/pocket-ic.md) for the full setup guide, including mult ## Stage 2: Mainnet testflight -An SNS testflight deploys a mock SNS directly to the mainnet without going through an NNS proposal or running a real decentralization swap. You retain full control of the mock SNS throughout the test flight — there are no real token holders, no real swap participants, and no irreversible steps. +An SNS testflight deploys a mock SNS directly to the mainnet without going through an NNS proposal or running a real decentralization swap. You retain full control of the mock SNS throughout the test flight: there are no real token holders, no real swap participants, and no irreversible steps. -**The testflight tests what local testing cannot:** how your dapp operates after the transfer of control. You will interact with your dapp exclusively through SNS proposals, which reveals operational gaps that developers consistently miss: +**The testflight tests what local testing cannot:** how your app operates after the transfer of control. You will interact with your app exclusively through SNS proposals, which reveals operational gaps that developers consistently miss: -- Gaps in proposal tooling — creating, describing, and executing proposals for routine operations -- Missing custom (generic) proposals for operations specific to your dapp -- Cycles management issues — canisters that go dark because no one can top them up through governance -- Monitoring blind spots — metrics and alerting that relied on direct canister access +- Gaps in proposal tooling: creating, describing, and executing proposals for routine operations +- Missing custom (generic) proposals for operations specific to your app +- Cycles management issues: canisters that go dark because no one can top them up through governance +- Monitoring blind spots: metrics and alerting that relied on direct canister access -Run the testflight for days or weeks, not hours. Operate your dapp in this mode as if it were live: push updates, respond to issues, exercise every governance flow you expect to need after launch. +Run the testflight for days or weeks, not hours. Operate your app in this mode as if it were live: push updates, respond to issues, exercise every governance flow you expect to need after launch. ### Testflight vs. production @@ -172,8 +172,8 @@ The testflight commands below require the `dfx sns` extension. No `icp-cli` equi You also need: -- [quill](https://github.com/dfinity/quill) — for submitting SNS proposals from the command line -- [didc](https://github.com/dfinity/candid) — for encoding Candid payloads +- [quill](https://github.com/dfinity/quill): for submitting SNS proposals from the command line +- [didc](https://github.com/dfinity/candid): for encoding Candid payloads ### Step 1: Import and download SNS canisters @@ -208,7 +208,7 @@ Copy the neuron ID that appears after the colon for use in subsequent steps. ### Step 3: Add SNS root as co-controller -Add the SNS root canister as an **additional** controller of each dapp canister. Keep yourself as a controller too — this lets you abort the testflight later if needed. +Add the SNS root canister as an **additional** controller of each app canister. Keep yourself as a controller too: this lets you abort the testflight later if needed. ```bash # Locally: @@ -221,7 +221,7 @@ icp canister settings update test \ -e ic ``` -### Step 4: Register dapp canisters with SNS root +### Step 4: Register app canisters with SNS root Register your canisters with the testflight SNS by submitting a proposal via `quill`. Set the environment variables for your deployment: @@ -229,7 +229,7 @@ Register your canisters with the testflight SNS by submitting a proposal via `qu export DEVELOPER_NEURON_ID="" # icp identity default prints the current identity name; the .config/dfx/identity/ path # is where dfx stores PEM files. If you created your identity with icp-cli, the path -# may differ — check ~/.config/icp/identity/ or the path shown by `icp identity export`. +# may differ: check ~/.config/icp/identity/ or the path shown by `icp identity export`. export PEM_FILE="$HOME/.config/dfx/identity/$(icp identity default)/identity.pem" export CID="$(icp canister id test -e ic)" ``` @@ -267,7 +267,7 @@ Verify registration succeeded: ```bash icp canister call sns_root list_sns_canisters '(record {})' -e ic -# Expected: your dapp canisters listed under "dapps" +# Expected: your app canisters listed under "apps" ``` ### Step 5: Test canister upgrades via SNS proposals @@ -298,7 +298,7 @@ The `grep -v "^ *new_canister_wasm"` suppresses the WASM binary in output. Omit ### Testing generic proposals -Generic proposals let you execute arbitrary code on SNS-managed canisters through governance. If your dapp requires operations beyond standard canister upgrades — for example, updating configuration, rotating keys, or publishing new content — you will need generic proposals. +Generic proposals let you execute arbitrary code on SNS-managed canisters through governance. If your app requires operations beyond standard canister upgrades (for example, updating configuration, rotating keys, or publishing new content) you will need generic proposals. First, implement the required validation and execution functions in your canister: @@ -405,7 +405,7 @@ Adjust `limit` to fetch only the most recent proposals if you have many. ### Aborting the testflight -When you have finished testing, verify that you are still a controller of your dapp canisters: +When you have finished testing, verify that you are still a controller of your app canisters: ```bash icp canister status test -e ic @@ -431,11 +431,11 @@ The `dfx sns init-config-file validate` command in the checklist below requires **Local testing** - [ ] Full SNS launch cycle completed locally with `sns-testing` - [ ] Canister upgrade via SNS proposal tested and working -- [ ] Custom (generic) proposals registered and tested if your dapp needs them +- [ ] Custom (generic) proposals registered and tested if your app needs them - [ ] Token distribution matches expected neuron balances **Mainnet testflight** -- [ ] Testflight SNS deployed and dapp canisters registered +- [ ] Testflight SNS deployed and app canisters registered - [ ] Canister upgrade executed successfully via SNS proposal - [ ] All governance flows needed for day-to-day operations have been tested - [ ] Cycles management strategy confirmed: governance can top up canisters @@ -452,7 +452,7 @@ For the full pre-submission checklist including tokenomics review and community ## Next steps -- [Managing an SNS](managing.md) — post-launch operations: submitting proposals, managing the treasury, and upgrading canisters once your SNS is live -- [PocketIC](../testing/pocket-ic.md) — set up PocketIC for automated canister integration tests with NNS and SNS subnets +- [Managing an SNS](managing.md): post-launch operations: submitting proposals, managing the treasury, and upgrading canisters once your SNS is live +- [PocketIC](../testing/pocket-ic.md): set up PocketIC for automated canister integration tests with NNS and SNS subnets diff --git a/docs/guides/index.md b/docs/guides/index.md index 4c8d8333..082e97f8 100644 --- a/docs/guides/index.md +++ b/docs/guides/index.md @@ -24,8 +24,8 @@ Practical how-to guides organized by development stage. Each guide solves a spec - **[Chain Fusion](chain-fusion/bitcoin.md)** -- Native Bitcoin, Ethereum, Solana, and Dogecoin integration. - **[DeFi](defi/token-ledgers.md)** -- Token ledgers, chain-key tokens, and the Rosetta API. -- **[Governance](governance/launching.md)** -- Launch and manage an SNS DAO for your dapp. +- **[Governance](governance/launching.md)** -- Launch and manage an SNS DAO for your app. ## Developer tools -- **[Tools](tools/ai-coding-agents.md)** — AI coding agents with ICP skills, developer tools, and migrating from dfx. +- **[Tools](tools/ai-coding-agents.md)**: AI coding agents with ICP skills, developer tools, and migrating from dfx. diff --git a/docs/guides/security/access-management.mdx b/docs/guides/security/access-management.mdx index 5dc45f41..db58f47a 100644 --- a/docs/guides/security/access-management.mdx +++ b/docs/guides/security/access-management.mdx @@ -14,14 +14,14 @@ Every canister method is callable by anyone on the internet. Without explicit ac Use this as a quick reference when securing your canister: - [ ] Reject the anonymous principal (`2vxsx-fae`) in every authenticated endpoint -- [ ] Check the caller inside each update method — not just in `canister_inspect_message` +- [ ] Check the caller inside each update method: not just in `canister_inspect_message` - [ ] Use the `guard` attribute (Rust) or guard functions (Motoko) to enforce access rules - [ ] Add a backup controller so you never lose canister access - [ ] Use `canister_inspect_message` only as a cycle-saving optimization, never as a security boundary ## How caller identity works -When a canister receives a message, the network includes the caller's principal. This identity is provided by the system — it cannot be forged or spoofed. You access it with: +When a canister receives a message, the network includes the caller's principal. This identity is provided by the system: it cannot be forged or spoofed. You access it with: - **Motoko:** `shared({ caller })` pattern on public functions - **Rust:** `ic_cdk::api::msg_caller()` @@ -32,7 +32,7 @@ Every principal is one of these types: |------|--------|---------|---------| | User | Varies (self-authenticating) | `wo5qg-ysjaa-aaaaa-...` | Human with a cryptographic identity | | Canister | 10 bytes, ends in `-cai` | `rrkah-fqaaa-aaaaa-aaaaq-cai` | Another canister making an inter-canister call | -| Anonymous | Fixed | `2vxsx-fae` | Unauthenticated caller — no identity | +| Anonymous | Fixed | `2vxsx-fae` | Unauthenticated caller: no identity | | Management | Fixed | `aaaaa-aa` | IC management canister (system calls) | ## Reject anonymous callers @@ -81,7 +81,7 @@ fn protected_action() -> String { } ``` -The Rust `guard` attribute runs the check before the method body executes. If the guard returns `Err`, the call is rejected. This is more robust than calling guard functions inside the method — you cannot forget to add it. Multiple guards can be chained: +The Rust `guard` attribute runs the check before the method body executes. If the guard returns `Err`, the call is rejected. This is more robust than calling guard functions inside the method: you cannot forget to add it. Multiple guards can be chained: ```rust #[update(guard = "require_authenticated", guard = "require_admin")] @@ -100,7 +100,7 @@ There is no built-in role system on ICP. You implement it yourself by tracking p -The `shared(msg)` pattern on an actor class captures the deployer's principal atomically — no separate init call, no front-running risk. Use `transient` for the owner since it gets recomputed from `msg.caller` on each install/upgrade. +The `shared(msg)` pattern on an actor class captures the deployer's principal atomically. No separate init call, no front-running risk. Use `transient` for the owner since it gets recomputed from `msg.caller` on each install/upgrade. ```motoko import Principal "mo:core/Principal"; @@ -209,14 +209,14 @@ fn remove_admin(admin: Principal) { #[update(guard = "require_admin")] fn admin_action() { - // ... protected logic — guard already validated caller + // ... protected logic: guard already validated caller } ``` -Always include admin revocation (`removeAdmin`). Missing revocation is a common source of bugs — once granted, admin access should be removable. +Always include admin revocation (`removeAdmin`). Missing revocation is a common source of bugs: once granted, admin access should be removable. ## Controller checks @@ -244,7 +244,7 @@ import Runtime "mo:core/Runtime"; -In Rust, there is no built-in `is_controller` function — checking controllers requires an async call to the management canister. See [onchain calls](../canister-calls/onchain-calls.md) for inter-canister call patterns. +In Rust, there is no built-in `is_controller` function: checking controllers requires an async call to the management canister. See [onchain calls](../canister-calls/onchain-calls.md) for inter-canister call patterns. @@ -262,13 +262,13 @@ icp canister settings update backend --add-controller -e ic icp canister settings update backend --remove-controller -e ic ``` -Always add a backup controller. If you lose the private key of the only controller, the canister becomes permanently unupgradeable — there is no recovery mechanism. +Always add a backup controller. If you lose the private key of the only controller, the canister becomes permanently unupgradeable: there is no recovery mechanism. -## `canister_inspect_message` — cycle optimization only +## `canister_inspect_message`: cycle optimization only `canister_inspect_message` is a hook that runs on a single replica before consensus. It can reject ingress messages early to save cycles on Candid decoding and execution. However, it is **not a security boundary**: -- It runs on one node without consensus — a malicious boundary node can bypass it +- It runs on one node without consensus: a malicious boundary node can bypass it - It is never called for inter-canister calls, query calls, or management canister calls Always duplicate real access checks inside each method. Use `inspect_message` only to reduce cycle waste from spam. @@ -318,7 +318,7 @@ fn inspect_message() { if msg_caller() != Principal::anonymous() { accept_message(); } - // Silently reject anonymous — saves cycles + // Silently reject anonymous: saves cycles } _ => accept_message(), } @@ -368,8 +368,8 @@ icp canister call backend whoami ## Next steps -- [Security concepts](../../concepts/security.md) — understand the IC security model -- [Canister settings](../canister-management/settings.md) — configure controllers and freezing thresholds -- [DoS prevention](dos-prevention.md) — rate limiting as an access control mechanism +- [Security concepts](../../concepts/security.md): understand the IC security model +- [Canister settings](../canister-management/settings.md): configure controllers and freezing thresholds +- [DoS prevention](dos-prevention.md): rate limiting as an access control mechanism -{/* Upstream: informed by dfinity/icskills — skills/canister-security/SKILL.md, dfinity/portal — docs/building-apps/best-practices/general.mdx */} +{/* Upstream: informed by dfinity/icskills (skills/canister-security/SKILL.md, dfinity/portal) docs/building-apps/best-practices/general.mdx */} diff --git a/docs/guides/security/canister-upgrades.md b/docs/guides/security/canister-upgrades.md index 84f583b5..c00502d2 100644 --- a/docs/guides/security/canister-upgrades.md +++ b/docs/guides/security/canister-upgrades.md @@ -14,9 +14,9 @@ Use this before every production upgrade: - [ ] Take a snapshot immediately before upgrading - [ ] Run the upgrade locally first with `icp deploy` - [ ] Verify data survives: write → upgrade → read -- [ ] Check Candid interface compatibility — no removed methods, no breaking type changes +- [ ] Check Candid interface compatibility. No removed methods, no breaking type changes - [ ] Avoid `pre_upgrade` hooks that serialize large state (use stable structures instead) -- [ ] In Motoko, use `persistent actor` (which eliminates the need for pre_upgrade hooks) — avoid manual `pre_upgrade`/`post_upgrade` +- [ ] In Motoko, use `persistent actor` (which eliminates the need for pre_upgrade hooks): avoid manual `pre_upgrade`/`post_upgrade` - [ ] Confirm you have a backup controller (cannot recover from a trapped `post_upgrade` without one) - [ ] Add a rollback plan: snapshot ID recorded, restore procedure tested @@ -56,21 +56,21 @@ persistent actor Counter { public query func get() : async Nat { count }; - // transient: resets to [] on each upgrade — correct for caches, transient logs, and reset-on-upgrade counters + // transient: resets to [] on each upgrade: correct for caches, transient logs, and reset-on-upgrade counters transient var recentCallers : [Principal] = []; }; ``` **Key rules:** -- All `let`/`var` fields persist automatically — no `stable` keyword needed +- All `let`/`var` fields persist automatically. No `stable` keyword needed - `transient var` for caches or counters that should reset on upgrade -- Do not write manual `pre_upgrade`/`post_upgrade` hooks — the runtime handles everything +- Do not write manual `pre_upgrade`/`post_upgrade` hooks. The runtime handles everything - If a persistent field's type changes incompatibly, the upgrade traps. See [Schema evolution](#schema-evolution). ### Rust: use stable structures -In Rust, use [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) to store data directly in stable memory. Data lives there from the start — no serialization step on upgrade. +In Rust, use [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) to store data directly in stable memory. Data lives there from the start. No serialization step on upgrade. ```rust use ic_stable_structures::{ @@ -81,7 +81,7 @@ use std::cell::RefCell; type Memory = VirtualMemory; -// Each structure must have its own unique MemoryId — never reuse IDs +// Each structure must have its own unique MemoryId: never reuse IDs const USERS_MEM_ID: MemoryId = MemoryId::new(0); const COUNTER_MEM_ID: MemoryId = MemoryId::new(1); @@ -103,7 +103,7 @@ thread_local! { #[ic_cdk::post_upgrade] fn post_upgrade() { - // Stable structures auto-restore — no deserialization needed. + // Stable structures auto-restore: no deserialization needed. // Re-initialize timers or transient state here if required. } ``` @@ -119,12 +119,12 @@ The serialization-based upgrade pattern is common in older Rust code but is fund #[ic_cdk::pre_upgrade] fn pre_upgrade() { // If STATE is large, this hits the instruction limit and traps. - // A trapped pre_upgrade prevents the upgrade — canister stays on old code. + // A trapped pre_upgrade prevents the upgrade: canister stays on old code. ic_cdk::storage::stable_save((STATE.with(|s| s.borrow().clone()),)).unwrap(); } ``` -When `pre_upgrade` traps due to instruction exhaustion, the canister cannot be upgraded. The `skip_pre_upgrade` flag (an emergency escape hatch via the management canister's `install_code` API — see [Management canister reference](../../reference/management-canister.md)) bypasses the hook — but anything the hook would have saved is lost. Use stable structures so the upgrade path cannot brick itself under load. +When `pre_upgrade` traps due to instruction exhaustion, the canister cannot be upgraded. The `skip_pre_upgrade` flag (an emergency escape hatch via the management canister's `install_code` API (see [Management canister reference](../../reference/management-canister.md)) bypasses the hook) but anything the hook would have saved is lost. Use stable structures so the upgrade path cannot brick itself under load. ## Candid interface compatibility @@ -149,7 +149,7 @@ The IC checks your new Wasm module's Candid interface against the old one before | Add a required (non-optional) parameter | Old clients don't send it | | Change a parameter type to an incompatible type | Old clients send invalid values | -**Example — safe evolution:** +**Example: safe evolution:** ```candid // Before @@ -158,7 +158,7 @@ service counter : { get : () -> (int) query; } -// After — safe: optional param added, new return value, new method +// After: safe: optional param added, new return value, new method service counter : { add : (nat, label : opt text) -> (new_val : nat); get : () -> (nat, last_change : nat) query; @@ -212,7 +212,7 @@ When upgrading a `persistent actor`, the runtime checks that every persistent fi **Safe changes:** -- Add new `let` or `var` fields with initial values — the runtime initializes them on upgrade +- Add new `let` or `var` fields with initial values. The runtime initializes them on upgrade - Add optional record fields (e.g., change `{ name : Text }` to `{ name : Text; email : ?Text }`) - Widen a field's type (e.g., `Nat` → `Int`) @@ -240,7 +240,7 @@ struct UserV2 { id: u64, name: String, created: u64, - // New optional field — safe to add: old records deserialize with None + // New optional field: safe to add: old records deserialize with None email: Option, } @@ -262,9 +262,9 @@ impl Storable for UserV2 { **Rules:** -- Use `Option` for new fields — Candid deserializes absent fields as `None`, so old records remain readable after the upgrade +- Use `Option` for new fields: Candid deserializes absent fields as `None`, so old records remain readable after the upgrade - Use `Bound::Unbounded` unless you have a strict size requirement -- Never reorder `MemoryId` allocations across upgrades — same effect as changing a field type +- Never reorder `MemoryId` allocations across upgrades: same effect as changing a field type - For breaking schema changes, use a versioned enum and migrate records lazily on read ## Testing upgrades locally @@ -316,13 +316,13 @@ icp canister call backend get_user_count '()' # Must still return: (1 : nat64) ``` -If the count drops to zero after upgrade, your data is not in stable memory — review your storage declarations before touching mainnet. +If the count drops to zero after upgrade, your data is not in stable memory: review your storage declarations before touching mainnet. For advanced scenarios (upgrade rollbacks, schema migrations, concurrent call safety), use [PocketIC](../testing/pocket-ic.md) to script multi-step upgrade scenarios in a controlled environment. ## Controller safety -You cannot upgrade a canister without a valid controller. Losing all controller keys leaves the canister permanently frozen at its current code — there is no recovery path on the IC. +You cannot upgrade a canister without a valid controller. Losing all controller keys leaves the canister permanently frozen at its current code: there is no recovery path on the IC. ```bash # Check current controllers @@ -341,10 +341,10 @@ See [Access management](access-management.md) for detailed controller management ## Next steps -- [Data persistence](../backends/data-persistence.md) — stable structures and upgrade patterns in depth -- [Canister lifecycle](../canister-management/lifecycle.md) — the full upgrade sequence and install modes -- [Canister snapshots](../canister-management/snapshots.md) — create and restore snapshots -- [Testing strategies](../testing/strategies.md) — test upgrade scenarios before deploying to mainnet -- [Access management](access-management.md) — manage controllers and prevent lock-out +- [Data persistence](../backends/data-persistence.md): stable structures and upgrade patterns in depth +- [Canister lifecycle](../canister-management/lifecycle.md): the full upgrade sequence and install modes +- [Canister snapshots](../canister-management/snapshots.md): create and restore snapshots +- [Testing strategies](../testing/strategies.md): test upgrade scenarios before deploying to mainnet +- [Access management](access-management.md): manage controllers and prevent lock-out diff --git a/docs/guides/security/data-integrity.md b/docs/guides/security/data-integrity.md index e7cf86fc..3a3c988f 100644 --- a/docs/guides/security/data-integrity.md +++ b/docs/guides/security/data-integrity.md @@ -11,7 +11,7 @@ For a conceptual overview of how these fit into the IC security model, see [Secu ## Onchain encryption with vetKeys -Canister state on standard application subnets is readable by node operators. If your application stores private data (notes, messages, files), you must encrypt it before storing. vetKeys (verifiably encrypted threshold keys) give canisters access to cryptographic key material derived by a threshold quorum of subnet nodes — no single node ever holds the raw key. +Canister state on standard application subnets is readable by node operators. If your application stores private data (notes, messages, files), you must encrypt it before storing. vetKeys (verifiably encrypted threshold keys) give canisters access to cryptographic key material derived by a threshold quorum of subnet nodes. No single node ever holds the raw key. The core workflow: @@ -97,7 +97,7 @@ fn init() { } ``` -Expose the two endpoints callers need — one to retrieve an encrypted key, one to retrieve the verification key: +Expose the two endpoints callers need: one to retrieve an encrypted key, one to retrieve the verification key: ```rust use candid::Principal; @@ -233,7 +233,7 @@ Generate a fresh transport key pair each session, then request and decrypt the v ```typescript import { TransportSecretKey, DerivedPublicKey, EncryptedVetKey } from "@dfinity/vetkeys"; -// 1. Generate an ephemeral transport key — new one each session +// 1. Generate an ephemeral transport key: new one each session const transportSecretKey = TransportSecretKey.fromSeed(crypto.getRandomValues(new Uint8Array(32))); const transportPublicKey = transportSecretKey.publicKey(); @@ -279,7 +279,7 @@ const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, aesKey, c ### Common mistakes - **Reusing transport keys across sessions.** Generate a fresh transport key pair for each session. If an attacker ever learns the transport secret, they can decrypt all keys derived while that secret was in use. -- **Using derived key bytes directly as an AES key.** The `encrypted_key` field from `vetkd_derive_key` is an encrypted blob. After decryption, call `toDerivedKeyMaterial()` before using for AES — do not use the raw bytes directly. +- **Using derived key bytes directly as an AES key.** The `encrypted_key` field from `vetkd_derive_key` is an encrypted blob. After decryption, call `toDerivedKeyMaterial()` before using for AES: do not use the raw bytes directly. - **Putting secret data in the `input` field.** The `input` field is sent to the management canister in plaintext and serves as a key identifier (e.g., a user principal or document ID). Never use it for actual secret data. - **Inconsistent context values.** The `context` field on the canister and on the frontend must match exactly. A mismatch causes silent decryption failure. @@ -291,7 +291,7 @@ This is useful for private messaging, sealed auctions, and any case where you wa > **Access control:** If you implement IBE without using `KeyManager` or `EncryptedMaps`, your canister must verify that `caller == recipient_principal` before calling `vetkd_derive_key`. Without this check, any caller can request any derived key and decrypt messages meant for someone else. The `ic-vetkeys` library handles this automatically. -**TypeScript IBE example — encrypt (sender side):** +**TypeScript IBE example: encrypt (sender side):** ```typescript import { IbeCiphertext, IbeIdentity, IbeSeed } from "@dfinity/vetkeys"; @@ -306,7 +306,7 @@ const ciphertext = IbeCiphertext.encrypt( const serialized = ciphertext.serialize(); // store this onchain (ciphertext, not plaintext) ``` -**TypeScript IBE example — decrypt (recipient side):** +**TypeScript IBE example: decrypt (recipient side):** ```typescript import { TransportSecretKey, DerivedPublicKey, EncryptedVetKey, IbeCiphertext } from "@dfinity/vetkeys"; @@ -345,9 +345,9 @@ const derivedKey: DerivedPublicKey = canisterKey.deriveSubKey( ``` For complete IBE and encrypted storage examples, see: -- [Password manager example](https://github.com/dfinity/vetkeys/tree/main/examples/password_manager) — encrypted key-value storage with `EncryptedMaps` -- [Encrypted notes dapp](https://github.com/dfinity/vetkeys/tree/main/examples/encrypted_notes_dapp_vetkd) — per-user encrypted note storage -- [IBE example](https://github.com/dfinity/vetkeys/tree/main/examples/basic_ibe) — identity-based encryption with Internet Identity principals +- [Password manager example](https://github.com/dfinity/vetkeys/tree/main/examples/password_manager): encrypted key-value storage with `EncryptedMaps` +- [Encrypted notes app](https://github.com/dfinity/vetkeys/tree/main/examples/encrypted_notes_dapp_vetkd): per-user encrypted note storage +- [IBE example](https://github.com/dfinity/vetkeys/tree/main/examples/basic_ibe): identity-based encryption with Internet Identity principals ## Certified variables for data authenticity @@ -362,29 +362,29 @@ For the full implementation guide, including Merkle tree construction, witness g **Key rules:** - `certified_data_set` may only be called during update calls (not query calls) -- You can only certify 32 bytes — build a Merkle tree and certify the root hash -- Re-certify data in `post_upgrade` — certified data is cleared on upgrade +- You can only certify 32 bytes: build a Merkle tree and certify the root hash +- Re-certify data in `post_upgrade`: certified data is cleared on upgrade - Clients must verify certificate freshness (the certificate embeds a timestamp; reject certificates older than ~5 minutes) ## Signature verification for external data -When your canister receives data from external parties — signed messages, X.509 CSRs, or HTTP request signatures — it must verify the cryptographic signature before trusting the data. ICP verifies signatures on ingress messages automatically, but canister-to-canister or external data flows require manual verification. +When your canister receives data from external parties (signed messages, X.509 CSRs, or HTTP request signatures) it must verify the cryptographic signature before trusting the data. ICP verifies signatures on ingress messages automatically, but canister-to-canister or external data flows require manual verification. ### IC ingress message signatures -Every ingress call to a canister is signed by the caller's identity. The IC verifies these signatures automatically before the message reaches your canister — you do not need to verify them yourself. The `caller` principal in your canister method is already authenticated. +Every ingress call to a canister is signed by the caller's identity. The IC verifies these signatures automatically before the message reaches your canister: you do not need to verify them yourself. The `caller` principal in your canister method is already authenticated. For workflows that require additional independent verification (such as verifying a message offline or in a different context), the IC uses the following signature schemes: -- **Ed25519** — used by Internet Identity and many wallet implementations -- **ECDSA on secp256r1 (P-256)** — used by some hardware authenticators -- **ECDSA on secp256k1** — used by Bitcoin-compatible wallets +- **Ed25519**: used by Internet Identity and many wallet implementations +- **ECDSA on secp256r1 (P-256)**: used by some hardware authenticators +- **ECDSA on secp256k1**: used by Bitcoin-compatible wallets To verify IC signatures independently (outside the IC, or as a second layer of validation), use the `ic-validator-ingress-message` Rust crate or the `@dfinity/standalone-sig-verifier-web` JavaScript library. See the [independently verifying IC signatures (Rust)](https://github.com/dfinity/ic/tree/master/rs/validator) documentation, or the [`@dfinity/standalone-sig-verifier-web` npm package](https://www.npmjs.com/package/@dfinity/standalone-sig-verifier-web) for the JavaScript path. ### X.509 certificate handling -Canisters can act as certificate authorities using threshold signing keys. Because no single node ever holds the threshold private key, only the canister (via consensus) can sign certificates — this gives you a CA whose private key cannot be exfiltrated. +Canisters can act as certificate authorities using threshold signing keys. Because no single node ever holds the threshold private key, only the canister (via consensus) can sign certificates: this gives you a CA whose private key cannot be exfiltrated. The pattern: a canister generates a root CA certificate signed with its threshold Ed25519 or ECDSA key, then issues child certificates for CSRs submitted by external parties. Certificates can be verified by any standard X.509 tool. @@ -414,7 +414,7 @@ This approach is used when you need to issue certificates to external systems th ### Local development ```bash -# Start a local network — test_key_1 and key_1 are provisioned automatically +# Start a local network: test_key_1 and key_1 are provisioned automatically icp network start -d # Deploy your canister @@ -422,12 +422,12 @@ icp deploy backend # Test public key retrieval icp canister call backend getPublicKey '()' -# Returns: (blob "...") — the vetKD public key for your canister +# Returns: (blob "..."): the vetKD public key for your canister # Test key derivation (requires a 48-byte transport public key blob) # In practice, the frontend generates this using TransportSecretKey.fromSeed() icp canister call backend deriveKey '(blob "\00\01\02...")' -# Returns: (blob "...") — the encrypted derived key +# Returns: (blob "..."): the encrypted derived key ``` ### Mainnet deployment @@ -447,9 +447,9 @@ Confirm that: ## Next steps -- [vetKeys concept guide](../../concepts/vetkeys.md) — how the threshold key derivation protocol works -- [Encryption guide](./encryption.md) — vetKeys encryption patterns including EncryptedMaps (coming soon) -- [Certified variables](../backends/certified-variables.md) — full certified data implementation -- [Security model](../../concepts/security.md) — IC security guarantees and threat model +- [vetKeys concept guide](../../concepts/vetkeys.md): how the threshold key derivation protocol works +- [Encryption guide](./encryption.md): vetKeys encryption patterns including EncryptedMaps (coming soon) +- [Certified variables](../backends/certified-variables.md): full certified data implementation +- [Security model](../../concepts/security.md): IC security guarantees and threat model diff --git a/docs/guides/security/dos-prevention.md b/docs/guides/security/dos-prevention.md index ce66e6a9..43baa53c 100644 --- a/docs/guides/security/dos-prevention.md +++ b/docs/guides/security/dos-prevention.md @@ -5,7 +5,7 @@ sidebar: order: 4 --- -ICP's [reverse gas model](../../concepts/reverse-gas-model.md) means your canister pays for every message it processes — including messages from attackers. Anyone on the internet can send update calls to your canister, and each call burns cycles even if your code ultimately rejects it. Left unmitigated, this lets an attacker drain your cycle balance by flooding your canister with messages. +ICP's [reverse gas model](../../concepts/reverse-gas-model.md) means your canister pays for every message it processes: including messages from attackers. Anyone on the internet can send update calls to your canister, and each call burns cycles even if your code ultimately rejects it. Left unmitigated, this lets an attacker drain your cycle balance by flooding your canister with messages. This guide covers the patterns that protect against denial-of-service (DoS) attacks: early message filtering, rate limiting, resource allocation, and cycle monitoring. @@ -28,17 +28,17 @@ Every ingress message (external call to your canister) costs cycles. The cost in - Per-instruction fees for all code executed before a trap or rejection - Candid decoding, which runs before your method body -This means an attacker can drain your cycles simply by sending many messages — the canister pays for Candid decoding and early checks even when it rejects the call. See [Cycles costs](../../reference/cycles-costs.md) for exact figures. +This means an attacker can drain your cycles simply by sending many messages. The canister pays for Candid decoding and early checks even when it rejects the call. See [Cycles costs](../../reference/cycles-costs.md) for exact figures. ### Use inspect_message as a first-pass filter -`canister_inspect_message` runs on a **single replica** before a message enters consensus. Code in this hook does not burn cycles, so it is an efficient place to drop messages that are obviously invalid — for example, calls from the anonymous principal to authenticated endpoints. +`canister_inspect_message` runs on a **single replica** before a message enters consensus. Code in this hook does not burn cycles, so it is an efficient place to drop messages that are obviously invalid: for example, calls from the anonymous principal to authenticated endpoints. **Critical limitation:** `canister_inspect_message` is not a security boundary. It runs on one node and can be bypassed by a malicious boundary node. It is also never called for inter-canister calls, query calls, or management canister calls. Always duplicate real access control inside each update method. See [Access management](access-management.md) for the full access control pattern. -`inspect_message` has a budget of **200 million instructions** — do not perform expensive work here. Use it only to short-circuit calls that are structurally invalid (wrong caller type, missing required data). +`inspect_message` has a budget of **200 million instructions**: do not perform expensive work here. Use it only to short-circuit calls that are structurally invalid (wrong caller type, missing required data). -**Motoko — inspect_message:** +**Motoko: inspect_message:** ```motoko import Principal "mo:core/Principal"; @@ -65,7 +65,7 @@ system func inspect( }; ``` -**Rust — inspect_message:** +**Rust: inspect_message:** ```rust use ic_cdk::api::{accept_message, msg_caller, msg_method_name}; @@ -83,7 +83,7 @@ fn inspect_message() { if msg_caller() != Principal::anonymous() { accept_message(); } - // Silently reject anonymous — saves cycles on Candid decoding + // Silently reject anonymous: saves cycles on Candid decoding } // Public methods: accept all _ => accept_message(), @@ -95,9 +95,9 @@ fn inspect_message() { For expensive operations (chain-key signing, HTTPS outcalls, large state writes), enforce per-caller concurrency limits. Allowing the same caller to queue up many concurrent requests multiplies the cost of any single caller's flood. -The CallerGuard pattern prevents concurrent calls from the same principal. While the guard is held, any second call from the same caller is rejected immediately — before any expensive work runs. +The CallerGuard pattern prevents concurrent calls from the same principal. While the guard is held, any second call from the same caller is rejected immediately: before any expensive work runs. -**Motoko — per-caller concurrency lock:** +**Motoko: per-caller concurrency lock:** ```motoko import Map "mo:core/Map"; @@ -126,7 +126,7 @@ public shared ({ caller }) func expensiveOperation() : async Result.Result Result { return Err("anonymous principal not allowed".to_string()); } - // Acquire per-caller lock — Drop releases it even if the callback traps + // Acquire per-caller lock: Drop releases it even if the callback traps let _guard = CallerGuard::new(caller)?; - // Do expensive work — use Call::bounded_wait for inter-canister calls + // Do expensive work: use Call::bounded_wait for inter-canister calls // to avoid unbounded waits that would block canister upgrades let result = do_expensive_work().await?; Ok(result) @@ -199,11 +199,11 @@ async fn expensive_operation() -> Result { } ``` -The guard releases automatically when it goes out of scope — including when an inter-canister call callback traps. Never use `let _ = CallerGuard::new(caller)?` (this drops the guard immediately, making locking ineffective). Always bind to a named variable (`let _guard`). +The guard releases automatically when it goes out of scope: including when an inter-canister call callback traps. Never use `let _ = CallerGuard::new(caller)?` (this drops the guard immediately, making locking ineffective). Always bind to a named variable (`let _guard`). ### Proof-of-work and captchas for public endpoints -For endpoints that must accept anonymous or unauthenticated callers — for example, a public registration flow — the per-caller lock pattern cannot apply. Instead, require the caller to prove they spent computational resources: +For endpoints that must accept anonymous or unauthenticated callers: for example, a public registration flow. The per-caller lock pattern cannot apply. Instead, require the caller to prove they spent computational resources: - **Captcha:** Require solving a captcha before calling an expensive endpoint. Use a library-based captcha (not a cloud service) to keep the solution onchain and avoid HTTPS outcalls. - **Proof of work:** Require the client to include a nonce that satisfies a hash challenge. The canister verifies the nonce in `inspect_message` before accepting the message. This imposes CPU cost on the caller proportional to the difficulty parameter. @@ -229,12 +229,12 @@ Source: [Cycles costs reference](../../reference/cycles-costs.md). If users can store data without limits, an attacker can fill the 4 GiB Wasm heap or stable memory, causing allocation failures that corrupt canister state. Mitigations: -- **Enforce per-user storage quotas** — track bytes stored per principal and reject requests that exceed the limit. -- **Validate input sizes** — check the size of user-provided blobs, text, or arrays before storing them. -- **Set a `wasm_memory_limit`** — configures a soft ceiling below the 4 GiB hard limit. When exceeded, new update calls trap instead of corrupting state. See [Canister settings](../canister-management/settings.md). +- **Enforce per-user storage quotas**: track bytes stored per principal and reject requests that exceed the limit. +- **Validate input sizes**: check the size of user-provided blobs, text, or arrays before storing them. +- **Set a `wasm_memory_limit`**: configures a soft ceiling below the 4 GiB hard limit. When exceeded, new update calls trap instead of corrupting state. See [Canister settings](../canister-management/settings.md). ```yaml -# icp.yaml — memory protection (settings nested under canister name) +# icp.yaml: memory protection (settings nested under canister name) canisters: - name: backend settings: @@ -246,12 +246,12 @@ canisters: Data queries that return unbounded result sets can exhaust the instruction limit for a single call. An attacker can exploit this by requesting a query that processes all stored data: -- **Always paginate** — accept an optional cursor or offset and return at most a fixed number of results per call. -- **Avoid unbounded iteration** — do not iterate entire data structures in a single call unless the data set is provably bounded. +- **Always paginate**: accept an optional cursor or offset and return at most a fixed number of results per call. +- **Avoid unbounded iteration**: do not iterate entire data structures in a single call unless the data set is provably bounded. ## Freezing threshold as a safety net -The `freezing_threshold` setting defines the minimum number of seconds the canister should be able to survive on its current cycle balance. When the balance drops below this reserve, the canister **freezes** — update calls are rejected. A frozen canister does not execute code, but it continues to pay for storage and compute allocation. +The `freezing_threshold` setting defines the minimum number of seconds the canister should be able to survive on its current cycle balance. When the balance drops below this reserve, the canister **freezes**: update calls are rejected. A frozen canister does not execute code, but it continues to pay for storage and compute allocation. The default threshold is 30 days. For production canisters holding valuable state, increase it to 90–180 days: @@ -263,7 +263,7 @@ icp canister settings update backend --freezing-threshold 7776000 -e ic Or via `icp.yaml`: ```yaml -# icp.yaml — settings nested under canister name +# icp.yaml: settings nested under canister name canisters: - name: backend settings: @@ -281,21 +281,21 @@ Multiple canisters share the same subnet. If a neighboring canister consumes exc Setting `compute_allocation` guarantees your canister a percentage of an execution core and ensures scheduled execution even when the subnet is busy: ```yaml -# icp.yaml — settings nested under canister name +# icp.yaml: settings nested under canister name canisters: - name: backend settings: compute_allocation: 10 # Guaranteed 10% of one execution core ``` -A value of `10` means the canister is scheduled at least every 10 consensus rounds. Compute allocation incurs an ongoing rental fee (10M cycles per percentage point per second on a 13-node subnet) — only set it if you need guaranteed throughput under load. See [Cycles costs](../../reference/cycles-costs.md). +A value of `10` means the canister is scheduled at least every 10 consensus rounds. Compute allocation incurs an ongoing rental fee (10M cycles per percentage point per second on a 13-node subnet). Only set it if you need guaranteed throughput under load. See [Cycles costs](../../reference/cycles-costs.md). ### Memory allocation Setting `memory_allocation` reserves a fixed pool of memory for your canister, preventing other canisters from consuming the subnet's available memory: ```yaml -# icp.yaml — settings nested under canister name +# icp.yaml: settings nested under canister name canisters: - name: backend settings: @@ -318,9 +318,9 @@ icp canister status -e ic Key metrics to monitor: -- **Balance** — alert when balance drops below a safe threshold (e.g., 2x the freezing threshold reserve) -- **Burn rate** — track cycles per day; a sudden spike indicates unexpected activity -- **Memory usage** — track growth over time; sudden jumps suggest user-driven data accumulation +- **Balance**: alert when balance drops below a safe threshold (e.g., 2x the freezing threshold reserve) +- **Burn rate**: track cycles per day; a sudden spike indicates unexpected activity +- **Memory usage**: track growth over time; sudden jumps suggest user-driven data accumulation For production canister monitoring, consider automating balance checks with a heartbeat or timer canister that sends an alert notification when the balance approaches the freezing threshold. @@ -328,17 +328,17 @@ For production canister monitoring, consider automating balance checks with a he Chain-key signing (threshold ECDSA/Schnorr), HTTPS outcalls, and Bitcoin API calls are significantly more expensive than standard update calls. These make attractive targets for attackers: -- **Require authentication** — never allow anonymous callers to trigger expensive operations. -- **Apply per-caller locking** — use the CallerGuard pattern to prevent the same caller from queuing multiple expensive calls. -- **Charge callers** — for canister-to-canister calls, require the calling canister to attach cycles to cover the cost. The called canister accepts the cycles using `ic0.msg_cycles_accept` (Rust: `ic_cdk::api::msg_cycles_accept(max_amount: u128)`). -- **Differentiate update vs. query** — move expensive computations to update calls and use query calls for cheap reads. Check whether a method is running as a query or update with `ic0.in_replicated_execution()` (Rust: `ic_cdk::api::in_replicated_execution()`). +- **Require authentication**: never allow anonymous callers to trigger expensive operations. +- **Apply per-caller locking**: use the CallerGuard pattern to prevent the same caller from queuing multiple expensive calls. +- **Charge callers**: for canister-to-canister calls, require the calling canister to attach cycles to cover the cost. The called canister accepts the cycles using `ic0.msg_cycles_accept` (Rust: `ic_cdk::api::msg_cycles_accept(max_amount: u128)`). +- **Differentiate update vs. query**: move expensive computations to update calls and use query calls for cheap reads. Check whether a method is running as a query or update with `ic0.in_replicated_execution()` (Rust: `ic_cdk::api::in_replicated_execution()`). ## Next steps -- [Access management](access-management.md) — caller checks, anonymous principal rejection, and role-based guards -- [Inter-canister call safety](inter-canister-calls.md) — TOCTOU vulnerabilities and the CallerGuard pattern -- [Canister settings](../canister-management/settings.md) — freezing threshold, memory allocation, and compute allocation -- [Cycles costs](../../reference/cycles-costs.md) — exact cost tables and resource limits -- [Security model](../../concepts/security.md) — IC trust boundaries and threat model overview +- [Access management](access-management.md): caller checks, anonymous principal rejection, and role-based guards +- [Inter-canister call safety](inter-canister-calls.md): TOCTOU vulnerabilities and the CallerGuard pattern +- [Canister settings](../canister-management/settings.md): freezing threshold, memory allocation, and compute allocation +- [Cycles costs](../../reference/cycles-costs.md): exact cost tables and resource limits +- [Security model](../../concepts/security.md): IC trust boundaries and threat model overview diff --git a/docs/guides/security/encryption.md b/docs/guides/security/encryption.md index f52916b9..c18bba6a 100644 --- a/docs/guides/security/encryption.md +++ b/docs/guides/security/encryption.md @@ -13,7 +13,7 @@ How to encrypt data on ICP using VetKeys. Cover the end-to-end flow: setting up - Portal: building-apps/network-features/vetkeys/ (9 files: intro, API, BLS-signatures, DKMS, encrypted-storage, IBE, timelock, VRF, demos) - icskills: vetkd -- Examples: vetkd (both), vetkeys (both), encrypted-notes-dapp-vetkd (both), filevault (Motoko) +- Examples: vetkd (both), vetkeys (both), encrypted-notes-app-vetkd (both), filevault (Motoko) - Learn Hub: check for VetKeys articles diff --git a/docs/guides/security/inter-canister-calls.md b/docs/guides/security/inter-canister-calls.md index d70e1bc3..49950e5f 100644 --- a/docs/guides/security/inter-canister-calls.md +++ b/docs/guides/security/inter-canister-calls.md @@ -15,19 +15,19 @@ When your canister `await`s a call to another canister, the IC scheduler can int - State your canister read before the `await` may be different when the callback runs. - A second call from the same user can arrive and begin executing before the first call's callback completes. -- If the callback traps, any mutations made in the callback are rolled back — but mutations made before the `await` are already committed. +- If the callback traps, any mutations made in the callback are rolled back: but mutations made before the `await` are already committed. The code before `await` and the code after `await` execute as **separate atomic message executions**. Understanding this is the foundation of inter-canister call security. ## Reentrancy and the CallerGuard pattern -A reentrancy bug occurs when a second message from the same caller interleaves with a first message that is still in progress — that is, awaiting a response. In DeFi contexts this enables double-spending: the attacker calls `withdraw()`, waits for it to begin the inter-canister transfer, then calls `withdraw()` again before the first call updates the balance. +A reentrancy bug occurs when a second message from the same caller interleaves with a first message that is still in progress: that is, awaiting a response. In DeFi contexts this enables double-spending: the attacker calls `withdraw()`, waits for it to begin the inter-canister transfer, then calls `withdraw()` again before the first call updates the balance. The CallerGuard pattern prevents this by tracking which callers have an in-flight operation. When a second call arrives from the same caller, it is rejected before it can interleave. ### Motoko -In Motoko, the guard must be released in a `finally` block. The `finally` block runs in cleanup context, where state changes are committed even if the `try` body trapped. If you release the guard inside the `try` body, a trap in the callback leaves the guard held forever — the caller is permanently locked out. +In Motoko, the guard must be released in a `finally` block. The `finally` block runs in cleanup context, where state changes are committed even if the `try` body trapped. If you release the guard inside the `try` body, a trap in the callback leaves the guard held forever. The caller is permanently locked out. ```motoko import Map "mo:core/Map"; @@ -79,7 +79,7 @@ public shared ({ caller }) func withdraw(amount : Nat) : async Result.Result<(), ### Rust -In Rust, the `Drop` trait releases the lock when the guard goes out of scope — including when the async function is cancelled or a trap occurs. Never write `let _ = CallerGuard::new(caller)?` — the leading underscore drops the guard immediately, making locking ineffective. Always bind to a named variable: `let _guard = CallerGuard::new(caller)?`. +In Rust, the `Drop` trait releases the lock when the guard goes out of scope: including when the async function is cancelled or a trap occurs. Never write `let _ = CallerGuard::new(caller)?`: the leading underscore drops the guard immediately, making locking ineffective. Always bind to a named variable: `let _guard = CallerGuard::new(caller)?`. ```rust use std::cell::RefCell; @@ -135,7 +135,7 @@ async fn withdraw(amount: u64) -> Result<(), String> { .map_err(|e| format!("transfer failed: {:?}", e))?; Ok(()) - // _guard dropped here — lock released + // _guard dropped here: lock released } ``` @@ -147,7 +147,7 @@ Because the code before `await` and the code after `await` are separate message ### Example: deduct before transferring -In a token transfer flow, deduct the balance before the inter-canister call rather than after. If the call fails, refund in the callback. This approach is safe: if the callback traps, the pre-deducted balance stays deducted — you can detect and remediate the stuck state. If you deduct after the call and the callback traps, the transfer happened but the balance was never deducted — funds are double-spent. +In a token transfer flow, deduct the balance before the inter-canister call rather than after. If the call fails, refund in the callback. This approach is safe: if the callback traps, the pre-deducted balance stays deducted (you can detect and remediate the stuck state. If you deduct after the call and the callback traps, the transfer happened but the balance was never deducted) funds are double-spent. **Motoko:** @@ -170,7 +170,7 @@ public shared ({ caller }) func transfer(to : Principal, amount : Nat) : async R return #err("insufficient balance"); }; - // 2. Deduct BEFORE the await — mutation is committed regardless of callback outcome. + // 2. Deduct BEFORE the await: mutation is committed regardless of callback outcome. Map.add(balances, Principal.compare, caller, balance - amount); // 3. Perform the inter-canister call. @@ -178,7 +178,7 @@ public shared ({ caller }) func transfer(to : Principal, amount : Nat) : async R await ledgerCanister.transfer(to, amount); #ok(()) } catch (e) { - // 4. Refund on failure — the deduction persists even if this try/catch runs. + // 4. Refund on failure: the deduction persists even if this try/catch runs. let currentBalance = switch (Map.get(balances, Principal.compare, caller)) { case (?b) b; case null 0; @@ -242,7 +242,7 @@ async fn transfer(to: Principal, amount: u64) -> Result<(), String> { ## Callback traps and security-critical cleanup -A trap in an inter-canister call callback is particularly dangerous: the callback's state mutations are rolled back, but the pre-`await` mutations are not. A malicious callee can induce a trap in your callback to skip actions that should always run — like debiting an account. +A trap in an inter-canister call callback is particularly dangerous: the callback's state mutations are rolled back, but the pre-`await` mutations are not. A malicious callee can induce a trap in your callback to skip actions that should always run: like debiting an account. To protect against this: @@ -270,7 +270,7 @@ public shared ({ caller }) func riskyOperation() : async () { // Handle error ignore Error.message(e); } finally { - // Runs in cleanup context — mutation persists even if callback trapped. + // Runs in cleanup context: mutation persists even if callback trapped. operationInProgress := false; } }; @@ -360,12 +360,12 @@ async fn call_untrusted(canister: Principal, method: &str) -> Result Result<(), String> { - // Capture caller BEFORE any await — defensive practice in Rust. + // Capture caller BEFORE any await: defensive practice in Rust. let caller: Principal = msg_caller(); Call::bounded_wait(other_canister_id(), "validate") @@ -410,7 +410,7 @@ fn do_work_for(_caller: Principal) { ## canister_inspect_message is not called for inter-canister calls -`canister_inspect_message` (Motoko: `system func inspect`) runs only for **ingress messages** — calls from external users arriving at the boundary nodes. It is never called for inter-canister calls. +`canister_inspect_message` (Motoko: `system func inspect`) runs only for **ingress messages**: calls from external users arriving at the boundary nodes. It is never called for inter-canister calls. This means any access control you implement in `inspect_message` does not protect your canister from being called by another canister. Always duplicate access checks inside the method body itself. @@ -471,13 +471,13 @@ Before shipping any canister that makes inter-canister calls: - **Wait type:** Use `bounded_wait` for calls to canisters you do not control; `unbounded_wait` only for your own canisters. - **Payload size:** Keep request and response payloads under 1 MB; paginate larger datasets. - **Caller capture:** In Rust, bind `msg_caller()` before the first `await`. -- **Access control:** Do not rely on `canister_inspect_message` for inter-canister call security — always check the caller inside the method. +- **Access control:** Do not rely on `canister_inspect_message` for inter-canister call security: always check the caller inside the method. - **Error handling:** Always handle the `Result` of every inter-canister call. ## Next steps -- [Onchain calls](../canister-calls/onchain-calls.md) — Basic inter-canister call patterns and the `Call` API -- [Parallel calls](../canister-calls/parallel-calls.md) — Running multiple calls concurrently and handling partial failures -- [Security concepts](../../concepts/security.md) — IC security model and threat landscape +- [Onchain calls](../canister-calls/onchain-calls.md): Basic inter-canister call patterns and the `Call` API +- [Parallel calls](../canister-calls/parallel-calls.md): Running multiple calls concurrently and handling partial failures +- [Security concepts](../../concepts/security.md): IC security model and threat landscape diff --git a/docs/guides/testing/pocket-ic.md b/docs/guides/testing/pocket-ic.md index 03eca403..ab6f4fa3 100644 --- a/docs/guides/testing/pocket-ic.md +++ b/docs/guides/testing/pocket-ic.md @@ -5,7 +5,7 @@ sidebar: order: 2 --- -PocketIC is a lightweight, deterministic testing library for canister integration tests. Unlike the full local network started by `icp network start`, PocketIC runs entirely inside your test process — no daemon, no ports, no Docker required. Tests execute synchronously, making them fast and fully reproducible. +PocketIC is a lightweight, deterministic testing library for canister integration tests. Unlike the full local network started by `icp network start`, PocketIC runs entirely inside your test process. No daemon, no ports, no Docker required. Tests execute synchronously, making them fast and fully reproducible. The `icp-cli` local development network also uses PocketIC under the hood, so behavior you observe in tests closely matches what you see during development. @@ -18,9 +18,9 @@ A PocketIC instance is an in-process IC replica. It supports: - Creating and installing canisters (from compiled `.wasm` files) - Making update and query calls - Multiple subnets (NNS, application, system) -- Time control — advance the clock without waiting -- Deterministic execution — the same test always produces the same result -- Parallel execution — each test gets its own `PocketIc` instance +- Time control: advance the clock without waiting +- Deterministic execution. The same test always produces the same result +- Parallel execution: each test gets its own `PocketIc` instance PocketIC strips the consensus and networking layers from the IC replica, keeping only the execution environment. This makes it orders of magnitude faster than running a full local network. @@ -236,7 +236,7 @@ fn test_timer_fires() { ### Multi-subnet testing -Test canister interactions that span subnets — for example, cross-subnet calls or NNS integration: +Test canister interactions that span subnets: for example, cross-subnet calls or NNS integration: ```rust title=tests/multi_subnet.rs use pocket_ic::{PocketIc, PocketIcBuilder}; @@ -280,7 +280,7 @@ Pic JS (`@dfinity/pic`) is the JavaScript/TypeScript client for PocketIC, design npm install --save-dev @dfinity/pic ``` -Pic JS manages the PocketIC server process for you via `PocketIcServer`. +Pic JS manages the PocketIC server process for you via `PocketIcServer`. ### Write a basic test @@ -330,11 +330,11 @@ describe('Counter canister', () => { }); ``` -Pic JS generates typed actors from Candid declarations automatically when you use `setupCanister`. The `idlFactory` is generated from your canister's `.did` file by `icp-cli` — it lives in the `declarations/` directory alongside the TypeScript types. See the [Pic JS documentation](https://js.icp.build/pic-js) for the full API, including typed actor generation and subnet configuration. +Pic JS generates typed actors from Candid declarations automatically when you use `setupCanister`. The `idlFactory` is generated from your canister's `.did` file by `icp-cli`: it lives in the `declarations/` directory alongside the TypeScript types. See the [Pic JS documentation](https://js.icp.build/pic-js) for the full API, including typed actor generation and subnet configuration. ### Advance time in JavaScript tests -This example uses inline setup for brevity. For test suites with multiple tests, the `beforeAll`/`afterAll` pattern from the basic example above is preferred — it avoids restarting the server for each test. +This example uses inline setup for brevity. For test suites with multiple tests, the `beforeAll`/`afterAll` pattern from the basic example above is preferred: it avoids restarting the server for each test. ```typescript title=src/__tests__/timer.test.ts import { PocketIc, PocketIcServer } from '@dfinity/pic'; @@ -393,8 +393,8 @@ PocketIC is appropriate when: ## Next steps -- [Testing strategies](strategies.md) — overview of unit, integration, and end-to-end testing -- [Governance testing](../governance/testing.md) — SNS testflight with PocketIC -- [Rust testing patterns](../../languages/rust/testing.md) — Rust-specific patterns including unit testing with mocks +- [Testing strategies](strategies.md): overview of unit, integration, and end-to-end testing +- [Governance testing](../governance/testing.md): SNS testflight with PocketIC +- [Rust testing patterns](../../languages/rust/testing.md): Rust-specific patterns including unit testing with mocks diff --git a/docs/guides/testing/strategies.md b/docs/guides/testing/strategies.md index 9e790c28..aa13a404 100644 --- a/docs/guides/testing/strategies.md +++ b/docs/guides/testing/strategies.md @@ -6,7 +6,7 @@ sidebar: --- Testing canisters on ICP deserves particular attention for two reasons. First, canister upgrades are irreversible in -practice — once a buggy upgrade runs `pre_upgrade`, your stable memory may be corrupted before you can roll back. +practice: once a buggy upgrade runs `pre_upgrade`, your stable memory may be corrupted before you can roll back. Second, cycles cost real money: a performance regression that doubles your instruction count doubles your operating cost. Catching these problems in tests before deployment avoids both classes of harm. @@ -14,12 +14,12 @@ cost. Catching these problems in tests before deployment avoids both classes of Effective canister testing uses three layers, from fastest to slowest: -1. **Unit tests** — Pure Rust or Motoko tests with mocked IC dependencies. Milliseconds per test, no WASM +1. **Unit tests**: Pure Rust or Motoko tests with mocked IC dependencies. Milliseconds per test, no WASM compilation, run in parallel. Cover 90%+ of your business logic here. -2. **PocketIC integration tests** — Deploy your canister WASM into a lightweight in-process IC replica. Seconds per +2. **PocketIC integration tests**: Deploy your canister WASM into a lightweight in-process IC replica. Seconds per test, but test actual IC behavior: canister calls, upgrade hooks, stable memory, multi-canister interactions, and time-based logic. -3. **Deployed testing** — Test against a real network (local or mainnet) via the CLI or scripts. Slowest, but +3. **Deployed testing**: Test against a real network (local or mainnet) via the CLI or scripts. Slowest, but validates deployment configuration, cycles top-up, and inter-canister call routing. Most projects need all three layers. The key insight is to push as much logic as possible into unit tests, then use @@ -73,7 +73,7 @@ thread_local! { ### Writing unit tests -With this structure, unit tests run entirely in pure Rust — no WASM, no PocketIC, no network: +With this structure, unit tests run entirely in pure Rust. No WASM, no PocketIC, no network: ```rust #[cfg(test)] @@ -202,14 +202,14 @@ cargo build --target wasm32-unknown-unknown --release cargo test ``` -For advanced PocketIC usage — multi-subnet topologies, time travel, NNS subnet setup, and JavaScript/TypeScript -testing with Pic JS — see [PocketIC](pocket-ic.md). +For advanced PocketIC usage: multi-subnet topologies, time travel, NNS subnet setup, and JavaScript/TypeScript +testing with Pic JS: see [PocketIC](pocket-ic.md). ## Performance benchmarking ICP canisters run inside a deterministic virtual machine where every instruction is counted. Each update call is limited to 40 billion instructions. `canbench` measures your canister's instruction count, heap memory, and stable -memory usage — and detects regressions by comparing against saved baselines. +memory usage: and detects regressions by comparing against saved baselines. ### Setup @@ -272,7 +272,7 @@ Benchmark: fibonacci_20 (new) Executed 1 of 1 benchmarks. ``` -Run `canbench` a second time after saving results — it compares against the baseline and reports regressions. Commit +Run `canbench` a second time after saving results: it compares against the baseline and reports regressions. Commit the `canbench_results.yml` file to your repository so CI can catch regressions automatically. For full crate documentation, see [canbench-rs on docs.rs](https://docs.rs/canbench-rs/latest/canbench_rs/). @@ -328,8 +328,8 @@ port with: icp network status docker-test --json ``` -For the full containerized network configuration reference — including environment variables, volume mounts, and -custom images — see the +For the full containerized network configuration reference: including environment variables, volume mounts, and +custom images: see the [icp-cli containerized networks guide](https://cli.internetcomputer.org/guides/containerized-networks). ## Choosing the right approach @@ -345,8 +345,8 @@ custom images — see the ## Next steps -- [PocketIC](pocket-ic.md) — Advanced integration testing: multi-subnet, time travel, Pic JS for TypeScript -- [Canister management: lifecycle](../canister-management/lifecycle.md) — Test upgrade paths before deploying -- [Canister management: logs](../canister-management/logs.md) — Add observability for debugging test failures +- [PocketIC](pocket-ic.md): Advanced integration testing: multi-subnet, time travel, Pic JS for TypeScript +- [Canister management: lifecycle](../canister-management/lifecycle.md): Test upgrade paths before deploying +- [Canister management: logs](../canister-management/logs.md): Add observability for debugging test failures diff --git a/docs/guides/tools/ai-coding-agents.md b/docs/guides/tools/ai-coding-agents.md index e7186970..e92926a2 100644 --- a/docs/guides/tools/ai-coding-agents.md +++ b/docs/guides/tools/ai-coding-agents.md @@ -5,7 +5,7 @@ sidebar: order: 3 --- -AI coding agents frequently hallucinate canister IDs, use deprecated APIs, and miss ICP-specific constraints. ICP skills solve this: structured markdown files containing accurate canister IDs, tested code patterns, and documented pitfalls — so your agent writes correct ICP code on the first attempt. +AI coding agents frequently hallucinate canister IDs, use deprecated APIs, and miss ICP-specific constraints. ICP skills solve this: structured markdown files containing accurate canister IDs, tested code patterns, and documented pitfalls: so your agent writes correct ICP code on the first attempt. ## Getting started @@ -47,7 +47,7 @@ Each ICP skill covers one capability area and includes: Skills are maintained by DFINITY and updated frequently. The full list is at [skills.internetcomputer.org](https://skills.internetcomputer.org). -ICP skills follow the [Agent Skills](https://agentskills.io/specification) open standard — [published by Anthropic](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills) in December 2025 to define a portable SKILL.md format that works across coding agents. The registry uses the [Agent Skills Discovery RFC](https://github.com/cloudflare/agent-skills-discovery-rfc) so agents can auto-discover and load skills without manual configuration. +ICP skills follow the [Agent Skills](https://agentskills.io/specification) open standard: [published by Anthropic](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills) in December 2025 to define a portable SKILL.md format that works across coding agents. The registry uses the [Agent Skills Discovery RFC](https://github.com/cloudflare/agent-skills-discovery-rfc) so agents can auto-discover and load skills without manual configuration. ## How discovery works @@ -58,7 +58,7 @@ When an agent follows the `skills.internetcomputer.org/llms.txt` instructions: 3. When a task matches a skill's description, it fetches the skill content from that skill's URL 4. It prefers skill guidance over general knowledge when both cover the same topic -Skills are fetched fresh each time — agents always use the latest version. +Skills are fetched fresh each time: agents always use the latest version. ## Skills vs docs @@ -77,9 +77,9 @@ When an agent has both loaded, it should prefer skill guidance for implementatio This docs site implements the [Agent-Friendly Documentation Spec](https://agentdocsspec.com). Two endpoints make these docs directly consumable by agents: -**[`/llms.txt`](/llms.txt)** — a discovery index listing every page with links to its clean markdown endpoint, plus the ICP skills registry URL. +**[`/llms.txt`](/llms.txt)**: a discovery index listing every page with links to its clean markdown endpoint, plus the ICP skills registry URL. -**`/.md`** — every page is available as clean markdown. HTML, navigation, and site chrome are stripped, leaving only the content. For example, this page is available at [`/guides/tools/ai-coding-agents.md`](/guides/tools/ai-coding-agents.md). +**`/.md`**: every page is available as clean markdown. HTML, navigation, and site chrome are stripped, leaving only the content. For example, this page is available at [`/guides/tools/ai-coding-agents.md`](/guides/tools/ai-coding-agents.md). A discovery link in every page's `` points to `/llms.txt`, so agents that crawl docs pages find the index automatically. @@ -98,9 +98,9 @@ ICP skills are available without authentication: ## Next steps -- [skills.internetcomputer.org](https://skills.internetcomputer.org) — browse all available ICP skills -- [Developer tools overview](overview.md) — icp-cli, CDKs, and other tools in the ICP toolchain -- [Quickstart](../../getting-started/quickstart.md) — deploy your first canister with icp-cli -- [Migrating from dfx](migrating-from-dfx.md) — upgrade an existing project from the legacy dfx tool +- [skills.internetcomputer.org](https://skills.internetcomputer.org): browse all available ICP skills +- [Developer tools overview](overview.md): icp-cli, CDKs, and other tools in the ICP toolchain +- [Quickstart](../../getting-started/quickstart.md): deploy your first canister with icp-cli +- [Migrating from dfx](migrating-from-dfx.md): upgrade an existing project from the legacy dfx tool diff --git a/docs/guides/tools/overview.md b/docs/guides/tools/overview.md index dbe6d736..d074139f 100644 --- a/docs/guides/tools/overview.md +++ b/docs/guides/tools/overview.md @@ -14,9 +14,9 @@ Developer tools are used to create, manage, and interact with canisters. ICP pro `icp-cli` is the primary tool for building and deploying applications on the Internet Computer. It manages the full development lifecycle: creating projects, building canisters, deploying to local or mainnet environments, managing identities, and handling cycles and ICP tokens. Key features: -- **Recipes** — reusable, versioned build templates for Rust, Motoko, and asset canisters -- **Environments** — named deployment targets that combine a network, canister set, and settings (e.g., local, staging, production) -- **Project scaffolding** — `icp new` bootstraps new projects from official templates +- **Recipes**: reusable, versioned build templates for Rust, Motoko, and asset canisters +- **Environments**: named deployment targets that combine a network, canister set, and settings (e.g., local, staging, production) +- **Project scaffolding**: `icp new` bootstraps new projects from official templates Install via npm (requires Node.js LTS): @@ -40,8 +40,8 @@ icp --version Full documentation: [cli.internetcomputer.org](https://cli.internetcomputer.org/) For advanced users, icp-cli supports authoring custom recipes and project templates: -- [Creating recipes](https://cli.internetcomputer.org/guides/creating-recipes) — encode build conventions as reusable Handlebars templates -- [Creating templates](https://cli.internetcomputer.org/guides/creating-templates) — scaffold new projects with `icp new` +- [Creating recipes](https://cli.internetcomputer.org/guides/creating-recipes): encode build conventions as reusable Handlebars templates +- [Creating templates](https://cli.internetcomputer.org/guides/creating-templates): scaffold new projects with `icp new` #### Telemetry opt-out @@ -57,7 +57,7 @@ Or set `DO_NOT_TRACK=1` in your environment. Telemetry is automatically disabled ### ic-wasm -`ic-wasm` is a utility for optimizing and annotating WebAssembly modules for the Internet Computer. It shrinks Wasm binary size, embeds Candid metadata, and strips unused sections. The official Rust and Motoko recipes use `ic-wasm` automatically — you only need to call it directly when using custom build steps. +`ic-wasm` is a utility for optimizing and annotating WebAssembly modules for the Internet Computer. It shrinks Wasm binary size, embeds Candid metadata, and strips unused sections. The official Rust and Motoko recipes use `ic-wasm` automatically: you only need to call it directly when using custom build steps. Install: @@ -69,7 +69,7 @@ brew install ic-wasm ### Quill -Quill is a minimalistic, offline-first CLI for signing and sending governance messages — NNS and SNS proposals, neuron management — from air-gapped machines. Unlike `icp-cli`, Quill is designed for cold wallet workflows: you generate signed messages on an offline device, then submit them from a networked machine. +Quill is a minimalistic, offline-first CLI for signing and sending governance messages (NNS and SNS proposals, neuron management) from air-gapped machines. Unlike `icp-cli`, Quill is designed for cold wallet workflows: you generate signed messages on an offline device, then submit them from a networked machine. Quill is suited for: - Submitting NNS governance proposals @@ -91,9 +91,9 @@ For language documentation, see [languages/motoko](../../languages/motoko/index. ### Rust CDK (`ic-cdk`) The Rust CDK (`ic-cdk`) is the official DFINITY-maintained library for building canisters in Rust. It exposes the ICP system API as safe Rust abstractions, including: -- `ic_cdk::api` — system calls (time, caller, stable memory, management canister) -- `ic_cdk_timers` — periodic timers and one-shot timers -- `ic_cdk_macros` — `#[update]`, `#[query]`, `#[init]`, and other attribute macros +- `ic_cdk::api`: system calls (time, caller, stable memory, management canister) +- `ic_cdk_timers`: periodic timers and one-shot timers +- `ic_cdk_macros`: `#[update]`, `#[query]`, `#[init]`, and other attribute macros API reference: [docs.rs/ic-cdk](https://docs.rs/ic-cdk/latest/ic_cdk/) @@ -116,7 +116,7 @@ Community CDKs are maintained independently of DFINITY. Check each project's doc ### ICP Ninja -[ICP Ninja](https://icp.ninja) is a web-based IDE for writing and deploying ICP canisters directly from a browser — no local toolchain required. It provides a gallery of example projects (Motoko and Rust backends, React frontends) that you can browse, edit, and deploy to the mainnet in one click. +[ICP Ninja](https://icp.ninja) is a web-based IDE for writing and deploying ICP canisters directly from a browser. No local toolchain required. It provides a gallery of example projects (Motoko and Rust backends, React frontends) that you can browse, edit, and deploy to the mainnet in one click. Deployed canisters remain live for 20 minutes. You can redeploy to reset the timer, or download the project files to continue development locally with icp-cli. @@ -146,8 +146,8 @@ Resources: ## Next steps -- **Start building:** [Quickstart](../../getting-started/quickstart.md) — deploy your first canister with icp-cli -- **Migrating from the legacy CLI?** [Migration guide](migrating-from-dfx.md) — command mapping and configuration changes +- **Start building:** [Quickstart](../../getting-started/quickstart.md): deploy your first canister with icp-cli +- **Migrating from the legacy CLI?** [Migration guide](migrating-from-dfx.md): command mapping and configuration changes - **Rust development:** [Rust language guide](../../languages/rust/index.md) - **Motoko development:** [Motoko language guide](../../languages/motoko/index.md) diff --git a/docs/index.mdx b/docs/index.mdx index 7273627c..33b288ee 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -25,7 +25,7 @@ import { Card, CardGrid, LinkCard } from '@astrojs/starlight/components'; ## ICP skills for agents that write code -Teach your AI agent canister patterns, token standards, CLI commands, and deployment workflows — so it ships working code instead of guessing. +Teach your AI agent canister patterns, token standards, CLI commands, and deployment workflows so it ships working code instead of guessing.
@@ -38,23 +38,23 @@ Teach your AI agent canister patterns, token standards, CLI commands, and deploy - Canisters sign transactions for Bitcoin, Ethereum, and other chains using threshold signatures — no bridges or oracles required. + Canisters sign transactions for Bitcoin, Ethereum, and other chains using threshold signatures. No bridges or oracles required. [Learn more](concepts/chain-key-cryptography.md) - Canister memory survives across executions and upgrades. No databases, no serialization — just use variables. + Canister memory survives across executions and upgrades. No databases, no serialization: just use variables. [Learn more](concepts/orthogonal-persistence.md) Users never pay gas. Canisters pay for their own compute, storage, and bandwidth using **cycles**. [Learn more](concepts/reverse-gas-model.md) - - Canisters serve HTTP responses directly. Host full web applications — frontend and backend — entirely onchain. + + Canisters serve HTTP responses directly. Host full web applications (frontend and backend) entirely onchain. [Learn more](concepts/app-architecture.md) - Canisters schedule their own execution with timers — no external cron jobs, keepers, or off-chain bots. + Canisters schedule their own execution with timers. No external cron jobs, keepers, or off-chain bots. [Learn more](concepts/timers.md) diff --git a/docs/languages/rust/index.md b/docs/languages/rust/index.md index 251b33e8..6fa55f93 100644 --- a/docs/languages/rust/index.md +++ b/docs/languages/rust/index.md @@ -13,7 +13,7 @@ The Rust CDK is split into focused crates that you pull in as needed: | Crate | Purpose | |-------|---------| -| [`ic-cdk`](https://crates.io/crates/ic-cdk) | Core library — system API bindings, inter-canister calls, canister state | +| [`ic-cdk`](https://crates.io/crates/ic-cdk) | Core library: system API bindings, inter-canister calls, canister state | | [`ic-cdk-macros`](https://crates.io/crates/ic-cdk-macros) | Procedural macros that register Rust functions as canister entry points (re-exported by `ic-cdk`) | | [`ic-cdk-timers`](https://crates.io/crates/ic-cdk-timers) | One-shot and periodic timer scheduling | @@ -21,7 +21,7 @@ You will also commonly use these companion crates: | Crate | Purpose | |-------|---------| -| [`candid`](https://crates.io/crates/candid) | Candid serialization — types, encoding, decoding | +| [`candid`](https://crates.io/crates/candid) | Candid serialization: types, encoding, decoding | | [`ic-stable-structures`](https://crates.io/crates/ic-stable-structures) | Persistent data structures that survive canister upgrades (see [Stable Structures](stable-structures.md)) | ## Quick example @@ -80,7 +80,7 @@ icp new my_project --subfolder rust This generates an `icp.yaml` with a Rust canister recipe and a Cargo workspace. The key files are: -**icp.yaml** — declares the canister and its build recipe: +**icp.yaml**: declares the canister and its build recipe: ```yaml canisters: @@ -92,7 +92,7 @@ canisters: shrink: true ``` -**Cargo.toml** — must set the crate type to `cdylib` so the compiler produces a Wasm module: +**Cargo.toml**: must set the crate type to `cdylib` so the compiler produces a Wasm module: ```toml [lib] @@ -184,12 +184,12 @@ async fn fire_and_forget() { ## Data persistence -Rust canisters on ICP benefit from [orthogonal persistence](../../concepts/orthogonal-persistence.md) — heap data is preserved across update calls automatically. However, heap data is **lost during canister upgrades** unless you explicitly save and restore it. +Rust canisters on ICP benefit from [orthogonal persistence](../../concepts/orthogonal-persistence.md): heap data is preserved across update calls automatically. However, heap data is **lost during canister upgrades** unless you explicitly save and restore it. Two strategies for handling upgrades: -1. **Stable structures** — use the `ic-stable-structures` crate to store data in stable memory, which survives upgrades without any serialization. This is the recommended approach. See [Stable Structures](stable-structures.md). -2. **Pre/post upgrade hooks** — serialize heap data in `#[pre_upgrade]` and deserialize in `#[post_upgrade]`. Simpler for small state but does not scale well. +1. **Stable structures**: use the `ic-stable-structures` crate to store data in stable memory, which survives upgrades without any serialization. This is the recommended approach. See [Stable Structures](stable-structures.md). +2. **Pre/post upgrade hooks**: serialize heap data in `#[pre_upgrade]` and deserialize in `#[post_upgrade]`. Simpler for small state but does not scale well. For a deeper look at persistence patterns across languages, see the [Data persistence guide](../../guides/backends/data-persistence.md). @@ -210,11 +210,11 @@ Most crates that target `wasm32-unknown-unknown` for browser use (via `wasm-bind ## Further reading -- [Quickstart](../../getting-started/quickstart.md) — Create and deploy your first canister -- [Stable Structures](stable-structures.md) — Persistent data structures for Rust canisters -- [Testing Rust Canisters](testing.md) — Unit and integration testing strategies -- [`ic-cdk` API docs](https://docs.rs/ic-cdk) — Complete API reference -- [`ic-cdk-timers` API docs](https://docs.rs/ic-cdk-timers) — Timer scheduling API -- [Motoko](../motoko/index.md) — Alternative language for ICP canister development +- [Quickstart](../../getting-started/quickstart.md): Create and deploy your first canister +- [Stable Structures](stable-structures.md): Persistent data structures for Rust canisters +- [Testing Rust Canisters](testing.md): Unit and integration testing strategies +- [`ic-cdk` API docs](https://docs.rs/ic-cdk): Complete API reference +- [`ic-cdk-timers` API docs](https://docs.rs/ic-cdk-timers): Timer scheduling API +- [Motoko](../motoko/index.md): Alternative language for ICP canister development diff --git a/docs/languages/rust/stable-structures.md b/docs/languages/rust/stable-structures.md index 6e08402a..4499d65f 100644 --- a/docs/languages/rust/stable-structures.md +++ b/docs/languages/rust/stable-structures.md @@ -5,7 +5,7 @@ sidebar: order: 2 --- -Stable structures are data structures that read and write directly to stable memory, bypassing the heap entirely. Unlike heap data, stable memory survives canister upgrades — no `pre_upgrade`/`post_upgrade` serialization hooks required. +Stable structures are data structures that read and write directly to stable memory, bypassing the heap entirely. Unlike heap data, stable memory survives canister upgrades. No `pre_upgrade`/`post_upgrade` serialization hooks required. The [`ic-stable-structures`](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) crate provides the building blocks. This page covers how to use them in Rust canisters. @@ -17,7 +17,7 @@ The two approaches to persistence across upgrades are: | Approach | When to use | |----------|-------------| -| **Stable structures** | Recommended for all new canisters. Data lives in stable memory directly — no serialization step, no instruction-limit risk. | +| **Stable structures** | Recommended for all new canisters. Data lives in stable memory directly: no serialization step, no instruction-limit risk. | | **Pre/post upgrade hooks** | Simple to add to existing code, but does not scale. Serializing large datasets in `pre_upgrade` can hit the instruction limit and brick the canister. | Stable structures eliminate the upgrade risk entirely. The `MemoryManager` partitions stable memory (which can grow to hundreds of GB) into independent virtual regions, one per data structure. @@ -43,7 +43,7 @@ serde = { version = "1", features = ["derive"] } ciborium = "0.2" ``` -`ciborium` provides CBOR serialization for custom types stored in stable memory. CBOR is compact and fast — preferred over Candid for this use case. +`ciborium` provides CBOR serialization for custom types stored in stable memory. CBOR is compact and fast: preferred over Candid for this use case. ## Available structures @@ -52,11 +52,11 @@ The crate provides several persistent data structures. See the [full API referen | Type | Use case | |------|----------| | `StableBTreeMap` | Key-value store. Keys must implement `Storable + Ord`. | -| `StableCell` | Single persistent value — counters, configuration, state flags. | +| `StableCell` | Single persistent value: counters, configuration, state flags. | | `StableLog` | Append-only log. Efficient for event streams and audit trails. | | `StableVec` | Ordered sequence. Efficient indexed access. | | `StableBTreeSet` | Set of unique keys. Efficient membership tests and range queries. | -| `StableMinHeap` | Priority queue — smallest element dequeued first. | +| `StableMinHeap` | Priority queue: smallest element dequeued first. | ## Implement Storable for custom types @@ -79,7 +79,7 @@ struct User { impl Storable for User { // Use Unbounded to avoid compatibility issues when adding new fields. - // Bound::Bounded requires a fixed max_size — exceeding it after a + // Bound::Bounded requires a fixed max_size: exceeding it after a // schema change breaks deserialization of existing data. const BOUND: Bound = Bound::Unbounded; @@ -90,7 +90,7 @@ impl Storable for User { } // `into_bytes` was added in ic-stable-structures 0.7. If you are upgrading - // from 0.6.x, add this method — it is not required in 0.6. + // from 0.6.x, add this method: it is not required in 0.6. fn into_bytes(self) -> Vec { let mut buf = vec![]; ciborium::into_writer(&self, &mut buf).expect("failed to encode User"); @@ -105,7 +105,7 @@ impl Storable for User { ## MemoryManager and MemoryId -The `MemoryManager` partitions a single stable memory region into virtual regions. Each data structure is allocated its own `MemoryId`. Two structures that share a `MemoryId` corrupt each other's data — always use distinct IDs. +The `MemoryManager` partitions a single stable memory region into virtual regions. Each data structure is allocated its own `MemoryId`. Two structures that share a `MemoryId` corrupt each other's data: always use distinct IDs. ```rust use ic_stable_structures::{ @@ -122,7 +122,7 @@ const LOG_INDEX_MEM_ID: MemoryId = MemoryId::new(2); const LOG_DATA_MEM_ID: MemoryId = MemoryId::new(3); ``` -`StableLog` requires two separate memory regions — one for the index and one for the data. +`StableLog` requires two separate memory regions: one for the index and one for the data. ## Canister wiring example @@ -162,7 +162,7 @@ fn init() {} #[post_upgrade] fn post_upgrade() { - // Stable data auto-restores — re-initialize timers or transient state here. + // Stable data auto-restores: re-initialize timers or transient state here. } ``` @@ -176,7 +176,7 @@ For a fully runnable canister with canister methods and `ic_cdk::export_candid!( ## Multiple data structures -When a canister needs more than one stable structure, allocate a unique `MemoryId` for each. `StableLog` requires two IDs — one for its index and one for its data: +When a canister needs more than one stable structure, allocate a unique `MemoryId` for each. `StableLog` requires two IDs: one for its index and one for its data: ```rust // Declare all IDs as named constants to prevent accidental reuse. @@ -190,7 +190,7 @@ const LOG_DATA_MEM_ID: MemoryId = MemoryId::new(3); // StableLog::init(index_memory, data_memory).expect("failed to init LOG") ``` -IDs are stable across upgrades — never renumber them. Adding a new structure always gets the next available integer. +IDs are stable across upgrades. Never renumber them. Adding a new structure always gets the next available integer. ## StableVec usage @@ -230,7 +230,7 @@ fn get_item(index: u64) -> Option { | Scenario | Use | |----------|-----| | Data that must survive upgrades (user records, balances, settings) | Stable structures | -| Large datasets that could grow beyond a few MB | Stable structures — stable memory can grow to hundreds of GB | +| Large datasets that could grow beyond a few MB | Stable structures: stable memory can grow to hundreds of GB | | Temporary computation state within a single call | Heap (`Vec`, `HashMap`) | | Caches that can be rebuilt after an upgrade | Heap (`Vec`, `HashMap`) reconstructed in `#[post_upgrade]` | | Small configuration that changes rarely | `StableCell` | @@ -262,7 +262,7 @@ icp canister call backend get_user_count '()' # Redeploy (simulates a code update + upgrade) icp deploy backend -# Count must still be 2 — not 0 +# Count must still be 2: not 0 icp canister call backend get_user_count '()' # Expected: (2 : nat64) @@ -281,33 +281,33 @@ If the count drops to 0 after redeployment, the data is not in stable memory. Ch **Using `Bound::Bounded` with a `max_size` that is too small.** If you add a field to a struct later, existing records that fit the old `max_size` may still encode larger than expected, or the new layout may exceed the bound and break deserialization. Prefer `Bound::Unbounded` unless you have a specific reason to bound the size. -**Omitting `#[post_upgrade]` when you have timers or transient state.** Stable data is safe without a `#[post_upgrade]` hook — the structures read from stable memory automatically on first access. The real reason to define the hook is to re-initialize timers and other transient heap state that is lost on upgrade. If your canister uses timers, omitting `#[post_upgrade]` means timers silently stop firing after an upgrade. +**Omitting `#[post_upgrade]` when you have timers or transient state.** Stable data is safe without a `#[post_upgrade]` hook. The structures read from stable memory automatically on first access. The real reason to define the hook is to re-initialize timers and other transient heap state that is lost on upgrade. If your canister uses timers, omitting `#[post_upgrade]` means timers silently stop firing after an upgrade. **Serializing heap data in `pre_upgrade` as the sole persistence strategy.** This does not scale. For canisters with user-facing data, use stable structures from the start. ## Schema evolution -Stable memory is persistent — once you deploy a canister, existing serialized bytes must remain readable after you add or change fields. `Bound::Unbounded` is the safe default because it allows the encoded size to grow without constraint, so adding a new field to a CBOR-serialized struct does not break reads of old records. +Stable memory is persistent: once you deploy a canister, existing serialized bytes must remain readable after you add or change fields. `Bound::Unbounded` is the safe default because it allows the encoded size to grow without constraint, so adding a new field to a CBOR-serialized struct does not break reads of old records. -**Adding a field:** Use `Option` for new fields so old records (which have no bytes for the field) deserialize correctly as `None`. CBOR skips unknown fields on deserialization, so a plain new field also works — but `Option` makes the intent explicit. +**Adding a field:** Use `Option` for new fields so old records (which have no bytes for the field) deserialize correctly as `None`. CBOR skips unknown fields on deserialization, so a plain new field also works: but `Option` makes the intent explicit. ```rust // Before upgrade: struct User { id: u64, name: String } -// After upgrade — old records deserialize with email = None: +// After upgrade: old records deserialize with email = None: struct User { id: u64, name: String, email: Option } ``` -**Changing a key type:** Changing the key type of a `StableBTreeMap` (for example, from `u32` to `u64`) is a breaking change — all existing keys are stored as the old type and the new type will not read them. To migrate, allocate a new `MemoryId` for a new map, copy data from the old map in `#[post_upgrade]`, then remove the old `MemoryId` in a subsequent upgrade once migration is complete. +**Changing a key type:** Changing the key type of a `StableBTreeMap` (for example, from `u32` to `u64`) is a breaking change: all existing keys are stored as the old type and the new type will not read them. To migrate, allocate a new `MemoryId` for a new map, copy data from the old map in `#[post_upgrade]`, then remove the old `MemoryId` in a subsequent upgrade once migration is complete. **Never change `Bound::Bounded` max_size for live data.** Lowering it truncates existing records. Raising it is safe but may require a separate migration if old records are smaller than the new bound expects. ## Next steps -- [`ic-stable-structures` API reference](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/) — complete trait and type documentation -- [Data persistence guide](../../guides/backends/data-persistence.md) — cross-language comparison of persistence patterns -- [Orthogonal persistence](../../concepts/orthogonal-persistence.md) — how ICP manages canister state -- [Canister lifecycle](../../guides/canister-management/lifecycle.md) — upgrades, snapshots, and state management +- [`ic-stable-structures` API reference](https://docs.rs/ic-stable-structures/latest/ic_stable_structures/): complete trait and type documentation +- [Data persistence guide](../../guides/backends/data-persistence.md): cross-language comparison of persistence patterns +- [Orthogonal persistence](../../concepts/orthogonal-persistence.md): how ICP manages canister state +- [Canister lifecycle](../../guides/canister-management/lifecycle.md): upgrades, snapshots, and state management diff --git a/docs/languages/rust/testing.md b/docs/languages/rust/testing.md index b5110392..ef0f8436 100644 --- a/docs/languages/rust/testing.md +++ b/docs/languages/rust/testing.md @@ -5,15 +5,15 @@ sidebar: order: 3 --- -Testing Rust canisters requires a different mindset from ordinary Rust testing because most IC-specific APIs — -`ic_cdk::caller()`, `ic_cdk::api::time()`, inter-canister calls — are only available inside a live IC execution +Testing Rust canisters requires a different mindset from ordinary Rust testing because most IC-specific APIs +(`ic_cdk::caller()`, `ic_cdk::api::time()`, inter-canister calls) are only available inside a live IC execution environment. The key is to isolate those dependencies behind traits so your business logic can be tested in plain Rust without any IC infrastructure. This page covers the two main testing layers for Rust: -- **Unit tests** — pure Rust with mocked IC dependencies; milliseconds per test -- **Integration tests** — deploy your canister WASM into PocketIC and make real calls +- **Unit tests**: pure Rust with mocked IC dependencies; milliseconds per test +- **Integration tests**: deploy your canister WASM into PocketIC and make real calls For a general overview of the testing pyramid and guidance on Motoko testing, see [Testing strategies](../../guides/testing/strategies.md). For advanced PocketIC features (multi-subnet, time travel, @@ -91,7 +91,7 @@ impl CanisterApi { } ``` -Business logic functions take `&CanisterApi` directly — no nested generics required. +Business logic functions take `&CanisterApi` directly. No nested generics required. ### Initialize with production dependencies @@ -121,7 +121,7 @@ fn increment_count(_: IncrementCountRequest) -> IncrementCountResponse { } ``` -### Production implementation — stable memory counter +### Production implementation: stable memory counter The production `Counter` reads and writes stable memory via `ic-stable-structures`: @@ -142,7 +142,7 @@ impl Counter for StableMemoryCounter { } ``` -### Test implementation — in-memory counter +### Test implementation: in-memory counter The test `Counter` uses a plain `Mutex` and works in any Rust test runner: @@ -295,7 +295,7 @@ tokio = { version = "1.0", features = ["macros", "rt"] } ``` ```rust -// Async unit test — no IC runtime needed +// Async unit test: no IC runtime needed thread_local! { static TEST_API: RefCell = RefCell::new({ let governance = Arc::new(MockGovernanceApi::new()); @@ -399,7 +399,7 @@ Add `candid_parser` to dev dependencies: candid_parser = "0.2" ``` -This test fails if you add, remove, or change a method signature without updating the `.did` file — catching +This test fails if you add, remove, or change a method signature without updating the `.did` file: catching the mismatch before deployment. ## Integration testing with PocketIC @@ -619,9 +619,9 @@ file for integration tests and deployment. | Test type | Typical duration | Parallelism | |---|---|---| -| Unit tests (`cargo test --lib`) | ~1ms per test | Full — each test runs in its own thread | -| Integration tests with PocketIC | 1–5s per test | Full — each test creates its own `PocketIc` instance | -| Integration tests with NNS setup | 10–30s per test | Full — but slow enough to run in a dedicated test binary | +| Unit tests (`cargo test --lib`) | ~1ms per test | Full: each test runs in its own thread | +| Integration tests with PocketIC | 1–5s per test | Full: each test creates its own `PocketIc` instance | +| Integration tests with NNS setup | 10–30s per test | Full: but slow enough to run in a dedicated test binary | The goal is to maximize coverage in unit tests so only a small number of integration tests are needed. A ratio of 90% unit tests to 10% integration tests is a reasonable target for most canisters. @@ -669,12 +669,12 @@ jobs: ``` Key points: -- **Cache the `target/` directory** — Rust compilation is the dominant cost. Caching on `Cargo.lock` gives a +- **Cache the `target/` directory**: Rust compilation is the dominant cost. Caching on `Cargo.lock` gives a deterministic cache key. -- **Build the WASM before running integration tests** — the test binary reads the WASM from `target/` at runtime. +- **Build the WASM before running integration tests**: the test binary reads the WASM from `target/` at runtime. Unit tests (`--lib`) do not need the WASM, so you can run them in parallel with the WASM build if your CI system supports it. -- **PocketIC server binary** — the `pocket-ic` Rust crate downloads the server binary automatically on first use. +- **PocketIC server binary**: the `pocket-ic` Rust crate downloads the server binary automatically on first use. To cache it across runs, set `POCKET_IC_BIN` to a path in your cache and check whether the binary already exists before running tests. Alternatively, pin the download script from your CDK version (see [`scripts/download_pocket_ic_server.sh`](https://github.com/dfinity/cdk-rs/blob/main/scripts/download_pocket_ic_server.sh) @@ -701,10 +701,10 @@ This keeps fast unit tests in every PR while reserving the heavier NNS integrati ## Next steps -- [Testing strategies](../../guides/testing/strategies.md) — Motoko testing, benchmarking with `canbench`, and containerized network tests -- [PocketIC](../../guides/testing/pocket-ic.md) — Multi-subnet topologies, time travel, and JavaScript testing with Pic JS -- [Stable Structures](stable-structures.md) — Understand what data survives upgrades -- [`ic-cdk` API reference](https://docs.rs/ic-cdk/latest/ic_cdk/) — Complete CDK API documentation -- [unit_testable_rust_canister example](https://github.com/dfinity/examples/tree/master/rust/unit_testable_rust_canister) — Complete working example with mocked governance and stable memory +- [Testing strategies](../../guides/testing/strategies.md): Motoko testing, benchmarking with `canbench`, and containerized network tests +- [PocketIC](../../guides/testing/pocket-ic.md): Multi-subnet topologies, time travel, and JavaScript testing with Pic JS +- [Stable Structures](stable-structures.md): Understand what data survives upgrades +- [`ic-cdk` API reference](https://docs.rs/ic-cdk/latest/ic_cdk/): Complete CDK API documentation +- [unit_testable_rust_canister example](https://github.com/dfinity/examples/tree/master/rust/unit_testable_rust_canister): Complete working example with mocked governance and stable memory diff --git a/docs/reference/application-canisters.md b/docs/reference/application-canisters.md index 5e450095..5bc65fee 100644 --- a/docs/reference/application-canisters.md +++ b/docs/reference/application-canisters.md @@ -5,13 +5,13 @@ sidebar: order: 4 --- -Application canisters are well-known canisters at the application layer of the Internet Computer that developers commonly integrate into their projects. Unlike [system canisters](system-canisters.md) (which govern the network) or [protocol canisters](protocol-canisters.md) (which provide platform infrastructure), application canisters implement higher-level functionality: hosting web frontends, governing dapps via DAO, and running AI inference. +Application canisters are well-known canisters at the application layer of the Internet Computer that developers commonly integrate into their projects. Unlike [system canisters](system-canisters.md) (which govern the network) or [protocol canisters](protocol-canisters.md) (which provide platform infrastructure), application canisters implement higher-level functionality: hosting web frontends, governing apps via DAO, and running AI inference. ## Asset canister -The asset canister hosts static web assets — HTML, CSS, JavaScript, images, and other files — directly onchain. It is the standard way to deploy a web frontend on ICP. Responses are certified by the subnet, allowing HTTP gateways to verify integrity before serving content to browsers. +The asset canister hosts static web assets (HTML, CSS, JavaScript, images, and other files) directly onchain. It is the standard way to deploy a web frontend on ICP. Responses are certified by the subnet, allowing HTTP gateways to verify integrity before serving content to browsers. -Asset canisters are deployed per-project. There is no global asset canister ID — each project creates its own. +Asset canisters are deployed per-project. There is no global asset canister ID: each project creates its own. ### Recipe (icp.yaml) @@ -27,9 +27,9 @@ canisters: - npm run build ``` -- `recipe.type` — identifies this as an asset canister deployment -- `dir` — the build output directory whose contents are uploaded -- `build` — commands run automatically by `icp deploy` before uploading +- `recipe.type`: identifies this as an asset canister deployment +- `dir`: the build output directory whose contents are uploaded +- `build`: commands run automatically by `icp deploy` before uploading ### Interface @@ -104,9 +104,9 @@ Create `.ic-assets.json5` in your `dir` directory (or `public/`/`static/` so you ] ``` -- `security_policy: "standard"` — applies the default Content Security Policy and security headers -- `allow_raw_access: false` — disables serving assets on the uncertified `raw.ic0.app` domain -- `enable_aliasing: true` — enables SPA fallback, serving `index.html` for unmatched paths +- `security_policy: "standard"`: applies the default Content Security Policy and security headers +- `allow_raw_access: false`: disables serving assets on the uncertified `raw.ic0.app` domain +- `enable_aliasing: true`: enables SPA fallback, serving `index.html` for unmatched paths ### Programmatic uploads @@ -127,7 +127,7 @@ const assetManager = new AssetManager({ }); // Upload a file (files >1.9MB are chunked automatically) -// fileBuffer: Uint8Array | ArrayBuffer | number[] — e.g. from fs.readFileSync, fetch, or the File API +// fileBuffer: Uint8Array | ArrayBuffer | number[]: e.g. from fs.readFileSync, fetch, or the File API await assetManager.store(fileBuffer, { fileName: "photo.jpg", contentType: "image/jpeg", @@ -145,7 +145,7 @@ await assetManager.delete("/uploads/old-photo.jpg"); The asset canister Wasm version determines which features are available. Key versions: -- `0.30.2`+ — required for the `ic_env` cookie (used by `safeGetCanisterEnv()` from `@icp-sdk/core`) +- `0.30.2`+: required for the `ic_env` cookie (used by `safeGetCanisterEnv()` from `@icp-sdk/core`) - Omitting `configuration.version` in the recipe uses the latest version automatically Downgrading the Wasm version may fail if the stable memory format changed between versions. If a downgrade is necessary, use `icp deploy --mode reinstall` (wipes all stored assets). @@ -156,7 +156,7 @@ For version history, upgrade guidance, and deployment pitfalls, see the [Asset c ## SNS canisters -When an SNS (Service Nervous System) is launched for a dapp, the SNS-W canister deploys a set of governance canisters on an SNS subnet. These canisters are created per-dapp — there is no single global SNS. To find the canister IDs for a specific SNS, look up the dapp on the [ICP Dashboard](https://dashboard.internetcomputer.org/). +When an SNS (Service Nervous System) is launched for an app, the SNS-W canister deploys a set of governance canisters on an SNS subnet. These canisters are created per-app: there is no single global SNS. To find the canister IDs for a specific SNS, look up the app on the [ICP Dashboard](https://dashboard.internetcomputer.org/). ### Canister set per SNS @@ -164,7 +164,7 @@ When an SNS (Service Nervous System) is launched for a dapp, the SNS-W canister |---|---| | **Governance** | Proposal submission, voting, neuron management | | **Ledger** | SNS token transfers (ICRC-1 standard) | -| **Root** | Sole controller of all dapp canisters post-launch | +| **Root** | Sole controller of all app canisters post-launch | | **Swap** | Runs the decentralization swap (ICP for SNS tokens) | | **Index** | Transaction indexing for the SNS ledger | | **Archive** | Historical transaction storage | @@ -326,18 +326,18 @@ For a complete onchain AI guide, see [Onchain AI](../guides/backends/onchain-ai. | Canister | Canister ID | Purpose | |---|---|---| | Asset canister | Per-project | Static web asset hosting with HTTP certification | -| SNS governance | Per-dapp | DAO governance for a specific dapp | -| SNS ledger | Per-dapp | ICRC-1/ICRC-2/ICRC-3 token ledger for a specific SNS | -| SNS root | Per-dapp | Controller of all dapp canisters in the SNS set | -| SNS swap | Per-dapp | Decentralization swap (ICP for SNS tokens) | +| SNS governance | Per-app | DAO governance for a specific app | +| SNS ledger | Per-app | ICRC-1/ICRC-2/ICRC-3 token ledger for a specific SNS | +| SNS root | Per-app | Controller of all app canisters in the SNS set | +| SNS swap | Per-app | Decentralization swap (ICP for SNS tokens) | | LLM | `w36hm-eqaaa-aaaal-qr76a-cai` | Onchain AI inference (Llama 3.1 8B) | ## Next steps -- [Asset canister guide](../guides/frontends/asset-canister.md) — deploying and configuring the asset canister for your project -- [Launching an SNS](../guides/governance/launching.md) — how to decentralize a dapp with SNS -- [Onchain AI](../guides/backends/onchain-ai.md) — building AI-powered canisters with the LLM canister -- [System canisters](system-canisters.md) — NNS, Internet Identity, ICP ledger, and other network-level canisters -- [Protocol canisters](protocol-canisters.md) — Bitcoin, ckBTC, EVM RPC, and other protocol-layer canisters +- [Asset canister guide](../guides/frontends/asset-canister.md): deploying and configuring the asset canister for your project +- [Launching an SNS](../guides/governance/launching.md): how to decentralize an app with SNS +- [Onchain AI](../guides/backends/onchain-ai.md): building AI-powered canisters with the LLM canister +- [System canisters](system-canisters.md): NNS, Internet Identity, ICP ledger, and other network-level canisters +- [Protocol canisters](protocol-canisters.md): Bitcoin, ckBTC, EVM RPC, and other protocol-layer canisters diff --git a/docs/reference/cycles-costs.md b/docs/reference/cycles-costs.md index e5aa7209..02d1d5c9 100644 --- a/docs/reference/cycles-costs.md +++ b/docs/reference/cycles-costs.md @@ -5,9 +5,9 @@ sidebar: order: 5 --- -Canisters pay for the resources they consume and operations they perform using **cycles**. The price of cycles is pegged to [XDR](glossary.md) (Special Drawing Rights): **1 trillion cycles = 1 XDR**. As of May 22, 2025, 1 XDR ≈ $1.35 USD — this rate fluctuates; see the [IMF's XDR exchange data](https://www.imf.org/external/np/fin/data/rms_sdrv.aspx) for the current rate. +Canisters pay for the resources they consume and operations they perform using **cycles**. The price of cycles is pegged to [XDR](glossary.md) (Special Drawing Rights): **1 trillion cycles = 1 XDR**. As of May 22, 2025, 1 XDR ≈ $1.35 USD: this rate fluctuates; see the [IMF's XDR exchange data](https://www.imf.org/external/np/fin/data/rms_sdrv.aspx) for the current rate. -You can use the [pricing calculator](https://3d5wy-5aaaa-aaaag-qkhsq-cai.icp0.io/) to estimate the cost for your dapp. +You can use the [pricing calculator](https://3d5wy-5aaaa-aaaag-qkhsq-cai.icp0.io/) to estimate the cost for your app. ## Cycles units @@ -22,14 +22,14 @@ You can use the [pricing calculator](https://3d5wy-5aaaa-aaaag-qkhsq-cai.icp0.io Costs scale with the number of nodes in the subnet. The base cost tables below assume a **13-node application subnet**. For a 34-node (fiduciary) subnet, costs scale as `34 * (cost / 13)`: -- **13-node subnet**: Standard application subnets. No scaling needed — costs are as listed. +- **13-node subnet**: Standard application subnets. No scaling needed: costs are as listed. - **34-node subnet**: Fiduciary subnets (higher security for DeFi). Costs are approximately **2.6×** the 13-node cost. See [Subnet types](subnet-types.md) for subnet-specific details. ## Cost table -USD values are approximate and based on the May 2025 XDR rate (1 XDR ≈ $1.35). The XDR rate fluctuates — use cycle counts for precise budgeting. +USD values are approximate and based on the May 2025 XDR rate (1 XDR ≈ $1.35). The XDR rate fluctuates: use cycle counts for precise budgeting. @@ -87,11 +87,11 @@ Current values (13-node subnet): By default canisters are scheduled best-effort. Setting `compute_allocation` guarantees execution slots: -- **1%** — Scheduled every 100 rounds -- **2%** — Scheduled every 50 rounds -- **100%** — Scheduled every round +- **1%**: Scheduled every 100 rounds +- **2%**: Scheduled every 50 rounds +- **100%**: Scheduled every round -Total allocatable compute capacity per subnet is 299%. The per-second cost is `10M cycles * allocation_percent` on a 13-node subnet — see the *Compute allocation* row in the Cost table above for exact figures. +Total allocatable compute capacity per subnet is 299%. The per-second cost is `10M cycles * allocation_percent` on a 13-node subnet: see the *Compute allocation* row in the Cost table above for exact figures. ## Storage reservation @@ -127,17 +127,17 @@ Reserved cycles are non-transferable. Controllers can disable reservation by set Certain ICP features have additional cycle costs beyond the base execution and messaging fees: -- **HTTPS outcalls** — See the [HTTPS outcalls cost formula](#https-outcalls) above. -- **EVM RPC canister** — Costs depend on the underlying RPC call and the HTTPS outcall fees above. -- **Threshold ECDSA / Schnorr signing** — Charged per signing request. Exact cost tables are not yet included on this page. -- **Bitcoin integration API** — Per-call fees apply. Exact cost tables are not yet included on this page. +- **HTTPS outcalls**: See the [HTTPS outcalls cost formula](#https-outcalls) above. +- **EVM RPC canister**: Costs depend on the underlying RPC call and the HTTPS outcall fees above. +- **Threshold ECDSA / Schnorr signing**: Charged per signing request. Exact cost tables are not yet included on this page. +- **Bitcoin integration API**: Per-call fees apply. Exact cost tables are not yet included on this page. ## Related pages -- [Cycles management](../guides/canister-management/cycles-management.md) — Topping up and monitoring canister balances -- [Reverse gas model](../concepts/reverse-gas-model.md) — Why canisters (not users) pay for execution -- [Subnet types](subnet-types.md) — Cost multipliers per subnet type +- [Cycles management](../guides/canister-management/cycles-management.md): Topping up and monitoring canister balances +- [Reverse gas model](../concepts/reverse-gas-model.md): Why canisters (not users) pay for execution +- [Subnet types](subnet-types.md): Cost multipliers per subnet type diff --git a/docs/reference/execution-errors.md b/docs/reference/execution-errors.md index 53ce930d..0a41d291 100644 --- a/docs/reference/execution-errors.md +++ b/docs/reference/execution-errors.md @@ -22,10 +22,10 @@ A trap is an interruption of code execution that returns an error. WebAssembly o Common root causes: -- **Heap out of bounds** — the canister accessed heap memory that has not been allocated. Check for places where memory is allocated (creating vectors, buffers) and whether you try to access that memory before it is allocated. -- **Stable memory out of bounds** — similar to heap out of bounds but for stable memory. -- **Integer division by zero** — the canister attempted to divide by zero. Inspect the canister code for any division operations. -- **Unreachable** — typically produced when a Rust canister panics. Rust canisters using `ic-cdk` macros automatically convert panics to `ic0.trap` calls with a human-readable message including the file, line, and panic reason. +- **Heap out of bounds**: the canister accessed heap memory that has not been allocated. Check for places where memory is allocated (creating vectors, buffers) and whether you try to access that memory before it is allocated. +- **Stable memory out of bounds**: similar to heap out of bounds but for stable memory. +- **Integer division by zero**: the canister attempted to divide by zero. Inspect the canister code for any division operations. +- **Unreachable**: typically produced when a Rust canister panics. Rust canisters using `ic-cdk` macros automatically convert panics to `ic0.trap` calls with a human-readable message including the file, line, and panic reason. To fix this error, test the canister to identify unhandled errors. Review the [canister trapping guide](../guides/canister-management/lifecycle.md) for detailed guidance on traps during upgrades and inter-canister calls. @@ -90,7 +90,7 @@ The canister tried to perform a large copy that cannot be completed in a single Canister attempted to perform a large memory operation that used N instructions and exceeded the slice limit M. ``` -ICP maintains a consistent block rate by limiting the number of operations per round. A single large copy — to or from stable memory, or within the main heap — may be too large to execute within a round and cannot be automatically split into smaller copies. +ICP maintains a consistent block rate by limiting the number of operations per round. A single large copy (to or from stable memory, or within the main heap) may be too large to execute within a round and cannot be automatically split into smaller copies. To fix this error, find the locations in the canister code that execute large copies and split them into multiple smaller copies. @@ -580,7 +580,7 @@ A canister executed a request to delete itself. Canister xxx-xxx cannot delete itself. ``` -A canister cannot delete itself. To fix this error, delete the canister from one of its other controllers. If the only controller is the canister itself, it cannot be directly deleted — it will first be frozen when its cycle balance falls below the freezing threshold, and eventually deleted when the balance reaches zero. +A canister cannot delete itself. To fix this error, delete the canister from one of its other controllers. If the only controller is the canister itself, it cannot be directly deleted: it will first be frozen when its cycle balance falls below the freezing threshold, and eventually deleted when the balance reaches zero. ### Delete canister queue not empty diff --git a/docs/reference/glossary.md b/docs/reference/glossary.md index da8ce674..c7b6e1e7 100644 --- a/docs/reference/glossary.md +++ b/docs/reference/glossary.md @@ -77,7 +77,7 @@ hosted on this subnet. These blockchains interact using [chain-key cryptography] #### boundary nodes **Boundary nodes** are gateways to the Internet Computer. These nodes -allow users to seamlessly access the [canister](#c) smart contracts +allow users to access the [canister](#c) smart contracts running on ICP. The boundary nodes have several purposes: they aid in discover-ability (the `icp0.io` domain name points to a set of boundary nodes), they are diff --git a/docs/reference/http-gateway-spec.md b/docs/reference/http-gateway-spec.md index ae4de0cd..57016051 100644 --- a/docs/reference/http-gateway-spec.md +++ b/docs/reference/http-gateway-spec.md @@ -364,9 +364,9 @@ The value of the `upgrade` field returned from `http_request_update` is ignored. Version 1 response verification only supports verifying a request path and response body pair with only one response per request path. This is quite restrictive in the number of scenarios it can support. For example, redirection or client-side caching is not safe since the status code and headers required to verify responses of that nature are not included in the certification. Upon a query call to a canister’s `http_request` method, a single malicious node or boundary node can modify these parts of the HTTP response, leading to the following issues: -- dApps cannot load the service worker when embedded within iFrames. +- apps cannot load the service worker when embedded within iFrames. - The use of redirects and cookies is unsafe as they can be manipulated by malicious nodes. -- This is unexpected for developers and will lead to vulnerabilities in dApps sooner or later. +- This is unexpected for developers and will lead to vulnerabilities in apps sooner or later. - The effectiveness of security headers (such as Content Security Policy) is diminished as they can be omitted or modified by malicious nodes. [Response Verification version 2](#response-verification) overcomes these issues. diff --git a/docs/reference/management-canister.md b/docs/reference/management-canister.md index 97717787..b6150f3d 100644 --- a/docs/reference/management-canister.md +++ b/docs/reference/management-canister.md @@ -40,8 +40,8 @@ Registers a new canister on the IC and returns its canister ID. The canister sta - **Caller:** Canisters, or subnet admins via ingress messages - **Parameters:** - - `settings` (`opt canister_settings`) — initial canister settings - - `sender_canister_version` (`opt nat64`) — caller's canister version (must match `ic0.canister_version` if provided) + - `settings` (`opt canister_settings`): initial canister settings + - `sender_canister_version` (`opt nat64`): caller's canister version (must match `ic0.canister_version` if provided) - **Returns:** `record { canister_id : principal }` - **Cycles:** Must be explicitly attached to the call (not deducted automatically) @@ -55,8 +55,8 @@ Updates the settings of an existing canister. Only controllers can call this met - **Caller:** Controllers (canisters or external users) - **Parameters:** - - `canister_id` (`principal`) — target canister - - `settings` (`canister_settings`) — settings to update + - `canister_id` (`principal`): target canister + - `settings` (`canister_settings`): settings to update - `sender_canister_version` (`opt nat64`) - **Returns:** Nothing @@ -66,10 +66,10 @@ Installs or upgrades code on a canister. Only controllers can call this method. - **Caller:** Controllers (canisters or external users) - **Parameters:** - - `mode` — one of `install`, `reinstall`, or `upgrade` - - `canister_id` (`principal`) — target canister - - `wasm_module` (`blob`) — Wasm binary (raw or gzip-compressed) - - `arg` (`blob`) — initialization argument + - `mode`: one of `install`, `reinstall`, or `upgrade` + - `canister_id` (`principal`): target canister + - `wasm_module` (`blob`): Wasm binary (raw or gzip-compressed) + - `arg` (`blob`): initialization argument - `sender_canister_version` (`opt nat64`) - **Returns:** Nothing @@ -91,11 +91,11 @@ Installs code that was previously uploaded in chunks. Useful for Wasm modules th - **Caller:** Controllers (canisters or external users) - **Parameters:** - - `mode` — same as `install_code` - - `target_canister` (`principal`) — where to install - - `store_canister` (`opt principal`) — where chunks are stored (defaults to `target_canister`) - - `chunk_hashes_list` (`vec record { hash : blob }`) — ordered list of chunk hashes - - `wasm_module_hash` (`blob`) — SHA-256 of the concatenated chunks + - `mode`: same as `install_code` + - `target_canister` (`principal`): where to install + - `store_canister` (`opt principal`): where chunks are stored (defaults to `target_canister`) + - `chunk_hashes_list` (`vec record { hash : blob }`): ordered list of chunk hashes + - `wasm_module_hash` (`blob`): SHA-256 of the concatenated chunks - `arg` (`blob`) - `sender_canister_version` (`opt nat64`) - **Returns:** Nothing @@ -122,17 +122,17 @@ Returns detailed information about a canister: status, settings, module hash, cy - **Parameters:** - `canister_id` (`principal`) - **Returns:** A record containing: - - `status` — `running`, `stopping`, or `stopped` - - `ready_for_migration` (`bool`) — whether a stopped canister is ready for subnet migration (always `false` unless `stopped`) - - `canister_version` (`nat64`) — the canister's current version number - - `settings` — the definite canister settings currently in effect - - `module_hash` (`opt blob`) — SHA-256 of installed module (`null` if empty) - - `memory_size` (`nat`) — total memory consumed - - `memory_metrics` — breakdown by component (Wasm memory, stable memory, globals, binary, custom sections, history, chunk store, snapshots) - - `cycles` (`nat`) — current cycle balance - - `reserved_cycles` (`nat`) — reserved cycle balance - - `idle_cycles_burned_per_day` (`nat`) — daily idle burn rate - - `query_stats` — query call statistics (total calls, instructions, request/response bytes) + - `status`: `running`, `stopping`, or `stopped` + - `ready_for_migration` (`bool`): whether a stopped canister is ready for subnet migration (always `false` unless `stopped`) + - `canister_version` (`nat64`): the canister's current version number + - `settings`: the definite canister settings currently in effect + - `module_hash` (`opt blob`): SHA-256 of installed module (`null` if empty) + - `memory_size` (`nat`): total memory consumed + - `memory_metrics`: breakdown by component (Wasm memory, stable memory, globals, binary, custom sections, history, chunk store, snapshots) + - `cycles` (`nat`): current cycle balance + - `reserved_cycles` (`nat`): reserved cycle balance + - `idle_cycles_burned_per_day` (`nat`): daily idle burn rate + - `query_stats`: query call statistics (total calls, instructions, request/response bytes) ### `canister_info` @@ -141,10 +141,10 @@ Returns the history, current module hash, and controllers of any canister. Unlik - **Caller:** Canisters only - **Parameters:** - `canister_id` (`principal`) - - `num_requested_changes` (`opt nat64`) — how many history entries to return (default `0`) + - `num_requested_changes` (`opt nat64`): how many history entries to return (default `0`) - **Returns:** - `total_num_changes` (`nat64`) - - `recent_changes` — list of canister changes (creation, deployment, controller changes, etc.) + - `recent_changes`: list of canister changes (creation, deployment, controller changes, etc.) - `module_hash` (`opt blob`) - `controllers` (`vec principal`) @@ -156,10 +156,10 @@ Reads custom-section metadata from a canister. Custom sections with names of the - **Caller:** Canisters only - **Parameters:** - - `canister_id` (`principal`) — the canister to read metadata from - - `name` (`text`) — identifies the custom section (`icp:public ` or `icp:private `) + - `canister_id` (`principal`): the canister to read metadata from + - `name` (`text`): identifies the custom section (`icp:public ` or `icp:private `) - **Returns:** - - `value` (`blob`) — the content of the custom section + - `value` (`blob`): the content of the custom section Common uses include reading `candid:service` for Candid interface discovery. @@ -236,8 +236,8 @@ Creates a snapshot of the specified canister. Stop the canister first to ensure - **Caller:** Controllers (canisters or external users) - **Parameters:** - `canister_id` (`principal`) - - `replace_snapshot` (`opt snapshot_id`) — delete this snapshot after creating the new one - - `uninstall_code` (`opt bool`) — uninstall code after snapshot creation + - `replace_snapshot` (`opt snapshot_id`): delete this snapshot after creating the new one + - `uninstall_code` (`opt bool`): uninstall code after snapshot creation - `sender_canister_version` (`opt nat64`) - **Returns:** Snapshot metadata including `snapshot_id` @@ -288,8 +288,8 @@ Returns a requested chunk of binary data from a snapshot: Wasm binary, heap memo - **Parameters:** - `canister_id` (`principal`) - `snapshot_id` (`snapshot_id`) - - `kind` — which data to read (`wasm`, `wasm_memory`, `stable_memory`, or `chunk_store`), with `offset` and `size` (or `hash` for chunk store) -- **Returns:** `blob` — the requested data chunk + - `kind`: which data to read (`wasm`, `wasm_memory`, `stable_memory`, or `chunk_store`), with `offset` and `size` (or `hash` for chunk store) +- **Returns:** `blob`: the requested data chunk ### `upload_canister_snapshot_metadata` @@ -298,7 +298,7 @@ Creates a new snapshot by uploading metadata (Wasm size, globals, memory sizes, - **Caller:** Controllers (canisters or external users) - **Parameters:** - `canister_id` (`principal`) - - `replace_snapshot` (`opt snapshot_id`) — delete this snapshot after creating the new one + - `replace_snapshot` (`opt snapshot_id`): delete this snapshot after creating the new one - Snapshot metadata fields (Wasm size, globals, memory sizes, certified data, timer state, hook state) - **Returns:** Snapshot metadata including `snapshot_id` @@ -310,7 +310,7 @@ Uploads a chunk of binary data to a snapshot created via `upload_canister_snapsh - **Parameters:** - `canister_id` (`principal`) - `snapshot_id` (`snapshot_id`) - - `kind` — which data to upload, with `offset` and chunk content + - `kind`: which data to upload, with `offset` and chunk content - **Returns:** Nothing For practical usage, see the [canister snapshots guide](../guides/canister-management/snapshots.md). @@ -319,7 +319,7 @@ For practical usage, see the [canister snapshots guide](../guides/canister-manag ### `raw_rand` -Returns 32 bytes of cryptographic randomness. The return value is unknown to any part of the IC at the time the call is submitted — it is resolved in the next execution round using the IC's random tape. +Returns 32 bytes of cryptographic randomness. The return value is unknown to any part of the IC at the time the call is submitted: it is resolved in the next execution round using the IC's random tape. - **Caller:** Canisters only - **Parameters:** None @@ -337,12 +337,12 @@ Returns a SEC1-encoded ECDSA public key derived for the given canister and deriv - **Caller:** Canisters only - **Parameters:** - - `canister_id` (`opt principal`) — defaults to caller - - `derivation_path` (`vec blob`) — up to 255 byte strings of arbitrary length - - `key_id` (`record { curve : ecdsa_curve; name : text }`) — currently supports `secp256k1` + - `canister_id` (`opt principal`): defaults to caller + - `derivation_path` (`vec blob`): up to 255 byte strings of arbitrary length + - `key_id` (`record { curve : ecdsa_curve; name : text }`): currently supports `secp256k1` - **Returns:** - - `public_key` (`blob`) — SEC1 compressed public key - - `chain_code` (`blob`) — for deterministic child key derivation + - `public_key` (`blob`): SEC1 compressed public key + - `chain_code` (`blob`): for deterministic child key derivation For `secp256k1`, key derivation uses a generalization of BIP-32. To derive BIP-32-compatible public keys, each entry in `derivation_path` must be a 4-byte big-endian unsigned integer less than 2^31. @@ -352,11 +352,11 @@ Signs a message hash using threshold ECDSA. The corresponding public key can be - **Caller:** Canisters only - **Parameters:** - - `message_hash` (`blob`) — must be exactly 32 bytes + - `message_hash` (`blob`): must be exactly 32 bytes - `derivation_path` (`vec blob`) - `key_id` (`record { curve : ecdsa_curve; name : text }`) - **Returns:** - - `signature` (`blob`) — concatenation of SEC1-encoded `r` and `s` values (64 bytes for `secp256k1`) + - `signature` (`blob`): concatenation of SEC1-encoded `r` and `s` values (64 bytes for `secp256k1`) - **Cycles:** Must be explicitly attached to the call > If the call returns a reject with code `SYS_UNKNOWN` or `CANISTER_ERROR`, the signature may still exist in the system. Do not assume the signature was not produced. @@ -367,11 +367,11 @@ Returns a Schnorr public key derived for the given canister and derivation path. - **Caller:** Canisters only - **Parameters:** - - `canister_id` (`opt principal`) — defaults to caller - - `derivation_path` (`vec blob`) — up to 255 byte strings - - `key_id` (`record { algorithm : schnorr_algorithm; name : text }`) — supports `bip340secp256k1` and `ed25519` + - `canister_id` (`opt principal`): defaults to caller + - `derivation_path` (`vec blob`): up to 255 byte strings + - `key_id` (`record { algorithm : schnorr_algorithm; name : text }`): supports `bip340secp256k1` and `ed25519` - **Returns:** - - `public_key` (`blob`) — SEC1 compressed (for `bip340secp256k1`) or 32-byte Ed25519 format + - `public_key` (`blob`): SEC1 compressed (for `bip340secp256k1`) or 32-byte Ed25519 format - `chain_code` (`blob`) ### `sign_with_schnorr` @@ -380,12 +380,12 @@ Signs a message using threshold Schnorr. The corresponding public key can be obt - **Caller:** Canisters only - **Parameters:** - - `message` (`blob`) — the message to sign (not a hash) + - `message` (`blob`): the message to sign (not a hash) - `derivation_path` (`vec blob`) - `key_id` (`record { algorithm : schnorr_algorithm; name : text }`) - - `aux` (`opt schnorr_aux`) — optional; the `bip341` variant accepts a `merkle_root_hash` for Taproot signatures (only with `bip340secp256k1`) + - `aux` (`opt schnorr_aux`): optional; the `bip341` variant accepts a `merkle_root_hash` for Taproot signatures (only with `bip340secp256k1`) - **Returns:** - - `signature` (`blob`) — 64 bytes (BIP-340 for `bip340secp256k1`, RFC 8032 for `ed25519`) + - `signature` (`blob`): 64 bytes (BIP-340 for `bip340secp256k1`, RFC 8032 for `ed25519`) - **Cycles:** Must be explicitly attached to the call > If the call returns a reject with code `SYS_UNKNOWN` or `CANISTER_ERROR`, the signature may still exist in the system. @@ -394,7 +394,7 @@ For practical usage of chain-key signing in Bitcoin and Ethereum workflows, see ### Offline public key derivation -If you only need a public key — to derive a blockchain address or verify a signature — the management canister call can be avoided entirely. ICP's key derivation algorithm is deterministic and uses only public parameters, so derivation can be performed offline without cycles or a network connection. See the [offline key derivation guide](../guides/chain-fusion/offline-key-derivation.md) for TypeScript and Rust libraries. +If you only need a public key: to derive a blockchain address or verify a signature. The management canister call can be avoided entirely. ICP's key derivation algorithm is deterministic and uses only public parameters, so derivation can be performed offline without cycles or a network connection. See the [offline key derivation guide](../guides/chain-fusion/offline-key-derivation.md) for TypeScript and Rust libraries. ## vetKD (Verifiable Encrypted Threshold Key Derivation) @@ -404,11 +404,11 @@ Returns a vetKD public (verification) key derived for the given canister and con - **Caller:** Canisters only - **Parameters:** - - `canister_id` (`opt principal`) — defaults to caller - - `context` (`blob`) — variable-length byte string - - `key_id` (`record { curve : vetkd_curve; name : text }`) — supports `bls12_381_g2` + - `canister_id` (`opt principal`): defaults to caller + - `context` (`blob`): variable-length byte string + - `key_id` (`record { curve : vetkd_curve; name : text }`): supports `bls12_381_g2` - **Returns:** - - `public_key` (`blob`) — G2 element in BLS12-381 compressed form + - `public_key` (`blob`): G2 element in BLS12-381 compressed form ### `vetkd_derive_key` @@ -416,12 +416,12 @@ Returns an encrypted vetKD key that can be decrypted with the caller's transport - **Caller:** Canisters only - **Parameters:** - - `input` (`blob`) — primary key material differentiator - - `context` (`blob`) — domain separator + - `input` (`blob`): primary key material differentiator + - `context` (`blob`): domain separator - `key_id` (`record { curve : vetkd_curve; name : text }`) - - `transport_public_key` (`blob`) — G1 element for encrypting the derived key + - `transport_public_key` (`blob`): G1 element for encrypting the derived key - **Returns:** - - `encrypted_key` (`blob`) — the encrypted vetKD key + - `encrypted_key` (`blob`): the encrypted vetKD key - **Cycles:** Must be explicitly attached to the call ## HTTPS outcalls @@ -432,18 +432,18 @@ Makes an HTTP request to an external URL and returns the response. This enables - **Caller:** Canisters only - **Parameters:** - - `url` (`text`) — must start with `https://`; max 8192 characters - - `max_response_bytes` (`opt nat64`) — max response size (up to 2 MB; defaults to 2 MB if not set) - - `method` — `GET`, `HEAD`, or `POST` (replicated); additionally `PUT` and `DELETE` (non-replicated mode only) - - `headers` (`vec record { name : text; value : text }`) — request headers (max 64 headers, 8 KiB per name/value, 48 KiB total) - - `body` (`opt blob`) — request body - - `transform` (`opt record { function : func; context : blob }`) — response transformation function exported by the calling canister - - `is_replicated` (`opt bool`) — select replicated (default) or non-replicated mode + - `url` (`text`): must start with `https://`; max 8192 characters + - `max_response_bytes` (`opt nat64`): max response size (up to 2 MB; defaults to 2 MB if not set) + - `method`: `GET`, `HEAD`, or `POST` (replicated); additionally `PUT` and `DELETE` (non-replicated mode only) + - `headers` (`vec record { name : text; value : text }`): request headers (max 64 headers, 8 KiB per name/value, 48 KiB total) + - `body` (`opt blob`): request body + - `transform` (`opt record { function : func; context : blob }`): response transformation function exported by the calling canister + - `is_replicated` (`opt bool`): select replicated (default) or non-replicated mode - **Returns:** - - `status` (`nat`) — HTTP status code + - `status` (`nat`): HTTP status code - `headers` (`vec record { name : text; value : text }`) - `body` (`blob`) -- **Cycles:** Must be explicitly attached to the call. Charged based on `max_response_bytes` — always set this to a reasonable value to avoid overpaying. +- **Cycles:** Must be explicitly attached to the call. Charged based on `max_response_bytes`: always set this to a reasonable value to avoid overpaying. In replicated mode, multiple replicas make the same request. Use the `transform` function to sanitize non-deterministic parts of the response (timestamps, unique IDs) so replicas can reach consensus. @@ -461,14 +461,14 @@ Returns unspent transaction outputs (UTXOs) for a Bitcoin address. - **Caller:** Canisters only - **Parameters:** - - `address` (`text`) — Bitcoin address (P2PKH, P2SH, P2WPKH, P2WSH, or P2TR) - - `network` — `mainnet` or `testnet` - - `filter` (`opt variant`) — either `min_confirmations : nat32` (max 144) or `page : blob` for pagination + - `address` (`text`): Bitcoin address (P2PKH, P2SH, P2WPKH, P2WSH, or P2TR) + - `network`: `mainnet` or `testnet` + - `filter` (`opt variant`): either `min_confirmations : nat32` (max 144) or `page : blob` for pagination - **Returns:** - - `utxos` (`vec utxo`) — up to 10,000 UTXOs per request, sorted by block height descending + - `utxos` (`vec utxo`): up to 10,000 UTXOs per request, sorted by block height descending - `tip_block_hash` (`blob`) - `tip_height` (`nat32`) - - `next_page` (`opt blob`) — pagination token if more UTXOs exist + - `next_page` (`opt blob`): pagination token if more UTXOs exist ### `bitcoin_get_balance` @@ -477,7 +477,7 @@ Returns the balance of a Bitcoin address in satoshi. - **Caller:** Canisters only - **Parameters:** - `address` (`text`) - - `network` — `mainnet` or `testnet` + - `network`: `mainnet` or `testnet` - `min_confirmations` (`opt nat32`) - **Returns:** `nat64` (balance in satoshi) @@ -487,8 +487,8 @@ Submits a Bitcoin transaction to the network. The transaction must be well-forme - **Caller:** Canisters only - **Parameters:** - - `transaction` (`blob`) — serialized Bitcoin transaction - - `network` — `mainnet` or `testnet` + - `transaction` (`blob`): serialized Bitcoin transaction + - `network`: `mainnet` or `testnet` - **Returns:** Nothing No guarantee is provided that the transaction will enter the mempool or appear in a block. @@ -499,8 +499,8 @@ Returns fee percentiles (in millisatoshi/vbyte) over the last ~10,000 transactio - **Caller:** Canisters only - **Parameters:** - - `network` — `mainnet` or `testnet` -- **Returns:** `vec nat64` — 101 percentiles (0th through 100th) + - `network`: `mainnet` or `testnet` +- **Returns:** `vec nat64`: 101 percentiles (0th through 100th) ### `bitcoin_get_block_headers` @@ -509,11 +509,11 @@ Returns block headers for a range of block heights. - **Caller:** Canisters only - **Parameters:** - `start_height` (`nat32`) - - `end_height` (`opt nat32`) — defaults to tip height - - `network` — `mainnet` or `testnet` + - `end_height` (`opt nat32`): defaults to tip height + - `network`: `mainnet` or `testnet` - **Returns:** - `tip_height` (`nat32`) - - `block_headers` (`vec blob`) — 80-byte headers in standard Bitcoin format + - `block_headers` (`vec blob`): 80-byte headers in standard Bitcoin format For Bitcoin integration patterns, see the [Bitcoin guide](../guides/chain-fusion/bitcoin.md). @@ -538,7 +538,7 @@ For practical usage, see the [canister logs guide](../guides/canister-management > This API is **experimental** and may change in a non-backward-compatible way. -Returns a time series of node metrics for a given subnet. Returns up to 60 timestamps (no two from the same UTC day), starting from `start_at_timestamp_nanos`. A sample only includes metrics for nodes whose values changed since the previous sample — consumers must handle resets when a node disappears and reappears. +Returns a time series of node metrics for a given subnet. Returns up to 60 timestamps (no two from the same UTC day), starting from `start_at_timestamp_nanos`. A sample only includes metrics for nodes whose values changed since the previous sample: consumers must handle resets when a node disappears and reappears. - **Caller:** Canisters only - **Parameters:** @@ -557,8 +557,8 @@ Returns metadata about a subnet. - **Parameters:** - `subnet_id` (`principal`) - **Returns:** - - `replica_version` (`text`) — the replica version running on the subnet - - `registry_version` (`nat64`) — the registry version of the subnet + - `replica_version` (`text`): the replica version running on the subnet + - `registry_version` (`nat64`): the registry version of the subnet ## Provisional methods (local testing only) @@ -570,9 +570,9 @@ Behaves like `create_canister` but initializes the canister with the specified n - **Caller:** Canisters or external users - **Parameters:** - - `amount` (`opt nat`) — initial cycle balance + - `amount` (`opt nat`): initial cycle balance - `settings` (`opt canister_settings`) - - `specified_id` (`opt principal`) — request a specific canister ID + - `specified_id` (`opt principal`): request a specific canister ID - `sender_canister_version` (`opt nat64`) - **Returns:** `record { canister_id : principal }` @@ -583,19 +583,19 @@ Adds cycles to any canister out of thin air. - **Caller:** Canisters or external users (any caller) - **Parameters:** - `canister_id` (`principal`) - - `amount` (`nat`) — cycles to add + - `amount` (`nat`): cycles to add - **Returns:** Nothing ## Cycle costs Cycle costs for management canister calls vary depending on subnet replication factor and are subject to change. Rather than hardcoding costs, use the System API cost functions to query current prices programmatically: -- `ic0.cost_create_canister` — cost of `create_canister` -- `ic0.cost_call` — cost of an inter-canister call (base + per-byte) -- `ic0.cost_http_request` — cost of `http_request` -- `ic0.cost_sign_with_ecdsa` — cost of `sign_with_ecdsa` -- `ic0.cost_sign_with_schnorr` — cost of `sign_with_schnorr` -- `ic0.cost_vetkd_derive_key` — cost of `vetkd_derive_key` +- `ic0.cost_create_canister`: cost of `create_canister` +- `ic0.cost_call`: cost of an inter-canister call (base + per-byte) +- `ic0.cost_http_request`: cost of `http_request` +- `ic0.cost_sign_with_ecdsa`: cost of `sign_with_ecdsa` +- `ic0.cost_sign_with_schnorr`: cost of `sign_with_schnorr` +- `ic0.cost_vetkd_derive_key`: cost of `vetkd_derive_key` Methods that require explicit cycle attachment (`create_canister`, `sign_with_ecdsa`, `sign_with_schnorr`, `vetkd_derive_key`, `http_request`) will fail if insufficient cycles are provided. @@ -607,10 +607,10 @@ The complete Candid interface definition for the management canister is availabl ## Next steps -- [Canister lifecycle guide](../guides/canister-management/lifecycle.md) — practical workflows for creating, upgrading, and managing canisters -- [Canister settings guide](../guides/canister-management/settings.md) — configuring controllers, memory, compute, and freezing thresholds -- [HTTPS outcalls](../concepts/https-outcalls.md) — architecture and constraints of outbound HTTP requests -- [Bitcoin integration](../guides/chain-fusion/bitcoin.md) — building Bitcoin-native applications with chain-key signing -- [IC interface specification](ic-interface-spec.md) — the complete formal specification +- [Canister lifecycle guide](../guides/canister-management/lifecycle.md): practical workflows for creating, upgrading, and managing canisters +- [Canister settings guide](../guides/canister-management/settings.md): configuring controllers, memory, compute, and freezing thresholds +- [HTTPS outcalls](../concepts/https-outcalls.md): architecture and constraints of outbound HTTP requests +- [Bitcoin integration](../guides/chain-fusion/bitcoin.md): building Bitcoin-native applications with chain-key signing +- [IC interface specification](ic-interface-spec.md): the complete formal specification diff --git a/docs/reference/protocol-canisters.md b/docs/reference/protocol-canisters.md index 8325ebc1..461b297f 100644 --- a/docs/reference/protocol-canisters.md +++ b/docs/reference/protocol-canisters.md @@ -33,11 +33,11 @@ The Bitcoin integration canisters connect ICP to the Bitcoin network. They track The Bitcoin canisters expose endpoints for reading UTXOs, balances, and block headers, and for submitting transactions. These mirror the management canister Bitcoin API: -- `bitcoin_get_utxos` — returns UTXOs for a Bitcoin address -- `bitcoin_get_balance` — returns the balance of a Bitcoin address in satoshi -- `bitcoin_send_transaction` — submits a signed Bitcoin transaction -- `bitcoin_get_current_fee_percentiles` — returns fee percentiles in millisatoshi/vbyte -- `bitcoin_get_block_headers` — returns block headers for a range of heights +- `bitcoin_get_utxos`: returns UTXOs for a Bitcoin address +- `bitcoin_get_balance`: returns the balance of a Bitcoin address in satoshi +- `bitcoin_send_transaction`: submits a signed Bitcoin transaction +- `bitcoin_get_current_fee_percentiles`: returns fee percentiles in millisatoshi/vbyte +- `bitcoin_get_block_headers`: returns block headers for a range of heights For integration patterns, see the [Bitcoin guide](../guides/chain-fusion/bitcoin.md). @@ -74,18 +74,18 @@ The ckBTC minter has the following key parameters: ### ckBTC minter endpoints -- `get_btc_address(owner, subaccount)` — returns a unique Bitcoin deposit address for the given principal and subaccount -- `get_known_utxos(owner, subaccount)` — returns UTXOs already processed by the minter for the given account -- `update_balance(owner, subaccount)` — checks for new UTXOs and mints ckBTC for any newly confirmed deposits -- `estimate_withdrawal_fee(amount)` — estimates the fee for retrieving a given BTC amount -- `get_deposit_fee` — returns the current fee charged when minting ckBTC (currently the KYT fee) -- `retrieve_btc_with_approval(address, amount, from_subaccount)` — burns ckBTC (using an ICRC-2 approval) and sends the equivalent BTC to the given Bitcoin address -- `retrieve_btc(address, amount)` — alternative withdrawal flow that requires transferring ckBTC to the minter's withdrawal account first; prefer `retrieve_btc_with_approval` for new integrations -- `get_withdrawal_account` — returns the caller's withdrawal account for use with `retrieve_btc` -- `retrieve_btc_status_v2(block_index)` — returns the status of a previous withdrawal request -- `retrieve_btc_status_v2_by_account(account)` — returns statuses for all recent withdrawal requests from the given account -- `get_minter_info` — returns current minter parameters -- `get_events(start, length)` — returns the minter's internal event log (for debugging) +- `get_btc_address(owner, subaccount)`: returns a unique Bitcoin deposit address for the given principal and subaccount +- `get_known_utxos(owner, subaccount)`: returns UTXOs already processed by the minter for the given account +- `update_balance(owner, subaccount)`: checks for new UTXOs and mints ckBTC for any newly confirmed deposits +- `estimate_withdrawal_fee(amount)`: estimates the fee for retrieving a given BTC amount +- `get_deposit_fee`: returns the current fee charged when minting ckBTC (currently the KYT fee) +- `retrieve_btc_with_approval(address, amount, from_subaccount)`: burns ckBTC (using an ICRC-2 approval) and sends the equivalent BTC to the given Bitcoin address +- `retrieve_btc(address, amount)`: alternative withdrawal flow that requires transferring ckBTC to the minter's withdrawal account first; prefer `retrieve_btc_with_approval` for new integrations +- `get_withdrawal_account`: returns the caller's withdrawal account for use with `retrieve_btc` +- `retrieve_btc_status_v2(block_index)`: returns the status of a previous withdrawal request +- `retrieve_btc_status_v2_by_account(account)`: returns statuses for all recent withdrawal requests from the given account +- `get_minter_info`: returns current minter parameters +- `get_events(start, length)`: returns the minter's internal event log (for debugging) ### Deposit and withdrawal flows @@ -156,10 +156,10 @@ The following providers are available without API keys: | Provider | Ethereum | Sepolia | Arbitrum | Base | Optimism | |---|---|---|---|---|---| | Alchemy | yes | yes | yes | yes | yes | -| Ankr | yes | — | yes | yes | yes | +| Ankr | yes | - | yes | yes | yes | | BlockPi | yes | yes | yes | yes | yes | -| Cloudflare | yes | — | — | — | — | -| LlamaNodes | yes | — | yes | yes | yes | +| Cloudflare | yes | - | - | - | - | +| LlamaNodes | yes | - | yes | yes | yes | | PublicNode | yes | yes | yes | yes | yes | ### Cycle costs @@ -170,11 +170,11 @@ Each call requires cycles attached. The cost formula is: (5_912_000 + 60_000 * nodes + 2400 * request_bytes + 800 * max_response_bytes) * nodes * rpc_count ``` -Where `nodes` = 34 (fiduciary subnet) and `rpc_count` = number of providers queried. For a practical starting budget, attach 10B cycles per call — unused cycles are refunded. Use the `requestCost` method to get an exact estimate before calling. +Where `nodes` = 34 (fiduciary subnet) and `rpc_count` = number of providers queried. For a practical starting budget, attach 10B cycles per call: unused cycles are refunded. Use the `requestCost` method to get an exact estimate before calling. ### Consensus -By default, the canister requires all providers to agree (`Equality` consensus). For calls like `eth_getBlockByNumber("latest")` where providers may be 1-2 blocks apart, use threshold consensus instead: 2-of-3 agreement. Multi-provider results are returned as `MultiRpcResult::Consistent(result)` or `MultiRpcResult::Inconsistent(results)` — always handle both variants. +By default, the canister requires all providers to agree (`Equality` consensus). For calls like `eth_getBlockByNumber("latest")` where providers may be 1-2 blocks apart, use threshold consensus instead: 2-of-3 agreement. Multi-provider results are returned as `MultiRpcResult::Consistent(result)` or `MultiRpcResult::Inconsistent(results)`: always handle both variants. For integration examples, see the [Ethereum guide](../guides/chain-fusion/ethereum.md). @@ -242,7 +242,7 @@ Unused cycles are refunded. At least 1M cycles are charged even on error, to pre ### Example call -Calling the XRC requires attaching cycles, which is only possible from canister-to-canister calls. The CLI cannot attach cycles to direct calls. Call the XRC from a canister using the Candid interface — pass the required cycles in the `ic_cdk::api::call::call_with_payment128` call or equivalent. +Calling the XRC requires attaching cycles, which is only possible from canister-to-canister calls. The CLI cannot attach cycles to direct calls. Call the XRC from a canister using the Candid interface: pass the required cycles in the `ic_cdk::api::call::call_with_payment128` call or equivalent. To query the current rate without attaching cycles (for inspection only, expect a `NotEnoughCycles` error on mainnet): @@ -286,9 +286,9 @@ For governance context, see the [SNS documentation](https://learn.internetcomput ## Next steps -- [Bitcoin guide](../guides/chain-fusion/bitcoin.md) — integrating Bitcoin in canisters using the Bitcoin canister and ckBTC -- [Ethereum guide](../guides/chain-fusion/ethereum.md) — integrating Ethereum in canisters using the EVM RPC canister and ckETH -- [System canisters](system-canisters.md) — NNS canisters, Internet Identity, ICP ledger, and other network-level canisters -- [Management canister](management-canister.md) — the virtual canister for canister lifecycle, signing, and platform APIs +- [Bitcoin guide](../guides/chain-fusion/bitcoin.md): integrating Bitcoin in canisters using the Bitcoin canister and ckBTC +- [Ethereum guide](../guides/chain-fusion/ethereum.md): integrating Ethereum in canisters using the EVM RPC canister and ckETH +- [System canisters](system-canisters.md): NNS canisters, Internet Identity, ICP ledger, and other network-level canisters +- [Management canister](management-canister.md): the virtual canister for canister lifecycle, signing, and platform APIs diff --git a/docs/reference/subnet-types.md b/docs/reference/subnet-types.md index f00a735a..cf36331d 100644 --- a/docs/reference/subnet-types.md +++ b/docs/reference/subnet-types.md @@ -5,7 +5,7 @@ sidebar: order: 6 --- -The Internet Computer is composed of independent **subnets** — each an autonomous blockchain that hosts a set of canisters. Subnets differ in node count, replication factor, cycle costs, geographic distribution, and what canisters they accept. This page lists all subnet types and their properties. +The Internet Computer is composed of independent **subnets**: each an autonomous blockchain that hosts a set of canisters. Subnets differ in node count, replication factor, cycle costs, geographic distribution, and what canisters they accept. This page lists all subnet types and their properties. For guidance on choosing a subnet for deployment, see [Subnet selection](../guides/canister-management/subnet-selection.md). For per-operation cycle costs, see [Cycles costs](cycles-costs.md). @@ -33,15 +33,15 @@ Each application subnet operates independently. Canisters on different applicati You may want to deploy to a specific application subnet for: -- **Colocation** — Keep related canisters on the same subnet to minimize Xnet call overhead -- **Resource availability** — Prefer subnets with lower utilization if storage is a concern (each application subnet has a 2 TiB storage capacity) -- **Specific features** — Some application subnets have features enabled or disabled by NNS governance +- **Colocation**: Keep related canisters on the same subnet to minimize Xnet call overhead +- **Resource availability**: Prefer subnets with lower utilization if storage is a concern (each application subnet has a 2 TiB storage capacity) +- **Specific features**: Some application subnets have features enabled or disabled by NNS governance Browse available application subnets on the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets). Filter by type to find "Application" subnets and view their current load and node locations. ## System subnets -System subnets host ICP's core infrastructure canisters. They have special configurations — including no cycle charges for their canisters — to ensure continuous availability. **User canisters cannot be created on system subnets.** +System subnets host ICP's core infrastructure canisters. They have special configurations (including no cycle charges for their canisters) to ensure continuous availability. **User canisters cannot be created on system subnets.** System subnets also have more generous execution limits: a higher per-call instruction limit and a larger maximum Wasm module size. @@ -59,7 +59,7 @@ For a complete list of the canisters running on these subnets, see [System canis ## Fiduciary subnet -The fiduciary subnet is a single large application subnet with more nodes than the standard 13-node application subnet. The larger committee size provides a higher security threshold — useful for DeFi applications that require stronger guarantees. Cycle costs scale linearly with node count. +The fiduciary subnet is a single large application subnet with more nodes than the standard 13-node application subnet. The larger committee size provides a higher security threshold: useful for DeFi applications that require stronger guarantees. Cycle costs scale linearly with node count. | Property | Value | |----------|-------| @@ -119,7 +119,7 @@ Cycle costs scale linearly with node count. The baseline is a 13-node applicatio cost_on_n_node_subnet = base_cost × n / 13 ``` -For a fiduciary subnet, the multiplier depends on the current node count — verify on the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets). For example, at 34 nodes: `34 / 13 ≈ 2.615`. +For a fiduciary subnet, the multiplier depends on the current node count: verify on the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets). For example, at 34 nodes: `34 / 13 ≈ 2.615`. **Example: canister creation cost** @@ -140,11 +140,11 @@ Use the [ICP Dashboard](https://dashboard.internetcomputer.org/subnets) to: - See which canisters are running on a subnet - Check subnet utilization (relevant for storage-heavy deployments) -To find which subnet an existing canister is on, search for the canister ID on the dashboard — the canister detail page shows its subnet. +To find which subnet an existing canister is on, search for the canister ID on the dashboard. The canister detail page shows its subnet. ## Next steps -- [Subnet selection](../guides/canister-management/subnet-selection.md) — How to choose a subnet for your deployment -- [Cycles costs](cycles-costs.md) — Full cost tables and per-operation pricing +- [Subnet selection](../guides/canister-management/subnet-selection.md): How to choose a subnet for your deployment +- [Cycles costs](cycles-costs.md): Full cost tables and per-operation pricing diff --git a/docs/reference/system-canisters.md b/docs/reference/system-canisters.md index 438efb6a..be1a1ba9 100644 --- a/docs/reference/system-canisters.md +++ b/docs/reference/system-canisters.md @@ -5,7 +5,7 @@ sidebar: order: 2 --- -System canisters are canisters that provide necessary functions to the ICP network. They are controlled by the NNS (Network Nervous System) and upgraded via NNS proposals. Each system canister runs on a system subnet, which has special parameters — including no cycles costs — to guarantee uninterrupted operation. System canisters have static canister IDs that any project can call. +System canisters are canisters that provide necessary functions to the ICP network. They are controlled by the NNS (Network Nervous System) and upgraded via NNS proposals. Each system canister runs on a system subnet, which has special parameters (including no cycles costs) to guarantee uninterrupted operation. System canisters have static canister IDs that any project can call. This page lists every system canister with its canister ID, hosting subnet, purpose, and interface reference. @@ -52,7 +52,7 @@ The registry is the source of truth for network topology and is consulted by eve **Canister ID:** [`rrkah-fqaaa-aaaaa-aaaaq-cai`](https://dashboard.internetcomputer.org/canister/rrkah-fqaaa-aaaaa-aaaaq-cai) -The governance canister implements the NNS voting mechanism. ICP holders stake ICP in **neurons** to gain voting power, vote on **proposals**, and earn voting rewards. Proposals that pass are executed automatically — for example, a "Upgrade Subnet" proposal causes the governance canister to call the root canister, which upgrades the affected subnet canisters. +The governance canister implements the NNS voting mechanism. ICP holders stake ICP in **neurons** to gain voting power, vote on **proposals**, and earn voting rewards. Proposals that pass are executed automatically: for example, a "Upgrade Subnet" proposal causes the governance canister to call the root canister, which upgrades the affected subnet canisters. For a conceptual overview of how NNS governance works, see [Governance](../concepts/governance.md). @@ -72,7 +72,7 @@ The lifeline canister is responsible for upgrading the NNS root canister itself. **Canister ID:** [`qoctq-giaaa-aaaaa-aaaea-cai`](https://dashboard.internetcomputer.org/canister/qoctq-giaaa-aaaaa-aaaea-cai) -The NNS UI canister hosts the NNS dapp frontend at [nns.ic0.app](https://nns.ic0.app). It provides a browser-based interface for staking neurons, voting on proposals, managing ICP tokens, and participating in SNS launches. +The NNS UI canister hosts the NNS app frontend at [nns.ic0.app](https://nns.ic0.app). It provides a browser-based interface for staking neurons, voting on proposals, managing ICP tokens, and participating in SNS launches. ## Cycles minting canister (CMC) @@ -86,14 +86,14 @@ The cycles minting canister converts ICP tokens into cycles by burning ICP and m Minting cycles requires two steps: sending ICP to the CMC and then notifying the CMC of the transfer. -**Step 1 — Send ICP to the CMC with a subaccount that encodes the recipient principal.** +**Step 1: Send ICP to the CMC with a subaccount that encodes the recipient principal.** The subaccount is constructed from the recipient principal as a 32-byte array: - Byte 0: length of the principal blob (as a single byte) - Bytes 1–N: the principal bytes - Remaining bytes: `0x00` -**Step 2 — Call `notify_mint_cycles` on the CMC with the block index returned by the transfer.** +**Step 2: Call `notify_mint_cycles` on the CMC with the block index returned by the transfer.** ``` notify_mint_cycles: (record { block_index: nat64 }) -> (Result) @@ -165,9 +165,9 @@ Key method: **Subnet:** uzr34 (`uzr34-…-oqe`) -Internet Identity (II) is ICP's built-in authentication system. It allows users to authenticate to dapps using device credentials (passkeys, security keys, biometrics) without exposing a persistent identity across applications. Each dapp receives a distinct principal for the same user, preventing cross-site tracking. +Internet Identity (II) is ICP's built-in authentication system. It allows users to authenticate to apps using device credentials (passkeys, security keys, biometrics) without exposing a persistent identity across applications. Each app receives a distinct principal for the same user, preventing cross-site tracking. -Internet Identity is the most commonly integrated system canister in dapp development. It is available in local development environments by setting `ii: true` in your network configuration (see [Using system canisters in local development](#using-system-canisters-in-local-development)). +Internet Identity is the most commonly integrated system canister in app development. It is available in local development environments by setting `ii: true` in your network configuration (see [Using system canisters in local development](#using-system-canisters-in-local-development)). For the full specification, see [Internet Identity Specification](internet-identity-spec.md). @@ -181,7 +181,7 @@ For integration guides, see the guides under [Authentication](../guides/authenti The SNS Wasm canister stores the canonical Wasm modules for SNS (Service Nervous System) canisters. When an SNS is launched, SNS-W deploys and initializes the SNS governance, ledger, swap, and root canisters. When an SNS upgrade proposal passes, SNS-W supplies the new Wasm module. -Developers launching an SNS interact with SNS-W indirectly — the SNS launch tooling calls it on their behalf. For SNS launch guides, see [Governance guides](../guides/governance/). +Developers launching an SNS interact with SNS-W indirectly. The SNS launch tooling calls it on their behalf. For SNS launch guides, see [Governance guides](../guides/governance/). ## Cycles ledger @@ -267,9 +267,9 @@ The IC Dashboard API provides REST endpoints for querying canister metadata, tra ## Next steps -- [Management Canister](management-canister.md) — the `aaaaa-aa` pseudo-canister for canister lifecycle and platform features -- [Protocol Canisters](protocol-canisters.md) — Bitcoin canister, ckBTC minter, EVM RPC, and exchange rate canister -- [Governance](../concepts/governance.md) — how the NNS and SNS governance models work -- [Authentication guides](../guides/authentication/) — integrating Internet Identity into your dapp +- [Management Canister](management-canister.md): the `aaaaa-aa` pseudo-canister for canister lifecycle and platform features +- [Protocol Canisters](protocol-canisters.md): Bitcoin canister, ckBTC minter, EVM RPC, and exchange rate canister +- [Governance](../concepts/governance.md): how the NNS and SNS governance models work +- [Authentication guides](../guides/authentication/): integrating Internet Identity into your app diff --git a/docs/reference/token-standards.md b/docs/reference/token-standards.md index f2586a42..7cdb538c 100644 --- a/docs/reference/token-standards.md +++ b/docs/reference/token-standards.md @@ -26,7 +26,7 @@ ICRC stands for Internet Computer Request for Comments. Standards are proposed b ## ICRC-1: Fungible tokens -ICRC-1 is the base standard for fungible tokens on ICP. It defines transfer, balance, and metadata interfaces. The standard intentionally excludes certain features — transaction notifications, block structure, and pre-signed transactions — which are provided by extension standards (ICRC-2, ICRC-3). +ICRC-1 is the base standard for fungible tokens on ICP. It defines transfer, balance, and metadata interfaces. The standard intentionally excludes certain features (transaction notifications, block structure, and pre-signed transactions) which are provided by extension standards (ICRC-2, ICRC-3). A ledger can report which extensions it supports through the `icrc1_supported_standards` endpoint. @@ -34,8 +34,8 @@ A ledger can report which extensions it supports through the `icrc1_supported_st An ICRC-1 account consists of two parts: -- **`owner`** — a `Principal` identifying the account holder -- **`subaccount`** — an optional 32-byte `Blob` that defaults to all zeros when omitted +- **`owner`**: a `Principal` identifying the account holder +- **`subaccount`**: an optional 32-byte `Blob` that defaults to all zeros when omitted This means a single principal can control up to 2^256 distinct accounts by varying the subaccount. @@ -74,7 +74,7 @@ type TransferArg = record { }; ``` -Setting `created_at_time` enables deduplication — the ledger rejects duplicate transfers submitted within a 24-hour window. Without it, identical transfers both execute. +Setting `created_at_time` enables deduplication. The ledger rejects duplicate transfers submitted within a 24-hour window. Without it, identical transfers both execute. ### Transfer errors @@ -136,7 +136,7 @@ type ApproveArg = record { }; ``` -The `expected_allowance` field provides protection against race conditions — the call fails if the current allowance does not match the expected value. The `expires_at` field sets a deadline (in nanoseconds since the Unix epoch) after which the approval is no longer valid. +The `expected_allowance` field provides protection against race conditions. The call fails if the current allowance does not match the expected value. The `expires_at` field sets a deadline (in nanoseconds since the Unix epoch) after which the approval is no longer valid. ### Approve errors @@ -200,9 +200,9 @@ type Allowance = record { ### Common use cases -- **DEX integrations** — a DEX canister is approved to pull tokens from a user's account during a swap. -- **Subscription payments** — a service canister is approved for recurring token withdrawals. -- **Escrow** — an intermediary canister holds approval to release tokens when conditions are met. +- **DEX integrations**: a DEX canister is approved to pull tokens from a user's account during a swap. +- **Subscription payments**: a service canister is approved for recurring token withdrawals. +- **Escrow**: an intermediary canister holds approval to release tokens when conditions are met. ICP, ckBTC, and ckETH all implement ICRC-2. @@ -229,10 +229,10 @@ Ledgers store recent blocks directly and move older blocks to **archive canister ICRC-3 blocks use a generic `Value` representation that preserves all data for verification. Each block contains: -- **`phash`** — hash of the previous block (absent for the genesis block) -- **`btype`** — block type string (e.g., `"1xfer"` for ICRC-1 transfers, `"2approve"` for ICRC-2 approvals) -- **`ts`** — timestamp in nanoseconds -- **Transaction-specific fields** — vary by block type (e.g., `from`, `to`, `amt` for transfers) +- **`phash`**: hash of the previous block (absent for the genesis block) +- **`btype`**: block type string (e.g., `"1xfer"` for ICRC-1 transfers, `"2approve"` for ICRC-2 approvals) +- **`ts`**: timestamp in nanoseconds +- **Transaction-specific fields**: vary by block type (e.g., `from`, `to`, `amt` for transfers) ### Adopted block types @@ -263,7 +263,7 @@ The following block types are currently in the ICRC proposal process and not yet ## ICRC-7: Non-fungible tokens -ICRC-7 defines the base standard for non-fungible tokens (NFTs) on ICP. It can be used to create and manage NFT collections. Like ICRC-1 for fungible tokens, ICRC-7 is intentionally minimal and excludes transaction notifications, block structure, and pre-signed transactions — these can be added through extensions. +ICRC-7 defines the base standard for non-fungible tokens (NFTs) on ICP. It can be used to create and manage NFT collections. Like ICRC-1 for fungible tokens, ICRC-7 is intentionally minimal and excludes transaction notifications, block structure, and pre-signed transactions: these can be added through extensions. ICRC-7 uses the same account model as ICRC-1 (principal + optional 32-byte subaccount). @@ -326,15 +326,15 @@ A ledger that implements ICRC-37 must also implement all ICRC-7 methods. Support ## Wallet signer standards -The ICRC signer standards define how wallets interact with dApps on ICP. They use a popup-based model where every action requires explicit user approval, communicated via JSON-RPC 2.0 over `window.postMessage`. +The ICRC signer standards define how wallets interact with apps on ICP. They use a popup-based model where every action requires explicit user approval, communicated via JSON-RPC 2.0 over `window.postMessage`. | Standard | Purpose | |----------|---------| -| **ICRC-21** | Canister call consent messages — enables canisters to provide human-readable descriptions of what a call will do, displayed to the user before signing | -| **ICRC-25** | Signer interaction standard — defines the permission lifecycle (`granted`, `denied`, `ask_on_use`) for signer methods | -| **ICRC-27** | Account discovery — allows dApps to request the list of accounts available in the wallet | -| **ICRC-29** | Window PostMessage transport — defines the communication channel between dApp and signer using `window.postMessage` | -| **ICRC-49** | Call canister — allows dApps to request the signer to execute a canister call on behalf of the user | +| **ICRC-21** | Canister call consent messages: enables canisters to provide human-readable descriptions of what a call will do, displayed to the user before signing | +| **ICRC-25** | Signer interaction standard: defines the permission lifecycle (`granted`, `denied`, `ask_on_use`) for signer methods | +| **ICRC-27** | Account discovery: allows apps to request the list of accounts available in the wallet | +| **ICRC-29** | Window PostMessage transport: defines the communication channel between app and signer using `window.postMessage` | +| **ICRC-49** | Call canister: allows apps to request the signer to execute a canister call on behalf of the user | These standards are distinct from delegation-based authentication (such as Internet Identity). The signer model requires per-action user approval and does not create sessions or delegated identities. @@ -342,10 +342,10 @@ For implementation details and code examples, see the [wallet integration guide] ## Next steps -- [Token ledgers guide](../guides/defi/token-ledgers.md) — deploy and interact with ICRC-1/ICRC-2 ledgers -- [Chain-key tokens guide](../guides/defi/chain-key-tokens.md) — work with ckBTC, ckETH, and other chain-key tokens -- [Wallet integration guide](../guides/defi/wallet-integration.md) — integrate wallet signer standards into your dApp -- [ICRC-1 standard specification](https://github.com/dfinity/ICRC-1/tree/main/standards/ICRC-1) — full specification on GitHub -- [ICRC-7 standard specification](https://github.com/dfinity/ICRC/blob/main/ICRCs/ICRC-7/ICRC-7.md) — full NFT specification on GitHub +- [Token ledgers guide](../guides/defi/token-ledgers.md): deploy and interact with ICRC-1/ICRC-2 ledgers +- [Chain-key tokens guide](../guides/defi/chain-key-tokens.md): work with ckBTC, ckETH, and other chain-key tokens +- [Wallet integration guide](../guides/defi/wallet-integration.md): integrate wallet signer standards into your app +- [ICRC-1 standard specification](https://github.com/dfinity/ICRC-1/tree/main/standards/ICRC-1): full specification on GitHub +- [ICRC-7 standard specification](https://github.com/dfinity/ICRC/blob/main/ICRCs/ICRC-7/ICRC-7.md): full NFT specification on GitHub From 02e2d9dbb30cc4a8d43bd3a9065c9c161b68452c Mon Sep 17 00:00:00 2001 From: Marco Walz Date: Thu, 23 Apr 2026 16:25:40 +0200 Subject: [PATCH 2/4] fix(brand): rephrase mo:base references to pass validator Pre-existing violations surfaced because both files were touched in the em-dash pass. Reword explanation prose to avoid the banned string while preserving the meaning. --- docs/guides/backends/randomness.md | 2 +- docs/guides/canister-management/optimization.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/guides/backends/randomness.md b/docs/guides/backends/randomness.md index 1a91b6a6..fd89f6fd 100644 --- a/docs/guides/backends/randomness.md +++ b/docs/guides/backends/randomness.md @@ -217,7 +217,7 @@ The `std_rng` feature compiles `StdRng` without requiring OS entropy, which is c The `random_maze` example in the ICP examples repository generates a maze using randomness to decide which walls to remove during a depth-first search. It demonstrates how to consume entropy incrementally across many cells. -Note: this example predates `mo:core` and uses `mo:base/Random.Finite`. The patterns in this guide use `mo:core/Random` instead. +Note: this example predates `mo:core` and uses the older `Random.Finite` API. The patterns in this guide use `mo:core/Random` instead. - [random_maze (Motoko)](https://github.com/dfinity/examples/tree/master/motoko/random_maze) diff --git a/docs/guides/canister-management/optimization.md b/docs/guides/canister-management/optimization.md index 5f889f26..e002a87d 100644 --- a/docs/guides/canister-management/optimization.md +++ b/docs/guides/canister-management/optimization.md @@ -217,7 +217,7 @@ persistent actor { The `lowmemory` hook is an `async*` function, so it can perform async operations. -A complete Rust example is available at `rust/low_wasm_memory` in `dfinity/examples`. It demonstrates the full lifecycle: setting memory limits via canister settings, watching memory grow through the heartbeat, and observing the hook fire. A `motoko/low_wasm_memory` example also exists, but note that it currently uses the legacy `mo:base` library: use the inline snippet above as the reference for `mo:core`-compatible code. +A complete Rust example is available at `rust/low_wasm_memory` in `dfinity/examples`. It demonstrates the full lifecycle: setting memory limits via canister settings, watching memory grow through the heartbeat, and observing the hook fire. A `motoko/low_wasm_memory` example also exists, but note that it currently uses the legacy Motoko base library: use the inline snippet above as the reference for `mo:core`-compatible code. ## Combining techniques From 71939cf391308817ad4de28a144fdd513e786a73 Mon Sep 17 00:00:00 2001 From: Marco Walz Date: Thu, 23 Apr 2026 17:20:51 +0200 Subject: [PATCH 3/4] fix(jargon): address PR review feedback - canisters.md: rephrase 'Unlike smart contracts on most blockchains' to 'Unlike programs on most other blockchains' - encryption.md: revert encrypted-notes-dapp-vetkd (real example repo name, must not change) - testing.md: revert dapp canisters / dapps field name (actual Candid field in SNS response) --- docs/concepts/canisters.md | 2 +- docs/guides/governance/testing.md | 2 +- docs/guides/security/encryption.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/concepts/canisters.md b/docs/concepts/canisters.md index 768e4cc5..deb30018 100644 --- a/docs/concepts/canisters.md +++ b/docs/concepts/canisters.md @@ -7,7 +7,7 @@ sidebar: Canisters are the compute units of the Internet Computer. Each canister bundles compiled WebAssembly code with its own persistent state into a single unit that the network executes, replicates, and secures. You deploy code to a canister, send it messages, and the network guarantees that every honest node in the subnet reaches the same result. -Unlike smart contracts on most blockchains, canisters can serve web pages over HTTP, store gigabytes of data, make calls to external APIs, sign transactions on other chains, and run scheduled tasks autonomously, all without external infrastructure. +Unlike programs on most other blockchains, canisters can serve web pages over HTTP, store gigabytes of data, make calls to external APIs, sign transactions on other chains, and run scheduled tasks autonomously, all without external infrastructure. ## How canisters differ from traditional smart contracts diff --git a/docs/guides/governance/testing.md b/docs/guides/governance/testing.md index 7af00c73..f019dbcd 100644 --- a/docs/guides/governance/testing.md +++ b/docs/guides/governance/testing.md @@ -267,7 +267,7 @@ Verify registration succeeded: ```bash icp canister call sns_root list_sns_canisters '(record {})' -e ic -# Expected: your app canisters listed under "apps" +# Expected: your dapp canisters listed under "dapps" ``` ### Step 5: Test canister upgrades via SNS proposals diff --git a/docs/guides/security/encryption.md b/docs/guides/security/encryption.md index c18bba6a..f52916b9 100644 --- a/docs/guides/security/encryption.md +++ b/docs/guides/security/encryption.md @@ -13,7 +13,7 @@ How to encrypt data on ICP using VetKeys. Cover the end-to-end flow: setting up - Portal: building-apps/network-features/vetkeys/ (9 files: intro, API, BLS-signatures, DKMS, encrypted-storage, IBE, timelock, VRF, demos) - icskills: vetkd -- Examples: vetkd (both), vetkeys (both), encrypted-notes-app-vetkd (both), filevault (Motoko) +- Examples: vetkd (both), vetkeys (both), encrypted-notes-dapp-vetkd (both), filevault (Motoko) - Learn Hub: check for VetKeys articles From f0ed6aa960c0ed47573e3d7e44fa5d0a179bcf4e Mon Sep 17 00:00:00 2001 From: Marco Walz Date: Thu, 23 Apr 2026 17:22:09 +0200 Subject: [PATCH 4/4] fix(jargon): keep 'app canisters' but restore Candid field name 'dapps' --- docs/guides/governance/testing.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/guides/governance/testing.md b/docs/guides/governance/testing.md index f019dbcd..ca6cc06e 100644 --- a/docs/guides/governance/testing.md +++ b/docs/guides/governance/testing.md @@ -267,7 +267,7 @@ Verify registration succeeded: ```bash icp canister call sns_root list_sns_canisters '(record {})' -e ic -# Expected: your dapp canisters listed under "dapps" +# Expected: your app canisters listed under "dapps" ``` ### Step 5: Test canister upgrades via SNS proposals