# How to use ZK compression in ZK Solana Programs

Every transaction on Solana is public. This lack of privacy prevents mainstream adoption for many use cases.

Zero knowledge proofs enable private use cases in Solana programs, such as private transactions, private voting, and private identity verification.

The key building blocks for ZK in Solana programs are:
1. Zero Knowledge Proofs (ZKPs) to prove application logic privately.
2. Poseidon Merkle Trees to store data in a format that can be efficiently proven in a ZK circuit.
3. Nullifiers to prevent double spending.

## Zero Knowledge Proofs

Zero knowledge proofs enable proving ownership of data and correct execution of application logic without revealing the data itself.
First, select a proof system.
Different proof systems trade off proof size, proving time, and setup requirements. Groth16 produces small proofs (256 bytes uncompressed, 128 bytes compressed) and verifies cheaply onchain, but requires a trusted setup per circuit (Groth, "On the Size of Pairing-Based Non-Interactive Arguments", 2016).
Other established proof systems avoid trusted setups or prove faster, but produce larger proofs (kilobytes instead of bytes).
For Solana, Groth16's small proof size and fast verification (~200k compute units) make it the practical choice.
A ZK proof is generated from a circuit - a program that defines what you're proving.
Circuits are written in languages like circom or noir. Every circuit has two types of inputs:
1. Private inputs - the secret data only the prover knows.
2. Public inputs - values visible to the verifier that anchor the proof to onchain state.

For example, in a private KYC program, the private input is your credential and the public input is the Merkle root of the identity registry.
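
To make the split concrete, here is a minimal sketch (in Rust, with illustrative struct and field names that are not taken from any specific library) of how a client might group the two input types before handing them to a prover:

```rust
// Illustrative only: how private and public inputs might be grouped for a
// membership-style circuit like the KYC example above. The struct and field
// names are assumptions for this sketch, not an existing API.

/// Secret data only the prover knows; it never leaves the client.
struct PrivateInputs {
    credential: [u8; 32],       // e.g. the KYC credential (private key)
    merkle_path: Vec<[u8; 32]>, // sibling hashes from the leaf to the root
    leaf_index: u64,
}

/// Values the verifier sees; they anchor the proof to onchain state.
struct PublicInputs {
    merkle_root: [u8; 32], // root of the identity registry tree
    nullifier: [u8; 32],   // prevents reuse (see the Nullifier section)
}

fn main() {
    // Only the public inputs travel onchain next to the proof.
    let public = PublicInputs {
        merkle_root: [0u8; 32],
        nullifier: [0u8; 32],
    };
    let private = PrivateInputs {
        credential: [0u8; 32],
        merkle_path: vec![[0u8; 32]; 26], // 26-level tree, as in zk-id
        leaf_index: 0,
    };
    println!(
        "proving leaf {} against root {:02x?} with a {}-step path",
        private.leaf_index,
        public.merkle_root,
        private.merkle_path.len()
    );
    let _ = (private.credential, public.nullifier); // silence unused-field lints
}
```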

## Poseidon Merkle Tree

Merkle trees store ZK application state.
Specifically, a Poseidon Merkle tree is a binary tree where each node is the hash of its children.
Poseidon is designed for ZK circuits: it operates natively on prime-field elements, so it needs far fewer arithmetic constraints than bit-oriented hashes like SHA256, making proofs faster and cheaper to generate (Grassi et al., "Poseidon: A New Hash Function for Zero-Knowledge Proof Systems", 2021).
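
As a minimal sketch of the tree structure (the `poseidon2` stub below merely stands in for a two-input Poseidon hash such as the one in light-hasher or circomlib; it is not a real implementation):

```rust
// Sketch of a binary Merkle tree where each parent is the hash of its two
// children. `poseidon2` is a placeholder for a real two-input Poseidon hash
// (light-hasher onchain, circomlibjs offchain); the XOR body is NOT Poseidon.

type Hash = [u8; 32];

fn poseidon2(left: &Hash, right: &Hash) -> Hash {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = left[i] ^ right[i]; // stand-in only
    }
    out
}

/// Fold a level of leaves up to the root, pairing neighbors at each level.
fn merkle_root(mut level: Vec<Hash>) -> Hash {
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| match pair {
                [l, r] => poseidon2(l, r),
                [l] => *l, // odd node; real trees pad with a zero leaf
                _ => unreachable!(),
            })
            .collect();
    }
    level[0]
}

fn main() {
    let leaves = vec![[1u8; 32], [2u8; 32], [3u8; 32], [4u8; 32]];
    println!("root = {:02x?}", merkle_root(leaves));
}
```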

**Merkle trees on Solana:**

A **custom sparse Merkle tree** gives you full control. You design the leaf structure and proof format to match your circuit exactly. Your program creates Solana accounts to store the sparse Merkle tree and its roots.
The tradeoff: you build and run your own indexer to track the tree and serve Merkle proofs.

**ZK compression state Merkle trees** let Solana RPCs handle indexing for you and serve Merkle proofs. The Light Protocol programs create and maintain Poseidon state Merkle trees in Solana accounts. Once a state Merkle tree fills up, the protocol creates a new one.
The tradeoff: your circuit must prove inclusion of your data inside the compressed account structure. Compressed accounts are stored as hashes in Poseidon Merkle trees with Solana RPC support. This adds constraints to your circuit, but RPCs index the Merkle tree for you.

Note: for offchain privacy, a user client should fetch a complete (sub)tree from an indexer rather than an individual Merkle proof. If onchain privacy is sufficient, fetching Merkle proofs from an indexer is more efficient.

## Nullifier

Nullifiers prevent double spending.
In detail, a nullifier is a hash derived from your secret and the leaf the transaction is using.
When you use private state (stored in a Merkle tree leaf), you publish the nullifier. The program stores it in a set.
If anyone tries to spend the same leaf again, the nullifier would match one already stored, so the transaction fails.
The nullifier reveals nothing about which leaf was spent.
Different state produces different nullifiers, so observers can't link a nullifier back to its source leaf.

**Nullifiers on Solana:**

**PDAs** are a straightforward choice. Derive an address from the nullifier hash and create an account there. If the account exists, the nullifier was used. The cost is ~899k lamports per nullifier for rent exemption (assuming a minimal, near-empty account).
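
For illustration, a sketch of the PDA check using the standard `find_program_address` API; the seed layout (`"nullifier"` prefix) and the program ID below are assumptions for this example, not a fixed convention:

```rust
// Sketch of a PDA-based nullifier check. The seed scheme and program ID are
// illustrative; only Pubkey::find_program_address is real Solana API.
use std::str::FromStr;

use solana_program::pubkey::Pubkey;

fn nullifier_pda(nullifier_hash: &[u8; 32], program_id: &Pubkey) -> (Pubkey, u8) {
    // The program creates an account at this address when a nullifier is
    // spent. If the account already exists, the transaction is rejected.
    Pubkey::find_program_address(&[b"nullifier", nullifier_hash.as_ref()], program_id)
}

fn main() {
    let program_id = Pubkey::from_str("11111111111111111111111111111111").unwrap();
    let nullifier_hash = [7u8; 32]; // placeholder for Poseidon(secret, leaf)
    let (address, bump) = nullifier_pda(&nullifier_hash, &program_id);
    println!("nullifier PDA: {address} (bump {bump})");
}
```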

**Compressed addresses** work the same way but cost on the order of 10k lamports (the exact amount depends on account size and configuration). The tradeoff: you need an additional ZK proof to create the account and a CPI to the Light system program. If you're already generating a ZK proof for your application logic, the marginal cost of the extra proof is low. If not, PDAs are simpler.

## ZK ID example

[zk-id](https://github.com/Lightprotocol/program-examples/tree/main/zk-id) is a proof-of-concept credential system (an educational example, not a production-ready identity protocol) built with ZK compression and the following tools:

| Component | Implementation |
|-----------|----------------|
| Merkle leaves | [compressed accounts](https://github.com/Lightprotocol/program-examples/blob/main/zk-id/src/lib.rs#L141) (light-sdk) |
| Nullifiers | [compressed addresses](https://github.com/Lightprotocol/program-examples/blob/main/zk-id/src/lib.rs#L192) (light-sdk) |
| Circuit | [circom](https://github.com/Lightprotocol/program-examples/tree/main/zk-id/circuits) |
| Proof generation | [circom-prover](https://github.com/Lightprotocol/program-examples/blob/main/zk-id/tests/test.rs#L575) (Rust) |
| On-chain verification | [groth16-solana](https://github.com/Lightprotocol/program-examples/blob/main/zk-id/src/lib.rs#L269) |

### Creating a Credential

An issuer registers with the system by calling `create_issuer`. This creates a compressed account storing the issuer's public key and a credential counter.

The issuer then calls `add_credential` for each user. The user generates a credential keypair: a private key (deterministically derived as the SHA-256 hash of a wallet signature over "CREDENTIAL", truncated to 248 bits so it fits the BN254 field) and a public key (the Poseidon hash of the private key). The issuer creates a compressed account containing:
- The issuer's public key
- The user's credential public key

The account is hashed with Poseidon and stored as a leaf in a 26-level Merkle tree (supporting ~67 million leaves). The tree root lives onchain. An indexer (a server that indexes Solana transactions, in this case the leaves) maintains a full copy of the tree.

The credential private key never touches the blockchain. Only the user knows it. The public key is a one-way hash, so even though the credential account is onchain, no one can reverse it to obtain the private key.
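
A sketch of the keypair derivation described above (the `wallet_sign` and `poseidon1` functions are placeholders; in zk-id the user's wallet provides the signature and circomlibjs/light-hasher provide Poseidon, and only the sha2 crate is a real dependency here):

```rust
// Sketch of credential keypair derivation. `wallet_sign` and `poseidon1` are
// placeholders, not real APIs; their bodies only make the example runnable.
use sha2::{Digest, Sha256};

fn wallet_sign(_message: &[u8]) -> [u8; 64] {
    [42u8; 64] // placeholder for an Ed25519 wallet signature
}

fn poseidon1(input: &[u8; 32]) -> [u8; 32] {
    let mut out = *input;
    out.reverse(); // stand-in only, NOT Poseidon
    out
}

fn derive_credential_keypair() -> ([u8; 32], [u8; 32]) {
    // Private key: hash of a deterministic wallet signature, truncated to
    // 248 bits so the value fits in the BN254 scalar field.
    let signature = wallet_sign(b"CREDENTIAL");
    let mut private_key: [u8; 32] = Sha256::digest(signature).into();
    private_key[0] = 0; // clear one byte -> 248 significant bits

    // Public key: Poseidon hash of the private key; this is what goes onchain.
    let public_key = poseidon1(&private_key);
    (private_key, public_key)
}

fn main() {
    let (private_key, public_key) = derive_credential_keypair();
    println!("private key (stays offchain): {:02x?}", private_key);
    println!("credential public key:        {:02x?}", public_key);
}
```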

**Create Credential:**
```
OFFCHAIN                                      ONCHAIN

+------------------+                          +------------------------+
| Issuer           |                          | Solana Program         |
|                  |      create_issuer       |                        |
| - authority      | -----------------------> | 1. validate issuer     |
| - signs creds    |                          | 2. create account      |
+------------------+                          +------------------------+
                                                          |
                                                          v
+------------------+                          +------------------------+
| User             |                          | Solana Program         |
|                  |      add_credential      |                        |
| - generates      | -----------------------> | 1. verify issuer sig   |
|   keypair        |      (cred_pubkey)       | 2. hash credential     |
| - stores         |                          | 3. insert leaf         |
|   private key    |                          +------------------------+
+------------------+                                      |
                                                          v
                                              +------------------------+
+------------------+                          | State Tree             |
| Indexer          |                          | (26-level Poseidon)    |
|                  |      reads state         |                        |
| - watches chain  | <----------------------- | [credential leaves]    |
| - builds tree    |                          |                        |
+------------------+                          +------------------------+
```

The issuer registers once, then creates credentials for users. Each credential is a compressed account containing the issuer's pubkey and the user's credential pubkey. The account is Poseidon-hashed and stored as a leaf in the state tree. The user's private key never touches the chain.

### Verifying a Credential

Verification proves two things: the user knows a credential private key corresponding to a leaf in the tree, and they haven't used that credential in this context before.

The user fetches their credential's Merkle path from the indexer. Their browser computes a nullifier: `Poseidon(verification_id, credential_private_key)`. The verification_id is context-specific: an event ID, a vote proposal, or a claim period.

The ZK circuit takes the private key as private input. Public inputs include the Merkle root, the verification_id, and the nullifier. The circuit verifies:
1. The credential public key derives correctly from the private key
2. The credential exists in the tree (Merkle proof)
3. The nullifier derives correctly from the private key and verification_id

The user submits the proof and nullifier to `zk_verify_credential`. The program verifies the Groth16 proof against the onchain root. It creates an event account at an address derived from the nullifier and verification_id.

The address derivation is the double-spend check. Compressed addresses can only be created once (the Light system program rejects duplicates). If the nullifier was already used for this verification_id, the address exists, and the transaction fails.

Same credential, different verification_id means different nullifier, different address. A credential can verify across multiple contexts. Within any single context, exactly once.
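
To make the derivations concrete, a sketch (again with a stubbed Poseidon; the real compressed-address derivation in light-sdk differs and also involves the address tree):

```rust
// Sketch of the verification-side values. `poseidon2` is a stand-in for a
// two-input Poseidon hash, and `event_address_seed` only illustrates that the
// address is unique per (nullifier, verification_id).

type Hash = [u8; 32];

fn poseidon2(a: &Hash, b: &Hash) -> Hash {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = a[i] ^ b[i]; // stand-in only, NOT Poseidon
    }
    out
}

/// Nullifier: binds one credential to one verification context.
fn nullifier(verification_id: &Hash, credential_private_key: &Hash) -> Hash {
    poseidon2(verification_id, credential_private_key)
}

/// Seed for the event account address: reusing the same credential in the
/// same context reproduces the same seed, so account creation fails.
fn event_address_seed(nullifier: &Hash, verification_id: &Hash) -> Hash {
    poseidon2(nullifier, verification_id)
}

fn main() {
    let verification_id = [1u8; 32]; // e.g. hash of an event or proposal ID
    let credential_private_key = [2u8; 32]; // known only to the user

    let n = nullifier(&verification_id, &credential_private_key);
    let seed = event_address_seed(&n, &verification_id);
    println!("nullifier:          {:02x?}", n);
    println!("event address seed: {:02x?}", seed);
}
```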

**Verify a Credential:**
```
OFFCHAIN                                      ONCHAIN

+------------------+                          +------------------------+
| User             |                          | Solana Program         |
|                  |      ZK Proof            |                        |
| - private key    | -----------------------> | 1. verify proof        |
| - generates      |                          | 2. check root matches  |
|   ZK proof       |                          | 3. derive address      |
+------------------+                          | 4. create event acct   |
        ^                                     +------------------------+
        |                                        |                 |
        | Merkle proof                           v                 v
        | (inclusion path)                 +-----------+     +-----------+
        |                                  | Event     |     | State     |
+------------------+                       | Account   |     | Tree      |
| Indexer          |                       | (address= |     | (creds)   |
|                  |                       | nullifier)|     +-----------+
| - reads tree     |                       +-----------+          ^
| - serves         |                                              |
|   Merkle proofs  | <--------------------------------------------+
+------------------+          reads state

```

The user fetches a Merkle proof from the indexer, computes a nullifier from their private key and the verification context, and generates a ZK proof. The program verifies the proof and creates an event account at an address derived from the nullifier. If the address already exists, the transaction fails. The credential itself is never revealed.

The indexer watches the blockchain and maintains a local copy of the Merkle tree. Users query it for Merkle proofs. The indexer sees which addresses exist but cannot link them to specific credentials.

## Resources

**Circuits**
- **circom** - Domain-specific language for writing ZK circuits
- **circomlib** - Standard library (Poseidon hash, comparators, binary operations)
- **noir** - Rust-like circuit language
- **arkworks** - Rust cryptography libraries for circuits

**Proof Generation & Verification**
- **snarkjs** - Generates proofs from circom circuits in JavaScript
- **circomlibjs** - Offchain implementations of circomlib functions
- **groth16-solana** - Verifies Groth16 proofs onchain (~200k compute units)

**ZK compression**
- **light-hasher** - Poseidon/SHA256 implementations matching circuit behavior
- **light-sdk** - Compressed accounts, state trees, address derivation

## Appendix

1. Compressed Account Hashing:
```
Compressed Account Hash:
+----------------------------------------------------------+
| Poseidon(                                                |
|   owner_hash,                                            |
|   leaf_index,                                            |
|   merkle_tree_pubkey,                                    |
|   address,                                               |
|   discriminator,                                         |
|   data_hash   <-- developer-defined, hash anything here  |
| )                                                        |
+----------------------------------------------------------+
```

The `data_hash` is entirely yours. Hash whatever structure your application needs. The outer fields are protocol overhead, but they don't limit what you store inside.
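
For example, a sketch of how an application might fill the `data_hash` slot (the `poseidon` stub and the `CredentialData` struct are illustrative; light-hasher provides the real Poseidon implementation):

```rust
// Sketch: producing a developer-defined data_hash from application fields.
// `poseidon` is a placeholder for a real multi-input Poseidon hash; the
// struct and field choice are illustrative only.

type Hash = [u8; 32];

struct CredentialData {
    issuer_pubkey: Hash,
    credential_pubkey: Hash,
}

fn poseidon(inputs: &[Hash]) -> Hash {
    let mut out = [0u8; 32];
    for input in inputs {
        for i in 0..32 {
            out[i] ^= input[i]; // stand-in only, NOT Poseidon
        }
    }
    out
}

impl CredentialData {
    /// The value that fills the data_hash slot of the compressed account hash.
    fn data_hash(&self) -> Hash {
        poseidon(&[self.issuer_pubkey, self.credential_pubkey])
    }
}

fn main() {
    let data = CredentialData {
        issuer_pubkey: [1u8; 32],
        credential_pubkey: [2u8; 32],
    };
    println!("data_hash = {:02x?}", data.data_hash());
}
```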