Get the Fossil Headers DB indexer running in 5 minutes. This guide assumes you've already completed the Installation.
Before starting, verify:
# Rust toolchain
cargo --version # Should show 1.70+
# Docker (if using Docker-based development)
docker --version
docker compose version
# PostgreSQL (if not using Docker)
psql --version # Should show 16+

cd fossil-headers-db
make dev-up

What this does:
- Creates Docker network
- Starts PostgreSQL 16 container
- Runs database migrations automatically
- Database ready on localhost:5432
Expected output:
Creating network if it doesn't exist...
[+] Running 2/2
✔ Network fossil-network Created
✔ Container postgres Started
Database available at localhost:5432
Waiting for database to be ready...
Running migrations...
Applied 6 migrations
Development environment ready!
Create .env file:
cat > .env << EOF
DB_CONNECTION_STRING=postgresql://postgres:postgres@localhost:5432/postgres
NODE_CONNECTION_STRING=https://eth-mainnet.g.alchemy.com/v2/YOUR_API_KEY
ROUTER_ENDPOINT=0.0.0.0:3000
RUST_LOG=info
INDEX_TRANSACTIONS=false
START_BLOCK_OFFSET=1024
IS_DEV=true
EOF

Replace YOUR_API_KEY with your Ethereum RPC provider API key.
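The indexer reads these variables at startup. Here is a minimal sketch of env-driven config loading (the `Config` struct and helper functions are illustrative, not the project's actual types; only the variable names come from the `.env` above):

```rust
use std::env;

/// Illustrative runtime configuration mirroring the .env keys above
/// (a sketch, not the project's actual config code).
#[derive(Debug)]
struct Config {
    db_connection_string: String,
    node_connection_string: String,
    router_endpoint: String,
    index_transactions: bool,
    start_block_offset: u64,
}

impl Config {
    /// Build the config from any key -> value lookup, failing fast on a missing key.
    fn from_lookup(get: impl Fn(&str) -> Option<String>) -> Result<Self, String> {
        let var = |k: &str| get(k).ok_or_else(|| format!("missing env var: {k}"));
        Ok(Config {
            db_connection_string: var("DB_CONNECTION_STRING")?,
            node_connection_string: var("NODE_CONNECTION_STRING")?,
            router_endpoint: var("ROUTER_ENDPOINT")?,
            index_transactions: var("INDEX_TRANSACTIONS")? == "true",
            start_block_offset: var("START_BLOCK_OFFSET")?
                .parse()
                .map_err(|e| format!("bad START_BLOCK_OFFSET: {e}"))?,
        })
    }

    /// Read from the process environment (populated from .env at startup).
    fn from_env() -> Result<Self, String> {
        Self::from_lookup(|k| env::var(k).ok())
    }
}
```

Failing fast on a missing key means a typo in `.env` surfaces immediately instead of as a confusing runtime error later.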
make run-indexer

Expected output:
Starting modern indexer service...
Connecting to DB
Run migrations
Starting Indexer
[router] Starting router service
[quick_index] Starting quick indexer service
[batch_index] Starting batch indexer service
[quick_index] Latest finalized block: 20000000
[batch_index] Backfilling from block 19999000 to 0
The indexer is now running! Keep this terminal open.
Open a new terminal and check the health endpoint:
curl http://localhost:3000/health

Expected response:
{
"status": "healthy",
"timestamp": "2025-10-14T12:00:00.000Z"
}

Check the database for indexed blocks:
psql postgresql://postgres:postgres@localhost:5432/postgres \
-c "SELECT COUNT(*) as total_blocks, MIN(number) as first_block, MAX(number) as latest_block FROM block_header;"Expected output:
total_blocks | first_block | latest_block
--------------+-------------+--------------
42 | 19999958 | 20000000
(1 row)
The indexer is working! Block counts will grow as indexing continues.
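In scripts or CI you may want to poll until the health endpoint responds or the first rows appear, rather than check once. A small retry helper (illustrative; not part of the fossil-headers-db codebase):

```rust
use std::{thread, time::Duration};

/// Retry a check until it succeeds or attempts run out -- handy for waiting
/// on the health endpoint or the first indexed block in scripts and tests.
/// (Illustrative helper; not part of the fossil-headers-db codebase.)
fn wait_until<F: FnMut() -> bool>(mut check: F, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if check() {
            return true;
        }
        thread::sleep(delay);
    }
    false
}
```

For example, `wait_until(|| health_ok(), 30, Duration::from_secs(1))`, where `health_ok` is whatever wrapper you use around the curl check above.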
Watch the indexer logs to see real-time progress:
Quick Indexer (real-time sync):
[quick_index] Indexed block 20000001
[quick_index] Indexed block 20000002
[quick_index] Waiting for new finalized blocks...
Batch Indexer (historical backfill):
[batch_index] Indexed batch: blocks 19999000-19998000 (1000 blocks in 12.3s)
[batch_index] Indexed batch: blocks 19998000-19997000 (1000 blocks in 11.8s)
[batch_index] Backfill progress: 2000/19999000 blocks (0.01%)
Performance: Expect 50-100 blocks/second during backfilling depending on your RPC provider and hardware.
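Those rates make total backfill time easy to estimate (illustrative arithmetic only; actual throughput depends on your RPC provider):

```rust
/// Estimate backfill duration in hours given a block count and throughput.
/// (Illustrative; actual throughput depends on your RPC provider.)
fn backfill_hours(blocks: u64, blocks_per_second: u64) -> f64 {
    blocks as f64 / blocks_per_second as f64 / 3600.0
}
```

For example, backfilling ~20,000,000 blocks runs roughly 55 hours at 100 blocks/second, and about twice that at 50 blocks/second.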
The Fossil Headers DB runs two concurrent services.

Quick Indexer:
- Purpose: Real-time synchronization
- Strategy: Polls for new finalized blocks every 10 seconds
- Target: Latest block → forward
- Speed: ~6-12 seconds per block (matches Ethereum finality)
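The polling strategy can be pictured as a single "tick" function: given where we are and where the chain is, decide what to index next (a simplification for illustration, not the project's actual code):

```rust
/// One tick of the quick indexer: given the last block we indexed and the
/// chain's latest finalized block, return the inclusive range still to index.
/// (Simplified sketch of the polling strategy, not the actual implementation.)
fn quick_index_tick(last_indexed: u64, latest_finalized: u64) -> Option<(u64, u64)> {
    if latest_finalized > last_indexed {
        Some((last_indexed + 1, latest_finalized))
    } else {
        None // nothing new; poll again in ~10 seconds
    }
}
```

The service repeats this every 10 seconds, indexing each block in the returned range before sleeping again.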
Batch Indexer:
- Purpose: Historical backfilling and gap filling
- Strategy: Processes blocks in batches of 1000
- Target: START_BLOCK_OFFSET → genesis (block 0)
- Speed: 50-100 blocks/second (RPC limited)
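The batching strategy above can be sketched as splitting the backward range into fixed-size chunks (illustrative helper; the real service also persists its progress to index_metadata):

```rust
/// Split a backward backfill range into fixed-size batches, walking from
/// `from` down toward `to` (genesis). Simplified sketch of the batching
/// strategy; not the project's actual code.
fn backfill_batches(from: u64, to: u64, batch_size: u64) -> Vec<(u64, u64)> {
    let mut batches = Vec::new();
    let mut high = from;
    loop {
        // Each batch covers `batch_size` blocks inclusive, clamped at the target.
        let low = high.saturating_sub(batch_size - 1).max(to);
        batches.push((high, low));
        if low == to {
            break;
        }
        high = low - 1;
    }
    batches
}
```

Walking high-to-low is why the logs above show ranges like `19999000-19998000`: the newest history is filled first.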
Both services run simultaneously to ensure:
- New blocks are indexed immediately (quick)
- Historical blocks are filled in the background (batch)
- No gaps in the block sequence
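Conceptually, the two services are independent workers writing to the same store. A toy sketch using plain threads (the actual services are async tasks; everything here is made up for illustration):

```rust
use std::{sync::{Arc, Mutex}, thread};

// Toy sketch: two workers sharing one store, like the quick and batch
// indexers sharing the database. Hypothetical; not the real code.
fn run_both(store: Arc<Mutex<Vec<u64>>>) {
    let quick_store = Arc::clone(&store);
    let quick = thread::spawn(move || {
        // Quick indexer: new blocks at the tip, moving forward.
        for n in 100..103 {
            quick_store.lock().unwrap().push(n);
        }
    });
    let batch_store = Arc::clone(&store);
    let batch = thread::spawn(move || {
        // Batch indexer: historical blocks, walking backward.
        for n in (0..3).rev() {
            batch_store.lock().unwrap().push(n);
        }
    });
    quick.join().unwrap();
    batch.join().unwrap();
}
```

Because both workers only ever append blocks they own, the tip and the history fill in concurrently without stepping on each other.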
Check the latest indexed block:

psql postgresql://postgres:postgres@localhost:5432/postgres \
-c "SELECT MAX(number) as latest_block FROM block_header;"

Check both indexers' positions:

psql postgresql://postgres:postgres@localhost:5432/postgres \
-c "SELECT
current_latest_block_number as quick_indexer_position,
backfilling_block_number as batch_indexer_position,
is_backfilling,
indexing_starting_block_number as target_block
FROM index_metadata;"

Example output:
quick_indexer_position | batch_indexer_position | is_backfilling | target_block
------------------------+------------------------+----------------+--------------
20000002 | 19995000 | t | 0
Interpretation:
- Quick indexer is at block 20,000,002 (near tip)
- Batch indexer is backfilling at block 19,995,000
- Backfilling is active (t = true)
- Target is block 0 (genesis)
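From those columns you can compute overall backfill progress directly. A sketch (the struct mirrors the queried column names; the percentage formula is one plausible reading, not taken from the codebase):

```rust
/// Mirrors the index_metadata columns queried above (illustrative struct;
/// the progress formula is an interpretation, not the project's own).
struct IndexMetadata {
    current_latest_block_number: u64,
    backfilling_block_number: u64,
    is_backfilling: bool,
    indexing_starting_block_number: u64, // target block (genesis = 0)
}

impl IndexMetadata {
    /// Fraction of the backfill range already covered, in percent.
    fn backfill_progress_pct(&self) -> f64 {
        let total = self.current_latest_block_number - self.indexing_starting_block_number;
        let done = self.current_latest_block_number - self.backfilling_block_number;
        if total == 0 { 100.0 } else { done as f64 / total as f64 * 100.0 }
    }
}
```

With the example row above (tip at 20,000,002, backfilling at 19,995,000, target 0), this works out to roughly 0.025% complete.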
Check for gaps in the indexed range:

psql postgresql://postgres:postgres@localhost:5432/postgres \
-c "SELECT number
FROM generate_series(0, (SELECT MAX(number) FROM block_header)) AS number
WHERE NOT EXISTS (SELECT 1 FROM block_header WHERE block_header.number = number.number)
LIMIT 10;"

If this returns rows, those block numbers are missing and will be filled by the batch indexer.
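The same gap check can be done client-side on a sorted, deduplicated list of indexed block numbers (illustrative helper mirroring the generate_series query above):

```rust
/// Return up to `limit` missing block numbers between 0 and the highest
/// indexed block, given a sorted, deduplicated list of indexed numbers.
/// (Illustrative; mirrors the generate_series query above.)
fn find_gaps(indexed: &[u64], limit: usize) -> Vec<u64> {
    let mut gaps = Vec::new();
    let mut expected = 0u64;
    for &n in indexed {
        // Every number between the one we expected and the one we found is a gap.
        while expected < n && gaps.len() < limit {
            gaps.push(expected);
            expected += 1;
        }
        expected = n + 1;
        if gaps.len() >= limit {
            break;
        }
    }
    gaps
}
```

An empty result means the sequence from block 0 to the latest indexed block is contiguous.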
In the terminal running the indexer, press Ctrl+C:
^C
Received Ctrl+C
Waiting for current processes to finish...
[router] Router service completed normally
[quick_index] Quick indexer completed normally
[batch_index] Batch indexer completed normally
All indexing services completed successfully
make dev-down

This stops the PostgreSQL container but preserves data in Docker volumes.
make dev-clean

Warning: This removes all indexed data. Use only when you want to start fresh.
Now that your indexer is running:
- Adjust indexing speed: Configuration Guide
- Enable transaction indexing
- Configure logging levels
- Learn how the Light Client uses indexed data: Light Client Integration
- Understand the data flow: Architecture Overview
- Deploy to AWS ECS: Deployment Guide
- Set up monitoring: Monitoring Guide
- Configure backups: Database Management
- Run tests: Testing Guide
- Contribute code: Development Workflow
Error: RPC request failed: connection timeout
Solution:
- Verify your RPC endpoint is correct in .env
- Test the endpoint:
  curl -X POST -H "Content-Type: application/json" \
    --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
    YOUR_RPC_ENDPOINT
- Check your API key is valid and has remaining quota
Error: Failed to connect to database
Solution:
# Check if PostgreSQL container is running
docker ps | grep postgres
# If not running, start it
make dev-up
# Test connection manually
psql postgresql://postgres:postgres@localhost:5432/postgres -c "SELECT 1;"

Symptoms: Only a few blocks/second during backfilling
Causes and Solutions:
- RPC rate limiting
  - Symptom: Many timeout/429 errors in logs
  - Solution: Use premium RPC tier or dedicated node
- Network latency
  - Symptom: High RPC response times
  - Solution: Choose RPC provider closer to your region
- Database I/O bottleneck
  - Symptom: High disk wait times
  - Solution: Use SSD storage, increase PostgreSQL work_mem
- Conservative default settings
  - Solution: Increase batch size in configuration:
    // Programmatic configuration
    index_batch_size(2000) // Default is 1000
Check logs for errors:
# Last 100 lines
docker compose -f docker/docker-compose.local.yml logs --tail 100
# Follow logs in real-time
docker compose -f docker/docker-compose.local.yml logs -f indexer

Common causes:
- RPC endpoint became unavailable
- Database connection lost
- Insufficient disk space
- API key quota exceeded
Run the test suite to verify everything works:
# Run all tests
make test
# Run specific test categories
cargo test --lib # Unit tests
cargo test --test '*' # Integration tests
# Run with verbose output
cargo test -- --nocapture

Expected: All tests pass with green output.
# Start environment
make dev-up
# Run indexer
make run-indexer
# Check health
curl http://localhost:3000/health
# Stop environment
make dev-down
# Clean everything
make dev-clean
# Run tests
make test
# Format code
make format
# Run linter
make lint
# Build release binary
make build

- Documentation: Browse the full docs
- Troubleshooting: See Troubleshooting Guide
- Configuration: See Configuration Guide
- Issues: Report on GitHub Issues