API layer that merges indexer data with Supabase metadata; handles auth (Sign In With Stellar), image storage, and notifications. Exposes REST (and later GraphQL) for the frontend and external consumers.
Stack: NestJS, Fastify, Supabase, Redis.
List raffles with optional filters and pagination. Data comes from the indexer (contract state).
| Query param | Type | Description |
|---|---|---|
| `status` | string | Filter by raffle status |
| `category` | string | Filter by category |
| `creator` | string | Filter by creator Stellar address |
| `asset` | string | Filter by asset (e.g. `XLM`) |
| `limit` | number | Page size (1–100, default 20) |
| `offset` | number | Pagination offset (default 0) |

Response: `{ raffles: RaffleListItem[], total?: number }`
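For illustration, a client could assemble the list URL from those parameters. The `buildRaffleListUrl` helper below is hypothetical (not part of the API); the parameter names and the 1–100 clamp mirror the table above.

```typescript
// Hypothetical client-side helper for GET /raffles.
// Parameter names match the query-param table; values are URL-encoded.
interface RaffleListQuery {
  status?: string;
  category?: string;
  creator?: string;
  asset?: string;
  limit?: number;
  offset?: number;
}

function buildRaffleListUrl(base: string, q: RaffleListQuery): string {
  const params = new URLSearchParams();
  if (q.status) params.set("status", q.status);
  if (q.category) params.set("category", q.category);
  if (q.creator) params.set("creator", q.creator);
  if (q.asset) params.set("asset", q.asset);
  // Clamp limit to the documented 1–100 range (default 20).
  params.set("limit", String(Math.min(100, Math.max(1, q.limit ?? 20))));
  params.set("offset", String(q.offset ?? 0));
  return `${base}/raffles?${params.toString()}`;
}
```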
Single raffle detail. Merges indexer data (contract state: price, tickets, winner, status) with Supabase metadata (title, description, image_url, category).
Response: `RaffleDetailResponse` — contract fields plus `title`, `description`, `image_url`, `category`
Create or update raffle metadata. Body: `{ title?, description?, image_url?, category?, metadata_cid? }`. Requires a JWT (Bearer token from SIWS).
Protected routes require `Authorization: Bearer <token>`.
- `GET /auth/nonce?address=G...` — returns `{ nonce, expiresAt, issuedAt, message }`
- The user signs the `message` in their Stellar wallet (Freighter, xBull, etc.)
- `POST /auth/verify` — body: `{ address, signature, nonce [, issuedAt] }`, where `signature` is the base64-encoded Ed25519 signature. Returns `{ accessToken, refreshToken }`; use `accessToken` as `Authorization: Bearer <accessToken>`
- `POST /auth/refresh` — body: `{ refreshToken }`. Returns `{ accessToken, refreshToken }` (a new pair, rotating the refresh token)
```
tikka.io wants you to sign in
Address: G...
Nonce: abc123
Issued At: 2025-02-19T12:00:00.000Z
```

Set `SIWS_DOMAIN` to customize the domain (default: `tikka.io`).
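A wallet integration could reconstruct the message locally before signing. This is a sketch that assumes the exact template above; field order matters, since the signature is verified against the byte-for-byte message.

```typescript
// Hypothetical helper mirroring the SIWS message template shown above.
// The backend's /auth/nonce response already includes the assembled `message`;
// this only illustrates the format.
function buildSiwsMessage(
  domain: string,
  address: string,
  nonce: string,
  issuedAt: string,
): string {
  return [
    `${domain} wants you to sign in`,
    `Address: ${address}`,
    `Nonce: ${nonce}`,
    `Issued At: ${issuedAt}`,
  ].join("\n");
}
```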
```bash
curl -X POST http://localhost:3001/raffles/1/metadata \
  -H "Content-Type: application/json" \
  -d '{"title":"Test"}'
# Expected: 401 Unauthorized
```

Run the full e2e suite with `npm run test:e2e`.

All endpoints are protected against abuse. Limits are per IP address, enforced by `@nestjs/throttler`.
When a limit is exceeded the API returns HTTP 429 with a Retry-After value in the response body.
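A client can honor that hint by converting `retryAfter` seconds into a wait before retrying. The `backoffMs` helper and its cap below are assumptions for illustration, not part of the API.

```typescript
// Hypothetical client-side backoff: given a parsed 429 body, decide how long
// to wait. The cap keeps a misbehaving response from stalling the client.
interface ThrottleError {
  statusCode: number;
  retryAfter?: number; // seconds, as in the error payload
}

function backoffMs(err: ThrottleError, capMs = 60_000): number {
  if (err.statusCode !== 429) return 0; // not throttled: no wait needed
  const seconds = err.retryAfter ?? 1;  // default to 1 s if the field is absent
  return Math.min(seconds * 1000, capMs);
}
```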
| Tier | Endpoints | Default limit |
|---|---|---|
| `default` | All public API (`/raffles`, `/users`, `/leaderboard`, `/stats`) | 100 req / 60 s |
| `nonce` | `GET /auth/nonce` | 30 req / 60 s |
| `auth` | `POST /auth/verify` | 10 req / 60 s |
```json
{
  "statusCode": 429,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Please slow down and try again.",
  "retryAfter": 42
}
```

All thresholds are overridable without code changes (see `.env.example`):
```bash
THROTTLE_DEFAULT_LIMIT=100   # max requests per window
THROTTLE_DEFAULT_TTL=60      # window size in seconds
THROTTLE_AUTH_LIMIT=10       # POST /auth/verify
THROTTLE_AUTH_TTL=60
THROTTLE_NONCE_LIMIT=30      # GET /auth/nonce
THROTTLE_NONCE_TTL=60
```

```bash
# Hit /auth/verify 11 times — the 11th must return 429
for i in $(seq 1 11); do
  curl -s -o /dev/null -w "%{http_code}\n" -X POST http://localhost:3001/auth/verify
done
```

Returns the live status of all backend dependencies. No authentication required.

```bash
curl http://localhost:3001/health
```

Response — all healthy (HTTP 200):
```json
{
  "status": "ok",
  "indexer": "ok",
  "supabase": "ok",
  "timestamp": "2026-04-23T11:00:00.000Z"
}
```

Response — dependency down (HTTP 503):

```json
{
  "status": "degraded",
  "indexer": "error",
  "supabase": "ok",
  "timestamp": "2026-04-23T11:00:00.000Z"
}
```

| Field | Values | Description |
|---|---|---|
| `status` | `ok` / `degraded` | Overall health — `degraded` if any check fails |
| `indexer` | `ok` / `error` | Reachability of tikka-indexer `/health` |
| `supabase` | `ok` / `error` | Reachability of the Supabase REST endpoint |
| `timestamp` | ISO 8601 string | Time the check was performed |
The endpoint returns HTTP 503 when status is degraded, so orchestrators (Kubernetes, Railway, Fly.io) can detect unhealthy instances automatically.
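The aggregation rule is simple enough to sketch. The `aggregateHealth` helper below is hypothetical, not the actual controller code; it only captures the behavior described above: any failing check degrades the overall status and flips the HTTP code to 503.

```typescript
// Hypothetical sketch of the health aggregation rule:
// overall status is "ok" only when every dependency check passes.
type CheckResult = "ok" | "error";

function aggregateHealth(checks: Record<string, CheckResult>) {
  const degraded = Object.values(checks).some((c) => c !== "ok");
  return {
    ...checks,                                // e.g. indexer, supabase
    status: degraded ? "degraded" : "ok",
    httpStatus: degraded ? 503 : 200,         // what the endpoint responds with
    timestamp: new Date().toISOString(),
  };
}
```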
The backend selects a Stellar network with `STELLAR_NETWORK` (`testnet` or `mainnet`). That value drives:
- Horizon URL — defaults to the public Horizon for the chosen network (`https://horizon-testnet.stellar.org` or `https://horizon.stellar.org`). Override with `STELLAR_HORIZON_URL` if you use a proxy or custom Horizon.
- Network passphrase — exposed at runtime via `env.stellar.networkPassphrase` (same constants as the Stellar SDK) for any logic that must sign or verify against a specific network.
- Contract ID — defaults are empty until you deploy; set `STELLAR_CONTRACT_ID` to your raffle (or other) contract for the environment you are running.
- Indexer base URL — if `INDEXER_URL` is not set, it defaults to the URL in `stellar.constants.ts` for that network (currently `http://localhost:3002` for both). In production, set `INDEXER_URL` explicitly to the tikka-indexer instance that indexes the same chain as `STELLAR_NETWORK`.
Injectable services should read `INDEXER_URL` and `INDEXER_TIMEOUT_MS` from Nest `ConfigService` (validated at startup). For scripts or non-DI code, use `env.indexer` and `env.stellar` from `src/config/env.config.ts`.
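As a sketch of how the Horizon default could resolve (the constant names here are assumptions, not the actual contents of `stellar.constants.ts`):

```typescript
// Illustrative resolution of the network-dependent Horizon default.
type StellarNetwork = "testnet" | "mainnet";

const HORIZON_DEFAULTS: Record<StellarNetwork, string> = {
  testnet: "https://horizon-testnet.stellar.org",
  mainnet: "https://horizon.stellar.org",
};

function resolveHorizonUrl(network: StellarNetwork, override?: string): string {
  // STELLAR_HORIZON_URL, when set, wins over the per-network default.
  return override && override.length > 0 ? override : HORIZON_DEFAULTS[network];
}
```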
Example .env fragments:
```bash
# Local development against testnet
STELLAR_NETWORK=testnet
INDEXER_URL=http://localhost:3002
```

```bash
# Production-style: mainnet Horizon defaults; point indexer at your fleet
STELLAR_NETWORK=mainnet
STELLAR_CONTRACT_ID=YOUR_MAINNET_CONTRACT_ID
INDEXER_URL=https://your-indexer.example.com
```

Copy `.env.example` to `.env` and fill in the required values before starting the server.
```bash
cp .env.example .env
```

The app validates all variables at startup using Zod. Missing or invalid required vars cause an immediate startup failure with a clear error message listing every invalid field.
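The real app validates with a Zod schema; the hand-rolled sketch below only illustrates the failure mode described above: collect every problem, then refuse to start and report all of them at once.

```typescript
// Simplified sketch of the startup validation (NOT the real Zod schema).
// Checks the required variables and the documented JWT_SECRET length rule.
function validateEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  const required = [
    "SUPABASE_URL",
    "SUPABASE_SERVICE_ROLE_KEY",
    "JWT_SECRET",
    "VITE_FRONTEND_URL",
    "ADMIN_TOKEN",
  ];
  for (const key of required) {
    if (!env[key]) errors.push(`${key} is required`);
  }
  if (env.JWT_SECRET && env.JWT_SECRET.length < 32) {
    errors.push("JWT_SECRET must be at least 32 characters");
  }
  return errors; // non-empty => abort startup, listing every invalid field
}
```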
These must be set or the app will refuse to start:
| Variable | Description |
|---|---|
| `SUPABASE_URL` | Full URL of your Supabase project (e.g. `https://xyz.supabase.co`) |
| `SUPABASE_SERVICE_ROLE_KEY` | Supabase service role key (not the anon key) |
| `JWT_SECRET` | Secret for signing JWTs — minimum 32 characters |
| `VITE_FRONTEND_URL` | Frontend origin allowed by CORS (e.g. `https://app.tikka.io`) |
| `ADMIN_TOKEN` | Bearer token for `/admin/*` endpoints |
| Variable | Default | Description |
|---|---|---|
| `PORT` | `3001` | HTTP port the server listens on |
| `STELLAR_NETWORK` | `testnet` | `testnet` or `mainnet` — Horizon, contract, and default indexer base |
| `STELLAR_HORIZON_URL` | (from network) | Override Horizon URL (optional) |
| `STELLAR_CONTRACT_ID` | (none) | On-chain contract id for this deployment (optional) |
| `INDEXER_URL` | (per `STELLAR_NETWORK`) | Base URL of tikka-indexer; set explicitly in prod |
| `INDEXER_TIMEOUT_MS` | `5000` | HTTP timeout for indexer requests (ms) |
| `JWT_EXPIRES_IN` | `7d` | JWT expiry duration (e.g. `1h`, `7d`) |
| `SIWS_DOMAIN` | `tikka.io` | Domain shown in the SIWS sign-in message |
| `ADMIN_IP_ALLOWLIST` | `""` (allow all) | Comma-separated CIDRs/IPs for admin access |
| `FCM_ENABLED` | `false` | Enable Firebase Cloud Messaging push notifications |
| `FCM_SERVICE_ACCOUNT_JSON` | — | FCM service account JSON string (for CI/secrets) |
| `FCM_SERVICE_ACCOUNT_PATH` | — | Path to FCM service account JSON file |
| `THROTTLE_DEFAULT_LIMIT` | `100` | Max requests per window for public endpoints |
| `THROTTLE_DEFAULT_TTL` | `60` | Rate-limit window size in seconds |
| `THROTTLE_AUTH_LIMIT` | `10` | Max requests per window for `POST /auth/verify` |
| `THROTTLE_AUTH_TTL` | `60` | Rate-limit window for the auth tier (seconds) |
| `THROTTLE_NONCE_LIMIT` | `30` | Max requests per window for `GET /auth/nonce` |
| `THROTTLE_NONCE_TTL` | `60` | Rate-limit window for the nonce tier (seconds) |
- `src/api/rest/` — raffles, users, leaderboard, stats, search, notifications
- `src/auth/` — SIWS (nonce, verify), JWT strategy, guards
- `src/services/` — metadata, storage, indexer client, notifications, search
- `src/middleware/` — rate limit, validation (Zod), CORS
- `src/config/` — env configuration
Full ecosystem spec: `../docs/ARCHITECTURE.md` (section 4 — tikka-backend).
`POST /raffles/upload-image`

- Auth: Bearer token required
- Content type: `multipart/form-data`
- File field: first uploaded file part
- Optional field: `raffleId` (used in the storage path)
- Response: `{ "url": "https://..." }`
- Max file size: 5 MB (`5242880` bytes)
- Allowed MIME types: `image/jpeg`, `image/png`, `image/webp`
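The size and MIME constraints above can also be checked client-side before uploading. `validateUpload` is a hypothetical helper, not part of the backend; the limits themselves come from the list above.

```typescript
// Pre-flight check mirroring the documented upload constraints:
// 5 MB max, three allowed image MIME types.
const MAX_UPLOAD_BYTES = 5_242_880; // 5 MB
const ALLOWED_MIME = new Set(["image/jpeg", "image/png", "image/webp"]);

function validateUpload(
  mimeType: string,
  sizeBytes: number,
): { ok: boolean; reason?: string } {
  if (!ALLOWED_MIME.has(mimeType)) {
    return { ok: false, reason: `unsupported type ${mimeType}` };
  }
  if (sizeBytes > MAX_UPLOAD_BYTES) {
    return { ok: false, reason: "file exceeds 5 MB" };
  }
  return { ok: true };
}
```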
Requires `SUPABASE_URL` and `SUPABASE_SERVICE_ROLE_KEY`.
- CPU: 100m requests, 500m limits
- Memory: 256Mi requests, 512Mi limits

These resources are managed by an HPA targeting 70% CPU usage.
Data persistence in Supabase is critical. We use a multi-layered backup strategy.
A GitHub Action (`.github/workflows/supabase-backup.yml`) runs daily at 02:00 UTC.
- Process: runs `pg_dump`, compresses to `.sql.gz`, and uploads to Cloudflare R2.
- Retention: backups are retained for 30 days.
- Trigger: can be manually triggered via the GitHub Actions tab.

Required secrets:

- `SUPABASE_DB_URL`: full Postgres URI.
- `R2_BUCKET_NAME`, `R2_ACCESS_KEY_ID`, `R2_SECRET_ACCESS_KEY`, `R2_ENDPOINT_URL`.
Use the provided script for local dumps before migrations or major changes.
```bash
# Set your DB URL (or add to .env)
export SUPABASE_DB_URL="postgresql://postgres:<pwd>@db.<ref>.supabase.co:5432/postgres"

# Run the backup script
bash scripts/backup.sh
```

- Output: `backups/tikka-backup-YYYY-MM-DD-HHmmss.dump` (Postgres custom format).
- Upload: optionally uploads to R2 if credentials are set in `.env`.
For production environments, ensure Supabase PITR is enabled in the dashboard:
- Go to Settings -> Database -> Backups.
- Enable PITR (requires Pro plan or higher).
The custom format is recommended as it allows selective restores and is compressed.
```bash
pg_restore \
  --dbname=$SUPABASE_DB_URL \
  --no-owner --no-acl \
  --schema=public \
  --verbose \
  backups/tikka-backup-TIMESTAMP.dump
```

Used by the automated GitHub workflow:

```bash
gunzip -c tikka-backup-TIMESTAMP.sql.gz | psql $SUPABASE_DB_URL
```

**Caution:** Restoring a database can overwrite existing data. Always verify the backup and the target environment before proceeding.