The AdCP Sales Agent reference implementation is designed to be hosted anywhere. This guide covers several deployment options.
Deployment Flexibility:
- This is a standard Python application that can run on any infrastructure
- Docker recommended but not required
- Database agnostic - works with PostgreSQL (production) or SQLite (dev/testing)
- We'll support your deployment approach as best we can
Common Deployment Options:
- Docker Compose (recommended for most deployments)
- Kubernetes (for enterprise/scale deployments)
- Cloud Platforms (AWS, GCP, Azure, DigitalOcean, etc.)
- Platform Services (Fly.io, Heroku, Railway, Render, etc.)
- Bare Metal (direct Python deployment)
- Standalone (development/testing only)
Reference Implementation: The reference implementation at https://adcp-sales-agent.fly.dev is hosted on Fly.io, but this is just one option.
Prerequisites:

- Docker and Docker Compose installed
- Environment variables configured
- Google OAuth credentials (for the Admin UI)
1. Clone and configure:

   ```bash
   git clone https://github.com/adcontextprotocol/salesagent.git
   cd salesagent
   cp .env.example .env  # Edit .env with your configuration
   ```

2. Start services:

   ```bash
   docker-compose up -d
   ```

3. Initialize the database:

   ```bash
   docker exec -it adcp-buy-server-adcp-server-1 python migrate.py
   docker exec -it adcp-buy-server-adcp-server-1 python init_database.py
   ```

4. Access services:

   - MCP Server: http://localhost:8080/mcp/
   - Admin UI: http://localhost:8001
   - PostgreSQL: localhost:5432
The docker-compose.yml defines three services:

```yaml
services:
  postgres:     # PostgreSQL database
  adcp-server:  # MCP server (port 8080)
  admin-ui:     # Admin interface (port 8001)
```

The system uses BuildKit caching with shared volumes:

- `adcp_global_pip_cache` - Python package cache
- `adcp_global_uv_cache` - uv dependency cache

This reduces rebuild times from ~3 minutes to ~30 seconds across all Conductor workspaces.
```bash
# Rebuild after code changes
docker-compose build
docker-compose up -d

# View logs
docker-compose logs -f
docker-compose logs -f adcp-server

# Stop services
docker-compose down

# Reset everything (including volumes)
docker-compose down -v

# Enter container
docker exec -it adcp-buy-server-adcp-server-1 bash

# Backup database
docker exec adcp-buy-server-postgres-1 \
  pg_dump -U adcp_user adcp > backup.sql
```

This section documents the Fly.io deployment used for the reference implementation at https://adcp-sales-agent.fly.dev.

Note: Fly.io is just one hosting option. You can deploy to any platform that supports Docker containers.
Fly.io provides:
- Managed cloud solution with automatic SSL
- Global distribution with edge locations
- Integrated PostgreSQL clusters
- Built-in monitoring and logging
```
Internet → Fly.io Edge → Proxy (8000) → MCP Server (8080)
                                      → Admin UI (8001)
```
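The routing rule shown above is simple path-based dispatch: requests under `/mcp/` go to the MCP server, everything else to the Admin UI. The actual `fly-proxy.py` is not reproduced here; this is a minimal WSGI sketch of the idea, with `make_router` and the stub apps as hypothetical names:

```python
def make_router(mcp_app, admin_app):
    """Dispatch /mcp/ requests to the MCP app and everything else to the Admin UI."""
    def router(environ, start_response):
        if environ.get("PATH_INFO", "").startswith("/mcp"):
            return mcp_app(environ, start_response)
        return admin_app(environ, start_response)
    return router

def text_app(label):
    """Stub WSGI app that replies with a fixed label (for illustration only)."""
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [label.encode()]
    return app

# Wire the two backends behind a single entry point
router = make_router(text_app("mcp-server"), text_app("admin-ui"))
```

Any WSGI server (gunicorn, uvicorn with an adapter) could serve `router` on the public port while the real services listen internally.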
1. Install the Fly CLI:

   ```bash
   brew install flyctl  # macOS
   # or see https://fly.io/docs/hands-on/install-flyctl/
   ```

2. Authenticate:

   ```bash
   fly auth login
   ```

3. Create the application:

   ```bash
   fly apps create adcp-sales-agent
   ```

4. Create a PostgreSQL cluster:

   ```bash
   fly postgres create --name adcp-db \
     --region iad \
     --initial-cluster-size 1 \
     --vm-size shared-cpu-1x \
     --volume-size 10
   fly postgres attach adcp-db --app adcp-sales-agent
   ```

5. Create a persistent volume:

   ```bash
   fly volumes create adcp_data --region iad --size 1
   ```

6. Set secrets:

   ```bash
   # OAuth configuration
   fly secrets set GOOGLE_CLIENT_ID="your-client-id.apps.googleusercontent.com"
   fly secrets set GOOGLE_CLIENT_SECRET="your-client-secret"

   # Admin configuration
   fly secrets set SUPER_ADMIN_EMAILS="admin@example.com"
   fly secrets set SUPER_ADMIN_DOMAINS="example.com"

   # API keys
   fly secrets set GEMINI_API_KEY="your-gemini-api-key"
   ```

7. Configure the OAuth redirect URI by adding it in the Google Cloud Console:

   ```
   https://adcp-sales-agent.fly.dev/auth/google/callback
   ```

8. Deploy:

   ```bash
   fly deploy
   ```

9. Initialize the database (first time only):

   ```bash
   fly ssh console -C "cd /app && python init_database.py"
   ```
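Before deploying, it can save a failed rollout to verify that every required setting is present. A small pre-flight sketch (hypothetical helper; the name list is taken from the secrets step above and may not be exhaustive):

```python
import os

# Settings the deployment steps above configure via `fly secrets set`
# (assumed list for illustration; check your own .env.example for the full set)
REQUIRED_SECRETS = [
    "GOOGLE_CLIENT_ID",
    "GOOGLE_CLIENT_SECRET",
    "SUPER_ADMIN_EMAILS",
    "GEMINI_API_KEY",
]

def missing_secrets(env=os.environ):
    """Return the names of required settings that are absent or empty."""
    return [name for name in REQUIRED_SECRETS if not env.get(name)]
```

Running `missing_secrets()` in CI before `fly deploy` turns a runtime misconfiguration into an early, readable failure.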
Key Fly.io files:

- `fly.toml` - Main application configuration
- `Dockerfile.fly` - Optimized Docker image for Fly.io
- `fly-proxy.py` - Request routing proxy
- `debug_start.sh` - Service startup script
```bash
# View logs
fly logs

# Check status
fly status

# SSH into machine
fly ssh console

# View metrics
fly dashboard

# Scale horizontally
fly scale count 2

# Scale vertically
fly scale vm shared-cpu-2x
```

Production endpoints:

- Admin UI: https://adcp-sales-agent.fly.dev/admin
- MCP Endpoint: https://adcp-sales-agent.fly.dev/mcp/
- Health Check: https://adcp-sales-agent.fly.dev/health
```bash
# API Keys
GEMINI_API_KEY=your-gemini-api-key-here

# OAuth Configuration (choose one method)
# Method 1: Environment variables (recommended)
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret

# Method 2: File path (legacy)
# GOOGLE_OAUTH_CREDENTIALS_FILE=/path/to/client_secret.json

# Admin Configuration
SUPER_ADMIN_EMAILS=user1@example.com,user2@example.com
SUPER_ADMIN_DOMAINS=example.com

# Database (Docker/Fly.io handle this automatically)
DATABASE_URL=postgresql://user:pass@host:5432/dbname
```

To set up Google OAuth:

- Go to the Google Cloud Console
- Create a new project or select an existing one
- Enable the Google+ API
- Create an OAuth 2.0 Client ID (Web application)
- Add authorized redirect URIs:
  - Local: http://localhost:8001/auth/google/callback
  - Docker: http://localhost:8001/auth/google/callback
  - Fly.io: https://your-app.fly.dev/auth/google/callback
  - Conductor: http://localhost:8002-8011/auth/google/callback
- Download the credentials or copy the Client ID and Secret
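The `DATABASE_URL` setting above selects the backend by its URL scheme. A sketch of how `DB_TYPE` could be derived from it (hypothetical helper; the application may still require both variables to be set explicitly):

```python
from urllib.parse import urlsplit

def db_type_from_url(database_url: str) -> str:
    """Derive DB_TYPE from DATABASE_URL for the two backends this guide covers."""
    # Strip any driver suffix, e.g. postgresql+psycopg2 -> postgresql
    scheme = urlsplit(database_url).scheme.split("+")[0]
    if scheme not in ("postgresql", "sqlite"):
        raise ValueError(f"unsupported database scheme: {scheme!r}")
    return scheme
```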
PostgreSQL:

```bash
DATABASE_URL=postgresql://user:password@host:5432/dbname
DB_TYPE=postgresql
```

SQLite:

```bash
DATABASE_URL=sqlite:///adcp_local.db
DB_TYPE=sqlite
```

Create a tenant:

```bash
# Docker deployment
docker exec -it adcp-buy-server-adcp-server-1 python setup_tenant.py \
  "Publisher Name" \
  --adapter google_ad_manager \
  --gam-network-code 123456 \
  --gam-refresh-token YOUR_REFRESH_TOKEN

# Fly.io deployment
fly ssh console -C "python setup_tenant.py 'Publisher Name' \
  --adapter google_ad_manager \
  --gam-network-code 123456"

# Mock adapter for testing
python setup_tenant.py "Test Publisher" --adapter mock
```

After creating a tenant:
- Login to Admin UI with Google OAuth
- Navigate to "Advertisers" tab
- Click "Add Advertiser"
- Each advertiser gets their own API token
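Per-advertiser API tokens like those above need to be unguessable. A minimal sketch of how such a token could be generated (hypothetical helper; the actual token format used by the Admin UI may differ):

```python
import secrets

def generate_api_token(nbytes: int = 32) -> str:
    """Return a URL-safe token with `nbytes` of cryptographic randomness."""
    return secrets.token_urlsafe(nbytes)
```

`secrets` draws from the OS CSPRNG, so tokens are safe to use as bearer credentials; store only a hash server-side if you want defense in depth.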
Migrations run automatically on startup via entrypoint.sh.
```bash
# Run migrations
uv run python migrate.py

# Check migration status
uv run python migrate.py status

# Create new migration
uv run alembic revision -m "description_of_change"

# Rollback last migration
uv run alembic downgrade -1
```

Migration guidelines:

- Always test on both SQLite and PostgreSQL
- Use SQLAlchemy operations for compatibility
- Include proper downgrade logic
- Test with a fresh database before deploying
```bash
# MCP Server health
curl http://localhost:8080/health

# Admin UI health
curl http://localhost:8001/health

# PostgreSQL health (Docker)
docker exec adcp-buy-server-postgres-1 pg_isready
```

The system prepares for Prometheus metrics:
- Request latency
- Active media buys
- API call rates
- Error rates
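Deployment scripts often poll the health endpoints above before routing traffic, since services take a moment to come up. A minimal retry-with-backoff sketch (hypothetical helper; the probe callable would wrap an HTTP GET against `/health`):

```python
import time

def wait_until_healthy(probe, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call `probe` until it returns True, backing off exponentially.

    Returns True on the first healthy response, False if all attempts fail.
    `sleep` is injectable so the logic can be tested without real delays.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return False
```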
- Use HTTPS everywhere (automatic on Fly.io)
- Set strong database passwords
- Rotate API keys regularly
- Enable audit logging
- Configure rate limiting
- Use environment variables for secrets
- Never commit `.env` files
- Implement a backup strategy
- Monitor error logs
- Set up alerting
SSL is automatic - Fly.io handles certificates.
For self-hosted deployments, terminate SSL in nginx:

```nginx
server {
    listen 443 ssl;
    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://admin-ui:8001;
    }

    location /mcp/ {
        proxy_pass http://adcp-server:8080;
    }
}
```

Create a backup:

```bash
# Docker
docker exec adcp-buy-server-postgres-1 \
  pg_dump -U adcp_user adcp > backup_$(date +%Y%m%d).sql

# Fly.io
fly postgres backup create --app adcp-db
```

Restore from a backup:

```bash
# Docker
docker exec -i adcp-buy-server-postgres-1 \
  psql -U adcp_user adcp < backup.sql

# Fly.io
fly postgres backup restore <backup-id> --app adcp-db
```

Important files to back up:

- `.env` configuration
- `conductor_ports.json` (if using Conductor)
- Database backups
- Custom adapter configurations
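The dated dump names above (`backup_$(date +%Y%m%d).sql`) sort chronologically, which makes retention easy to script. A small sketch of a rotation policy (hypothetical helpers, not part of the repo):

```python
from datetime import datetime, timezone
from pathlib import Path

def backup_name(prefix="backup", now=None):
    """Timestamped dump name matching the backup_$(date +%Y%m%d).sql convention."""
    now = now or datetime.now(timezone.utc)
    return f"{prefix}_{now:%Y%m%d}.sql"

def prune_backups(directory, keep=7):
    """Delete all but the newest `keep` dumps; returns the removed file names.

    Zero-padded dates make lexicographic order equal chronological order.
    """
    backups = sorted(Path(directory).glob("backup_*.sql"))
    removed = backups[:-keep] if keep else backups
    for path in removed:
        path.unlink()
    return [p.name for p in removed]
```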
Port conflicts:

```bash
# Check what's using a port
lsof -i :8080

# Kill the process using the port
kill -9 $(lsof -t -i:8080)
```

Database connection issues:

```bash
# Test connection
psql postgresql://user:pass@localhost:5432/adcp

# Check Docker network
docker network ls
docker network inspect salesagent_default
```

OAuth redirect errors:

- Ensure the redirect URI matches exactly (including trailing slash)
- Check for http vs https
- Verify port numbers match
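The exact-match rule above trips up most OAuth setups. A sketch of a checker that catches the three common mismatches, scheme, port, and trailing slash (hypothetical helper for debugging, not part of the codebase):

```python
from urllib.parse import urlsplit

def redirect_uri_matches(registered: str, actual: str) -> bool:
    """OAuth redirect URIs must match exactly: scheme, host, port, and path
    (including any trailing slash) are all compared."""
    r, a = urlsplit(registered), urlsplit(actual)
    return (r.scheme, r.hostname, r.port, r.path) == (a.scheme, a.hostname, a.port, a.path)
```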
Container issues:

```bash
# Check logs
docker-compose logs adcp-server

# Rebuild from scratch
docker-compose down -v
docker-compose build --no-cache
docker-compose up
```

Enable debug logging:

```bash
# Docker
DEBUG=true docker-compose up

# Fly.io
fly secrets set DEBUG=true
```

If migrating from an older single-tenant version:
1. Backup existing data:

   ```bash
   cp adcp.db adcp_backup.db
   sqlite3 adcp.db .dump > backup.sql
   ```

2. Update the code:

   ```bash
   git pull
   uv sync
   ```

3. Run the migration:

   ```bash
   uv run python migrate.py
   ```
The system will automatically:
- Create a default tenant from existing config
- Migrate data to multi-tenant schema
- Generate authentication tokens
PostgreSQL tuning:

```sql
-- Increase connections
ALTER SYSTEM SET max_connections = 200;

-- Optimize for SSD
ALTER SYSTEM SET random_page_cost = 1.1;

-- Increase shared buffers
ALTER SYSTEM SET shared_buffers = '256MB';
```

Docker resource limits:

```yaml
services:
  adcp-server:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '1'
          memory: 1G
```

The system uses SQLAlchemy connection pooling:

```python
# Configured in database.py
pool_size=20
max_overflow=40
pool_timeout=30
```

Scale horizontally with Docker Swarm:

```bash
docker swarm init
docker stack deploy -c docker-compose.yml adcp
docker service scale adcp_adcp-server=3
```

Or on Fly.io:

```bash
fly scale count 3 --region iad
fly scale count 2 --region lhr
```

For vertical scaling, update the docker-compose.yml resource limits, or on Fly.io:

```bash
fly scale vm dedicated-cpu-2x
fly scale memory 4096
```

To enable maintenance mode:
1. Create a maintenance page:

   ```html
   <!-- templates/maintenance.html -->
   <h1>System Maintenance</h1>
   <p>We'll be back shortly.</p>
   ```

2. Enable it in the Admin UI:

   ```bash
   # Set environment variable
   MAINTENANCE_MODE=true
   ```

3. Or use nginx:

   ```nginx
   location / {
       if (-f /var/www/maintenance.html) {
           return 503;
       }
       proxy_pass http://upstream;
   }
   error_page 503 /maintenance.html;
   ```
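At the application layer, the `MAINTENANCE_MODE=true` flag could be honored by wrapping the WSGI app in a middleware that short-circuits requests with a 503. A minimal sketch (hypothetical helper; how the real app reads the flag may differ):

```python
import os

def maintenance_middleware(app, is_enabled=lambda: os.environ.get("MAINTENANCE_MODE") == "true"):
    """Return 503 with a maintenance page while the flag is set,
    otherwise pass the request through to the wrapped WSGI app."""
    page = b"<h1>System Maintenance</h1><p>We'll be back shortly.</p>"

    def wrapped(environ, start_response):
        if is_enabled():
            start_response("503 Service Unavailable",
                           [("Content-Type", "text/html"),
                            ("Retry-After", "300")])  # hint clients to retry later
            return [page]
        return app(environ, start_response)

    return wrapped
```

`is_enabled` is injectable so the flag can be re-read per request (or stubbed in tests) rather than frozen at startup.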
PostgreSQL:

```bash
# Full backup
docker exec postgres pg_dump -U adcp_user adcp > backup.sql

# Compressed backup
docker exec postgres pg_dump -U adcp_user adcp | gzip > backup.sql.gz

# Restore
docker exec -i postgres psql -U adcp_user adcp < backup.sql
```

SQLite:

```bash
# Simple copy
cp adcp_local.db backup.db

# With active connections
sqlite3 adcp_local.db ".backup backup.db"
```

Security best practices:

- Always use HTTPS in production
- Rotate API tokens regularly
- Monitor audit logs for anomalies
- Keep dependencies updated
- Input validation enforced on all API endpoints
- ID formats validated to prevent injection attacks
- Timezone strings validated against pytz database
- Temporary files cleaned up with try/finally blocks
- Database queries use parameterized statements only
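Two of the hardening points above, ID format validation and parameterized statements, combine naturally: validate the shape of an identifier first, then bind it as a query parameter so it is never interpolated into SQL. A sketch with illustrative names (the pattern, table, and columns are assumptions, not the real schema):

```python
import re
import sqlite3

# Assumed ID format for illustration; adjust to the real scheme
ID_PATTERN = re.compile(r"^[A-Za-z0-9_-]{1,64}$")

def fetch_media_buy(conn, media_buy_id: str):
    """Validate the ID format, then query with a bound parameter --
    the value never becomes part of the SQL string."""
    if not ID_PATTERN.match(media_buy_id):
        raise ValueError(f"invalid media buy id: {media_buy_id!r}")
    cur = conn.execute("SELECT id, status FROM media_buys WHERE id = ?", (media_buy_id,))
    return cur.fetchone()
```

Even if the validation were bypassed, the `?` placeholder keeps the value out of the SQL text, so classic injection payloads are treated as data.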
- Use PostgreSQL for production
- Enable connection pooling
- Implement caching where appropriate
- Monitor resource usage
- Database replication for read scaling
- Load balancer for multiple app instances
- Consider CDN for static assets
- Queue system for async tasks
Benefits:
- Enterprise-grade orchestration
- Auto-scaling and self-healing
- Advanced networking and service mesh
- Multi-cloud portability
Basic Setup:

1. Build the Docker image:

   ```bash
   docker build -t adcp-sales-agent:latest .
   ```

2. Push it to a registry:

   ```bash
   docker push your-registry/adcp-sales-agent:latest
   ```

3. Apply the k8s manifests (see the `k8s/` directory for examples)
4. Configure an ingress for external access
Minimal k8s resources needed:
- Deployment (for app pods)
- Service (for internal networking)
- Ingress (for external access)
- ConfigMap (for configuration)
- Secret (for sensitive data)
- PersistentVolumeClaim (for PostgreSQL)
Option 1: ECS with Fargate
- Use Docker container from this repo
- Deploy to ECS with Fargate (serverless)
- RDS PostgreSQL for database
- Application Load Balancer for traffic
Option 2: EKS (Kubernetes)
- Deploy using k8s manifests
- Managed Kubernetes service
- Integrate with AWS services (RDS, Secrets Manager, etc.)
Option 3: EC2 with Docker
- Launch EC2 instance
- Install Docker and Docker Compose
- Deploy using docker-compose.yml
- Manage SSL with certbot/Let's Encrypt
Option 1: Cloud Run
- Deploy Docker container directly
- Serverless, auto-scaling
- Cloud SQL for PostgreSQL
- Cloud Load Balancing
Option 2: GKE (Kubernetes)
- Deploy using k8s manifests
- Managed Kubernetes service
- Integrate with GCP services
Option 3: Compute Engine
- Similar to EC2 approach
- Deploy with Docker Compose
- Cloud SQL or self-hosted PostgreSQL
Option 1: Azure Container Instances
- Deploy Docker container
- Serverless container service
- Azure Database for PostgreSQL
Option 2: AKS (Kubernetes)
- Deploy using k8s manifests
- Managed Kubernetes service
- Integrate with Azure services
Option 3: Virtual Machines
- Similar to EC2/Compute Engine
- Deploy with Docker Compose
- Azure Database for PostgreSQL
Option 1: App Platform
- Deploy from GitHub repo
- Managed platform (like Heroku)
- Managed PostgreSQL database
Option 2: Kubernetes
- Deploy using k8s manifests
- DigitalOcean Kubernetes (DOKS)
Option 3: Droplets
- Deploy with Docker Compose
- Managed PostgreSQL database
For direct Python deployment without Docker:
1. System requirements:

   ```bash
   # Ubuntu/Debian
   sudo apt install python3.11 python3-pip postgresql nginx

   # Install uv (Python package manager)
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

2. Application setup:

   ```bash
   git clone https://github.com/adcontextprotocol/salesagent.git
   cd salesagent
   uv sync
   uv run python migrate.py
   ```

3. Systemd services:
   - Create service files for the MCP server, Admin UI, and A2A server
   - Use gunicorn or uvicorn for production serving
   - Configure nginx as a reverse proxy

4. Database:
   - Install and configure PostgreSQL
   - Create the database and user
   - Run migrations
Example systemd service:

```ini
[Unit]
Description=AdCP Sales Agent MCP Server
After=network.target postgresql.service

[Service]
Type=simple
User=adcp
WorkingDirectory=/opt/salesagent
Environment="DATABASE_URL=postgresql://user:pass@localhost/adcp"
ExecStart=/home/adcp/.local/bin/uv run python -m src.core.main
Restart=always

[Install]
WantedBy=multi-user.target
```

Heroku:
- Deploy using Heroku's buildpacks
- Add Heroku Postgres addon
- Configure environment variables via Heroku CLI
Railway:
- Deploy from GitHub repo
- Auto-detects Docker or Python
- Managed PostgreSQL available
Render:
- Deploy Docker container or Python app
- Managed PostgreSQL database
- Auto SSL and global CDN
Need help deploying to your platform?
We're here to support your deployment approach:
- Check existing docs - This guide covers common platforms
- Docker is universal - If you can run Docker, you can deploy this
- Open an issue - Share your deployment target and any challenges
- Contribute back - Add deployment guides for your platform
Common requirements across all platforms:
- Python 3.11+ (if not using Docker)
- PostgreSQL (production) or SQLite (dev/testing)
- Environment variables for configuration
- HTTPS/SSL for production
- Health check endpoint at `/health`
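The `/health` requirement can be satisfied by any endpoint that returns 200 when the service is up. A minimal stdlib sketch for illustration (the real servers implement their own handlers; `HealthHandler` and `start_health_server` are hypothetical names):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, format, *args):
        pass  # suppress per-request logging

def start_health_server(port=0):
    """Serve /health on a daemon thread; port 0 lets the OS pick a free port."""
    server = HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A richer check would also verify database connectivity before reporting `ok`, so load balancers only route to instances that can actually serve requests.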
We aim to support your chosen infrastructure - don't hesitate to reach out!