Integration of Scope with ai-runner for running Scope pipelines over the Livepeer network.
🚧 This project is currently in alpha. 🚧
## Prerequisites

- uv package manager
- NVIDIA GPU with >= 24GB VRAM
## Installation

```bash
git clone https://github.com/daydreamlive/scope-runner.git
cd scope-runner
uv sync
```

Download the required models before running:
```bash
mkdir -p ~/.daydream-scope/models
uv run scope-runner --prepare-models
```

Models are stored in `~/.daydream-scope/models` by default. This location can be overridden with either the `DAYDREAM_SCOPE_MODELS_DIR` or the `MODEL_DIR` environment variable. When `MODEL_DIR` is set (e.g. when run by Orchestrators), models go to `$MODEL_DIR/Scope--models/`.
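The directory resolution described above can be sketched as follows. This is illustrative only: `resolve_models_dir` is a hypothetical helper, and the precedence of `MODEL_DIR` over `DAYDREAM_SCOPE_MODELS_DIR` is an assumption, not taken from the actual scope-runner source.

```python
# Hypothetical sketch of the model-directory resolution described above.
import os
from pathlib import Path

def resolve_models_dir() -> Path:
    # When MODEL_DIR is set (e.g. by Orchestrators), models go under
    # $MODEL_DIR/Scope--models/. (Assumed to take precedence.)
    model_dir = os.environ.get("MODEL_DIR")
    if model_dir:
        return Path(model_dir) / "Scope--models"
    # DAYDREAM_SCOPE_MODELS_DIR replaces the default location outright.
    override = os.environ.get("DAYDREAM_SCOPE_MODELS_DIR")
    if override:
        return Path(override)
    # Default location.
    return Path.home() / ".daydream-scope" / "models"
```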
## Usage

```bash
uv run scope-runner
```

The server starts on port 8000.
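To confirm the server came up, you can probe port 8000 with a generic TCP check; a minimal sketch (the `port_open` helper is ours and does not assume any scope-runner API endpoint):

```python
# Generic TCP reachability check; does not assume any scope-runner endpoint.
import socket

def port_open(host: str = "localhost", port: int = 8000, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```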
## Docker

```bash
# Build
docker build -t scope-runner .

# Run
docker run --gpus all -v /path/to/models:/models -p 8000:8000 scope-runner
```

## Testing with the go-livepeer box

The go-livepeer box provides an easy way to test the full Livepeer AI stack locally.
Common setup for both methods below (local or dockerized):

```bash
cd /path/to/go-livepeer
export PIPELINE=scope
# Easier to get started; uses Docker for go-livepeer nodes. Skip if you have
# already set up the local go-livepeer dev environment.
export DOCKER=true
```

### Local runner

Start scope-runner locally and point the box to it:
- Start scope-runner locally:

  ```bash
  uv run scope-runner  # Starts on http://localhost:8000
  ```

- Create an `aiModels.json` file to point to your local runner:

  ```json
  [
    {
      "pipeline": "live-video-to-video",
      "model_id": "scope",
      "url": "http://localhost:8000"
    }
  ]
  ```

- Start the box with your config:

  ```bash
  export AI_MODELS_JSON=/path/to/aiModels.json
  REBUILD=false make box
  ```

  `REBUILD=false` skips building Docker images since we're running the pipeline locally. The box may still download the `go-livepeer` Docker image if it is not available locally.

- Stream and playback:

  ```bash
  make box-stream    # Start streaming
  make box-playback  # Watch the output
  ```

  On remote/headless machines, set `RTMP_OUTPUT` to stream to an external endpoint instead:

  ```bash
  export RTMP_OUTPUT=rtmp://rtmp.livepeer.com/live/$STREAM_KEY
  make box-stream
  ```
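Before pointing the box at the config file, a quick sanity check of its shape can save a debugging round-trip. A minimal sketch (the required fields are taken from the example above; the validator itself is illustrative and not part of go-livepeer):

```python
# Illustrative validator for the aiModels.json shape shown above.
import json

REQUIRED_KEYS = {"pipeline", "model_id", "url"}

def validate_ai_models(path: str) -> list:
    with open(path) as f:
        models = json.load(f)
    if not isinstance(models, list):
        raise ValueError("aiModels.json must be a JSON array")
    for i, entry in enumerate(models):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"entry {i} is missing keys: {sorted(missing)}")
    return models
```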
### Dockerized runner

Test the full Docker pipeline. This is closer to production and catches issues like missing dependencies, models, etc.
- Prepare Scope models (first time only):

  ```bash
  cd /path/to/ai-runner/runner
  PIPELINE=scope ./dl_checkpoints.sh --tensorrt
  export AI_MODELS_DIR=$(pwd)/models
  ```
- Start the box. Change to the `go-livepeer` directory and pick one:

  Option A - Full rebuild (slower; first time or after major changes):

  ```bash
  make box
  ```

  Option B - Incremental rebuild (faster, for iterating on scope-runner):

  ```bash
  REBUILD=false make box &  # Start box in background without rebuilding
  make box-runner           # Rebuild and restart only the runner
  ```
- Stream and playback (same as for the local runner):

  ```bash
  make box-stream    # Start streaming
  make box-playback  # Watch the output
  ```

  You can similarly use `RTMP_OUTPUT` on a headless machine.
For more details on creating custom pipelines, see the ai-runner custom pipeline guide. For more information on using the go-livepeer box, see its guide.
## Deployment

Scope Runner uses a two-stage deployment process managed via livepeer-infra:
| Environment | Image Tag | Trigger |
|---|---|---|
| Staging | `daydreamlive/scope-runner:main` | Push to `main` branch |
| Production | `daydreamlive/scope-runner:latest` | Git tag (ideally a semver, e.g. `v0.2.0`) |
Merging to `main` automatically builds and pushes the `:main` Docker image, which is auto-deployed to staging orchestrators.
To release to production:

- Update the version in `pyproject.toml`:

  ```toml
  [project]
  version = "0.2.0"
  ```

- Tag the release in git:

  ```bash
  git tag v0.2.0
  git push origin v0.2.0
  ```

  The tagged build creates the `:latest` image, which production Orchestrators use (including public Os).

- Create a GitHub Release on the releases page with release notes. This is good practice for sharing metadata about the release.
## License

See LICENSE.md.