
optimize plex sync: docker exec + single async loop + version-controlled crontab #35

Open

steezeburger wants to merge 1 commit into main from claude/discord-bot-auto-restart-U1jk0

Conversation


@steezeburger steezeburger commented May 8, 2026

Follow-up to #34. Three changes that together address the OOM-kills-bot problem caused by the every-10-minute plex sync.

Summary

  1. bin/dcp-django-admin.sh prefers docker compose exec against the running web container instead of docker compose run --rm. Each invocation no longer spins up a brand-new ephemeral container (~100–150 MB peak on this image), which is what was pushing the host into OOM territory and taking the bot down. Falls back to compose run --rm when web isn't running (e.g. during the very first migrate). Also reads POSTGRES_HOST from .env directly instead of spawning a container just to inspect env, and skips the confirmation prompt when stdin isn't a TTY (so cron / GitHub Actions / SSH don't hang).

  2. SyncWithPlexCommand async refactor. Phase 1 walks Plex movies and persists them; Phase 2 runs ONE asyncio.run over all enrichments inside a single shared aiohttp.ClientSession via asyncio.gather. The previous code spun up a fresh event loop and ClientSession per movie. EnrichMovieActorsCommand.execute now takes an optional session= parameter so the existing manual backfill (manage.py enrich_actors) keeps working unchanged. A rough sketch of this shape follows this list.

  3. crontab.txt + just install-crontab. Commits the existing schedule to the repo and adds a recipe to install it. ⚠️ See open question below — this is host-side cron, which we may want to revisit.
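To make item 2 concrete, here is a rough sketch of the two-phase shape. This is not the code in app/plex/commands.py: only the class name, the session= parameter, and the single-loop, shared-session, gather structure come from this PR. The method bodies, whether execute is natively async, and how the no-session fallback is handled are guesses.

    import asyncio
    import aiohttp


    class EnrichMovieActorsCommand:
        """Sketch only; the real command looks up a movie's cast and persists it."""

        async def execute(self, movie, session: aiohttp.ClientSession | None = None):
            # When the sync fan-out passes a shared session, reuse it; otherwise
            # (the manual `manage.py enrich_actors` path) open a private one so
            # the old call shape keeps working.
            if session is not None:
                return await self._enrich(movie, session)
            async with aiohttp.ClientSession() as own_session:
                return await self._enrich(movie, own_session)

        async def _enrich(self, movie, session):
            await asyncio.sleep(0)  # placeholder for the metadata fetch + DB write
            return movie


    def sync_with_plex(plex_movies):
        # Phase 1 (synchronous): walk the Plex library and persist each movie.
        persisted = list(plex_movies)  # stand-in for the real persistence step

        # Phase 2: ONE asyncio.run, ONE shared ClientSession, all enrichments
        # fanned out with asyncio.gather, rather than a new event loop and
        # session per movie.
        async def _enrich_all(movies):
            command = EnrichMovieActorsCommand()
            async with aiohttp.ClientSession() as session:
                await asyncio.gather(
                    *(command.execute(m, session=session) for m in movies)
                )

        asyncio.run(_enrich_all(persisted))

The point of the optional session= is visible in execute: the every-10-minute sync shares one connection pool across the whole batch, while manage.py enrich_actors can keep calling it with no session argument.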

Open question: host cron vs cron-in-docker

What's in this PR is host cron — crontab.txt is installed onto the VPS via just install-crontab and the system cron daemon still runs the schedule. It only version-controls the entries.

After committing this, we agreed the better shape is cron in docker — either a dedicated scheduler service running cron -f (mirrors brainspread's brainspread-scheduler-1) or cron inside the existing web container. That's not in this PR and would be a separate change that drops crontab.txt/install-crontab and adds the chosen approach.

Two ways to land this:

  • Merge as-is — get the docker-exec + async-refactor wins immediately; remove the host-cron pieces in a follow-up alongside introducing the scheduler service.
  • Strip the crontab pieces from this PR — keep the description tight, ship docker-exec + async refactor only, and tackle scheduling holistically next.

Either is fine — flagging so you can decide.

What changed (4 files)

  • app/plex/commands.py — async refactor of SyncWithPlexCommand + optional session= on EnrichMovieActorsCommand.execute
  • bin/dcp-django-admin.sh — compose exec preferred over compose run, non-TTY-safe, reads env from .env
  • crontab.txt — new, byte-identical to what's currently installed on the VPS
  • justfile — adds install-crontab recipe

Deploy steps

The auto-deploy added in #34 should deploy this PR on merge. After it lands:

  1. Confirm services are healthy:

    docker ps --format 'table {{.Names}}\t{{.Status}}'
  2. Diagnose enrichment backfill scope. The OOM kills mid-sync caused recent movies to never get enriched — the next sync's latest_movie early-exit then skips them permanently (a sketch for listing the affected movies follows these steps):

    docker compose exec -T web python manage.py shell -c "
    from plex.models import PlexMovie
    total = PlexMovie.objects.count()
    enriched = PlexMovie.objects.filter(actors_enriched_at__isnull=False).count()
    print(f'enriched: {enriched}/{total} ({100*enriched/max(total,1):.1f}%)')
    "
  3. Backfill if the percentage is low:

    docker compose exec -T web python manage.py enrich_actors
  4. (Optional, only if keeping the host-cron pieces) Install the version-controlled crontab — functionally a no-op since entries are byte-identical, but moves source-of-truth to the repo:

    cd /root/oscarr-stuff/oscarr/packages/django-app
    just install-crontab
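
Relatedly, if step 2 reports a low percentage, the complement of that query lists the movies the early-exit skip left behind. This is just a sketch run the same way as step 2; the only thing it assumes beyond the fields named in this PR is the default auto-increment primary key, so adjust if PlexMovie orders differently:

    docker compose exec -T web python manage.py shell -c "
    from plex.models import PlexMovie
    skipped = PlexMovie.objects.filter(actors_enriched_at__isnull=True)
    print(f'{skipped.count()} movies still need enrichment')
    for movie in skipped.order_by('-pk')[:20]:  # newest rows first, assuming pk tracks insert order
        print(movie)
    "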

Test plan

  • Lint and Test passes
  • Deploy to Production fires automatically and SSHes into the VPS successfully
  • After merge: oscarr-web-1, oscarr-bot-1, oscarr-db-1 all Up
  • just sync-with-plex does not spawn a new container (verify via docker events --filter event=create during a sync)
  • Cron-driven sync at the next */10 * * * * boundary completes without prompt-hang and without OOM
  • manage.py enrich_actors still works (signature compat for the manual backfill)

https://claude.ai/code/session_01RJrZb4X5Ywuzq2SjeFXgN1

optimize plex sync: docker exec + single async loop + version-controlled crontab

Three independent improvements that together address the OOM-kills-bot
problem caused by the every-10-minute plex sync:

1. dcp-django-admin.sh prefers `docker compose exec` against the
   running web container over `docker compose run --rm`. Each sync
   invocation no longer pays the cost of spinning up a fresh ~150 MB
   ephemeral container. Falls back to `compose run` when web isn't
   running (e.g. first migrate). Also reads POSTGRES_HOST from .env
   directly instead of spawning a container just to inspect env, and
   skips the confirmation prompt when stdin isn't a TTY (so cron and
   GitHub Actions don't hang).

2. SyncWithPlexCommand now does sync-then-async-fanout: phase 1 walks
   Plex movies and persists them; phase 2 runs ONE asyncio.run wrapping
   all enrichments inside a single shared aiohttp.ClientSession via
   asyncio.gather. The old code spun up a fresh event loop and
   ClientSession per movie. EnrichMovieActorsCommand.execute now takes
   an optional session parameter so the existing manual backfill
   command (manage.py enrich_actors) keeps working unchanged.

3. crontab.txt commits the schedule to the repo, plus a
   `just install-crontab` recipe to apply it on the server. No
   scheduler container required — the existing schedule is now
   version-controlled instead of living only on the VPS.
