
Add CPU and GPU deployment modes #87

Open
satra wants to merge 1 commit into codex/kiva-feature-plan from codex/gpu-or-cpu-deployment

Conversation

@satra (Collaborator) commented Mar 8, 2026

Summary

  • add explicit CPU vs GPU deployment controls for the server build and runtime
  • document the GPU-sensitive functions and deployment workflow
  • keep CPU deployments working when ONNX pose export dependencies are missing

Testing

  • python3 -m unittest discover -s tests

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the deployment flexibility of the Riverst server by introducing explicit support for both CPU-only and GPU-accelerated environments. It provides clear configuration options for building and running the server, allowing users to optimize performance based on their available hardware. A key improvement is the graceful handling of GPU-specific dependencies, ensuring that CPU deployments can proceed without issues even if optional GPU components are not present. This change makes the application more adaptable and accessible across a wider range of computing infrastructures.

Highlights

  • Deployment Mode Controls: Introduced explicit controls for CPU and GPU deployment modes, affecting both server build and runtime configurations.
  • Documentation: Documented GPU-sensitive functions and the overall deployment workflow for both CPU and GPU environments.
  • Robust CPU Deployments: Ensured CPU deployments remain functional by implementing a fallback mechanism when ONNX pose export dependencies are unavailable.
  • Runtime Device Policy: Added a new RIVERST_COMPUTE_DEVICE environment variable to let users set the runtime device policy (auto or CPU); see the sketch after this list.
  • Docker Configuration: Updated Docker configurations to support CPU and GPU build targets and runtime settings.
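
To make the new runtime policy concrete, here is a minimal sketch of how get_best_device might honor RIVERST_COMPUTE_DEVICE. The variable name, the auto/CPU values, and the function name come from this PR; the module layout and exact fallback order are assumptions, and the real implementation in src/server/bot/utils/device_utils.py may differ.

    # Illustrative sketch only -- not the PR's actual device_utils code.
    import os

    import torch

    def get_best_device() -> str:
        """Pick a torch device string, honoring RIVERST_COMPUTE_DEVICE."""
        policy = os.environ.get("RIVERST_COMPUTE_DEVICE", "auto").strip().lower()
        if policy == "cpu":
            # Hard pin: never probe CUDA, even if a GPU happens to be visible.
            return "cpu"
        # "auto" (the default): prefer CUDA when it is actually usable.
        return "cuda" if torch.cuda.is_available() else "cpu"

Under a policy like this, a CPU-only deployment only needs RIVERST_COMPUTE_DEVICE=cpu in its environment (the entry added to env.example); no code changes are required.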

Changelog
  • README.md
    • Updated documentation with GPU Docker Compose commands, GPU-specific pip install instructions, and how to force CPU inference.
  • docker-compose.gpu.yaml
    • Added a new Docker Compose override file for GPU-specific configurations.
  • docker-compose.yaml
    • Modified to include RIVERST_DEPLOYMENT_TARGET build argument and RIVERST_COMPUTE_DEVICE environment variable.
  • docs/gpu-cpu-deployment-plan.md
    • Added a new document detailing the GPU/CPU deployment strategy, findings, decisions, and implementation notes.
  • notes/first_steps_to_deploy.md
    • Updated deployment notes with guidance for GPU and non-GPU EC2 instances, NVIDIA driver requirements, and CPU-only configuration.
  • src/server/Dockerfile
    • Modified to accept RIVERST_DEPLOYMENT_TARGET and conditionally install requirements.txt or requirements.gpu.txt.
  • src/server/README.md
    • Updated virtual environment setup, added GPU installation instructions, and included Docker build/run commands for GPU.
  • src/server/bot/processors/video/processor.py
    • Refactored pose inferencer loading to fall back to PyTorch YOLO when ONNX export fails; see the illustrative sketch after this changelog.
  • src/server/bot/utils/device_utils.py
    • Introduced new environment variables and functions to manage compute device policy and deployment targets, and updated get_best_device to respect these policies.
  • src/server/env.example
    • Added RIVERST_COMPUTE_DEVICE environment variable with a default value.
  • src/server/requirements.gpu.txt
    • Added a new requirements file for GPU-specific dependencies, including ONNX related packages.
  • src/server/tests/test_device_utils.py
    • Added a new test file containing unit tests for the device_utils module.
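
The processor.py fallback above can be pictured roughly as follows. This is a sketch under the assumption of an ultralytics-style YOLO backend; the function name, weights path handling, and export call are illustrative, not the PR's actual code.

    import logging

    logger = logging.getLogger(__name__)

    def load_pose_inferencer(weights_path: str):
        """Prefer an ONNX-exported pose model; fall back to PyTorch YOLO."""
        from ultralytics import YOLO  # assumed pose backend

        model = YOLO(weights_path)
        try:
            # Exporting needs the optional onnx/onnxruntime packages from
            # requirements.gpu.txt; a CPU-only install may not have them.
            onnx_path = model.export(format="onnx")
            return YOLO(onnx_path)
        except Exception:
            logger.warning(
                "ONNX pose export failed; using PyTorch YOLO instead",
                exc_info=True,
            )
            return model

The design point is that the ONNX path is an optimization, not a requirement: any failure during export simply leaves the already-loaded PyTorch model in use, which is what keeps CPU deployments working when the export dependencies are missing.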

@gemini-code-assist bot left a comment

Code Review

This pull request effectively adds explicit controls for CPU and GPU deployments, which is a great enhancement for flexibility. The changes are well-implemented across Docker configurations, application code, and documentation. Using environment variables (RIVERST_DEPLOYMENT_TARGET and RIVERST_COMPUTE_DEVICE) provides a clear and standard way to manage deployment modes. The fallback mechanism in the VideoProcessor for when ONNX is unavailable is a nice touch for ensuring CPU deployments remain functional. The addition of unit tests for the new device selection logic is also a great practice. I've found one potential issue in the test setup that could lead to flaky tests, for which I've left a specific comment and suggestion.

Comment on lines +20 to +28
    def setUp(self):
        self.modules_backup = {
            "torch": sys.modules.get("torch"),
            "bot.utils.device_utils": sys.modules.get("bot.utils.device_utils"),
            "bot.utils": sys.modules.get("bot.utils"),
            "bot": sys.modules.get("bot"),
        }
        sys.modules["torch"] = make_fake_torch()
        self.device_utils = importlib.import_module("bot.utils.device_utils")

Severity: medium

The current test setup for mocking torch might be flaky. If bot.utils.device_utils or its parent packages are imported by another test before this one runs, importlib.import_module will return the cached module. This cached module would have been loaded with the original torch module, not your mock, causing tests to fail or behave unexpectedly.

To ensure the module under test always uses the mocked torch, you should explicitly remove it and its parent packages from sys.modules before re-importing. This forces a reload with the mock in place. The tearDown method will correctly restore the original state.

    def setUp(self):
        self.modules_backup = {
            "torch": sys.modules.get("torch"),
            "bot.utils.device_utils": sys.modules.get("bot.utils.device_utils"),
            "bot.utils": sys.modules.get("bot.utils"),
            "bot": sys.modules.get("bot"),
        }
        # Force reload of module under test and its parents by removing from cache
        for module_name in ["bot.utils.device_utils", "bot.utils", "bot"]:
            sys.modules.pop(module_name, None)

        sys.modules["torch"] = make_fake_torch()
        self.device_utils = importlib.import_module("bot.utils.device_utils")
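
For completeness, the tearDown the reviewer refers to would restore the cached modules captured in modules_backup, presumably along these lines (illustrative; the actual test file may differ):

    def tearDown(self):
        # Put back (or drop) every module captured in setUp.
        for name, module in self.modules_backup.items():
            if module is None:
                sys.modules.pop(name, None)
            else:
                sys.modules[name] = module

Restoring from the backup rather than blindly deleting keeps the fix symmetric: setUp evicts the cached modules so the fake torch is seen on import, and tearDown returns sys.modules to exactly its pre-test state.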
