fix(gpu): add WSL2 GPU support via CDI mode #411
Closed
tyeth-ai-assisted wants to merge 3 commits into NVIDIA:main from
Conversation
WSL2 virtualises GPU access through /dev/dxg instead of native /dev/nvidia*
device nodes, which breaks the entire NVIDIA k8s device plugin detection
chain. Three changes fix this:
1. Detect WSL2 in cluster-entrypoint.sh and configure CDI mode (sketched after this list):
- Generate CDI spec with nvidia-ctk (auto-detects WSL mode)
- Patch the spec to include libdxcore.so (nvidia-ctk bug omits it)
- Switch nvidia-container-runtime from auto to cdi mode
- Deploy a job to label the node with pci-10de.present=true
(NFD can't see NVIDIA PCI on WSL2's virtualised bus)
2. Bundle the nvidia-device-plugin Helm chart in the cluster image
instead of fetching from the upstream GitHub Pages repo at startup.
The repo URL (nvidia.github.io/k8s-device-plugin/index.yaml)
currently returns 404.
3. Update the HelmChart CR to reference the bundled local chart
tarball via the k3s static charts API endpoint.
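A minimal sketch of how change 1 could be wired into cluster-entrypoint.sh, assuming the stock nvidia-ctk CLI; this illustrates the mechanism rather than reproducing the PR's literal diff:

```bash
# Sketch of the WSL2 branch in cluster-entrypoint.sh (illustrative, not the
# PR's exact diff). Assumes nvidia-ctk is on PATH inside the cluster image.
if [ "${GPU_ENABLED:-false}" = "true" ] && [ -e /dev/dxg ]; then
    echo "WSL2 detected (/dev/dxg present): configuring GPU access via CDI"

    # nvidia-ctk notices /dev/dxg and generates a WSL-mode spec on its own.
    nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

    # Upstream bug (nvidia-container-toolkit#1739): the generated spec omits
    # libdxcore.so, the bridge from Linux NVML to the Windows DirectX kernel.
    if ! grep -q 'libdxcore.so' /etc/cdi/nvidia.yaml; then
        echo "WARN: CDI spec lacks libdxcore.so; a bind mount must be patched in"
        # Patching elided here; the spec-from-scratch sketch later in this
        # thread shows the shape of the required mount entry.
    fi

    # Pin the runtime to CDI rather than letting it auto-detect.
    nvidia-ctk config --in-place --set nvidia-container-runtime.mode=cdi
fi
```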
Closes NVIDIA#404
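For reference, changes 2 and 3 amount to shipping the chart tarball in k3s's static directory and pointing the HelmChart CR at it. A hedged sketch follows (the bundle path and chart version are placeholders); note this part was later dropped per the review comment below:

```bash
# Sketch of changes 2-3 (later dropped): serve the bundled chart from k3s's
# static file endpoint instead of the upstream Helm repo. The /opt path and
# chart version are placeholders.
mkdir -p /var/lib/rancher/k3s/server/static/charts
cp /opt/bundled-charts/nvidia-device-plugin-0.14.5.tgz \
   /var/lib/rancher/k3s/server/static/charts/

cat >/var/lib/rancher/k3s/server/manifests/nvidia-device-plugin.yaml <<'EOF'
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: nvidia-device-plugin
  namespace: kube-system
spec:
  # k3s templates %{KUBERNETES_API}% and serves server/static at /static.
  chart: https://%{KUBERNETES_API}%/static/charts/nvidia-device-plugin-0.14.5.tgz
  targetNamespace: kube-system
EOF
```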
The upstream Helm repo URL works fine; remove the unnecessary chart bundling and local reference changes.
WSL2 virtualises GPU access through /dev/dxg instead of native /dev/nvidia* device nodes, which breaks the entire NVIDIA k8s device plugin detection chain. This patch detects WSL2 at container startup and applies fixes:
1. Generate a CDI spec with nvidia-ctk (auto-detects WSL mode)
2. Add per-GPU UUID and index device entries to the CDI spec (nvidia-ctk only generates name=all, but the device plugin assigns GPUs by UUID)
3. Bump the CDI spec version from 0.3.0 to 0.5.0 (library minimum)
4. Patch the spec to include libdxcore.so (an nvidia-ctk bug omits it; this library bridges Linux NVML to the Windows DirectX GPU kernel)
5. Switch nvidia-container-runtime from auto to cdi mode
6. Deploy a job to label the node with pci-10de.present=true (NFD can't see NVIDIA PCI devices on WSL2's virtualised bus; a sketch of this job follows)

Closes NVIDIA#404
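Step 6 above can be a one-shot Job that applies the label NFD would normally set. A hedged sketch: the manifest names and image are illustrative, the NFD label prefix is an assumption, and the RBAC that lets the pod label nodes is omitted:

```bash
# Sketch of the step-6 node-labeling job (names, image, and the NFD label
# prefix are assumptions; RBAC for labeling nodes is omitted for brevity).
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: wsl2-gpu-node-label
  namespace: kube-system
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: label
        image: bitnami/kubectl:latest
        command: ["sh", "-c"]
        # NFD can't see vendor 10de on WSL2's virtualised PCI bus, so set the
        # label the device plugin's node selector expects by hand.
        args: ["kubectl label node \"$(NODE_NAME)\" feature.node.kubernetes.io/pci-10de.present=true --overwrite"]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
EOF
```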
Thank you for your interest in contributing to OpenShell, @tyeth-ai-assisted. This project uses a vouch system for first-time contributors. Before submitting a pull request, you need to be vouched by a maintainer. To get vouched:
See CONTRIBUTING.md for details.
Thank you for your submission! We ask that you sign our Developer Certificate of Origin before we can accept your contribution. You can sign the DCO by adding a comment below using this text: I have read the DCO document and I hereby sign the DCO. You can retrigger this bot by commenting recheck in this Pull Request. Posted by the DCO Assistant Lite bot.
I have read the DCO document and I hereby sign the DCO.
tyeth-ai-assisted pushed a commit to tyeth-ai-assisted/NemoClaw that referenced this pull request Mar 17, 2026
WSL2 GPU support:
- Add wsl2-gpu-fix.sh that applies CDI mode, libdxcore.so injection, and node labeling after gateway start (workaround until OpenShell ships native WSL2 support via NVIDIA/OpenShell#411)
- Hook it into both onboard.js (interactive wizard) and setup.sh (legacy script) so it runs automatically after gateway creation
- Write a complete CDI spec from scratch instead of fragile sed patching of the nvidia-ctk generated spec

Ollama on Linux:
- setup.sh only created the ollama-local provider on macOS (Darwin)
- Now detects ollama on any platform (Linux/WSL2 included)
- Enables local GPU inference via ollama for WSL2 users

Closes NVIDIA/NemoClaw#TBD
See also: NVIDIA/OpenShell#404, NVIDIA/OpenShell#411
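The "complete CDI spec from scratch" approach might look roughly like this; the fields follow the CDI 0.5.0 schema, but the paths, device entries, and overall structure are an assumption, not the actual wsl2-gpu-fix.sh contents:

```bash
#!/usr/bin/env bash
# Sketch of writing a WSL2 CDI spec from scratch instead of patching the
# nvidia-ctk output (illustrative; not the real wsl2-gpu-fix.sh).
set -euo pipefail
SPEC=/etc/cdi/nvidia.yaml
mkdir -p /etc/cdi

cat >"$SPEC" <<'EOF'
cdiVersion: "0.5.0"   # library minimum; nvidia-ctk still emits 0.3.0
kind: nvidia.com/gpu
devices:
- name: all
  containerEdits:
    deviceNodes:
    - path: /dev/dxg  # WSL2's single virtual GPU device node
EOF

# Per-GPU index and UUID entries: the k8s device plugin allocates by UUID,
# but on WSL2 every entry still resolves to the same /dev/dxg node.
nvidia-smi --query-gpu=index,uuid --format=csv,noheader |
while IFS=',' read -r idx uuid; do
    for name in "$idx" "${uuid# }"; do
        printf -- '- name: "%s"\n  containerEdits:\n    deviceNodes:\n    - path: /dev/dxg\n' "$name" >>"$SPEC"
    done
done

# Common edits: bind-mount the WSL driver-store library that upstream
# nvidia-ctk forgets (nvidia-container-toolkit#1739).
cat >>"$SPEC" <<'EOF'
containerEdits:
  mounts:
  - containerPath: /usr/lib/wsl/lib/libdxcore.so
    hostPath: /usr/lib/wsl/lib/libdxcore.so
    options: [ro, nosuid, nodev, bind]
EOF
```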
Summary
- Detect WSL2 (/dev/dxg present) and automatically configure CDI-based GPU injection
- Work around missing libdxcore.so and the CDI spec missing per-GPU UUID entries
- All changes live in cluster-entrypoint.sh; no Rust, Dockerfile, or manifest changes needed

What it does
When GPU_ENABLED=true and /dev/dxg exists (WSL2), the entrypoint:
1. Generates a CDI spec with nvidia-ctk cdi generate (auto-detects WSL mode)
2. Adds per-GPU UUID and index device entries (nvidia-ctk only generates name=all, but the device plugin assigns GPUs by UUID)
3. Patches the spec to include libdxcore.so (upstream nvidia-ctk bug, reported as "nvidia-ctk cdi generate: libdxcore.so not found on WSL2 despite being present", nvidia-container-toolkit#1739)
4. Switches nvidia-container-runtime from auto to cdi mode
5. Deploys a job to label the node with pci-10de.present=true (NFD can't detect NVIDIA PCI on WSL2's virtualised bus)

On non-WSL2 hosts, the new code path is never entered (/dev/dxg doesn't exist).

Testing
Verified on:
- nvidia-device-plugin 1/1 Running
- nvidia.com/gpu: 1 advertised
- nvidia-smi works inside sandbox pods
- full NemoClaw onboard + sandbox creation + local inference (ollama nemotron 70B) working end-to-end

Related
- nvidia-ctk cdi generate misses libdxcore.so on WSL2 (nvidia-container-toolkit#1739)

Agent Investigation
Diagnosed using openshell doctor commands. Full diagnostic chain documented in #404.

🤖 Generated with Claude Code