
Using GPUs inside containers

⚡ Requirement: nerdctl >= 0.9

nerdctl provides docker-compatible NVIDIA GPU support.

Prerequisites

  • NVIDIA Drivers
    • The same requirement as when using GPUs with Docker. For details, please refer to the documentation by NVIDIA.
  • nvidia-container-cli

Options for nerdctl run --gpus

nerdctl run --gpus is compatible with docker run --gpus.

You can specify the number of GPUs to use via the --gpus option. The following example exposes all available GPUs:

nerdctl run -it --rm --gpus all nvidia/cuda:9.0-base nvidia-smi

You can also pass detailed configuration to the --gpus option as a list of comma-separated key-value pairs. The following options are available:

  • count: number of GPUs to use. all exposes all available GPUs.
  • device: IDs of GPUs to use. Either GPU UUIDs or device indexes can be specified.
  • capabilities: Driver capabilities. If unset, the default capabilities utility and compute are used.

The following example exposes a specific GPU to the container:

nerdctl run -it --rm --gpus '"capabilities=utility,compute",device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a' nvidia/cuda:9.0-base nvidia-smi
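As another sketch, you can combine the count key with capabilities to limit how many GPUs are attached. This assumes the host exposes at least two GPUs and that count is honored the same way as in docker run --gpus:

```shell
# Expose only two of the available GPUs to the container
# (the quoting style for the capabilities list follows the example above).
nerdctl run -it --rm --gpus 'count=2,"capabilities=utility,compute"' nvidia/cuda:9.0-base nvidia-smi
```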

Fields for nerdctl compose

nerdctl compose also supports GPUs following compose-spec.

You can use GPUs with Compose when you specify one of the following values in the capabilities list under services.demo.deploy.resources.reservations.devices:

  • gpu
  • nvidia
  • all allowed capabilities for nerdctl run --gpus

Available fields are the same as nerdctl run --gpus.

The following example exposes all available GPUs to the container:

version: "3.8"
services:
  demo:
    image: nvidia/cuda:9.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
          - capabilities: ["utility"]
            count: all
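Since the available fields mirror nerdctl run --gpus, a Compose file can also pin specific GPUs rather than requesting a count. A minimal sketch, assuming the compose-spec device_ids field and a host GPU with index 0 (a GPU UUID would also work):

```yaml
version: "3.8"
services:
  demo:
    image: nvidia/cuda:9.0-base
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
          - capabilities: ["utility"]
            device_ids: ["0"]   # pin the container to GPU index 0
```

Note that device_ids and count are mutually exclusive in the compose-spec, so specify only one of them per device reservation.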