update stable diffusion demo requirements (#20914)
### Description
Update the Docker image and package versions for the stable diffusion demo.

### Motivation and Context
Update onnx to 1.16 for security.
tianleiwu authored Jun 4, 2024
1 parent 51bc535 commit 6dfdef7
Showing 4 changed files with 12 additions and 10 deletions.
Changed file 1 of 4:
@@ -36,16 +36,18 @@ cd onnxruntime
Install nvidia-docker using [these instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker).

```
-docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.10-py3 /bin/bash
+docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:24.04-py3 /bin/bash
```
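If the container starts as expected, a quick sanity check might look like the following (a sketch, not part of the commit; `nvidia-smi` and the PyTorch bundled in the NGC image are assumed to be available):

```
# Confirm the GPUs are visible inside the container
nvidia-smi
# Confirm the bundled PyTorch sees CUDA
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```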

#### Build onnxruntime from source
The cuDNN in the container might not be compatible with the official onnxruntime-gpu package, so it is recommended to build from source instead.

After launching the Docker container, you can build and install the onnxruntime-gpu wheel as follows.
```
-export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
+export CUDACXX=/usr/local/cuda/bin/nvcc
git config --global --add safe.directory '*'
-sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
-  --cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
+sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.4 \
+  --cuda_home /usr/local/cuda --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
--use_tensorrt --tensorrt_home /usr/src/tensorrt \
--cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 \
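
# After the build finishes, the resulting wheel can be installed and verified
# roughly as follows (a sketch, not part of the commit, assuming the default
# build output directory):
pip install build/Linux/Release/dist/onnxruntime_gpu-*.whl
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"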
Changed file 2 of 4:
@@ -6,7 +6,7 @@ onnxruntime-gpu>=1.16.2
py3nvml

# The version of cuda-python shall be compatible with the installed CUDA version.
-# For example, if your CUDA version is 12.1, you can install cuda-python 12.1.
+# For the demo of the TensorRT execution provider and TensorRT.
cuda-python==11.8.0

# For Windows, cuda-python needs the following
Changed file 3 of 4:
@@ -6,7 +6,7 @@
py3nvml

# The version of cuda-python shall be compatible with the installed CUDA version.
-# For example, if your CUDA version is 12.1, you can install cuda-python 12.1.
+# For the demo of the TensorRT execution provider and TensorRT.
cuda-python>=12.1.0

# For Windows, cuda-python needs the following
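Regarding the cuda-python compatibility comment in both requirements files above, one rough way to check that the pinned cuda-python matches the CUDA toolkit in the container (a sketch, not part of the commit):

```
nvcc --version | grep release             # CUDA toolkit version
pip show cuda-python | grep -i version    # installed cuda-python version
```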
Changed file 4 of 4:
@@ -1,8 +1,8 @@
-diffusers==0.24.0
-transformers==4.38.0
+diffusers==0.28.0
+transformers==4.41.2
numpy>=1.24.1
accelerate
-onnx==1.14.1
+onnx==1.16.0
coloredlogs
packaging
# Using a newer version of protobuf might cause a crash
@@ -11,7 +11,7 @@ psutil
sympy
controlnet_aux==0.0.7
# The following are for SDXL
-optimum==1.14.1
+optimum==1.20.0
safetensors
invisible_watermark
# A newer version of opencv-python might encounter the module 'cv2.dnn' has no attribute 'DictValue' error
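Once the pinned requirements above are installed, the key versions can be spot-checked with a one-liner like this (a sketch; only packages named in the diff are imported):

```
python -c "import onnx, diffusers, transformers; print(onnx.__version__, diffusers.__version__, transformers.__version__)"
```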
