
Add support for JetPack 6.1 build #3211

Merged: 11 commits, Oct 17, 2024
117 changes: 117 additions & 0 deletions docsrc/getting_started/jetpack.rst
@@ -0,0 +1,117 @@
.. _Torch_TensorRT_in_JetPack_6.1:

Overview
##################

JetPack 6.1
---------------------
NVIDIA JetPack 6.1 is the latest production release of JetPack 6. This release incorporates:

* CUDA 12.6
* TensorRT 10.3
* cuDNN 9.3
* DLFW 24.0

You can find more details about JetPack 6.1 here:

* https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
* https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html


Prerequisites
~~~~~~~~~~~~~~


Ensure your Jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash a Jetson board via SDK Manager:

* https://developer.nvidia.com/sdk-manager


Check the current JetPack version using:

.. code-block:: sh

apt show nvidia-jetpack
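
The JetPack version can also be extracted programmatically from that output, e.g. for use in scripts. A minimal sketch, operating on a hypothetical sample of ``apt show nvidia-jetpack`` output rather than a live query:

```shell
# hypothetical sample of `apt show nvidia-jetpack` output
apt_out="Package: nvidia-jetpack
Version: 6.1+b123
Priority: standard"
# take the Version: line, drop the field name, and strip the build suffix
jp_version=$(echo "$apt_out" | grep '^Version:' | cut -d ' ' -f 2 | cut -d '+' -f 1)
echo "$jp_version"
```

The same pipeline applied to the real ``apt show`` output yields the installed major.minor JetPack version.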

Ensure you have installed the JetPack dev components. This step is required if you need to build on the Jetson board.

You can install only the dev components that you require: for example, tensorrt-dev is the meta-package for all TensorRT development. Alternatively, install everything:

.. code-block:: sh

    # install all the nvidia-jetpack dev components
    sudo apt update
    sudo apt install nvidia-jetpack

Ensure you have CUDA 12.6 installed (this should be installed automatically from nvidia-jetpack):

.. code-block:: sh

# check the cuda version
nvcc --version
# if not installed or the version is not 12.6, install it via:
sudo apt update
sudo apt install cuda-toolkit-12-6

Ensure libcusparseLt.so exists at /usr/local/cuda/lib64/:

.. code-block:: sh

# if it does not exist, download it and copy it to the directory
wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/


Build torch_tensorrt
~~~~~~~~~~~~~~~~~~~~


Install bazel

.. code-block:: sh

wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
sudo mv bazelisk-linux-arm64 /usr/bin/bazel
sudo chmod +x /usr/bin/bazel

Install pip and the required Python packages:

.. code-block:: sh

# install pip
sudo apt install python3-pip

.. code-block:: sh

# install setuptools with a version below 71.*.*
python -m pip install setuptools==70.2.0

.. code-block:: sh

# install torch
wget https://developer.download.nvidia.cn/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
python -m pip install torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl

# install torchvision
# torchvision is not yet available for JetPack 6.1; it should become available in the future
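
The wheel filename above encodes the required Python tag and target platform (``cp310`` = CPython 3.10, ``linux_aarch64`` = 64-bit Arm Linux), so you can sanity-check that it matches your interpreter and architecture before installing. A minimal sketch parsing the dash-separated filename fields:

```shell
wheel="torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl"
# wheel filenames are dash-separated: name-version-pythontag-abitag-platform.whl
py_tag=$(echo "$wheel" | cut -d '-' -f 3)
platform=$(basename "$wheel" .whl | cut -d '-' -f 5)
echo "$py_tag $platform"
```

On the Jetson board, ``python --version`` and ``uname -m`` (which should report ``aarch64``) are the values to compare against.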


Build and Install torch_tensorrt wheel file


The torch_tensorrt version depends on the torch version. The torch version supported by JetPack 6.1 comes from DLFW 24.08/24.09 (torch 2.5.0).

Please make sure to build the torch_tensorrt wheel file from the source release/2.5 branch
(TODO: lanl to update the branch name once release/ngc branch is available)

.. code-block:: sh

cuda_version=$(nvcc --version | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
export TORCH_INSTALL_PATH=$(python -c "import torch, os; print(os.path.dirname(torch.__file__))")
export SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
export CUDA_HOME=/usr/local/cuda-${cuda_version}/
# replace the MODULE.bazel with the jetpack one
cat toolchains/jp_workspaces/MODULE.bazel.tmpl | envsubst > MODULE.bazel
# build and install torch_tensorrt wheel file
python setup.py --use-cxx11-abi install --user
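
Two of the lines above are worth unpacking: ``cuda_version`` is parsed out of the ``nvcc --version`` banner, and ``${TORCH_INSTALL_PATH::-6}`` is a bash substring expansion that drops the trailing ``/torch`` (6 characters) to recover the site-packages directory. A minimal sketch with hypothetical values in place of the live ``nvcc`` output and torch install path:

```shell
# hypothetical `nvcc --version` banner line
nvcc_out="Cuda compilation tools, release 12.6, V12.6.68"
# field 2 after splitting on commas is " release 12.6"; sed strips the label
cuda_version=$(echo "$nvcc_out" | grep Cuda | grep release | cut -d ',' -f 2 | sed -e 's/ release //g')
echo "$cuda_version"

# hypothetical torch install path; ::-6 strips the trailing "/torch"
TORCH_INSTALL_PATH="/home/user/.local/lib/python3.10/site-packages/torch"
SITE_PACKAGE_PATH=${TORCH_INSTALL_PATH::-6}
echo "$SITE_PACKAGE_PATH"
```

Note that ``${var::-6}`` is a bash-only expansion; the build commands should be run under bash, not a plain POSIX sh.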
11 changes: 8 additions & 3 deletions setup.py
@@ -156,12 +156,14 @@ def load_dep_info():
JETPACK_VERSION = "4.6"
elif version == "5.0":
JETPACK_VERSION = "5.0"
elif version == "6.1":
JETPACK_VERSION = "6.1"

if not JETPACK_VERSION:
warnings.warn(
"Assuming jetpack version to be 5.0, if not use the --jetpack-version option"
"Assuming jetpack version to be 6.1, if not use the --jetpack-version option"
)
JETPACK_VERSION = "5.0"
JETPACK_VERSION = "6.1"

if not CXX11_ABI:
warnings.warn(
@@ -213,12 +215,15 @@ def build_libtorchtrt_pre_cxx11_abi(
elif JETPACK_VERSION == "5.0":
cmd.append("--platforms=//toolchains:jetpack_5.0")
print("Jetpack version: 5.0")
elif JETPACK_VERSION == "6.1":
cmd.append("--platforms=//toolchains:jetpack_6.1")
print("Jetpack version: 6.1")

if CI_BUILD:
cmd.append("--platforms=//toolchains:ci_rhel_x86_64_linux")
print("CI based build")

print("building libtorchtrt")
print(f"building libtorchtrt {cmd=}")
status_code = subprocess.run(cmd).returncode

if status_code != 0:
9 changes: 9 additions & 0 deletions toolchains/BUILD
@@ -35,6 +35,15 @@ platform(
],
)

platform(
name = "jetpack_6.1",
constraint_values = [
"@platforms//os:linux",
"@platforms//cpu:aarch64",
"@//toolchains/jetpack:6.1",
],
)

platform(
name = "ci_rhel_x86_64_linux",
constraint_values = [
5 changes: 5 additions & 0 deletions toolchains/jetpack/BUILD
@@ -11,3 +11,8 @@ constraint_value(
name = "4.6",
constraint_setting = ":jetpack",
)

constraint_value(
name = "6.1",
constraint_setting = ":jetpack",
)
61 changes: 61 additions & 0 deletions toolchains/jp_workspaces/MODULE.bazel.tmpl
@@ -0,0 +1,61 @@
module(
name = "torch_tensorrt",
repo_name = "org_pytorch_tensorrt",
version = "${BUILD_VERSION}"
)

bazel_dep(name = "googletest", version = "1.14.0")
bazel_dep(name = "platforms", version = "0.0.10")
bazel_dep(name = "rules_cc", version = "0.0.9")
bazel_dep(name = "rules_python", version = "0.34.0")

python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
ignore_root_user_error = True,
python_version = "3.11",
)

bazel_dep(name = "rules_pkg", version = "1.0.1")
git_override(
module_name = "rules_pkg",
commit = "17c57f4",
remote = "https://github.com/narendasan/rules_pkg",
)

local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "local_repository")

# External dependency for torch_tensorrt if you already have precompiled binaries.
local_repository(
name = "torch_tensorrt",
path = "${SITE_PACKAGE_PATH}/torch_tensorrt",
)


new_local_repository = use_repo_rule("@bazel_tools//tools/build_defs/repo:local.bzl", "new_local_repository")

# CUDA should be installed on the system locally
new_local_repository(
name = "cuda",
build_file = "@//third_party/cuda:BUILD",
path = "${CUDA_HOME}",
)

new_local_repository(
name = "libtorch",
path = "${TORCH_INSTALL_PATH}",
build_file = "third_party/libtorch/BUILD",
)

new_local_repository(
name = "libtorch_pre_cxx11_abi",
path = "${TORCH_INSTALL_PATH}",
build_file = "third_party/libtorch/BUILD"
)

new_local_repository(
name = "tensorrt",
path = "/usr/",
build_file = "@//third_party/tensorrt/local:BUILD"
)

