
Commit

Isaac ROS 3.0.0
isaac_ros_deploy_bot committed May 31, 2024
1 parent 519d6f8 commit 1893634
Showing 865 changed files with 530,505 additions and 105,178 deletions.
2 changes: 1 addition & 1 deletion public/.buildinfo
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 43c66b3e6d41727859f0affb8ec65cbd
+config: bbd1f560bccfa2c0aa010d2514e7f18e
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file modified public/.doctrees/concepts/benchmarking/index.doctree
Binary file modified public/.doctrees/concepts/dnn_inference/index.doctree
Binary file modified public/.doctrees/concepts/index.doctree
Binary file modified public/.doctrees/concepts/nitros/cuda_with_nitros.doctree
Binary file modified public/.doctrees/concepts/nitros/index.doctree
Binary file modified public/.doctrees/concepts/object_detection/index.doctree
Binary file modified public/.doctrees/concepts/pose_estimation/index.doctree
Binary file modified public/.doctrees/concepts/scene_reconstruction/nvblox/index.doctree
Binary file modified public/.doctrees/concepts/segmentation/index.doctree
Binary file modified public/.doctrees/concepts/stereo_depth/ess/visualize_image.doctree
Binary file modified public/.doctrees/concepts/visual_slam/cuvslam/index.doctree
Binary file modified public/.doctrees/concepts/visual_slam/index.doctree
Binary file modified public/.doctrees/environment.pickle
Binary file modified public/.doctrees/faq/index.doctree
Binary file modified public/.doctrees/getting_started/dev_env_setup.doctree
Binary file modified public/.doctrees/getting_started/index.doctree
Binary file modified public/.doctrees/index.doctree
Binary file modified public/.doctrees/performance/index.doctree
Binary file modified public/.doctrees/releases/index.doctree
Binary file modified public/.doctrees/repositories_and_packages/index.doctree
Binary file modified public/.doctrees/robots/index.doctree
Binary file removed public/.doctrees/robots/nova_carter.doctree
Binary file added public/.doctrees/robots/nova_carter/index.doctree
Binary file modified public/.doctrees/troubleshooting/deep_learning.doctree
Binary file modified public/.doctrees/troubleshooting/dev_env.doctree
Binary file modified public/.doctrees/troubleshooting/hardware_setup.doctree
99 changes: 55 additions & 44 deletions public/_sources/concepts/benchmarking/index.rst.txt

Large diffs are not rendered by default.

@@ -9,7 +9,7 @@ Decoding Jetson H.264 Images on Non-NVIDIA Powered Systems
Overview
------------

-Using hardware-accelerated Isaac ROS compression to H.264
+Using NVIDIA-accelerated Isaac ROS compression to H.264
encode data for playback through Isaac ROS H.264 decoder on
NVIDIA-powered systems is fast and efficient. However, you may need
to decode recorded data on systems that are not NVIDIA-powered.
@@ -30,16 +30,17 @@ decoder is used to display it in an image view window.
Tutorial Walkthrough
--------------------

-1. Complete the quickstart :ref:`here <repositories_and_packages/isaac_ros_compression/isaac_ros_h264_decoder/index:quickstart>`.
+1. Finish the setup in the quickstart.

.. include:: /_snippets/set_up_dev_env.rst

2. Clone the following third-party repository into your workspace:

.. code:: bash
-cd ${ISAAC_ROS_WS}/src
-git clone https://github.com/clydemcqueen/h264_image_transport.git
-# Install dependencies for the third-party package
-sudo apt install libavdevice-dev libavformat-dev libavcodec-dev libavutil-dev libswscale-dev
+cd ${ISAAC_ROS_WS}/src && \
+git clone :ir_clone:`<isaac_ros_compression>` && \
+git clone https://github.com/clydemcqueen/h264_image_transport.git
3. Launch the Docker container using the ``run_dev.sh`` script:

@@ -48,17 +49,23 @@ Tutorial Walkthrough
cd ${ISAAC_ROS_WS}/src/isaac_ros_common && \
./scripts/run_dev.sh
-4. Inside the container, build the third-party ``h264_image_transport``
+4. Install dependencies for the third-party package:

.. code:: bash
sudo apt install libavdevice-dev libavformat-dev libavcodec-dev libavutil-dev libswscale-dev
5. Build the third-party ``h264_image_transport``
package:

.. code:: bash
-cd /workspaces/isaac_ros-dev && \
+cd ${ISAAC_ROS_WS} && \
colcon build --symlink-install --packages-up-to \
h264_image_transport isaac_ros_to_h264_msgs_packet && \
source install/setup.bash
-5. Launch the graph to bring up an image viewer that shows the decoded
+6. Launch the graph to bring up an image viewer that shows the decoded
output.

.. code:: bash

6 changes: 3 additions & 3 deletions public/_sources/concepts/dnn_inference/index.rst.txt
@@ -12,7 +12,7 @@ Similarly, the output is a set of tensors that must be decoded or post-processed
In robotics, DNN inference is often used to wire streams of sensor data through an encoder which feeds into a DNN inference package that has been loaded with a model capable of predicting useful outputs that lead to intelligent behaviors.
For example, monocular camera images can be fed to a DNN inference framework such as TensorRT configured with a YOLOv8 model pre-trained for detecting cats.
Streams of images are encoded as tensors and fed into TensorRT to run inference over the model to predict tensors that are interpreted by a YOLOv8 decoder as a set of bounding boxes in pixel coordinates.
-This information can now be used could be used in a variety of intelligent behaviors such as stopping the robot until said cat has lost interest in your roaming robot.
+This information can now be used in a variety of intelligent behaviors such as stopping the robot until said cat has lost interest in your roaming robot.
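The encode-infer-decode flow described above can be sketched in a few lines of plain Python. This is a hypothetical illustration with a stubbed inference step (the function names, the tiny 2x2 image, and the single-detection output format are all assumptions for the sketch, not part of Isaac ROS, which runs this pipeline with NITROS-accelerated encoder, TensorRT/Triton, and decoder nodes):

```python
# Hypothetical sketch of the encoder -> inference -> decoder pipeline.
# The inference step is a stub; in Isaac ROS it would be TensorRT or Triton.

def encode(image):
    # Encoder: normalize 8-bit pixel values into a flat float tensor.
    return [p / 255.0 for row in image for p in row]

def infer(tensor):
    # Inference stub: pretend the model emits one detection as a flat
    # output tensor [x_min, y_min, x_max, y_max, confidence] in
    # normalized image coordinates.
    return [0.1, 0.2, 0.5, 0.8, 0.9]

def decode(output, width, height):
    # Decoder: interpret the output tensor as a bounding box in pixels.
    x0, y0, x1, y1, conf = output
    return {"box": (int(x0 * width), int(y0 * height),
                    int(x1 * width), int(y1 * height)),
            "confidence": conf}

image = [[0, 128], [255, 64]]  # tiny 2x2 grayscale stand-in for a camera frame
detection = decode(infer(encode(image)), width=960, height=544)
print(detection)
```

The real pipeline differs in scale and transport (zero-copy tensors, GPU memory), but the three-stage structure is the same.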

.. figure:: :ir_lfs:`<resources/isaac_ros_docs/concepts/dnn_inference/graph.png>`
:alt: Encoder/inference/decoder pipeline
@@ -33,9 +33,9 @@ We provide decoders for a variety of model architectures for various tasks:
Package Name Use Case
============================================================ ======================================================================
:ir_repo:`DNN Stereo Disparity <isaac_ros_dnn_stereo_depth>` Deep learned stereo disparity estimation
-:ir_repo:`Image Segmentation <isaac_ros_image_segmentation>` Hardware-accelerated, deep learned semantic image segmentation
+:ir_repo:`Image Segmentation <isaac_ros_image_segmentation>` NVIDIA-accelerated, deep learned semantic image segmentation
:ir_repo:`Object Detection <isaac_ros_object_detection>` Deep learning model support for object detection including DetectNet
-:ir_repo:`Pose Estimation <isaac_ros_pose_estimation>` Deep learned, hardware-accelerated 3D object pose estimation
+:ir_repo:`Pose Estimation <isaac_ros_pose_estimation>` Deep-learned, NVIDIA-accelerated 3D object pose estimation
:ir_repo:`Depth Segmentation <isaac_ros_depth_segmentation>` DNN-based depth segmentation and obstacle field ranging using Bi3D
============================================================ ======================================================================

108 changes: 108 additions & 0 deletions public/_sources/concepts/dnn_inference/model_preparation.rst.txt
@@ -107,3 +107,111 @@ Create the models directory:
The calibration cache file (specified using the ``-c``
option) is required to generate the ``int8`` engine file. This file
is provided in the **File Browser** tab of the model's page on NGC.

Using ``trtexec`` to Convert an ONNX Model to a TensorRT Plan File
------------------------------------------------------------------

Assuming that a model called ``model.onnx`` is available, the conversion is performed using:

.. code::

   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan
.. warning::

   Reading the documentation of `trtexec <https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#trtexec>`__ is highly recommended to
   obtain the best performance. In particular, we recommend paying attention to the quantization of the model (e.g., ``fp32`` vs. ``fp16`` vs. ``int8``).

Inspecting The Input and Output Binding Names of a Model
--------------------------------------------------------

Deep learning models have ``input_binding_names`` and ``output_binding_names``, which correspond to the model's inputs and outputs,
respectively. They are determined by the model itself during export. There are two ways to determine these names, but
the **recommended** method is to use a TensorRT plan file.

.. note::

   In addition, the ``TensorRTNode`` and ``TritonNode`` have parameters called ``input_tensor_names`` and ``output_tensor_names``,
   which correspond to the expected tensor names within the ROS 2 ``TensorList`` message.

Using an ONNX Model File
~~~~~~~~~~~~~~~~~~~~~~~~

If an ONNX model file is used, one can use `netron <https://netron.app/>`__ to visualize the model and note down its input and output names and dimensions.

Using a TensorRT Plan File
~~~~~~~~~~~~~~~~~~~~~~~~~~

If a TensorRT plan file is used, one can use NVIDIA's `Polygraphy <https://github.com/NVIDIA/TensorRT/tree/main/tools/Polygraphy>`__ tool to determine them.

1. Install TensorRT's Python bindings and the Polygraphy tool:

   .. code:: bash

      pip install tensorrt tensorrt_bindings
      pip install colored polygraphy --extra-index-url https://pypi.ngc.nvidia.com
2. Add ``/home/admin/.local/bin`` to your ``PATH`` so that ``polygraphy`` can be invoked conveniently:

   .. code:: bash

      export PATH="/home/admin/.local/bin:$PATH"
3. Obtain the desired model. In this case, we show how to get the ``PeopleSemSegNet ShuffleSeg`` network:

   .. code:: bash

      mkdir -p /tmp/models/peoplesemsegnet_shuffleseg/1 && \
      cd /tmp/models/peoplesemsegnet_shuffleseg && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_etlt.etlt && \
      wget https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplesemsegnet/versions/deployable_shuffleseg_unet_v1.0/files/peoplesemsegnet_shuffleseg_cache.txt
4. Convert the obtained model from an ``etlt`` file to a ``plan`` file (called ``model.plan``):

   .. code:: bash

      /opt/nvidia/tao/tao-converter -k tlt_encode -d 3,544,960 -p input_2:0,1x3x544x960,1x3x544x960,1x3x544x960 -t int8 -c peoplesemsegnet_shuffleseg_cache.txt -e /tmp/models/peoplesemsegnet_shuffleseg/1/model.plan -o argmax_1 peoplesemsegnet_shuffleseg_etlt.etlt
5. Go to the directory that contains the converted ``PeopleSemSegNet ShuffleSeg`` model:

   .. code:: bash

      cd /tmp/models/peoplesemsegnet_shuffleseg/1
6. Use ``polygraphy`` to inspect the names of the inputs and outputs of the model. In this case, the converted model is called ``model.plan``:

   .. code:: bash

      polygraphy inspect model model.plan

   The expected output should look like this:

.. code:: bash
[I] Loading bytes from /tmp/models/peoplesemsegnet_shuffleseg/1/model.plan
[I] ==== TensorRT Engine ====
Name: Unnamed Network 0 | Explicit Batch Engine
---- 1 Engine Input(s) ----
{input_2:0 [dtype=float32, shape=(1, 3, 544, 960)]}
---- 1 Engine Output(s) ----
{argmax_1 [dtype=int32, shape=(1, 544, 960, 1)]}
---- Memory ----
Device Memory: 21269504 bytes
---- 1 Profile(s) (2 Tensor(s) Each) ----
- Profile: 0
Tensor: input_2:0 (Input), Index: 0 | Shapes: min=(1, 3, 544, 960), opt=(1, 3, 544, 960), max=(1, 3, 544, 960)
Tensor: argmax_1 (Output), Index: 1 | Shape: (1, 544, 960, 1)
---- 73 Layer(s) ----
In this case, the ``input_binding_names`` for this network are ``['input_2:0']``, whereas the ``output_binding_names`` are ``['argmax_1']``.
The shape of each tensor can also be observed from this output.
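If this inspection needs to be scripted, the binding names can also be pulled out of the ``polygraphy`` output with a few lines of Python. This is a sketch under the assumption that the output follows the format shown above; the ``report`` string below is an abbreviated copy of that output:

```python
import re

# Abbreviated `polygraphy inspect model` output from the example above.
report = """
---- 1 Engine Input(s) ----
{input_2:0 [dtype=float32, shape=(1, 3, 544, 960)]}
---- 1 Engine Output(s) ----
{argmax_1 [dtype=int32, shape=(1, 544, 960, 1)]}
"""

def binding_names(report, kind):
    # Grab the `{name [dtype=...]}` entries inside the requested
    # "Engine Input(s)" or "Engine Output(s)" section.
    section = re.search(r"Engine %s\(s\) ----\n(.*?)(?:----|$)" % kind,
                        report, re.S).group(1)
    return re.findall(r"\{(\S+) \[", section)

input_binding_names = binding_names(report, "Input")
output_binding_names = binding_names(report, "Output")
print(input_binding_names, output_binding_names)
```

In practice you would capture the command's stdout (e.g., via ``subprocess.run``) instead of pasting it into a string.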

These values can be used as the ``input_binding_names`` and ``output_binding_names`` for the ``TensorRTNode`` or ``TritonNode``. If a model has multiple inputs or outputs, they must be passed in as
a list of strings containing all the values. Once again, ensure that the ``TensorRTNode`` or ``TritonNode``'s ``input_tensor_names`` and ``output_tensor_names`` parameters are
set according to the tensor names in the ROS 2 ``TensorList`` messages produced by upstream nodes or expected by downstream nodes.
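For example, a ``TensorRTNode`` could be configured with these values in a ROS 2 launch file. This is a hypothetical sketch: the package and plugin names follow the Isaac ROS TensorRT inference package, while the tensor names (``'input_tensor'``, ``'output_tensor'``) and the engine path are placeholders that must match your own graph:

```python
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    tensor_rt_node = ComposableNode(
        package='isaac_ros_tensor_rt',
        plugin='nvidia::isaac_ros::dnn_inference::TensorRTNode',
        name='tensor_rt',
        parameters=[{
            'engine_file_path': '/tmp/models/peoplesemsegnet_shuffleseg/1/model.plan',
            # Binding names as reported by `polygraphy inspect model`:
            'input_binding_names': ['input_2:0'],
            'output_binding_names': ['argmax_1'],
            # Tensor names used in the ROS 2 TensorList messages
            # exchanged with upstream/downstream nodes (placeholders):
            'input_tensor_names': ['input_tensor'],
            'output_tensor_names': ['output_tensor'],
        }])
    container = ComposableNodeContainer(
        name='tensor_rt_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container_mt',
        composable_node_descriptions=[tensor_rt_node])
    return LaunchDescription([container])
```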