Merge pull request #956 from luxonis/docs/refactor_structure
Restructure and improve docs
daniilpastukhov committed May 17, 2023
2 parents 60dec47 + 32367c7 commit f2f467c
Showing 26 changed files with 743 additions and 139 deletions.
@@ -8,7 +8,7 @@ version: 2
 # Build documentation in the docs/ directory with Sphinx
 sphinx:
   builder: dirhtml
-  configuration: source/conf.py
+  configuration: depthai_sdk/docs/source/conf.py

 # Build documentation with MkDocs
 #mkdocs:
@@ -22,4 +22,7 @@ formats:
 python:
   version: 3.8
   install:
-    - requirements: requirements.txt
+    - requirements: requirements.txt
+    - requirements: depthai_sdk/docs/requirements.txt
+    - method: pip
+      path: depthai_sdk
2 changes: 1 addition & 1 deletion depthai_sdk/docs/requirements.txt
@@ -1,3 +1,3 @@
 Sphinx==4.1.2
 sphinx-rtd-theme==0.5.0
--e ../
+autodocsumm==0.2.10
10 changes: 10 additions & 0 deletions depthai_sdk/docs/source/api_reference.rst
@@ -0,0 +1,10 @@
+API Reference
+=============
+
+.. automodule:: depthai_sdk
+   :autosummary:
+   :members:
+   :special-members: __init__
+   :show-inheritance:
+   :undoc-members:
+   :imported-members:
10 changes: 5 additions & 5 deletions depthai_sdk/docs/source/components/camera_component.rst
@@ -17,17 +17,17 @@ Usage
     color = oak.create_camera('color')
     # Visualize color camera frame stream
-    oak.visualize(color, fps=True)
+    oak.visualize(color.out.main, fps=True)
     # Start the pipeline, continuously poll
     oak.start(blocking=True)
 Component outputs
 #################

-- ``out.main`` - Uses one of the outputs below.
-- ``out.camera`` - Default output. Streams either ColorCamera's video (NV12) or MonoCamera's out (GRAY8) frames. Produces :ref:`FramePacket`.
-- ``out.replay`` - If we are using :ref:`Replaying` feature. It doesn't actually stream these frames back to the host, but rather sends read frames to syncing mechanism directly (to reduce bandwidth by avoiding loopback). Produces :ref:`FramePacket`.
-- ``out.encoded`` - If we are encoding frames, this will send encoded bitstream to the host. When visualized, it will decode frames (using cv2.imdecode for MJPEG, or pyav for H.26x). Produces :ref:`FramePacket`.
+- :attr:`main <depthai_sdk.components.CameraComponent.Out.main>` - Uses one of the outputs below.
+- :attr:`camera <depthai_sdk.components.CameraComponent.Out.camera>` - Default output. Streams either ColorCamera's video (NV12) or MonoCamera's out (GRAY8) frames. Produces :ref:`FramePacket`.
+- :attr:`replay <depthai_sdk.components.CameraComponent.Out.replay>` - If we are using :ref:`Replaying` feature. It doesn't actually stream these frames back to the host, but rather sends read frames to syncing mechanism directly (to reduce bandwidth by avoiding loopback). Produces :ref:`FramePacket`.
+- :attr:`encoded <depthai_sdk.components.CameraComponent.Out.encoded>` - If we are encoding frames, this will send encoded bitstream to the host. When visualized, it will decode frames (using cv2.imdecode for MJPEG, or pyav for H.26x). Produces :ref:`FramePacket`.

 Reference
 #########
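For context, a minimal sketch of how the renamed CameraComponent outputs are consumed. The output names come from the bullets above; the `encode` argument is an assumption about the camera factory, not part of this commit:

```python
from depthai_sdk import OakCamera

with OakCamera() as oak:
    # 'encode' is assumed here to enable the encoded output (MJPEG bitstream)
    color = oak.create_camera('color', encode='mjpeg')
    oak.visualize(color.out.camera, fps=True)  # NV12/GRAY8 frames
    oak.visualize(color.out.encoded)           # bitstream, decoded on the host
    oak.start(blocking=True)
```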
2 changes: 1 addition & 1 deletion depthai_sdk/docs/source/components/imu_component.rst
@@ -24,7 +24,7 @@ Usage
 Component outputs
 #################

-- ``out.main`` - Main output, produces :ref:`IMUPacket`
+- :attr:`main <depthai_sdk.components.IMUComponent.Out.main>` - Main output, produces :ref:`IMUPacket`.

 Reference
 #########
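A minimal sketch of reading the IMU output documented above; `create_imu` and the `config_imu` argument values are assumptions, not taken from this commit:

```python
from depthai_sdk import OakCamera

with OakCamera() as oak:
    imu = oak.create_imu()
    # Report rate and batching values below are purely illustrative
    imu.config_imu(report_rate=400, batch_report_threshold=5)
    # Each IMUPacket is delivered to the callback
    oak.callback(imu.out.main, callback=lambda packet: print(packet))
    oak.start(blocking=True)
```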
24 changes: 14 additions & 10 deletions depthai_sdk/docs/source/components/nn_component.rst
@@ -7,10 +7,10 @@ object tracking, and MultiStage pipelines setup. It also supports :ref:`Roboflow
 DepthAI API nodes
 -----------------

-For neural inferencing NNComponent will a DepthAI API node:
+For neural inference, NNComponent will use DepthAI API node:

-- If we are using MobileNet-SSD based AI model, this component will create `MobileNetDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_detection_network/>`__ or `MobileNetSpatialDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_spatial_detection_network/>`__ if ``spatial`` argument is set.
-- If we are using YOLO based AI model, this component will create `YoloDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_detection_network/>`__ or `YoloSpatialDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_spatial_detection_network/>`__ if ``spatial`` argument is set.
+- If we are using MobileNet-SSD based AI model, this component will create `MobileNetDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_detection_network/>`__ (or `MobileNetSpatialDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_spatial_detection_network/>`__ if ``spatial`` argument is set).
+- If we are using YOLO based AI model, this component will create `YoloDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_detection_network/>`__ (or `YoloSpatialDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_spatial_detection_network/>`__ if ``spatial`` argument is set).
 - If it's none of the above, component will create `NeuralNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/neural_network/>`__ node.

 If ``tracker`` argument is set and we have YOLO/MobileNet-SSD based model, this component will also create `ObjectTracker <https://docs.luxonis.com/projects/api/en/latest/components/nodes/object_tracker/>`__ node,
@@ -26,19 +26,20 @@ Usage
     with OakCamera(recording='cars-tracking-above-01') as oak:
         color = oak.create_camera('color')
         nn = oak.create_nn('vehicle-detection-0202', color, tracker=True)
-        nn.config_nn(ResizeMode=ResizeMode.STRETCH)
+        nn.config_nn(resize_mode='stretch')
         oak.visualize([nn.out.tracker, nn.out.passthrough], fps=True)
         # oak.show_graph()
         oak.start(blocking=True)
 Component outputs
 #################

-- ``out.main`` - Default output. Streams NN results and high-res frames that were downscaled and used for inferencing. Produces :ref:`DetectionPacket` or :ref:`TwoStagePacket` (if it's 2. stage NNComponent).
-- ``out.passthrough`` - Default output. Streams NN results and passthrough frames (frames used for inferencing). Produces :ref:`DetectionPacket` or :ref:`TwoStagePacket` (if it's 2. stage NNComponent).
-- ``out.spatials`` - Streams depth and bounding box mappings (``SpatialDetectionNework.boundingBoxMapping``). Produces :ref:`SpatialBbMappingPacket`.
-- ``out.twostage_crops`` - Streams 2. stage cropped frames to the host. Produces :ref:`FramePacket`.
-- ``out.tracker`` - Streams `ObjectTracker's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/object_tracker/>`__ tracklets and high-res frames that were downscaled and used for inferencing. Produces :ref:`TrackerPacket`.
+- :attr:`main <depthai_sdk.components.NNComponent.Out.main>` - Default output. Streams NN results and high-res frames that were downscaled and used for inferencing. Produces :ref:`DetectionPacket` or :ref:`TwoStagePacket` (if it's 2. stage NNComponent).
+- :attr:`passthrough <depthai_sdk.components.NNComponent.Out.passthrough>` - Default output. Streams NN results and passthrough frames (frames used for inferencing). Produces :ref:`DetectionPacket` or :ref:`TwoStagePacket` (if it's 2. stage NNComponent).
+- :attr:`spatials <depthai_sdk.components.NNComponent.Out.spatials>` - Streams depth and bounding box mappings (``SpatialDetectionNework.boundingBoxMapping``). Produces :ref:`SpatialBbMappingPacket`.
+- :attr:`twostage_crops <depthai_sdk.components.NNComponent.Out.twostage_crops>` - Streams 2. stage cropped frames to the host. Produces :ref:`FramePacket`.
+- :attr:`tracker <depthai_sdk.components.NNComponent.Out.tracker>` - Streams `ObjectTracker's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/object_tracker/>`__ tracklets and high-res frames that were downscaled and used for inferencing. Produces :ref:`TrackerPacket`.
+- :attr:`nn_data <depthai_sdk.components.NNComponent.Out.nn_data>` - Streams NN raw output. Produces :ref:`NNDataPacket`.

 Decoding outputs
 #################
@@ -50,6 +51,9 @@ NNComponent allows user to define their own decoding functions. There is a set o
 - :class:`ImgLandmarks <depthai_sdk.classes.nn_results.ImgLandmarks>`
 - :class:`InstanceSegmentation <depthai_sdk.classes.nn_results.InstanceSegmentation>`

+.. note::
+    This feature is still in development and is not guaranteed to work correctly in all cases.
+
 Example usage:

 .. code-block:: python
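The "Example usage" block is collapsed in this view; below is a hedged sketch of a custom decoding function in the spirit of the section above. The `decode_fn` parameter, the blob path, and the `Detections` parsing details are assumptions, not taken from this commit:

```python
import numpy as np
from depthai_sdk import OakCamera
from depthai_sdk.classes import Detections

def decode(nn_data) -> Detections:
    # nn_data is the raw NN output message; parsing below is model-specific
    raw = np.array(nn_data.getFirstLayerFp16())
    dets = Detections(nn_data)
    # ... convert `raw` into (label, confidence, bbox) entries here ...
    return dets

with OakCamera() as oak:
    color = oak.create_camera('color')
    # 'path/to/model.blob' is a hypothetical custom model
    nn = oak.create_nn('path/to/model.blob', color, decode_fn=decode)
    oak.visualize(nn.out.main)
    oak.start(blocking=True)
```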
11 changes: 7 additions & 4 deletions depthai_sdk/docs/source/components/stereo_component.rst
@@ -1,7 +1,7 @@
 StereoComponent
 ===============

-**StereoComponent** abstracts `StereoDepth <https://docs.luxonis.com/projects/api/en/latest/components/nodes/imu/>`__ node, its configuration,
+:class:`StereoComponent <depthai_sdk.components.StereoComponent>` abstracts `StereoDepth <https://docs.luxonis.com/projects/api/en/latest/components/nodes/imu/>`__ node, its configuration,
 filtering (eg. `WLS filter <https://github.com/luxonis/depthai-experiments/tree/master/gen2-wls-filter>`__), and disparity/depth viewing.

 Usage
@@ -23,9 +23,12 @@ Usage
 Component outputs
 #################

-- ``out.main`` - Default output. Uses ``out.depth``.
-- ``out.disparity`` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ disparity frames to the host. When visualized, these get normalized and colorized. Produces :ref:`FramePacket`.
-- ``out.depth`` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ depth frames to the host. When visualized, depth gets converted to disparity (for nicer visualization), normalized and colorized. Produces :ref:`FramePacket`.
+- :attr:`main <depthai_sdk.components.StereoComponent.Out.main>` - Default output. Uses :attr:`depth <depthai_sdk.components.StereoComponent.Out.depth>`.
+- :attr:`disparity <depthai_sdk.components.StereoComponent.Out.disparity>` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ disparity frames to the host. When visualized, these get normalized and colorized. Produces :ref:`DepthPacket`.
+- :attr:`depth <depthai_sdk.components.StereoComponent.Out.depth>` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ depth frames to the host. When visualized, depth gets converted to disparity (for nicer visualization), normalized and colorized. Produces :ref:`DepthPacket`.
+- :attr:`rectified_left <depthai_sdk.components.StereoComponent.Out.rectified_left>` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ rectified left frames to the host.
+- :attr:`rectified_right <depthai_sdk.components.StereoComponent.Out.rectified_right>` - Streams `StereoDepth's <https://docs.luxonis.com/projects/api/en/latest/components/nodes/stereo_depth/>`__ rectified right frames to the host.
+- :attr:`encoded <depthai_sdk.components.StereoComponent.Out.encoded>` - Provides an encoded version of :attr:`disparity <depthai_sdk.components.StereoComponent.Out.disparoty>` stream.

 Reference
 #########
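A short sketch exercising the newly documented StereoComponent outputs. The output names come from the bullets above; the resolution and `fps` arguments are assumptions:

```python
from depthai_sdk import OakCamera

with OakCamera() as oak:
    stereo = oak.create_stereo('800p', fps=30)
    oak.visualize(stereo.out.depth, fps=True)  # colorized depth (DepthPacket)
    oak.visualize([stereo.out.rectified_left,
                   stereo.out.rectified_right])  # rectified mono pair
    oak.start(blocking=True)
```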
1 change: 1 addition & 0 deletions depthai_sdk/docs/source/conf.py
@@ -36,6 +36,7 @@
     "sphinx_rtd_theme",
     'sphinx.ext.autodoc',
     'sphinx.ext.napoleon',
+    'autodocsumm'
 ]

 # Add any paths that contain templates here, relative to this directory.
2 changes: 2 additions & 0 deletions depthai_sdk/docs/source/examples/color_example.rst
@@ -0,0 +1,2 @@
+Color camera example
+=====================
12 changes: 7 additions & 5 deletions depthai_sdk/docs/source/features/ai_models.rst
@@ -3,11 +3,11 @@ AI models

 Through the :ref:`NNComponent`, DepthAI SDK abstracts:

-- **AI model sourcing** using `blobconverter <https://github.com/luxonis/blobconverter>`__ from `Open Model Zoo <https://github.com/openvinotoolkit/open_model_zoo>`__ (OMZ) and `DepthAI Model Zoo <https://github.com/luxonis/depthai-model-zoo>`__ (DMZ)
-- **AI result decoding** - currently SDK supports on-device decoding for YOLO and MobileNet based results using `YoloDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_detection_network/>`__ and `MobileNetDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_detection_network/>`__ nodes
-- **Decoding** of the ``config.json`` which **allows an easy deployment of custom AI models** trained `using our notebooks <https://github.com/luxonis/depthai-ml-training>`__ and converted using https://tools.luxonis.com
-- Formatting of the AI model input frame - SDK uses **BGR** color order and **Planar / CHW** channel layout conventions
-- Integration with 3rd party tools/services (:ref:`Roboflow`)
+- **AI model sourcing** using `blobconverter <https://github.com/luxonis/blobconverter>`__ from `Open Model Zoo <https://github.com/openvinotoolkit/open_model_zoo>`__ (OMZ) and `DepthAI Model Zoo <https://github.com/luxonis/depthai-model-zoo>`__ (DMZ).
+- **AI result decoding** - currently SDK supports on-device decoding for YOLO and MobileNet based results using `YoloDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/yolo_detection_network/>`__ and `MobileNetDetectionNetwork <https://docs.luxonis.com/projects/api/en/latest/components/nodes/mobilenet_detection_network/>`__ nodes.
+- **Decoding** of the ``config.json`` which **allows an easy deployment of custom AI models** trained `using our notebooks <https://github.com/luxonis/depthai-ml-training>`__ and converted using https://tools.luxonis.com.
+- Formatting of the AI model input frame - SDK uses **BGR** color order and **Planar / CHW** channel layout conventions.
+- Integration with 3rd party tools/services (:ref:`Roboflow`).


 SDK supported models
@@ -28,6 +28,8 @@ With :ref:`NNComponent` you can **easily try out a variety of different pre-trai
 Both of the models above are supported by this SDK, so they will be downloaded and deployed to the OAK device along with the pipeline.

+The following table lists all the models supported by the SDK. The model name is the same as the name used in the :ref:`NNComponent` constructor.
+
 .. list-table::
    :header-rows: 1
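To illustrate the model-sourcing flow described above, a minimal sketch. The model name is an Open Model Zoo model commonly used with the SDK; whether it appears in the collapsed table is not verified here:

```python
from depthai_sdk import OakCamera

with OakCamera() as oak:
    color = oak.create_camera('color')
    # The name is resolved via blobconverter (OMZ/DMZ); the blob is
    # downloaded and cached automatically
    nn = oak.create_nn('face-detection-retail-0004', color)
    oak.visualize(nn.out.main, fps=True)
    oak.start(blocking=True)
```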
1 change: 1 addition & 0 deletions depthai_sdk/docs/source/features/recording.rst
@@ -1,3 +1,4 @@
+.. _Recording:
 Recording
 =========

12 changes: 7 additions & 5 deletions depthai_sdk/docs/source/features/replaying.rst
@@ -21,9 +21,9 @@ Replaying support
 Replaying feature is quite extensible, and supports a variety of different inputs:

 #. Single image.
-#. Folder with images. Images are getting rotated every 3 seconds. `Example here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-people-counter>`__..
-#. Url to a video/image.
-#. Url to a YouTube video.
+#. Folder with images. Images are getting rotated every 3 seconds. `Example here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-people-counter>`__.
+#. URL to a video/image.
+#. URL to a YouTube video.
 #. Path to :ref:`depthai-recording <Replaying a depthai-recording>`.
 #. A name of a :ref:`public depthai-recording <Public depthai-recordings>`.

@@ -49,13 +49,15 @@ Script below will also do depth reconstruction and will display 3D detections co
 .. figure:: https://user-images.githubusercontent.com/18037362/193642506-76bd2d36-3ae8-4d0b-bbed-083a94463155.png

-    Live view pipeline uses live camera feeds (MonoCamera, ColorCamera) whereas Replaying pipeline uses XLinkIn nodes to which we send recorded frames
+    Live view pipeline uses live camera feeds (MonoCamera, ColorCamera) whereas Replaying pipeline uses XLinkIn nodes to which we send recorded frames.

 Public depthai-recordings
 #########################

 We host several depthai-recordings on our servers that you can easily use in your
-application (eg. ``OakCamera(recording='cars-california-01')``). Recording will get downloaded & cached on the computer for future use.
+application, e.g., :class:`OakCamera(recording='cars-california-01') <depthai_sdk.OakCamera>`. Recording will get downloaded & cached on the computer for future use.
+
+The following table lists all available recordings:

 .. list-table::
    :header-rows: 1
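A minimal sketch of replaying the public recording mentioned above; the recording name comes from the text, and the rest mirrors standard OakCamera usage:

```python
from depthai_sdk import OakCamera

# The recording is downloaded and cached on first use
with OakCamera(recording='cars-california-01') as oak:
    color = oak.create_camera('color')  # fed from the recording via XLinkIn
    oak.visualize(color.out.main, fps=True)
    oak.start(blocking=True)
```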
@@ -16,12 +16,13 @@ their initialization, configuration, and linking. This improves ease of use when
 .. autoclass:: depthai_sdk.components.Component
    :members:
    :undoc-members:
+   :noindex:

 .. toctree::
    :maxdepth: 1
    :hidden:
    :glob:
-   :caption: Components:
+   :caption: Components

    ../components/*
@@ -4,17 +4,17 @@ Packets
 Packets are **synchronized collections** of one or more `DepthAI messages <https://docs.luxonis.com/projects/api/en/latest/components/messages/>`__. They are used
 **internally for visualization** and also forwarded to the callback function if the user:

-#. Specified a callback for visualizing of an output (``oak.visualize(component, callback=cb)``)
-#. Used callback output (``oak.callback(component, callback=cb)``)
+#. Specified a callback for visualizing of an output via :meth:`OakCamera.visualize(..., callback=fn) <depthai_sdk.OakCamera.visualize>`.
+#. Used callback output via :meth:`OakCamera.callback(..., callback=fn, enable_visualizer=True) <depthai_sdk.OakCamera.callback>`.

-Example
-#######
+API Usage
+#####

-#. **oak.visualize**: In the example below SDK won't show the frame to the user, but instead it will send the packet to the callback function. SDK will draw detections (bounding boxes, labels) on the ``packet.frame``.
-#. **oak.callback**: This will also send ``DetectionPacket`` to the callback function, the only difference is that the SDK won't draw on the frame, so you can draw detections on the frame yourself.
+#. :meth:`OakCamera.visualize() <depthai_sdk.OakCamera.visualize>`: In the example below SDK won't show the frame to the user, but instead it will send the packet to the callback function. SDK will draw detections (bounding boxes, labels) on the ``packet.frame``.
+#. :meth:`OakCamera.callback() <depthai_sdk.OakCamera.callback>`: This will also send :class:`DetectionPacket <depthai_sdk.classes.packets.DetectionPacket>` to the callback function, the only difference is that the SDK won't draw on the frame, so you can draw detections on the frame yourself.

 .. note::
-    If you specify callback function in **oak.visualize**, you need to trigger drawing of detections yourself via **visualizer.draw** method.
+    If you specify callback function in :meth:`OakCamera.visualize() <depthai_sdk.OakCamera.visualize>`, you need to trigger drawing of detections yourself via :meth:`Visualizer.draw() <depthai_sdk.visualize.visualizer.Visualizer.draw>` method.

 .. code-block:: python
@@ -35,7 +35,7 @@ Example
     oak.visualize(nn.out.main, fps=True, callback=cb)
     # 2. Callback:
-    oak.callback(nn.out.main, callback=cb)
+    oak.callback(nn.out.main, callback=cb, enable_visualizer=True)
     oak.start(blocking=True)
@@ -64,6 +64,13 @@ DetectionPacket
    :members:
    :undoc-members:

+NNDataPacket
+------------
+
+.. autoclass:: depthai_sdk.classes.packets.NNDataPacket
+   :members:
+   :undoc-members:
+
 DepthPacket
 ---------------
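Since the callback example above is partially collapsed, here is a hedged sketch of the two callback styles; the `img_detections` attribute and the single-argument callback signature are assumptions about the packet API:

```python
import cv2
from depthai_sdk import OakCamera
from depthai_sdk.classes.packets import DetectionPacket

def cb(packet: DetectionPacket):
    # With oak.visualize(..., callback=cb) detections are already drawn on
    # packet.frame; with oak.callback(...) you would draw them yourself.
    print(len(packet.img_detections.detections), 'detections')
    cv2.imshow('frame', packet.frame)
    cv2.waitKey(1)

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)
    oak.visualize(nn.out.main, callback=cb)
    oak.start(blocking=True)
```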