Back | Next | Contents
Object Detection

# Downloading the Detection Model to Jetson

Next, download and extract the trained model snapshot to the Jetson. From the browser on your Jetson TX1/TX2, navigate to your DIGITS server and open the DetectNet-COCO-Dog model. Under the Trained Models section, select the desired snapshot from the drop-down (usually the one with the highest epoch) and click the Download Model button.

Alternatively, if your Jetson and DIGITS server aren't accessible from the same network, you can use the step above to download the snapshot to an intermediary machine and then copy it to the Jetson with SCP (for example, as sketched below) or a USB stick.
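
If you go the SCP route, the transfer might look something like this (the username and hostname below are placeholders; substitute your Jetson's login and address):

```bash
# copy the downloaded snapshot archive from the intermediary machine to the Jetson
# (hypothetical username/hostname; replace with your Jetson's login and IP or hostname)
scp 20170504-190602-879f_epoch_100.0.tar.gz nvidia@jetson-tx2.local:~/
```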

Then extract the archive with a command similar to:

```bash
tar -xzvf 20170504-190602-879f_epoch_100.0.tar.gz
```
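
After extraction, you can confirm that the files used in the following steps are present. The exact contents depend on the DIGITS version that produced the snapshot, but it should include `deploy.prototxt` and the trained `snapshot_iter_*.caffemodel` weights (if your archive extracts into a subdirectory, `cd` into it first):

```bash
# verify the files referenced in the next steps are present
# (contents vary slightly by DIGITS version)
ls -l deploy.prototxt snapshot_iter_*.caffemodel
```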

### DetectNet Patches for TensorRT

The original DetectNet prototxt contains a Python clustering layer, which isn't available in TensorRT and should be deleted from the `deploy.prototxt` included in the snapshot. In this repo, the `detectNet` class handles the clustering instead of the Python layer.

At the end of `deploy.prototxt`, delete the layer named `cluster`:

```
layer {
  name: "cluster"
  type: "Python"
  bottom: "coverage"
  bottom: "bboxes"
  top: "bbox-list"
  python_param {
    module: "caffe.layers.detectnet.clustering"
    layer: "ClusterDetections"
    param_str: "640, 640, 16, 0.6, 2, 0.02, 22, 1"
  }
}
```
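
You can remove the layer in a text editor, or with a quick command-line edit. A minimal sketch, assuming the `cluster` layer is exactly the final 12 lines of `deploy.prototxt` as shown above (verify first and adjust the line count if your file differs):

```bash
# show the trailing lines to confirm they are exactly the 'cluster' layer
tail -n 12 deploy.prototxt

# write out a copy of deploy.prototxt with those 12 lines removed, then replace the original
head -n -12 deploy.prototxt > deploy.trt.prototxt
mv deploy.trt.prototxt deploy.prototxt
```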

Without this Python layer, the snapshot can now be imported into TensorRT onboard the Jetson.

Next | Detecting Objects from the Command Line
Back | Locating Object Coordinates using DetectNet

© 2016-2019 NVIDIA | Table of Contents