Add fps setting #531

Open · wants to merge 4 commits into `master`

7 changes: 4 additions & 3 deletions docs/detectnet-camera-2.md
@@ -7,8 +7,8 @@

Up next we have a realtime object detection camera demo available for C++ and Python:

- [`detectnet-camera.cpp`](../examples/detectnet-camera/detectnet-camera.cpp) (C++)
- [`detectnet-camera.py`](../python/examples/detectnet-camera.py) (Python)
- [`detectnet-camera.cpp`](../examples/detectnet-camera/detectnet-camera.cpp) (C++)
- [`detectnet-camera.py`](../python/examples/detectnet-camera.py) (Python)

Similar to the previous [`detectnet-console`](detectnet-console-2.md) example, these camera applications use detection networks, except that they process a live video feed from a camera. `detectnet-camera` accepts various **optional** command-line parameters, including:

@@ -23,11 +23,12 @@ Similar to the previous [`detectnet-console`](detectnet-console-2.md) example, t
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)

You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the [`Examples`](../README.md#code-examples) readme.
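
As a hypothetical example of combining them, the invocation below (the device node, resolution, and frame rate are illustrative and must be supported by your camera) requests a V4L2 camera at 1280x720, 30 frames per second:

``` bash
$ ./detectnet-camera --camera=/dev/video0 --width=1280 --height=720 --fps=30
```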

7 changes: 4 additions & 3 deletions docs/detectnet-camera.md
@@ -7,8 +7,8 @@

Up next we have a realtime object detection camera demo available for C++ and Python:

- [`detectnet-camera.cpp`](../examples/detectnet-camera/detectnet-camera.cpp) (C++)
- [`detectnet-camera.py`](../python/examples/detectnet-camera.py) (Python)
- [`detectnet-camera.cpp`](../examples/detectnet-camera/detectnet-camera.cpp) (C++)
- [`detectnet-camera.py`](../python/examples/detectnet-camera.py) (Python)

Similar to the previous [`detectnet-console`](detectnet-console.md) example, these camera applications use detection networks, except that they process a live video feed from a camera. `detectnet-camera` accepts 4 optional command-line parameters:

@@ -20,11 +20,12 @@ Similar to the previous [`detectnet-console`](detectnet-console.md) example, the
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)

You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the [`Examples`](../README.md#code-examples) readme.

18 changes: 9 additions & 9 deletions docs/detectnet-example-2.md
@@ -22,8 +22,8 @@ import jetson.inference
import jetson.utils
```

> **note**: these Jetson modules are installed during the `sudo make install` step of [building the repo](building-repo-2.md#compiling-the-project).
>           if you did not run `sudo make install`, then these packages won't be found when the example is run.
> **note**: these Jetson modules are installed during the `sudo make install` step of [building the repo](building-repo-2.md#compiling-the-project).
>           if you did not run `sudo make install`, then these packages won't be found when the example is run.

#### Loading the Detection Model

@@ -41,21 +41,21 @@ Note that you can change the model string to one of the values from [this table]
To connect to the camera device for streaming, we'll create an instance of the [`gstCamera`](https://rawgit.com/dusty-nv/jetson-inference/pytorch/docs/html/python/jetson.utils.html#gstCamera) object:

``` python
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0") # using V4L2
camera = jetson.utils.gstCamera(1280, 720, 30, "/dev/video0") # using V4L2
```

It's constructor accepts 3 parameters - the desired width, height, and video device to use. Substitute the following snippet depending on if you are using a MIPI CSI camera or a V4L2 USB camera, along with the preferred resolution:
Its constructor accepts 4 parameters - the desired width, height, fps, and video device to use. Substitute the following snippet depending on whether you are using a MIPI CSI camera or a V4L2 USB camera, along with the preferred resolution:

- MIPI CSI cameras are used by specifying the sensor index (`"0"` or `"1"`, etc.)
- MIPI CSI cameras are used by specifying the sensor index (`"0"` or `"1"`, etc.)
``` python
camera = jetson.utils.gstCamera(1280, 720, "0")
camera = jetson.utils.gstCamera(1280, 720, 30, "0")
```
- V4L2 USB cameras are used by specifying their `/dev/video` node (`"/dev/video0"`, `"/dev/video1"`, etc.)
- V4L2 USB cameras are used by specifying their `/dev/video` node (`"/dev/video0"`, `"/dev/video1"`, etc.)
``` python
camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
camera = jetson.utils.gstCamera(1280, 720, 30, "/dev/video0")
```
- The width and height should be a resolution that the camera supports.
- Query the available resolutions with the following commands:
- Query the available resolutions with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
9 changes: 5 additions & 4 deletions docs/imagenet-camera-2.md
@@ -1,14 +1,14 @@
<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/deep-vision-header.jpg">
<p align="right"><sup><a href="imagenet-example-2.md">Back</a> | <a href="detectnet-console-2.md">Next</a> | </sup><a href="../README.md#hello-ai-world"><sup>Contents</sup></a>
<br/>
<sup>Image Recognition</sup></p>
<sup>Image Recognition</sup></p>

# Running the Live Camera Recognition Demo

Next we have a realtime image recognition camera demo available for C++ and Python:

- [`imagenet-camera.cpp`](../examples/imagenet-camera/imagenet-camera.cpp) (C++)
- [`imagenet-camera.py`](../python/examples/imagenet-camera.py) (Python)
- [`imagenet-camera.cpp`](../examples/imagenet-camera/imagenet-camera.cpp) (C++)
- [`imagenet-camera.py`](../python/examples/imagenet-camera.py) (Python)

Similar to the previous [`imagenet-console`](imagenet-console-2.md) example, the camera applications are built to the `/aarch64/bin` directory. They run on a live camera stream with OpenGL rendering and accept 4 optional command-line arguments:

@@ -20,11 +20,12 @@ Similar to the previous [`imagenet-console`](imagenet-console-2.md) example, the
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)

You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the [`Examples`](../README.md#code-examples) readme.
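
As an illustrative sketch (assuming the default MIPI CSI sensor 0 supports 1280x720 at this rate), the flags could be combined like so:

``` bash
$ ./imagenet-camera --camera=0 --width=1280 --height=720 --fps=30
```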

9 changes: 5 additions & 4 deletions docs/imagenet-camera.md
@@ -1,14 +1,14 @@
<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/deep-vision-header.jpg">
<p align="right"><sup><a href="imagenet-example.md">Back</a> | <a href="imagenet-training.md">Next</a> | </sup><a href="../README.md#two-days-to-a-demo-digits"><sup>Contents</sup></a>
<br/>
<sup>Image Recognition</sup></p>
<sup>Image Recognition</sup></p>

# Running the Live Camera Recognition Demo

Next we have a realtime image recognition camera demo available for C++ and Python:

- [`imagenet-camera.cpp`](../examples/imagenet-camera/imagenet-camera.cpp) (C++)
- [`imagenet-camera.py`](../python/examples/imagenet-camera.py) (Python)
- [`imagenet-camera.cpp`](../examples/imagenet-camera/imagenet-camera.cpp) (C++)
- [`imagenet-camera.py`](../python/examples/imagenet-camera.py) (Python)

Similar to the previous [`imagenet-console`](imagenet-console.md) example, the camera applications are built to the `/aarch64/bin` directory. They run on a live camera stream with OpenGL rendering and accept 4 optional command-line arguments:

@@ -20,11 +20,12 @@ Similar to the previous [`imagenet-console`](imagenet-console.md) example, the c
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)

You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the [`Examples`](../README.md#code-examples) readme.

9 changes: 5 additions & 4 deletions docs/pytorch-collect.md
@@ -71,7 +71,7 @@ Next, we'll cover the command-line options for starting the tool.

## Launching the Tool

The source for the `camera-capture` tool can be found under [`jetson-inference/tools/camera-capture/`](../tools/camera-capture), and like the other programs from the repo it gets built to the `aarch64/bin` directory and installed under `/usr/local/bin/`
The source for the `camera-capture` tool can be found under [`jetson-inference/tools/camera-capture/`](../tools/camera-capture), and like the other programs from the repo it gets built to the `aarch64/bin` directory and installed under `/usr/local/bin/`

The `camera-capture` tool accepts 3 optional command-line arguments:

@@ -81,11 +81,12 @@ The `camera-capture` tool accepts 3 optional command-line arguments:
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)

Below are some example commands for launching the tool:

@@ -106,7 +107,7 @@ Below is the `Data Capture Control` window, which allows you to pick the desired

<img src="https://github.com/dusty-nv/jetson-inference/raw/python/docs/images/pytorch-collection-widget.jpg" >

First, open the dataset path and class labels. The tool will then create the dataset structure discussed above (unless these subdirectories already exist), and you will see your object labels populated inside the `Current Class` drop-down.
First, open the dataset path and class labels. The tool will then create the dataset structure discussed above (unless these subdirectories already exist), and you will see your object labels populated inside the `Current Class` drop-down.

Then position the camera at the object or scene you have currently selected in the drop-down, and click the `Capture` button (or press the spacebar) when you're ready to take an image. The images will be saved under that class subdirectory in the train, val, or test set. The status bar displays how many images have been saved under that category.

@@ -162,7 +163,7 @@ Next we encourage you to experiment and apply what you've learned to other proje

* use GPIO to trigger external actuators or LEDs when an object is detected
* an autonomous robot that can find or follow an object
* a handheld battery-powered camera + Jetson + mini-display
* a handheld battery-powered camera + Jetson + mini-display
* an interactive toy or treat dispenser for your pet
* a smart doorbell camera that greets your guests

5 changes: 3 additions & 2 deletions docs/segnet-camera-2.md
@@ -7,7 +7,7 @@
Next we'll run realtime semantic segmentation on a live camera feed, available for C++ and Python:

- [`segnet-camera.cpp`](../examples/segnet-camera/segnet-camera.cpp) (C++)
- [`segnet-camera.py`](../python/examples/segnet-camera.py) (Python)
- [`segnet-camera.py`](../python/examples/segnet-camera.py) (Python)

Similar to the previous [`segnet-console`](segnet-console-2.md) example, these camera applications use segmentation networks, except that they process a live video feed instead. `segnet-camera` accepts various **optional** command-line parameters, including:

@@ -20,11 +20,12 @@ Similar to the previous [`segnet-console`](segnet-console-2.md) example, these c
- The default is to use MIPI CSI sensor 0 (`--camera=0`)
- `--width` and `--height` flags setting the camera resolution (default is `1280x720`)
- The resolution should be set to a format that the camera supports.
- Query the available formats with the following commands:
- Query the available formats with the following commands:
``` bash
$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-formats-ext
```
- `--fps` flag setting the camera fps (default is `30`)
You can combine the usage of these flags as needed, and there are additional command-line parameters available for loading custom models. Launch the application with the `--help` flag to receive more info, or see the [`Examples`](../README.md#code-examples) readme.

Below are some typical scenarios for launching the program - see [this table](segnet-console-2.md#pre-trained-segmentation-models-available) for the models available to use.
37 changes: 20 additions & 17 deletions examples/detectnet-camera/detectnet-camera.cpp
@@ -56,6 +56,7 @@ int usage()
printf(" by default, MIPI CSI camera 0 will be used.\n");
printf(" --width WIDTH desired width of camera stream (default is 1280 pixels)\n");
printf(" --height HEIGHT desired height of camera stream (default is 720 pixels)\n");
printf(" --fps FPS desired FPS of camera stream (default is 30)\n");
printf(" --threshold VALUE minimum threshold for detection (default is 0.5)\n\n");

printf("%s\n", detectNet::Usage());
@@ -86,25 +87,27 @@ int main( int argc, char** argv )
*/
gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),
cmdLine.GetInt("height", gstCamera::DefaultHeight),
cmdLine.GetInt("fps", gstCamera::DefaultFps),
cmdLine.GetString("camera"));

if( !camera )
{
printf("\ndetectnet-camera: failed to initialize camera device\n");
return 0;
}

printf("\ndetectnet-camera: successfully initialized camera device\n");
printf(" width: %u\n", camera->GetWidth());
printf(" height: %u\n", camera->GetHeight());
printf(" fps: %u\n", camera->GetFps());
printf(" depth: %u (bpp)\n\n", camera->GetPixelDepth());


/*
* create detection network
*/
detectNet* net = detectNet::Create(argc, argv);

if( !net )
{
printf("detectnet-camera: failed to load detectNet model\n");
@@ -113,14 +116,14 @@

// parse overlay flags
const uint32_t overlayFlags = detectNet::OverlayFlagsFromStr(cmdLine.GetString("overlay", "box,labels,conf"));


/*
* create openGL window
*/
glDisplay* display = glDisplay::Create();

if( !display )
if( !display )
printf("detectnet-camera: failed to create openGL display\n");


@@ -132,38 +135,38 @@
printf("detectnet-camera: failed to open camera for streaming\n");
return 0;
}

printf("detectnet-camera: camera open for streaming\n");


/*
* processing loop
*/
float confidence = 0.0f;

while( !signal_recieved )
{
// capture RGBA image
float* imgRGBA = NULL;

if( !camera->CaptureRGBA(&imgRGBA, 1000) )
printf("detectnet-camera: failed to capture RGBA image from camera\n");

// detect objects in the frame
detectNet::Detection* detections = NULL;

const int numDetections = net->Detect(imgRGBA, camera->GetWidth(), camera->GetHeight(), &detections, overlayFlags);

if( numDetections > 0 )
{
printf("%i objects detected\n", numDetections);

for( int n=0; n < numDetections; n++ )
{
printf("detected obj %i class #%u (%s) confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
printf("bounding box %i (%f, %f) (%f, %f) w=%f h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height());
printf("bounding box %i (%f, %f) (%f, %f) w=%f h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height());
}
}
}

// update display
if( display != NULL )
@@ -184,13 +187,13 @@ int main( int argc, char** argv )
// print out timing info
net->PrintProfilerTimes();
}


/*
* destroy resources
*/
printf("detectnet-camera: shutting down...\n");

SAFE_DELETE(camera);
SAFE_DELETE(display);
SAFE_DELETE(net);