
Support Object Detection on non-RKNN platforms #1340

Open
srimanachanta opened this issue Jun 5, 2024 · 10 comments
Labels
enhancement New feature or request

Comments

@srimanachanta
Member

srimanachanta commented Jun 5, 2024

A transformed YOLO model does Object Detection on RKNN; there is no reason we can't support running this same model without the help of the NPU on other platforms, albeit at worse performance. I would say the best way to handle this change would be in the photon-core JNI, where that build links against the backend RKNN toolkit, and the JNI shouldn't differ between RKNN and non-RKNN platforms.

One issue is that this linking is done at build time, and we build binaries per platform, not per coprocessor, so the build action needs to be updated.

@srimanachanta srimanachanta added the enhancement New feature or request label Jun 5, 2024
@mcm001
Contributor

mcm001 commented Jun 5, 2024

It's just an ONNX model right? What changes did rockchip make to YOLO to make it run on their NPU?

@srimanachanta
Member Author

> It's just an ONNX model right? What changes did rockchip make to YOLO to make it run on their NPU?

They take a model (Caffe, TensorFlow, TensorFlow Lite, ONNX, or Darknet) and convert it to an RKNN model, which I assume just maps the ops onto their native toolkit instead of CPU BLAS. The only difference between the converted model and the pre-converted one is that the converted one is capable of running on their NPU. This should be fairly trivial to implement; the build issue is the only problem I see.
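For reference, the conversion step described above is done with Rockchip's rknn-toolkit2 Python API. A minimal sketch, where the paths, target platform, and normalization values are illustrative assumptions rather than PhotonVision's actual settings:

```python
def convert_onnx_to_rknn(onnx_path, rknn_path,
                         target="rk3588", dataset="dataset.txt"):
    """Convert an ONNX detection model to an RKNN model with rknn-toolkit2.

    Paths, target platform, and normalization values are illustrative.
    """
    from rknn.api import RKNN  # rknn-toolkit2; conversion runs on x86 Linux

    rknn = RKNN()
    # Normalization is baked into the model so the NPU does it, not the CPU.
    rknn.config(mean_values=[[0, 0, 0]],
                std_values=[[255, 255, 255]],
                target_platform=target)
    if rknn.load_onnx(model=onnx_path) != 0:
        raise RuntimeError("load_onnx failed")
    # Quantization needs a small calibration set: a text file of image paths.
    if rknn.build(do_quantization=True, dataset=dataset) != 0:
        raise RuntimeError("build failed")
    if rknn.export_rknn(rknn_path) != 0:
        raise RuntimeError("export_rknn failed")
    rknn.release()
```

Note the original ONNX model going into this pipeline is unchanged, which is why it could also be run directly by a CPU runtime.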

@ramalamadingdong

I am also hoping to get NPU support on the board I am testing.

Without an NPU you might get some pretty bad performance; I think supporting many types of NPUs is really the move here.

I am hoping more and more companies build easier solutions for NPU support, but they are all currently a bit of a nightmare to get drivers working.

I am imagining a solution where we autodetect OS / NPU support or ask the user to pick, then download requirements and run on their drivers, but that's not easy.

@srimanachanta
Member Author

srimanachanta commented Jun 5, 2024

> I am also hoping to get NPU support on the board I am testing.
>
> Without an NPU you might get some pretty bad performance; I think supporting many types of NPUs is really the move here.
>
> I am hoping more and more companies build easier solutions for NPU support, but they are all currently a bit of a nightmare to get drivers working.
>
> I am imagining a solution where we autodetect OS / NPU support or ask the user to pick, then download requirements and run on their drivers, but that's not easy.

For FRC, what are the main NPUs we would have to target? I don't know of anything beyond the RKNN NPU that is widely used.

@ramalamadingdong

ramalamadingdong commented Jun 5, 2024

RPi AI Kit is an example

I can't release what I am working on yet but with the RFP for the new control system coming out I expect a lot of companies will start releasing boards that have NPU support at a cheaper cost.

@Alextopher
Contributor

The RPi AI Kit looks very promising and worth supporting.

I haven't found an off-the-shelf runtime that supports both it and the RK NPU.

Here's what I think we could look to support for 2025 (at least what I'm comfortable contributing):

  • RK NPU
  • RPi Hailo acceleration
  • Generic x86 and Arm CPUs

Longer term, maybe CUDA support for Jetsons, but I've got no experience with those devices.

Looking into onnxruntime: it advertises CPU and CUDA support, so that might be a good starting place. https://onnxruntime.ai/docs/execution-providers/
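As a sketch of what using onnxruntime's execution providers could look like, with a small helper that prefers CUDA and falls back to CPU (the helper names and preference order are assumptions, not anything from PhotonVision):

```python
def choose_providers(available,
                     preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Keep the preferred execution providers that are actually available,
    in order, always falling back to plain CPU."""
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

def load_session(model_path):
    """Open an ONNX model with the best provider this install offers."""
    import onnxruntime as ort
    providers = choose_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```

The same `InferenceSession` call then works unchanged whether the install has the CUDA build of onnxruntime or only the CPU one.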

@mcm001
Contributor

mcm001 commented Jul 6, 2024

A build of opencv with CUDA support and the DNN module should be all ya need for that. I've got opencv DNN cpu inference rotting in a branch somewhere already.
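A minimal sketch of what that OpenCV DNN setup could look like, written in Python for brevity (the referenced branch is not public here, so the function name and flags are illustrative; the CUDA constants only work against an OpenCV build compiled with CUDA support):

```python
def load_yolo_net(onnx_path, use_cuda=False):
    """Load a YOLO ONNX model via OpenCV's DNN module.

    CPU inference works with stock OpenCV; the CUDA path requires an
    OpenCV build with the CUDA DNN backend enabled.
    """
    import cv2

    net = cv2.dnn.readNetFromONNX(onnx_path)
    if use_cuda:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
    else:
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    return net
```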

@ramalamadingdong

> The RPi AI Kit looks very promising and worth supporting.
>
> I haven't found an off-the-shelf runtime that supports both it and the RK NPU.
>
> Here's what I think we could look to support for 2025 (at least what I'm comfortable contributing):
>
>   • RK NPU
>   • RPi Hailo acceleration
>   • Generic x86 and Arm CPUs
>
> Longer term, maybe CUDA support for Jetsons, but I've got no experience with those devices.
>
> Looking into onnxruntime: it advertises CPU and CUDA support, so that might be a good starting place. https://onnxruntime.ai/docs/execution-providers/

There are 3 parts to this, imo:

  • OpenGL support
  • NPU support; I'm pretty lost here. It seems like there are no universal NPU drivers yet, and it's a pain to get this going
  • CPU support, specifically Arm, for Object Detection

@Alextopher
Contributor

Alextopher commented Jul 9, 2024

  • OpenGL support
    Do you mean OpenCV? OpenCV has CPU support and can be compiled with CUDA support for Nvidia chips.

  • NPU support
    As far as I can tell there isn't a universal way to run NPU drivers, which is too bad. For the Hailo AI Kit and the Rockchip NPU there is a manual compilation step that needs to be carefully performed.

I'm going to expand the scope of #1359 to include a 'Backend' abstraction that can be implemented for each platform in the future through individual PRs.
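To illustrate what such a 'Backend' abstraction could look like, here is a sketch in Python for brevity (the real implementation in #1359 would live in photon-core, and every name below is hypothetical): each platform implements the same small interface, and the pipeline picks the first backend the hardware actually supports.

```python
from abc import ABC, abstractmethod

class ObjectDetectionBackend(ABC):
    """One implementation per platform: RKNN NPU, Hailo, CPU DNN, CUDA, ..."""

    @abstractmethod
    def supported(self) -> bool:
        """Whether this backend can run on the current hardware."""

    @abstractmethod
    def load(self, model_path: str) -> None:
        """Load (and, if needed, compile) the model for this platform."""

    @abstractmethod
    def detect(self, frame):
        """Run inference; return a list of (class_id, confidence, bbox)."""

def pick_backend(backends):
    """Return the first registered backend the current hardware supports."""
    for backend in backends:
        if backend.supported():
            return backend
    raise RuntimeError("no usable object-detection backend")
```

Registering the backends in preference order (NPU first, CPU last) would give the autodetect behavior discussed earlier in the thread.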

@ramalamadingdong

ramalamadingdong commented Jul 21, 2024

  • Sorry, that was a typo; OpenGL is for graphics applications. What I meant was OpenCL, which is fairly universal regardless of what GPU you're using; it parallelizes work across the GPU and does a pretty good job. To your point, OpenCV supports OpenCL as well :). Maybe I need to read the install scripts to verify whether PhotonVision already installs with OpenCL support.

  • I am curious whether the best step for NPU support is just a dropdown of supported platforms. I get that most companies have a preferred way for you to create the model, but I don't see why they can't run within a universal system? I am not super experienced with this and don't have the time to take it up yet.

Thanks! I didn't know about that PR; I will track it.
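For what it's worth, OpenCV's OpenCL offload (its "transparent API") can be exercised through cv2.UMat; a minimal sketch, assuming an OpenCV build with OpenCL enabled (function names here are illustrative):

```python
def opencl_available():
    """Report whether OpenCV can offload work through OpenCL."""
    import cv2
    cv2.ocl.setUseOpenCL(True)
    return cv2.ocl.haveOpenCL() and cv2.ocl.useOpenCL()

def blur_via_opencl(image):
    """Route a filter through cv2.UMat so OpenCV dispatches to OpenCL
    when a device is available, and silently falls back to CPU otherwise."""
    import cv2
    u = cv2.UMat(image)                    # upload to the OpenCL device
    out = cv2.GaussianBlur(u, (5, 5), 0)   # runs on GPU if OpenCL is active
    return out.get()                       # download back to a numpy array
```

The fallback behavior is the appeal here: the same code path works on machines with no usable OpenCL device at all.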
