
realtime detection on jetson using pi camera #13380

Open
1 task done
Munia-AK opened this issue Oct 24, 2024 · 2 comments
Labels
question Further information is requested

Comments


Munia-AK commented Oct 24, 2024

Search before asking

Question

I'm trying to run real-time detection on a Jetson Nano using a custom fine-tuned YOLOv5s model and a Pi camera. I already did this with the script below, which worked:
[Screenshot from 2024-10-24 11-57-39: working script using torch.hub.load]
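For context, a minimal sketch of what such a script typically looks like on a Jetson Nano, assuming a CSI Pi camera driven by the standard `nvarguscamerasrc` GStreamer pipeline; `/path/to/yolov5` and `best.pt` are placeholders for your own paths. Note that `torch.hub.load` can also run fully offline if you point it at a local clone of the repo with `source="local"`, which may already solve the original problem:

```python
def gstreamer_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build the nvarguscamerasrc pipeline string for a CSI Pi camera."""
    return (
        "nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width=(int){width}, height=(int){height}, "
        f"format=(string)NV12, framerate=(fraction){fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "
        f"video/x-raw, width=(int){width}, height=(int){height}, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    )

def main():
    # Heavy imports are deferred so the pipeline helper stays dependency-free.
    import cv2
    import torch

    # source="local" loads from a local clone of the yolov5 repo -- no internet needed.
    model = torch.hub.load("/path/to/yolov5", "custom", path="best.pt", source="local")

    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)                     # AutoShape handles pre/post-processing
        cv2.imshow("YOLOv5", results.render()[0])  # Detections.render() draws the boxes
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```

Call `main()` to start the loop; pressing `q` exits.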

However, I don't want to use torch.hub to load the model because it requires an internet connection; I need this to work offline. In the same code I tried to load the model from its path without torch.hub.load, for example:

    model = 'best.py'

and then pass each frame directly as input, like results = model(frame), but this didn't work and raised an error.

I know the solution lies in the detect.py script: I ran detect.py with a webcam on the Jetson Nano without an internet connection and it worked.
So I made two attempts. In the first, I kept the same code as before but took the model-loading parts from detect.py and used them to replace torch.hub.load, like this:
[Screenshot from 2024-10-24 13-25-05: modified model-loading code]

but this raised an error too, which I wasn't able to fix:

    File "detect_.py", line 71, in <module>
        result_img = result.render()[0]  # Render the detection and get the image
    AttributeError: 'Tensor' object has no attribute 'render'
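The AttributeError is expected: detect.py loads the model with DetectMultiBackend, whose forward pass returns raw prediction tensors, whereas torch.hub's AutoShape wrapper returns a Detections object that has .render(). With DetectMultiBackend you must letterbox the frame, run non_max_suppression, and draw the boxes yourself. A hedged sketch of that per-frame pipeline, assuming it runs from inside the yolov5 repo so the utils imports resolve (in older releases scale_boxes was named scale_coords):

```python
import math

def make_divisible(x, divisor=32):
    """Round x up to the nearest multiple of divisor (YOLOv5 stride check)."""
    return int(math.ceil(x / divisor) * divisor)

def detect_frame(model, frame, imgsz=640, conf_thres=0.25, iou_thres=0.45):
    """Run one BGR frame through a DetectMultiBackend model and draw the boxes."""
    # Deferred imports: torch and the yolov5 utils are only needed at inference time.
    import torch
    from utils.augmentations import letterbox
    from utils.general import non_max_suppression, scale_boxes
    from utils.plots import Annotator, colors

    # Pre-process: letterbox to a stride-multiple size, BGR->RGB, HWC->CHW, 0-1 float
    im = letterbox(frame, make_divisible(imgsz), stride=int(model.stride), auto=True)[0]
    im = im.transpose((2, 0, 1))[::-1]
    im = torch.from_numpy(im.copy()).to(model.device).float() / 255

    # Forward pass returns raw tensors -- there is no .render() at this level
    pred = model(im[None])
    pred = non_max_suppression(pred, conf_thres, iou_thres)[0]

    # Post-process: rescale boxes to the original frame and draw them
    annotator = Annotator(frame)
    if len(pred):
        pred[:, :4] = scale_boxes(im.shape[1:], pred[:, :4], frame.shape).round()
        for *xyxy, conf, cls in pred:
            label = f"{model.names[int(cls)]} {conf:.2f}"
            annotator.box_label(xyxy, label, color=colors(int(cls), True))
    return annotator.result()
```

This mirrors what detect.py itself does internally, minus the dataloader and saving logic.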

In the second attempt I edited the detect.py script, trying to add the command that opens the Pi camera through a GStreamer pipeline. Specifically, I edited the following three parts, believing I should change the webcam sections so the code opens the Pi camera instead of the webcam when --source 0 is chosen. After editing:

part 1:
    def run(
        weights=ROOT / "yolov5s.pt",  # model path or triton URL
        source="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink",
        ...
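One caveat with this approach: detect.py only takes the webcam/stream path when source.isnumeric() (or when it is a .streams file or a stream URL), so passing the raw GStreamer string as --source sends it down the image/video loader, which then fails. A workaround is to keep --source 0 and substitute the pipeline string at the point where the stream loader opens cv2.VideoCapture (LoadStreams in utils/dataloaders.py). The helper below is a hypothetical sketch of that mapping, with pipeline values taken from the snippet above:

```python
def resolve_source(source: str) -> str:
    """Map the webcam index "0" to the CSI Pi camera GStreamer pipeline.

    Hypothetical helper: keeping --source 0 means detect.py's webcam check
    still passes, and the substitution happens only where LoadStreams builds
    its cv2.VideoCapture (OpenCV must be built with GStreamer support).
    """
    gst = (
        "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, "
        "height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! "
        "nvvidconv flip-method=0 ! "
        "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! "
        "videoconvert ! video/x-raw, format=(string)BGR ! appsink"
    )
    return gst if source == "0" else source
```

Any other source (a file path, an RTSP URL) passes through unchanged, so the rest of detect.py keeps working as before.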
part 2:
[Screenshot from 2024-10-24 11-58-50: edited detect.py section]

part 3:
[Screenshot from 2024-10-24 12-13-08: edited detect.py section]

but this didn't work and threw an error:

[Screenshot from 2024-10-24 13-01-49: error traceback]

I couldn't fix the errors in any of these attempts, so I'm not sure whether I'm following the right path. If this is doable, can you please guide me on how to make the Pi camera do the detection without using torch.hub.load?

Additional

No response

@Munia-AK Munia-AK added the question Further information is requested label Oct 24, 2024
@UltralyticsAssistant
Member

👋 Hello @Munia-AK, thank you for reaching out with your query on YOLOv5 🚀!

It seems you're aiming to run real-time detection on a Jetson Nano using a Pi camera without relying on an internet connection for torch.hub. You're on the right track with modifying detect.py. This kind of setup can indeed be a bit tricky!

Please make sure to provide a minimum reproducible example that can help us debug the situation. This will assist in pinpointing what might be going wrong with the changes you made.

Here are a few steps and resources that might help:

  • Ensure your environment is set up correctly. YOLOv5 requires Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. You can set up the environment with:

    git clone https://github.com/ultralytics/yolov5  # clone
    cd yolov5
    pip install -r requirements.txt  # install
  • Since you're working with a Jetson Nano, ensure your GStreamer pipeline for the Pi camera is correctly configured. Double-check the syntax and compatibility of your GStreamer string.

  • If you are using CUDA on your Jetson Nano, ensure it's properly installed and functional to leverage GPU acceleration.

If you're considering alternatives, you might want to check out the newer YOLOv8 model, designed to be fast and efficient. You can install it using:

pip install ultralytics

An Ultralytics engineer will assist you soon! Meanwhile, please explore our Tutorials for more guidance, including Tips for Best Training Results.

Feel free to share any additional details or screenshots that might aid in diagnosing the issue. Good luck, and we're here to support you! 🚀

@pderrenger
Member

@Munia-AK to run YOLOv5 on a Jetson Nano without internet, you can load the model directly with the repo's DetectMultiBackend class instead of torch.hub. First, ensure your model is saved as a .pt file, then load it and run inference on preprocessed frames. Here's a basic example:

    import torch
    from models.common import DetectMultiBackend
    from utils.general import non_max_suppression

    # Load model (run this from inside the yolov5 repo so the imports resolve)
    model = DetectMultiBackend('best.pt', device=torch.device('cuda:0'))  # adjust path as necessary

    # Perform inference -- 'frame' must be preprocessed into a normalized CHW
    # float tensor first, and the raw predictions still need
    # non_max_suppression() before the boxes are usable
    results = model(frame)  # replace 'frame' with your input tensor

Ensure your environment is set up with all necessary dependencies from the requirements.txt file. If you encounter specific errors, please share them for further assistance.
