Search before asking

I have searched the YOLOv5 issues and discussions and found no similar questions.
Question
I'm trying to run real-time detection on a Jetson Nano using a custom fine-tuned YOLOv5s model and a Pi camera. I already did this with the script below, which worked:
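(The working script itself didn't come through in the issue text; the following is only a minimal sketch of what a torch.hub based Pi-camera loop of this kind usually looks like, assuming OpenCV built with GStreamer support and the custom weights saved as best.pt. It is not the original script.)

import cv2
import torch

# Load the custom model through torch.hub (this is the step that needs an internet connection on first use)
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# GStreamer pipeline for the Jetson Nano CSI (Pi) camera
gst = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! "
    "video/x-raw, format=(string)BGR ! appsink"
)
cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # AutoShape expects RGB input
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)  # draw boxes, back to BGR for display
    cv2.imshow('YOLOv5', annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()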
However, I don't want to use torch.hub to load the model because it requires an internet connection; I need this to work offline. In the same code I tried loading the model from its path without torch.hub.load, for example: model = 'best.py'
and then passed each frame directly as input, like: results = model(frame)
but this didn't work and gave an error.
I know the solution lies in the detect.py script: I ran detect.py with a webcam on the Jetson Nano without an internet connection and it worked.
So I made two attempts. In the first attempt I used the same code as before, but took the model-loading parts from detect.py and added them in place of torch.hub.load, like this:
but this also gave an error that I wasn't able to fix in the end:

File "detect_.py", line 71, in
    result_img = result.render()[0]  # Render the detection and get the image
AttributeError: 'Tensor' object has no attribute 'render'
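For context: .render() exists on the Detections object returned by the torch.hub AutoShape wrapper, whereas the model that detect.py builds through DetectMultiBackend returns raw prediction tensors, so those predictions still need non_max_suppression and manual box drawing. Below is a minimal sketch of that post-processing, assuming the script runs from the yolov5 repo root with best.pt alongside it; the exact helper names vary slightly between YOLOv5 releases (e.g. scale_boxes was called scale_coords in older versions).

import torch
from models.common import DetectMultiBackend
from utils.augmentations import letterbox
from utils.general import non_max_suppression, scale_boxes
from utils.plots import Annotator, colors

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = DetectMultiBackend('best.pt', device=device)  # offline load, no torch.hub

def detect(frame):
    # Preprocess: letterbox to 640, BGR->RGB, HWC->CHW, scale to 0-1
    im = letterbox(frame, 640, stride=model.stride, auto=True)[0]
    im = im[:, :, ::-1].transpose(2, 0, 1).copy()
    im = torch.from_numpy(im).to(device).float() / 255.0
    im = im.unsqueeze(0)

    pred = model(im)  # raw predictions; there is no .render() here
    pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)[0]

    # Scale boxes back to the original frame size and draw them
    annotator = Annotator(frame, line_width=2)
    if len(pred):
        pred[:, :4] = scale_boxes(im.shape[2:], pred[:, :4], frame.shape).round()
        for *xyxy, conf, cls in pred:
            annotator.box_label(xyxy, f'{model.names[int(cls)]} {conf:.2f}', color=colors(int(cls), True))
    return annotator.result()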
In the second attempt I edited the detect.py script, trying to add the command that runs the Pi camera through a GStreamer pipeline. Specifically, I edited the following three parts, believing I should change the webcam sections so that the code runs the Pi camera instead of the webcam when --source 0 is chosen.

After editing:

part 1:

def run(
    weights=ROOT / "yolov5s.pt",  # model path or triton URL
    source="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink",
    ...

part 2:

part 3:

but this didn't work and threw an error.
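For reference, in recent versions of detect.py the --source value is treated as a webcam/stream only when it is numeric (or a URL / .streams file), so replacing the default source with a long GStreamer string, as in part 1 above, makes the script treat it as a file path rather than a live stream. One less invasive approach often used on Jetson boards (a sketch only; the exact file and lines differ between YOLOv5 versions) is to keep --source 0 and change where LoadStreams in utils/dataloaders.py opens the capture, so that source 0 goes through GStreamer:

# utils/dataloaders.py, inside LoadStreams.__init__ (location varies by YOLOv5 version)
GST_PIPELINE = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! "
    "video/x-raw, format=(string)BGR ! appsink"
)

s = eval(s) if s.isnumeric() else s  # existing line: '0' -> 0 for a local webcam
if s == 0:
    # open the CSI (Pi) camera through GStreamer instead of the default webcam backend
    cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)
else:
    cap = cv2.VideoCapture(s)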
I couldn't fix the errors from any of these attempts, so I'm not sure whether I'm on the right path. If this is doable, could you please guide me on how to run detection from the Pi camera without using torch.hub.load?
Additional
No response
👋 Hello @Munia-AK, thank you for reaching out with your query on YOLOv5 🚀!
It seems you're aiming to run real-time detection on a Jetson Nano using a Pi camera without relying on an internet connection for torch.hub. You're on the right track with modifying detect.py. This kind of setup can indeed be a bit tricky!
Please make sure to provide a minimum reproducible example that can help us debug the situation. This will assist in pinpointing what might be going wrong with the changes you made.
Here are a few steps and resources that might help:
Ensure your environment is set up correctly. YOLOv5 requires Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. You can set up the environment with:
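For example, the standard quickstart from the YOLOv5 README:

git clone https://github.com/ultralytics/yolov5  # clone the repository
cd yolov5
pip install -r requirements.txt  # install dependencies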
Since you're working with a Jetson Nano, ensure your GStreamer pipeline for the Pi camera is correctly configured. Double-check the syntax and compatibility of your GStreamer string.
If you are using CUDA on your Jetson Nano, ensure it's properly installed and functional to leverage GPU acceleration.
If you're considering alternatives, you might want to check out the newer YOLOv8 model, designed to be fast and efficient. You can install it using:
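For example:

pip install ultralytics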
@Munia-AK to run YOLOv5 on a Jetson Nano without internet, you can load the model directly with PyTorch. First, ensure your model is saved as a .pt file. Then load it through DetectMultiBackend (which reads the .pt weights locally) and run inference by calling the model. Here's a basic example:
import torch
from models.common import DetectMultiBackend

# Load model
model = DetectMultiBackend('best.pt', device=torch.device('cuda'))  # Adjust path as necessary

# Perform inference
results = model(frame)  # Replace 'frame' with your input data
Ensure your environment is set up with all necessary dependencies from the requirements.txt file. If you encounter specific errors, please share them for further assistance.
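One caveat worth adding: unlike the torch.hub AutoShape wrapper, a DetectMultiBackend model does not letterbox, normalize, or run NMS by itself, so frame in the snippet above needs to be a preprocessed CHW float tensor, and the raw output still needs non_max_suppression before boxes can be drawn (see the post-processing sketch earlier in the thread).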