YOLOv8 pose estimation with ncnn on Android
This is a sample ncnn Android project. It depends on the ncnn library and opencv-mobile:
https://github.com/Tencent/ncnn
https://github.com/nihui/opencv-mobile
Install the ultralytics library.
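For example, with pip:

```shell
pip install ultralytics
```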
Then use the yolo CLI to export the model:
yolo export model=yolov8s-pose.pt format=ncnn # export official model
Or, in a Python environment:
from ultralytics import YOLO
# Load a model
model = YOLO('yolov8s-pose.pt') # load an official model
# Export the model
model.export(format='ncnn')
Then rename the exported ncnn model files and put them into the app's "assets" directory.
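A minimal sketch of this step, assuming the export produced a yolov8s-pose_ncnn_model/ directory containing model.ncnn.param and model.ncnn.bin, and assuming the app expects files named yolov8s-pose.param / yolov8s-pose.bin (adjust to whatever names the app code actually loads):

```shell
# copy and rename the exported ncnn files into the app's assets directory
cp yolov8s-pose_ncnn_model/model.ncnn.param app/src/main/assets/yolov8s-pose.param
cp yolov8s-pose_ncnn_model/model.ncnn.bin   app/src/main/assets/yolov8s-pose.bin
```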
https://github.com/Tencent/ncnn/releases
- Download ncnn-YYYYMMDD-android-vulkan.zip or build ncnn for Android yourself
- Extract ncnn-YYYYMMDD-android-vulkan.zip into app/src/main/jni and change the ncnn_DIR path to yours in app/src/main/jni/CMakeLists.txt
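A sketch of this step, assuming the prebuilt release package keeps its default layout (per-ABI cmake config under the extracted package at ABI/lib/cmake/ncnn):

```shell
# run from the project root
cd app/src/main/jni
unzip /path/to/ncnn-YYYYMMDD-android-vulkan.zip
# then point ncnn_DIR in CMakeLists.txt at the extracted package, e.g.
# set(ncnn_DIR ${CMAKE_SOURCE_DIR}/ncnn-YYYYMMDD-android-vulkan/${ANDROID_ABI}/lib/cmake/ncnn)
```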
https://github.com/nihui/opencv-mobile
- Download opencv-mobile-XYZ-android.zip
- Extract opencv-mobile-XYZ-android.zip into app/src/main/jni and change the OpenCV_DIR path to yours in app/src/main/jni/CMakeLists.txt
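Likewise for opencv-mobile, assuming the extracted package follows the usual OpenCV Android SDK layout (OpenCVConfig.cmake under sdk/native/jni):

```shell
# run from the project root
cd app/src/main/jni
unzip /path/to/opencv-mobile-XYZ-android.zip
# then point OpenCV_DIR in CMakeLists.txt at the extracted package, e.g.
# set(OpenCV_DIR ${CMAKE_SOURCE_DIR}/opencv-mobile-XYZ-android/sdk/native/jni)
```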
- Open this project with Android Studio, build it and enjoy!
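If you prefer the command line, a debug build can usually be produced with the Gradle wrapper (assuming the project ships one, as Android Studio projects typically do):

```shell
./gradlew assembleDebug
```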
- The Android NDK camera is used for best efficiency
- A crash may happen on very old devices that lack the HAL3 camera interface
- All models are manually modified to accept dynamic input shapes
- Most small models run slower on the GPU than on the CPU; this is common
- FPS may be lower in dark environments because of longer camera exposure time
References:
- https://github.com/nihui/ncnn-android-nanodet
- https://github.com/Tencent/ncnn
- https://github.com/ultralytics/assets/releases/tag/v0.0.0
- https://github.com/FeiGeChuanShu/ncnn-android-yolov8