Use an LLM to analyze and visualize sensor data in rosbags and generate reports.
Demo video: `report.webm`
- ROS versions: Foxy, Humble
- rosbag formats: `.mcap`, `.db3`
- ROS topic types: `sensor_msgs/Image`, `sbg_driver/SbgGpsPos`, `sensor_msgs/PointCloud2`, `sensor_msgs/LaserScan`
- LLM: gpt-4o
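gpt-4o accepts images as base64-encoded data URLs inside a chat message. The sketch below shows how a camera frame could be packaged for such a request; the helper names and message layout are illustrative, not this repository's code:

```python
import base64

def image_to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data URL suitable for a vision-model input."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

def build_vision_message(prompt: str, image_bytes: bytes) -> dict:
    """Assemble one chat message carrying both a text prompt and an image."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": image_to_data_url(image_bytes)}},
        ],
    }
```

A message built this way can be passed as one element of the `messages` list in a chat-completions request.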
Demo video: `report_generation.webm`
```shell
conda env create -n rosbag_gpt -f environment.yaml
conda activate rosbag_gpt
python3 demo.py
```
- Extract specific topic messages from a ros2 bag
- Extract all message frames at a given timestamp from a ros2 bag
- Draw a path map from a ros2 bag or a CSV file
- Use gpt-4o to analyze images and generate a report
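Extracting "the frame at a timestamp" amounts to picking, per topic, the message whose stamp is closest to the query time. A pure-Python sketch, assuming messages are already decoded into `(timestamp, message)` pairs sorted by time (all names here are hypothetical):

```python
from bisect import bisect_left

def frame_at(messages, t):
    """Return the (timestamp, message) pair whose timestamp is closest to t.

    messages: list of (timestamp, message) tuples sorted by timestamp.
    """
    if not messages:
        raise ValueError("no messages")
    stamps = [ts for ts, _ in messages]
    i = bisect_left(stamps, t)
    if i == 0:                    # query is before the first message
        return messages[0]
    if i == len(messages):        # query is after the last message
        return messages[-1]
    before, after = messages[i - 1], messages[i]
    # pick whichever neighbour is nearer in time
    return before if (t - before[0]) <= (after[0] - t) else after
```

Running this once per recorded topic assembles a full multi-sensor frame for the query time.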
The unittest fixture files can be found at xrkong/nuway_rosbag:

```shell
huggingface-cli download --repo-type dataset --local-dir ./unittest/fixture xrkong/nuway_rosbag
```

Put the files under `/unittest/fixture`, then check `/unittest/test_unittest.py`.
- Deserialize messages from a ros2 bag (`.db3` file)
- Get data such as `/lidar_safety/front_left/scan`, `/ins0/gps_pos`, `/ins0/orientation`
- Plot them
- If there is a vehicle icon, draw it at the center of the map
- Plot the map
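Plotting the path map requires projecting GPS fixes to local metric coordinates first. A minimal sketch using the equirectangular approximation, which is adequate over the few-kilometre extent of a vehicle route (function and constant names are illustrative):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in metres

def gps_to_local_xy(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    """Project a lat/lon fix to (x, y) metres relative to an origin fix.

    Equirectangular approximation: longitude differences are scaled by
    cos(origin latitude), so errors stay small near the origin.
    """
    lat0 = math.radians(origin_lat_deg)
    x = math.radians(lon_deg - origin_lon_deg) * math.cos(lat0) * EARTH_RADIUS_M
    y = math.radians(lat_deg - origin_lat_deg) * EARTH_RADIUS_M
    return x, y
```

Applying this to every `/ins0/gps_pos` fix (with the first fix as origin) yields x/y points that can be drawn directly, with the vehicle icon placed at the plot center.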