Hi, I have a question about visualization of ground truth bounding boxes.
As far as I've found, the code only provides visualization of GT boxes in a bird's-eye view, but I want to visualize them on each camera view.
In my understanding, the gt_box_tensor on line 115 of inference.py is in the ego lidar coordinate frame.
So I transformed gt_box_tensor by multiplying it by the inverse of transformation_matrix, then by each camera's extrinsic and intrinsic, to obtain bboxes in each image coordinate frame.
The transformation_matrix projects each agent's lidar coordinates into the ego coordinate frame; it is calculated at line 444 of basedataset.py.
However, the projected bboxes seem inaccurate, and there are sometimes weird bboxes, e.g., bboxes in mid-air.
Is the transformation I did wrong, or are the extrinsics inaccurate?
Also, could you please publish your code to visualize bboxes on each camera view, if you have it?
Examples of the visualization results are shown below. The green boxes are the projected GT boxes.
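For reference, the projection chain described above can be sketched roughly as follows. This is a minimal sketch, not the repo's actual code: the function name and the layout of the inputs (boxes as N×8×3 corner arrays, a 4×4 lidar-to-ego transform, a 4×4 lidar-to-camera extrinsic, a 3×3 intrinsic) are my assumptions; if the dataset stores camera-to-lidar extrinsics instead, the extrinsic must be inverted first.

```python
import numpy as np

def project_boxes_to_image(corners_ego, T_lidar_to_ego, extrinsic, intrinsic):
    """Project GT box corners (N, 8, 3) from the ego lidar frame
    onto one camera's image plane.

    Assumes `extrinsic` maps the agent's lidar frame to the camera
    frame (4x4) and `intrinsic` is the 3x3 pinhole camera matrix.
    Returns pixel coordinates (N, 8, 2) and a boolean mask (N, 8)
    of corners lying in front of the camera.
    """
    n = corners_ego.shape[0]
    # Homogeneous coordinates: (N*8, 4)
    pts = np.concatenate(
        [corners_ego.reshape(-1, 3), np.ones((n * 8, 1))], axis=1)
    # Ego lidar frame -> agent lidar frame (inverse of lidar-to-ego)
    pts = pts @ np.linalg.inv(T_lidar_to_ego).T
    # Agent lidar frame -> camera frame
    pts_cam = (pts @ extrinsic.T)[:, :3]
    # Camera frame -> pixel coordinates (perspective division by depth)
    depth = pts_cam[:, 2:3]
    uv = (pts_cam @ intrinsic.T)[:, :2] / np.clip(depth, 1e-6, None)
    return uv.reshape(n, 8, 2), (depth.reshape(n, 8) > 0)
```

One common source of boxes appearing in "mid-air" is drawing corners whose depth is non-positive: points behind the camera still land on valid-looking pixel coordinates after the perspective division, so the mask above should be used to discard them before drawing.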
Sorry for the late response. Thank you for pointing this out. This problem does exist in our verification.
The calibration of the intersection scenes 117-120 is not accurate. This affects camera-based methods in RCooper scenarios.
We will continue to work with CAIC to update the calibration parameters.
ryhnhao changed the title from "Visualization of ground truth bounding boxes on each camera view" to "BBox Shift during Visualization because of Inaccurate Calibration" on Sep 26, 2024.