First of all, thank you for your research; it has been very inspiring to me. The reconstruction results I obtained with your project on the Replica dataset were satisfactory. However, I ran into some issues with a custom dataset. For certain reasons I could not use the online or offline modes of NerfCapture, so I used Polycam instead to capture scenes and generate the depth images, RGB images, and transforms.json.
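Since Polycam exports poses in its own convention, a quick sanity check on the exported transforms.json can rule out malformed matrices before suspecting the tracker. This is only an illustrative sketch, not part of SplaTAM; the function name and checks are mine:

```python
import numpy as np

def validate_transforms(transforms):
    """Basic sanity checks on a NeRF-style transforms dict.

    Verifies that every frame carries a 4x4 camera-to-world matrix whose
    rotation block is orthonormal with determinant +1 -- a common failure
    mode when poses come from a different capture app.
    """
    issues = []
    for i, frame in enumerate(transforms["frames"]):
        m = np.asarray(frame["transform_matrix"], dtype=np.float64)
        if m.shape != (4, 4):
            issues.append(f"frame {i}: matrix shape {m.shape}, expected (4, 4)")
            continue
        r = m[:3, :3]
        if not np.allclose(r @ r.T, np.eye(3), atol=1e-3):
            issues.append(f"frame {i}: rotation block is not orthonormal")
        elif np.linalg.det(r) < 0:
            issues.append(f"frame {i}: rotation has negative determinant (reflection)")
    return issues

# Minimal usage on a synthetic two-frame dict:
example = {
    "frames": [
        {"transform_matrix": np.eye(4).tolist()},
        {"transform_matrix": np.diag([1.0, -1.0, -1.0, 1.0]).tolist()},
    ]
}
print(validate_transforms(example))  # -> [] (both poses pass)
```

In practice you would load the dict with `json.load(open(".../transforms.json"))` and run the same check over all 188 frames.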
Then I modified the following config values in configs/iphone/splatam.py:

```python
base_dir = "./data"
scene_name = "poly1"
num_frames = 188
```

and ran the following command to reconstruct the scene:

```shell
python scripts/splatam.py configs/iphone/splatam.py
```
When I set use_gt_poses = True, the reconstructed room roughly satisfied its original geometric constraints:
But when I set use_gt_poses = False, the reconstructed scene started to look strange:
It looks like a spiral that strings the trained RGB images together along a line, i.e., the camera pose estimates have drifted a lot.
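To quantify how far the estimated trajectory drifts from the Polycam poses, one could compare camera centers directly. A minimal sketch, assuming both trajectories are available as lists of 4x4 camera-to-world matrices; the helper name is mine (SplaTAM's own eval reports a similar ATE metric):

```python
import numpy as np

def ate_rmse(gt_poses, est_poses):
    """Absolute trajectory error (translation RMSE) between two pose lists.

    Each pose is a 4x4 camera-to-world matrix. No Sim(3) alignment is
    performed, so this is only meaningful when both trajectories start
    from the same first frame.
    """
    gt = np.asarray([p[:3, 3] for p in gt_poses])
    est = np.asarray([p[:3, 3] for p in est_poses])
    # Anchor both trajectories at their first camera position.
    gt = gt - gt[0]
    est = est - est[0]
    return float(np.sqrt(np.mean(np.sum((gt - est) ** 2, axis=1))))

# Identical trajectories give zero error:
a = np.eye(4)
b = np.eye(4)
b[:3, 3] = [1.0, 0.0, 0.0]
print(ate_rmse([a, b], [a, b]))  # -> 0.0
```

A large RMSE relative to the room size would confirm that tracking, not mapping, is where the reconstruction falls apart.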
At first I thought something was wrong with the depth maps, so I opened the eval directory to look at the evaluation screenshots:
From the screenshots, the depth maps look fine.
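Even if the rendered depth in the eval screenshots looks plausible, the metric scale can still be off, and pose tracking is far more sensitive to that than mapping with ground-truth poses. A quick check of the depth range in meters, assuming the loader divides raw PNG values by a png_depth_scale-style factor (the key name may differ in your config; the function is mine):

```python
import numpy as np

def depth_stats(raw_depth, png_depth_scale):
    """Convert a raw uint16 depth image to meters and report its range.

    If the reported range is implausible for an indoor room (e.g. a few
    millimeters, or hundreds of meters), the depth scale in the config is
    likely wrong for the capture app that produced the PNGs.
    """
    meters = raw_depth.astype(np.float64) / png_depth_scale
    valid = meters[meters > 0]  # zero usually marks missing depth
    return {"min_m": float(valid.min()),
            "max_m": float(valid.max()),
            "mean_m": float(valid.mean())}

# Synthetic example: millimeter-valued uint16 depth, scale 1000 -> meters.
raw = np.array([[0, 1500], [2500, 3000]], dtype=np.uint16)
print(depth_stats(raw, png_depth_scale=1000.0))  # valid pixels span 1.5-3.0 m
```

On a real capture you would load one of your depth PNGs (e.g. with imageio or cv2) and confirm the converted range matches the actual room dimensions.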
So I don't know where I went wrong. Any suggestions would be greatly appreciated.
When I set use_gt_poses = True, I first had to fix wrong camera poses in my custom dataset, which took me several days to solve...
I wonder why the default value of use_gt_poses is False?
![0000](https://github.com/spla-tam/SplaTAM/assets/28862680/079a0e1c-a0da-498d-becf-609b3e49cd22)
I have also uploaded my custom dataset to Google Drive so the issue can be reproduced:
https://drive.google.com/file/d/1ogvhCim9vXZjVK0EpqAqTat93bAbUblp/view?usp=sharing
Thanks; I look forward to your reply.