Vox-Fusion-Robust

Video | Paper | Slides | Datasets

Authors: Andreya Ware, Che Chen, Swetha Subbiah, Ved Abhyankar, Tiancheng Zhang

Achieve illumination robustness for Vox-Fusion. Course Project for ROB 530 Mobile Robotics W23.

Comparison

result

Main Contributions

  • Building on Vox-Fusion, we use a per-image embedding and a single MLP layer to predict an affine transformation in color space (first proposed in URF), making the SLAM algorithm robust to global illumination changes.

decoder

  • Created datasets for evaluating both global and local illumination changes, available here
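The affine color correction described above can be sketched as follows. This is a minimal NumPy illustration, not the repository's code: the embedding size, the identity initialization, and the function names are assumptions, and the real system trains the embedding and MLP jointly with the rest of the pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 16  # assumed per-image embedding size


def init_mlp(embed_dim=EMBED_DIM):
    """Single linear layer mapping an image embedding to 12 affine params."""
    W = rng.normal(scale=0.01, size=(12, embed_dim))
    b = np.zeros(12)
    b[[0, 4, 8]] = 1.0  # bias the 3x3 color matrix toward identity
    return W, b


def affine_color_correction(rgb, embedding, W, b):
    """Apply the predicted affine map in color space: c' = A @ c + t."""
    params = W @ embedding + b
    A = params[:9].reshape(3, 3)  # 3x3 color matrix
    t = params[9:]                # 3-vector offset
    return rgb @ A.T + t          # rgb: (N, 3) array of colors


# Example: correct rendered colors given one frame's embedding.
W, b = init_mlp()
embedding = np.zeros(EMBED_DIM)   # an untrained (zero) embedding
rgb = rng.uniform(size=(5, 3))    # five sample pixel colors
corrected = affine_color_correction(rgb, embedding, W, b)
```

With a zero embedding the predicted map is the identity, so rendering is unchanged; during training, each frame's embedding learns to absorb that frame's global exposure/white-balance shift instead of corrupting the scene representation.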

Install

  • Install CUDA 11.7 and Python >= 3.8

  • Install poetry

  • Point poetry at a prepared Python interpreter:

```shell
poetry env use /path/to/python
```

  • Install the Python environment:

```shell
export PYTHON_KEYRING_BACKEND=keyring.backends.null.Keyring
poetry install
```

  • Enter the environment:

```shell
poetry shell
```

  • Build the third-party libraries:

```shell
./install.sh
```

Training

  • Run:

```shell
poetry run python demo/run.py configs/replica_robust/room_0_global.yaml
```

  • Training logs are stored in the log directory

Evaluation

Several evaluation scripts are provided in utils:

  • eval_mesh.py - evaluates mesh reconstruction quality
  • eval_track.py - evaluates tracking (camera pose) performance
  • rerender_replica.py - re-renders scenes from the Replica dataset
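For reference, tracking quality in SLAM is commonly reported as absolute trajectory error (ATE) RMSE. The sketch below is an illustration of the metric only, not necessarily what eval_track.py computes; it assumes the two trajectories are time-aligned and expressed in the same frame (a full evaluation would typically add Umeyama alignment first).

```python
import numpy as np


def ate_rmse(gt, est):
    """RMSE of translational error between two aligned trajectories.

    gt, est: (N, 3) arrays of camera positions, assumed time-aligned
    and already expressed in the same coordinate frame.
    """
    err = gt - est
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))


# Example: a straight-line trajectory with a constant 10 cm lateral offset.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([0.0, 0.1, 0.0])
print(ate_rmse(gt, est))  # ~0.1
```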
