This repository open-sources the code, trained models, and corresponding results for CODON.
The released model has been further fine-tuned by us and performs better than the results reported in the paper.
If you find our work useful, please consider citing:
@article{yang2022codon,
  title={CODON: On Orchestrating Cross-Domain Attentions for Depth Super-Resolution},
  author={Yang, Yuxiang and Cao, Qi and Zhang, Jing and Tao, Dacheng},
  journal={International Journal of Computer Vision},
  volume={130},
  number={2},
  pages={267--284},
  year={2022},
  publisher={Springer}
}