The code repository for Depth-Aware Endoscopic Video Inpainting. The pre-trained model for our paper can be found here. The arXiv version of the paper can be found here.
If you encounter any difficulty in implementing our work, please feel free to contact me ([email protected]).
To run our code, you first need to prepare depth pseudo ground truth for your data. In our experiments, we generated the depth pseudo ground truth using AF-SfMLearner. To reproduce our work, you can find our extracted depth data at this link.
Due to the size of the depth data, only the test set depth ground truth is currently uploaded. The full dataset will be made available once a more efficient data-sharing method is implemented.
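Before training, it can help to sanity-check the prepared depth pseudo ground truth. The sketch below is only illustrative and assumes each depth map is a 2-D array (e.g. exported by a monocular depth estimator such as AF-SfMLearner); the function name and normalization scheme here are our own assumptions, not the released code's API:

```python
import numpy as np

def normalize_depth(depth):
    """Min-max normalize a depth pseudo ground-truth map to [0, 1].

    Hypothetical helper for sanity-checking prepared depth data;
    the actual preprocessing in the repository may differ.
    """
    depth = np.asarray(depth, dtype=np.float32)
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min < 1e-8:  # guard against constant maps (avoid divide-by-zero)
        return np.zeros_like(depth)
    return (depth - d_min) / (d_max - d_min)

# Illustrative usage: a random map standing in for an estimator output.
dummy = np.random.rand(256, 320) * 10.0
norm = normalize_depth(dummy)
print(norm.shape, float(norm.min()), float(norm.max()))
```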
This code was developed with Python 3.8.5. Install the dependencies with:

```shell
pip install -r requirement.txt
```
```shell
python train.py --model DAEVI --config {Your Config File Path}.json
```
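The file passed via `--config` is a JSON configuration. The snippet below is only a sketch of what such a file might contain; every key and value here is hypothetical, so please check the config files shipped with the repository for the actual schema:

```json
{
  "model": "DAEVI",
  "data_root": "datasets/EndoSTTN_dataset",
  "batch_size": 4,
  "lr": 0.0001,
  "epochs": 100
}
```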
```shell
python test.py --gpu 0 --overlaid --output results/DAEVI_Output/ --frame datasets/EndoSTTN_dataset/JPEGImages --mask datasets/EndoSTTN_dataset/Annotations --model DAEVI -c release_model/DAEVI_24g -cn 20 --zip --ref_num 10
```
- Repository: Endo-STTN.
- Repository: AF-SfMLearner.
If you find this work useful, please consider citing our paper:
```bibtex
@inproceedings{zhang24Depth,
  author={Zhang, Francis Xiatian and Chen, Shuang and Xie, Xianghua and Shum, Hubert P. H.},
  booktitle={Proceedings of the 2024 International Conference on Medical Image Computing and Computer Assisted Intervention},
  series={MICCAI '24},
  title={Depth-Aware Endoscopic Video Inpainting},
  year={2024},
  publisher={Springer},
  location={Marrakesh, Morocco},
}
```