
D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video

Official implementation of Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video built on the NeRFICG framework.

Getting Started

This project is built on the NeRFICG framework. Before cloning this repository, ensure the framework is set up:

  • Follow the instructions in the Getting Started section of the main nerficg repository (tested with commit c8e258b, PyTorch 2.5).
  • After setting up the framework, navigate to its top-level directory:
     cd <Path/to/framework/>nerficg
  • Make sure the correct conda environment is activated:
     conda activate nerficg

Now, you can directly add this project as an additional method:

  • Clone this repository into the src/Methods/ directory:
     git clone [email protected]:MoritzKappel/D-NPC.git src/Methods/DNPC
  • Install all dependencies and CUDA extensions for the new method:
     ./scripts/install.py -m DNPC
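The steps above can be collected into one sequence. The framework path below is a placeholder from the instructions, and the commands are echoed as a dry run so the order is clear; drop the leading echo to actually execute them:

```shell
# Full setup sequence (dry run). FRAMEWORK_DIR is a placeholder; point it at
# your local nerficg checkout before running the commands for real.
FRAMEWORK_DIR="<Path/to/framework/>nerficg"
echo "cd $FRAMEWORK_DIR"
echo "conda activate nerficg"
echo "git clone [email protected]:MoritzKappel/D-NPC.git src/Methods/DNPC"
echo "./scripts/install.py -m DNPC"
```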

Training and Inference

After setup, the DNPC method is fully compatible with all NeRFICG framework scripts in the scripts/ directory. This includes config file generation (defaultConfig.py), training (train.py), inference and performance benchmarking (inference.py), metric calculation (generateTables.py), and live rendering via the GUI (gui.py). These scripts were also used for all experiments in the main paper.
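As a rough sketch, a full experiment touches the scripts in the order listed above. Only the script names come from this README; the flags and the config path are illustrative placeholders (see the main nerficg repository for the actual CLI), and the commands are echoed as a dry run:

```shell
# Hypothetical end-to-end workflow from the framework root (dry run).
CONFIG="configs/DNPC_example.yaml"             # placeholder config path
echo "./scripts/defaultConfig.py -m DNPC"      # generate a default config file
echo "./scripts/train.py -c $CONFIG"           # train a model
echo "./scripts/inference.py -c $CONFIG"       # inference and benchmarking
echo "./scripts/generateTables.py -c $CONFIG"  # metric calculation
echo "./scripts/gui.py -c $CONFIG"             # live rendering via the GUI
```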

For detailed guidance and instructions, please refer to the main nerficg repository.

Data Preprocessing

This method is compatible with most dataset loaders provided by the main framework, e.g., the iPhone, Colmap, and VGGSfM datasets.

Since this method requires additional priors (monocular depth and foreground segmentation), be sure to pass the -a flag when calibrating your input videos so the necessary annotations are generated. For pre-calibrated datasets, such as the iPhone sequences, you can instead run the monocularDepth.py and cutie.py scripts manually to generate the additional data.
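For a pre-calibrated dataset, the manual prior generation might look like the following. Only the two script names come from this README; the dataset path argument is a hypothetical placeholder, and the commands are echoed as a dry run:

```shell
# Hypothetical prior generation for a pre-calibrated sequence (dry run).
DATA_DIR="data/iphone/example_scene"           # placeholder dataset path
echo "./scripts/monocularDepth.py $DATA_DIR"   # monocular depth prior
echo "./scripts/cutie.py $DATA_DIR"            # foreground segmentation prior
```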

Download Evaluation Result Images

You can download the evaluation results and additional visualizations here.

License and Citation

This project is licensed under the MIT license (see LICENSE).

If you use this code in your research, please consider citing:

@inproceedings{kappel2024d-npc,
  title = {D-{NPC}: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video},
  author = {Kappel, Moritz and Hahlbohm, Florian and Scholz, Timon and Castillo, Susana and Theobalt, Christian and Eisemann, Martin and Golyanik, Vladislav and Magnor, Marcus},
  booktitle = {Proc. Eurographics},
  editor = {A. Bousseau and A. Dai},
  volume = {44},
  number = {2},
  note = {To appear},
  year = {2025}
}
