Project page | Paper | Video
Jing Tan*, Shuai Yang*, Tong Wu✉️, Jingwen He, Yuwei Guo, Ziwei Liu, Dahua Lin✉️
* Equal Contribution,✉️ Corresponding author
[2025-01-30] 🔥 Release inference code and checkpoints
More results can be found on our Project Gallery.
We highly recommend accessing the website on a mobile phone (preferably with the Chrome browser) so that device motion tracking can enhance the immersive quality of the VR interactive experience.
🔥 Loading may be a little slow, but the wait is worth it!
git clone https://github.com/3DTopia/Imagine360.git
cd Imagine360
conda create -n imagine360 python==3.10
conda activate imagine360
pip install -r requirements.txt
- Use GeoCalib as the elevation estimation model (default):
python -m pip install -e "git+https://github.com/cvg/GeoCalib#egg=geocalib"
- Use PerspectiveFields as the elevation estimation model:
pip install git+https://github.com/jinlinyi/PerspectiveFields.git
Download our checkpoints from Google Drive, along with [sam_vit_b_01ec64], [stable-diffusion-2-1], and [Qwen-VL-Chat].
Update the paths to these pre-trained models in configs/prompt-dual.yaml.
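For reference, here is a minimal sketch of how the pre-trained model paths might be filled in. The key names below are assumptions for illustration, not the actual fields of configs/prompt-dual.yaml; keep the keys that the released config already defines and only update the paths.

```yaml
# Hypothetical sketch: key names are assumptions, only the paths need to point
# to the downloaded checkpoints.
pretrained_sd_path: "./ckpts/stable-diffusion-2-1"   # Stable Diffusion 2.1 base weights
sam_checkpoint: "./ckpts/sam_vit_b_01ec64.pth"       # SAM ViT-B checkpoint
qwen_vl_path: "./ckpts/Qwen-VL-Chat"                 # Qwen-VL-Chat weights
imagine360_checkpoint: "./ckpts/imagine360"          # our released checkpoints
```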
python inference_dual_p2e.py --config configs/prompt-dual.yaml
If the result does not align with expectations, try modifying the text prompt or setting a different seed (-1 for a random seed) in configs/prompt-dual.yaml.
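As a hedged example, the prompt and seed might be adjusted as below; the field names are assumptions, so edit whichever keys configs/prompt-dual.yaml actually exposes.

```yaml
# Hypothetical sketch: field names are assumptions for illustration.
prompt: "a cat strolling through a snowy mountain village"  # text prompt guiding the panorama
seed: -1   # -1 draws a random seed; set a fixed integer to reproduce a result
```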
For better visualization in VR mode, we recommend using VEnhancer for video super-resolution. Follow the instructions to update the VEnhancer code for 360° closed-loop continuity.
Jing Tan: [email protected]
Shuai Yang: [email protected]
Tong Wu: [email protected]
- Release Inference Code
- Gradio Demo
- Release Training Code
Special thanks to PanFusion, FollowYourCanvas, 360DVD and AnimateDiff for their codebases and pre-trained weights.
If you find our work helpful for your research, please consider giving a star ⭐ and a citation 📝.
@article{tan2024imagine360,
title={Imagine360: Immersive 360 Video Generation from Perspective Anchor},
author={Tan, Jing and Yang, Shuai and Wu, Tong and He, Jingwen and Guo, Yuwei and Liu, Ziwei and Lin, Dahua},
journal={arXiv preprint arXiv:2412.03552},
year={2024}
}