Hello, thank you very much for open-sourcing the code!
While trying to understand the code alongside your paper, I ran into a question.
In my understanding, the input to the network should be the PSVs of 5 viewpoints. However, when I look at the code, I don't see a PSV transformation applied to the images before they are fed into the network to infer the MPI; instead, PSVs appear in the network's output. Does the network input require a PSV transformation, and where is this implemented in the code?
Thank you very much for your attention.
Best,
Yilei Chen
The code to create PSVs from the input images and poses is baked into the Tensorflow metagraph file stored in checkpoints/ that is loaded by llff/inference/mpi_tester.py.
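If you want to confirm this yourself, you can import the metagraph and list its operations. This is only an illustrative sketch using the standard TF1 API; the exact metagraph filename under checkpoints/ and the op names to search for are assumptions, not taken from the repo.

```python
import tensorflow as tf

# Hypothetical filename; the repo stores the frozen metagraph under checkpoints/.
saver = tf.train.import_meta_graph('checkpoints/model.meta')
graph = tf.get_default_graph()

# Look for ops whose names suggest plane-sweep-volume construction.
for op in graph.get_operations():
    if 'psv' in op.name.lower() or 'sweep' in op.name.lower():
        print(op.name, op.type)
```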
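For intuition about what that baked-in graph is doing: a PSV is typically built by warping each neighboring view onto a set of fronto-parallel depth planes of the reference camera with plane-induced homographies. The sketch below is not the code inside the metagraph, just a minimal NumPy/OpenCV illustration; the function name and argument conventions are hypothetical.

```python
import numpy as np
import cv2

def plane_sweep_volume(src_img, K_ref, K_src, R, t, depths):
    """Warp `src_img` onto fronto-parallel depth planes of the reference camera.

    Convention assumed here: R, t map reference-camera coordinates to
    source-camera coordinates, i.e. x_src = R @ x_ref + t.
    Returns an array of shape (len(depths), H, W, C).
    """
    h, w = src_img.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])       # normal of the planes z = d in the reference frame
    K_ref_inv = np.linalg.inv(K_ref)
    planes = []
    for d in depths:
        # Plane-induced homography taking reference-image pixels to source-image pixels.
        H = K_src @ (R + (t.reshape(3, 1) @ n) / d) @ K_ref_inv
        # WARP_INVERSE_MAP: H maps destination (reference) pixels to source pixels.
        warped = cv2.warpPerspective(
            src_img, H, (w, h),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        planes.append(warped)
    return np.stack(planes, axis=0)
```

Keeping this warping inside the exported graph means the Python entry point only has to feed images and poses, which is presumably why no explicit PSV step appears in llff/inference/mpi_tester.py.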
Thank you very much for your explanations. As my questions about LLFF have been answered, I'll close this issue.