Hi, thanks for the excellent work on unsupervised-learning-based MVS. I have one simple question, since the code has not been published yet:

What is the GPU memory cost of each stage if the two stages are trained separately on the DTU training set?

Thanks.

Sorry for the late response. As discussed in the paper, the two stages (the pre-training stage and the post-training stage) are conducted sequentially and share the same CascadeMVSNet backbone. The GPU memory cost is the same as with the default CascadeMVSNet configuration, no more than 10 GB if I remember correctly.
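If you want to verify the memory footprint on your own hardware, a minimal sketch for measuring the peak GPU memory of a single training step in PyTorch is below. It assumes a CascadeMVSNet-style training loop; `model`, `loss_fn`, and the batch keys are placeholders, not the repository's actual API.

```python
import torch

def peak_memory_of_step(model, loss_fn, batch, device="cuda"):
    """Run one forward/backward pass and report peak GPU memory in GiB.

    `model`, `loss_fn`, and the batch keys below are illustrative
    placeholders, not the actual interface of this repository.
    """
    torch.cuda.reset_peak_memory_stats(device)
    model.to(device).train()

    # Typical MVS inputs: source/reference images, projection matrices,
    # and the depth hypotheses (names here are assumptions).
    imgs = batch["imgs"].to(device)
    proj_matrices = batch["proj_matrices"].to(device)
    depth_values = batch["depth_values"].to(device)

    outputs = model(imgs, proj_matrices, depth_values)
    loss = loss_fn(outputs, batch)
    loss.backward()

    # Peak memory allocated by tensors during this step.
    return torch.cuda.max_memory_allocated(device) / 1024 ** 3
```

Running this once per stage with the default CascadeMVSNet configuration should let you confirm whether each stage stays within the roughly 10 GB mentioned above.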