UHRNet CNN implementation (PyTorch)
In this paper, we propose a deep learning-based method for accurate 3D reconstruction from a single fringe pattern. We use UNet's encoder-decoder structure as the backbone and design a Multi-Level Conv block and a Fusion block to enhance the network's feature extraction and detail reconstruction (see the sketch below). Wang et al.'s dataset was used as our training, validation, and test sets; a link to the dataset is given at the end. The test set contains 153 patterns, on which our method achieves an average RMSE of only 0.443 (mm) and an average SSIM of 0.9978.
For more details, please refer to our paper: https://arxiv.org/abs/2304.14503
Framework of UHRNet
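The full architecture is defined in the paper; the following is only a minimal PyTorch sketch of what a multi-level convolution block of this kind could look like: parallel branches with different receptive fields whose outputs are fused by a 1x1 convolution. The class name, channel counts, and kernel sizes are illustrative assumptions, not the exact design from the paper.

```python
import torch
import torch.nn as nn

class MultiLevelConvBlock(nn.Module):
    """Illustrative multi-level conv block: parallel 3x3 and 5x5 branches
    fused by a 1x1 convolution. The exact structure may differ from the
    paper (see arXiv:2304.14503)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        # Extract features at two receptive-field scales, then fuse.
        return self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))

# Example: a single-channel fringe pattern mapped to 64 feature channels.
y = MultiLevelConvBlock(1, 64)(torch.randn(1, 1, 256, 256))
```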
- Prediction evaluation of three networks on the test set (metric computation is sketched below the table)
| Model | RMSE (mm) | SSIM | Params (M) | Speed (s) |
| --- | --- | --- | --- | --- |
| Our method (UHRNet) | 0.433 | 0.9978 | 30.33 | 0.0224 |
| hNet | 1.330 | 0.9767 | 8.63 | 0.0093 |
| ResUNet | 0.685 | 0.9931 | 32.44 | 0.0105 |
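For reference, this is how RMSE (in mm) and SSIM of the kind reported above are commonly computed from predicted and ground-truth height maps. The helper names are ours, and scikit-image (not in the environment list below) is assumed for SSIM.

```python
import numpy as np
from skimage.metrics import structural_similarity  # requires scikit-image

def rmse_mm(pred, gt):
    """Root-mean-square error between predicted and ground-truth height
    maps, in the same units as the inputs (mm here)."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def ssim(pred, gt):
    """Structural similarity between two height maps."""
    data_range = float(gt.max() - gt.min())
    return float(structural_similarity(pred, gt, data_range=data_range))
```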
- 3D height maps reconstructed by our method:
  - a single object in the field of view
  - two isolated objects in the field of view
  - two overlapping objects in the field of view
  - three overlapping objects in the field of view
- Python 3.9.7
- PyTorch 1.5.0
- CUDA 11.3
- NumPy 1.23.3
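A quick way to confirm your environment matches the versions listed above:

```python
import torch
import numpy

print("PyTorch:", torch.__version__)        # expect 1.5.0
print("CUDA (build):", torch.version.cuda)  # expect 11.3
print("CUDA available:", torch.cuda.is_available())
print("NumPy:", numpy.__version__)          # expect 1.23.3
```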
- Pretrained model (UHRNet): Link: https://pan.baidu.com/s/1QS5ftR2Ww2n6enVeVlf-yQ Password: 1234. Download the weights from the link above into the UHRNet folder to run the pre-trained model (a loading sketch follows).
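A minimal loading-and-inference sketch, assuming the model class is importable as `UHRNet` and the downloaded checkpoint sits in the UHRNet folder; the module path and checkpoint filename are hypothetical, so adjust them to the actual files in this repository.

```python
import torch
# Import path and checkpoint filename are assumptions; adjust to the
# actual module and weight file names in this repository.
from UHRNet import UHRNet

model = UHRNet()
state = torch.load("UHRNet/uhrnet_pretrained.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

# Predict a 3D height map from a single fringe pattern of shape (N, 1, H, W).
fringe = torch.randn(1, 1, 256, 256)  # placeholder input
with torch.no_grad():
    height_map = model(fringe)
```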
- Dataset: Single-input dual-output 3D shape reconstruction (figshare.com) [1]. This dataset contains 1,532 fringe patterns and the corresponding 3D height maps, divided into training, validation, and test sets in a ratio of 80% / 10% / 10% (a split sketch follows).
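A sketch of the 80%/10%/10% split described above, using `torch.utils.data.random_split` on placeholder tensors; the dataset's on-disk format is not specified here, so the loading step is a stand-in.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder tensors standing in for the 1532 fringe patterns and
# height maps; replace with the actual loaded dataset.
patterns = torch.randn(1532, 1, 256, 256)
heights = torch.randn(1532, 1, 256, 256)
dataset = TensorDataset(patterns, heights)

# 80% / 10% / 10% split of 1532 samples (test set of 153, as stated above).
train_set, val_set, test_set = random_split(dataset, [1226, 153, 153])
```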
[1] A. Nguyen, O. Rees and Z. Wang, "Learning-based 3D imaging from single structured-light image," Graphical Models, vol. 126, 2023.