This repository is the official implementation of the ECCV 2024 paper "Asymmetric Mask Scheme for Self-Supervised Real Image Denoising".

Asymmetric Mask Scheme for Self-Supervised Real Image Denoising

Xiangyu Liao*, Tianheng Zheng, Jiayu Zhong, Pingping Zhang, Chao Ren
The project depends on the following packages:
- PyTorch
- TorchVision
- OpenCV
- Loguru
- TIMM (PyTorch Image Models)
- PyTorch Lightning
- Weights & Biases (wandb)
- Rich
Before you start, make sure you have the following installed:
- Python (3.10 or 3.11)
- pip (Python package installer)
- Create a virtual environment (optional but recommended; an isolated environment helps you avoid hard-to-diagnose dependency conflicts):

  ```
  conda create -n myenv python=3.10
  conda activate myenv
  ```

- Upgrade pip:

  ```
  pip install --upgrade pip
  ```

- Install the dependencies:

  ```
  pip install torch torchvision opencv-python loguru timm pytorch-lightning wandb rich
  ```
- PyTorch installation: depending on your system's configuration, you may want to install PyTorch built for a specific CUDA version. Check the PyTorch official website for details.

- OpenCV installation: if you need specific OpenCV modules or run into issues, refer to the OpenCV installation guide.

- Weights & Biases configuration: to use Weights & Biases (wandb), you may need to log in to your account. Run

  ```
  wandb login
  ```

  and follow the instructions.
By following the steps above, you should have your environment set up and ready for development. If you encounter any issues, refer to each package's documentation or seek help from the community.
- Download pretrained weights: navigate to this link to access the pretrained weights.

- Save to the `weights` directory: once downloaded, place the pretrained weights in the `weights` directory of your project.
- Download the SIDD Medium dataset:
  - Visit the SIDD Medium Dataset website.
  - Follow the instructions on the website to download the dataset.

- Extract the dataset: once the download is complete, repair the split archive and extract its contents to a directory of your choice. For example:

  ```
  zip -FF SIDD_Medium_Srgb_Parts.zip --out combined.zip
  unzip combined.zip
  ```
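After extraction, the SIDD Medium sRGB data is organized as per-scene folders holding noisy/ground-truth pairs named like `0001_NOISY_SRGB_010.PNG` / `0001_GT_SRGB_010.PNG`. A quick sanity check that every noisy image has a ground-truth counterpart (a stdlib-only sketch, assuming that official naming convention):

```python
from pathlib import Path

def check_sidd_pairs(root):
    """Return (num_noisy, missing_gt_paths) for a SIDD Medium sRGB tree.

    Assumes the official naming, where each noisy file contains "NOISY"
    and its ground truth has the same name with "NOISY" replaced by "GT".
    """
    noisy = sorted(Path(root).rglob("*NOISY*"))
    missing = [p for p in noisy
               if not p.with_name(p.name.replace("NOISY", "GT")).exists()]
    return len(noisy), missing

if __name__ == "__main__":
    root = Path("SIDD_Medium_Srgb/Data")  # default path inside the archive
    if root.is_dir():
        total, missing = check_sidd_pairs(root)
        print(f"{total} noisy images, {len(missing)} without a GT counterpart")
```

A non-empty `missing` list usually indicates an incomplete or corrupted extraction of the split archive.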
To evaluate your models, you need to download the SIDD Validation dataset.
- Download the validation files:
  - Visit the SIDD Dataset page.
  - Download the following files:
    - `ValidationGtBlocksSrgb.mat`
    - `ValidationNoisyBlocksSrgb.mat`

- Organize the validation files: place the downloaded files in a folder with the following structure:

  ```
  datasets/
  └── SIDD_Validation/
      ├── ValidationGtBlocksSrgb.mat
      └── ValidationNoisyBlocksSrgb.mat
  ```
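SIDD validation results are typically reported as PSNR over the 256×256 blocks stored in these files (each `.mat` holds a `(40, 32, 256, 256, 3)` uint8 array, readable with `scipy.io.loadmat`). A minimal NumPy PSNR helper for such uint8 blocks (a sketch for reference, not the repo's own metric code):

```python
import numpy as np

def psnr(clean, noisy, max_val=255.0):
    """Peak signal-to-noise ratio between two uint8 image arrays, in dB."""
    clean = clean.astype(np.float64)
    noisy = noisy.astype(np.float64)
    mse = np.mean((clean - noisy) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

The repository's own evaluation path goes through `codes/data.py` and the `validate` subcommand below; this helper is only for ad-hoc checks.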
If you do not want to use the SIDD Medium dataset, you can prepare your own noisy images.
- Organize your noisy images: place your noisy images in a folder with the following structure:

  ```
  datasets/
  └── custom_noisy_images/
      ├── image1.png
      ├── image2.png
      └── ...
  ```

- Verify the images: ensure that your images are in a readable format (e.g., PNG, JPEG) and accessible from your script or notebook.
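A quick stdlib-only way to list which files in the folder carry a supported image extension (a sketch; the suffix set here is an assumption — extend it to match your data):

```python
from pathlib import Path

# Accepted suffixes are an assumption for illustration; adjust as needed.
VALID_SUFFIXES = {".png", ".jpg", ".jpeg", ".bmp"}

def list_noisy_images(folder):
    """Return sorted paths of files in `folder` with a supported image suffix."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.is_file() and p.suffix.lower() in VALID_SUFFIXES)
```

The definitive readability check is actually decoding each file, e.g. with `cv2.imread` (OpenCV is already in the dependency list), which returns `None` for unreadable images.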
To speed up data loading, you can convert your dataset into LMDB format using the `divide_sub_image.py` script:

```
python divide_sub_image.py --hw 512 512 --lmdb --suffix .PNG .bmp --path <Noisy PATH> --re "NOISY" --name <your_dataset_name> --output <your_output_path> --size 40 --step 256 256
```

The LMDB folder structure should be as follows:
```
datasets/
└── lmdb_folder/
├── data.mdb
├── lock.mdb
├── medium_keys.txt
├── meta_info.pkl
└── meta_info.txt
```
To train the denoising model, you need to set the dataset paths in the configuration file. Open the `configs/restormer.yaml` file and update the paths as follows:

```yaml
data:
  train_dataset:
    class_path: codes.data.UnpairDataset
    init_args:
      path: your_path          # path to your training data
      datatype: your_datatype  # "lmdb" or "images", whichever suits you
      max_len: your_max_len    # any number you want
      crop_size: 320
      augment: True
```

More details are in `codes/data.py`.
Then run the following command (taking Restormer as an example):

```
python main.py fit --config configs/restormer.yaml
```
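The `fit`/`test`/`validate`/`predict` subcommands and dotted command-line overrides suggest `main.py` is built on Lightning's `LightningCLI`. If so, common trainer options can be adjusted in the same YAML, for example (a sketch using stock Lightning 2.x trainer keys; these values are assumptions, not taken from this repo's config):

```yaml
trainer:
  max_epochs: 100       # total training epochs
  devices: 1            # number of GPUs
  precision: 16-mixed   # mixed-precision training
```

The same options can usually be overridden on the command line, e.g. `python main.py fit --config configs/restormer.yaml --trainer.max_epochs 100`.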
To validate the denoising model, you also need to set the paths in the configuration file. Open the `configs/restormer.yaml` file and update the paths as follows:

```yaml
data:
  val_dataset:
    class_path: codes.data.SIDD_validation  # or codes.data.SIDD_benchmark
    init_args:
      sidd_val_dir: your_path
      len: 1280
  test_dataset:
    class_path: codes.data.SIDD_validation  # or codes.data.SIDD_benchmark
    init_args:
      sidd_val_dir: your_path
      len: 128
```
or, for your own paired data:

```yaml
data:
  val_dataset:
    class_path: codes.data.PairDataset
    init_args:
      # please read codes/data.py and supply the correct arguments
  test_dataset:
    class_path: codes.data.PairDataset
    init_args:
      # please read codes/data.py and supply the correct arguments
```
More details are in `codes/data.py`. Then run one of the following commands (taking Restormer as an example):

```
python main.py test --config configs/restormer.yaml
python main.py validate --config configs/restormer.yaml
```

Note that `test` and `validate` do not require the training dataset; simply comment out the corresponding section in the configuration file.
Just run the command:

```
python main.py predict --config ./configs/restormer.yaml --data.predict_dataset.init_args.path=./images/test/test_sample.png
```

or, for a directory of images:

```
python main.py predict --config ./configs/restormer.yaml --data.predict_dataset.init_args.path=./images/test/
```
If you find this code useful for your research, please consider citing:
```
@inproceedings{liao2024asymmetric,
  title={Asymmetric Mask Scheme for Self-supervised Real Image Denoising},
  author={Liao, Xiangyu and Zheng, Tianheng and Zhong, Jiayu and Zhang, Pingping and Ren, Chao},
  booktitle={European Conference on Computer Vision},
  pages={199--215},
  year={2024},
  organization={Springer}
}
```
This project builds on source code shared by AP-BSN, SpatiallyAdaptiveSSID, CVF-SID, DeamNet, Restormer, NAFNet, SCPGabNet, timm, and PyTorch.