The paper is available on arXiv.
Clone this repository
git clone https://github.com/Cynicarlos/RetinexRawMamba.git
cd RetinexRawMamba
My CUDA version: 11.7
conda create -n RetinexRawMamba python=3.9
conda activate RetinexRawMamba
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
Download the following two wheel files from the links below, upload them to your server, and then install them:
Note: make sure to pick the versions that match your Python, PyTorch, and CUDA setup.
- causal_conv1d
causal_conv1d-1.0.0+cu118torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
- mamba_ssm
mamba_ssm-1.0.1+cu118torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
You can also download them here.
pip install causal_conv1d-1.0.0+cu118torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
pip install mamba_ssm-1.0.1+cu118torch2.0cxx11abiFALSE-cp39-cp39-linux_x86_64.whl
pip install -r requirements.txt
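After installation, a quick sanity check like the one below (a minimal sketch; the tiny Mamba forward pass is only an illustration, not part of this repository) can confirm that PyTorch sees your GPU and that causal_conv1d and mamba_ssm were built correctly:

```python
# Environment sanity check (minimal sketch, not part of this repository).
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# Both packages are CUDA extensions; importing them catches most ABI/version mismatches.
import causal_conv1d  # noqa: F401
from mamba_ssm import Mamba

# Tiny forward pass through a standalone Mamba block on the GPU.
x = torch.randn(1, 64, 16, device="cuda")  # (batch, sequence length, d_model)
block = Mamba(d_model=16, d_state=16, d_conv=4, expand=2).to("cuda")
print("Mamba forward OK, output shape:", tuple(block(x).shape))
```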
Dataset | Download link | Source | CFA |
---|---|---|---|
Sony | Google Drive | Link | Bayer |
Fuji | Google Drive | Link | X-Trans |
MCR | Google Drive | Link | Bayer |
Note that for the SID Sony dataset, to stay consistent with DNF, please use the Sony_test_list.txt we provide in the datasets folder for evaluation; it contains 562 test images in total.
The datasets directory should be organized as follows:
📁datasets/
├─── 📁MCR/
│ ├─── 📄MCR_test_list.txt
│ ├─── 📄MCR_train_list.txt
│ └─── 📁Mono_Colored_RAW_Paired_DATASET/
│ ├─── 📁Color_RAW_Input/
│ │ ├─── 📄C00001_48mp_0x8_0x00ff.tif
│ │ └─── 📄...
│ └─── 📁RGB_GT/
│ ├─── 📄C00001_48mp_0x8_0x2fff.jpg
│ └─── 📄...
└─── 📁SID/
├─── 📁Fuji/
│ ├─── 📄Fuji_test_list.txt
│ ├─── 📄Fuji_train_list.txt
│ ├─── 📄Fuji_val_list.txt
│ └─── 📁Fuji/
│ ├─── 📁Long/
│ │ ├─── 📄00001_00_10s.RAF
│ │ └─── 📄...
│ └─── 📁Short/
│ ├─── 📄00001_00_0.1s.RAF
│ └─── 📄...
└─── 📁Sony/
├─── 📄Sony_test_list.txt
├─── 📄Sony_train_list.txt
├─── 📄Sony_val_list.txt
└─── 📁Sony/
├─── 📁Long/
│ ├─── 📄00001_00_10s.ARW
│ └─── 📄...
└─── 📁Short/
├─── 📄00001_00_0.1s.ARW
└─── 📄...
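Once everything is in place, a small script like the following (a minimal sketch assuming you run it from the repository root, so that datasets/ is a relative path) can verify the layout above and confirm that the provided Sony_test_list.txt indeed lists 562 test images:

```python
# Verify the expected dataset layout (minimal sketch; run from the repository root).
from pathlib import Path

root = Path("datasets")
expected = [
    root / "MCR" / "MCR_train_list.txt",
    root / "MCR" / "MCR_test_list.txt",
    root / "MCR" / "Mono_Colored_RAW_Paired_DATASET" / "Color_RAW_Input",
    root / "MCR" / "Mono_Colored_RAW_Paired_DATASET" / "RGB_GT",
    root / "SID" / "Sony" / "Sony_train_list.txt",
    root / "SID" / "Sony" / "Sony_test_list.txt",
    root / "SID" / "Sony" / "Sony" / "Long",
    root / "SID" / "Sony" / "Sony" / "Short",
]
for path in expected:
    print("OK     " if path.exists() else "MISSING", path)

# The provided Sony test list should contain 562 entries.
sony_test = root / "SID" / "Sony" / "Sony_test_list.txt"
if sony_test.exists():
    n_lines = sum(1 for line in sony_test.read_text().splitlines() if line.strip())
    print(f"Sony_test_list.txt entries: {n_lines} (expected 562)")
```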
Before training and testing, please make sure the corresponding config file is correct; in particular, change the dataset directory to your own dataset path.
python train.py -cfg configs/sony.yaml
If you want to train on another dataset, make sure you have the corresponding config file in the configs folder and change the -cfg argument to your own config path.
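To double-check the paths before launching a run, you can print the parsed config (a minimal sketch assuming the files in configs/ are plain YAML and PyYAML is installed; the exact keys depend on the config you use):

```python
# Print a parsed config so the dataset path can be checked by eye (minimal sketch).
import sys
import yaml

cfg_path = sys.argv[1] if len(sys.argv) > 1 else "configs/sony.yaml"
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

print(yaml.dump(cfg, sort_keys=False))
```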
Before evaluating with our pretrained models, please download them from the links below and put them in the pretrained folder.
Dataset | Pretrained Model |
---|---|
Sony | Google Drive or Pan Baidu |
Fuji | Google Drive or Pan Baidu |
MCR | Google Drive or Pan Baidu |
For the MCR dataset:
python test_mcr.py
For the SID dataset:
If your GPU has less than 40 GB of memory (e.g., 24 GB), please use the following command so that you can test without running out of memory (OOM):
python test_sony.py --merge_test
Otherwise, you can omit the flag and test on whole images directly. Note that the metrics may be slightly lower with merged testing than with whole-image testing:
python test_sony.py
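If you are unsure which variant to run, a quick query of the GPU's total memory (a minimal sketch; the 40 GB threshold is taken from the note above) can help you decide whether to pass --merge_test:

```python
# Suggest whether to pass --merge_test based on total GPU memory (minimal sketch).
import torch

total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
if total_gb < 40:
    print(f"GPU has {total_gb:.1f} GB -> run: python test_sony.py --merge_test")
else:
    print(f"GPU has {total_gb:.1f} GB -> run: python test_sony.py")
```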
If this work helps your research, please star this repository, and if you build on this work, please cite it as follows:
@misc{chen2024retinexrawmambabridgingdemosaicingdenoising,
title={Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement},
author={Xianmin Chen and Peiliang Huang and Xiaoxu Feng and Dingwen Zhang and Longfei Han and Junwei Han},
year={2024},
eprint={2409.07040},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2409.07040},
}
This repository is refactored from DNF; thanks to the authors.