CATANet - CVPR2025

This repository is an official implementation of the paper "CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution", CVPR, 2025.

📰News

  • ✅ 2025-03-15: Released the supplementary material of our CATANet.😃
  • ✅ 2025-03-13: Released the pretrained models and visual results of our CATANet.🤗
  • ✅ 2025-03-12: arXiv paper available.
  • ✅ 2025-03-09: Released the code of our CATANet.
  • ✅ 2025-02-28: Our CATANet was accepted by CVPR 2025!🎉🎉🎉

Abstract: Transformer-based methods have demonstrated impressive performance in low-level vision tasks such as Image Super-Resolution (SR). However, their computational complexity grows quadratically with spatial resolution. A series of works attempt to alleviate this problem by dividing low-resolution images into local windows, axial stripes, or dilated windows. SR typically leverages the redundancy of images for reconstruction, and this redundancy appears not only in local regions but also in long-range regions. However, these methods restrict attention computation to content-agnostic local regions, directly limiting the ability of attention to capture long-range dependencies. To address these issues, we propose a lightweight Content-Aware Token Aggregation Network (CATANet). Specifically, we propose an efficient Content-Aware Token Aggregation module that aggregates long-range content-similar tokens; it shares token centers across all image tokens and updates them only during the training phase. We then use intra-group self-attention to enable long-range information interaction, and design an inter-group cross-attention to further enhance global information interaction. Experimental results show that, compared with the state-of-the-art cluster-based method SPIN, our method achieves superior performance, with a maximum PSNR improvement of $\textbf{\textit{0.33dB}}$ and nearly $\textbf{\textit{double}}$ the inference speed.
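The aggregation idea from the abstract can be sketched roughly as follows. This is an illustrative simplification only (single attention head, no Q/K/V projections, hypothetical function names), not the paper's actual implementation: each token is assigned to its most similar shared token center, and self-attention is then computed within each resulting group.

```python
import torch
import torch.nn.functional as F

def aggregate_tokens(tokens, centers):
    """Assign each flattened image token to its most similar token center.

    tokens:  (N, C) image tokens
    centers: (K, C) shared token centers (per the paper, updated only
             during training)
    Returns an (N,) index tensor mapping each token to a group.
    """
    sim = F.normalize(tokens, dim=-1) @ F.normalize(centers, dim=-1).T  # (N, K)
    return sim.argmax(dim=-1)

def intra_group_attention(tokens, group_ids, num_groups):
    """Self-attention restricted to tokens in the same group
    (simplified: one head, no learned projections)."""
    out = torch.zeros_like(tokens)
    for g in range(num_groups):
        idx = (group_ids == g).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        x = tokens[idx]                               # (n_g, C)
        attn = (x @ x.T) / x.shape[-1] ** 0.5         # scaled dot product
        out[idx] = attn.softmax(dim=-1) @ x
    return out

# Toy usage on random tokens
torch.manual_seed(0)
toks = torch.randn(64, 32)   # 64 tokens, 32 channels
cents = torch.randn(8, 32)   # 8 shared centers
gids = aggregate_tokens(toks, cents)
y = intra_group_attention(toks, gids, num_groups=8)
print(y.shape)
```

Because tokens are grouped by content similarity rather than spatial position, attention within a group can connect distant but similar image regions, which is the key difference from fixed local-window attention.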

⭐If this work is helpful for you, please help star this repo. Thanks!🤗

📑Contents

  1. Environment
  2. Training
  3. Testing
  4. Citation
  5. Contact
  6. Acknowledgements

🔨Environment

  • Python 3.9
  • PyTorch >=2.2

Installation

pip install -r requirements.txt
python setup.py develop

🚀Training

Data Preparation

  • Download the training dataset DIV2K and put them in the folder ./datasets.
  • Download the testing data (Set5 + Set14 + BSD100 + Urban100 + Manga109 [Download]) and put them in the folder ./datasets.
  • It is recommended to follow the data preparation guide from BasicSR for faster data loading.

Training Commands

  • Refer to the training configuration files in the ./options/train folder for detailed settings.
# batch size = 4 (GPUs) × 16 (per GPU)
# training dataset: DIV2K

# ×2 scratch, input size = 64×64, 800k iterations
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --standalone --nnodes=1 --nproc_per_node=4 basicsr/train.py -opt options/train/train_CATANet_x2_scratch.yml

# ×3 finetune, input size = 64×64, 250k iterations
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --standalone --nnodes=1 --nproc_per_node=4 basicsr/train.py -opt options/train/train_CATANet_x3_finetune.yml

# ×4 finetune, input size = 64×64, 250k iterations
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --standalone --nnodes=1 --nproc_per_node=4 basicsr/train.py -opt options/train/train_CATANet_x4_finetune.yml

🔧Testing

Data Preparation

  • Download the testing data (Set5 + Set14 + BSD100 + Urban100 + Manga109 [Download]) and put them in the folder ./datasets.

Pretrained Models

Testing Commands

  • Refer to the testing configuration files in the ./options/test folder for detailed settings.
python basicsr/test.py -opt options/test/test_CATANet_x2.yml
python basicsr/test.py -opt options/test/test_CATANet_x3.yml
python basicsr/test.py -opt options/test/test_CATANet_x4.yml
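The benchmark results above are reported in PSNR. For reference, a minimal sketch of the PSNR formula is shown below; note that this is the generic definition, not this repository's exact evaluation code (which, following common SR practice, typically crops borders and measures on the Y channel).

```python
import numpy as np

def psnr(pred, target, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy usage: two uint8 images differing by 16 at every pixel
a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 16, dtype=np.uint8)
print(round(psnr(a, b), 2))
```

A 0.33dB gain, as cited against SPIN in the abstract, is a meaningful margin at this metric's scale for lightweight SR models.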

😘Citation

Please cite us if our work is useful for your research.

@article{liu2025CATANet,
  title={CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution},
  author={Xin Liu and Jie Liu and Jie Tang and Gangshan Wu},
  journal={arXiv preprint arXiv:2503.06896},
  year={2025}
}

📫Contact

If you have any questions, feel free to approach me at [email protected]

🥰Acknowledgements

This code is built on BasicSR.
