On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning

Yongyi Su, Yushu Li, Nanqing Liu, Kui Jia, Xulei Yang, Chuan-Sheng Foo, Xun Xu

🎈 News

  • [2025.1.23] Our work has been accepted to ICLR 2025 (OpenReview) 🎉
  • [2025.2.24] The full implementation code has been released 🔓

🚀 Introduction


Illustration of the proposed Realistic Test-Time Data Poisoning (RTTDP) pipeline. To investigate a more realistic adversarial risk of Test-Time Adaptation (TTA), we place the following three constraints on the attacker:

  1. Gray-box Attack: The attacker has to perform a gray-box attack, i.e., the attacker can only access the pre-trained model's parameters $\theta_0$ but not the online TTA model's parameters $\theta_t$.
  2. No Access to Other Users' Benign Samples: The attacker must not access benign samples from other users when generating poisoned samples.
  3. Online Attack Order: The poisoned samples are mixed into the TTA test data stream together with other users' normal samples, rather than being injected into the stream before those samples arrive.

The adversarial risk of TTA is estimated as the error rate on other users' benign samples; a minimal illustrative sketch of this protocol is given below.
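The following PyTorch sketch illustrates the threat model above. It is not the released attack code: make_poisoned_batch and tta_adapt are hypothetical placeholders for the attacker's poison generator and the TTA update step, respectively.

# Minimal sketch of the RTTDP protocol (illustrative only; generator/adaptation functions are placeholders)
import torch

def simulate_rttdp(theta_0, tta_model, benign_loader, make_poisoned_batch, tta_adapt, poison_ratio=0.1):
    errors, total = 0, 0
    for x_benign, y_benign in benign_loader:
        # (1) Gray-box: poisoned samples are crafted against the frozen pre-trained
        #     parameters theta_0, never against the adapting parameters theta_t.
        # (2) The generator sees no benign samples from other users.
        x_poison = make_poisoned_batch(theta_0, num=int(poison_ratio * len(x_benign)))

        # (3) Online order: poisoned samples are mixed into the same test batch
        #     as the benign samples instead of preceding them in the stream.
        x_mixed = torch.cat([x_benign, x_poison], dim=0)
        logits = tta_adapt(tta_model, x_mixed)  # the TTA model adapts online on the mixed batch

        # Adversarial risk: error rate on the benign portion only.
        preds = logits[: len(x_benign)].argmax(dim=1)
        errors += (preds != y_benign).sum().item()
        total += len(x_benign)
    return errors / total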

The comparison between RTTDP and existing Test-Time Data Poisoning (TTDP) methods is shown below.

| Setting | Gray-box vs. White-box | Access to Benign Samples | Attack Order |
| ------- | ---------------------- | ------------------------ | ------------ |
| DIA     | White-box              | $\checkmark$             | Online       |
| TePA    | Gray-box               | $\times$                 | Offline      |
| RTTDP   | Gray-box               | $\times$                 | Online       |

🎮 Getting Started

1. Install Environment

We recommend using an NVIDIA RTX A5000 (24 GB) with Python 3.9, PyTorch 2.3.0, and CUDA 12.1 for better reproducibility.

# Install Basic Packages
chmod +x Docker.sh
./Docker.sh

# Create a new conda environment
conda create -n RTTDP python=3.9
conda activate RTTDP

# Install PyTorch
conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=12.1 -c pytorch -c nvidia

# Install other dependencies
cd classification
pip install -r requirements.txt
cd ..
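A quick, optional sanity check to confirm the installed PyTorch and CUDA versions match the recommended setup:

# Verify the PyTorch / CUDA versions inside the RTTDP environment
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"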

2. Known Installation Issues

Please refer to INSTALL_ISSUE.md for details and workarounds.

3. Download Data

mkdir -p classification/data

Please download the datasets from the following links and place them in the classification/data folder.
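As a reference, the standard CIFAR10-C release unpacks into per-corruption .npy files. The snippet below only checks that the data folder is populated; the folder name CIFAR-10-C and the file names are assumptions based on the public benchmark release, not on this repository's loaders.

# Optional check that the corruption data is in place (folder/file names are assumptions)
import os

data_dir = "classification/data/CIFAR-10-C"
for fname in ["gaussian_noise.npy", "labels.npy"]:
    status = "found" if os.path.exists(os.path.join(data_dir, fname)) else "missing"
    print(fname, status)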

4. Download Pre-trained Models

mkdir -p classification/ckpt
  • For adapting to ImageNet-C, a pre-trained model from Torchvision or timm can be used.
  • For the corruption benchmarks, e.g., CIFAR10/100-C, pre-trained models from RobustBench can be used (see the sketch below).
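For example, loading source models could look like the following sketch. The exact architectures and checkpoint names used by the experiments are defined in the repository's configs; "Standard" and ResNet-50 here are only illustrative choices.

# Illustrative sketch: loading source models for TTA (model names are examples, not the repo's configs)
import torchvision
from robustbench.utils import load_model

# ImageNet-C: any Torchvision / timm classifier pre-trained on ImageNet
imagenet_model = torchvision.models.resnet50(weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V1)

# CIFAR10-C: a RobustBench checkpoint cached under classification/ckpt
cifar_model = load_model(model_name="Standard", dataset="cifar10",
                         threat_model="corruptions", model_dir="classification/ckpt")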

5. Run the Code

cd classification
chmod +x schedule_exps.sh
./schedule_exps.sh

6. Expected Results

To help verify the reproduced results on CIFAR10-C, we provide the expected logs in the classification/cifar10_log folder; they are consistent with the results reported in the paper.

🎫 License

The content of this project is licensed under the terms described in LICENSE.

💡 Acknowledgement

🖊️ Citation

If you find this project useful in your research, please consider citing:

@inproceedings{su2025on,
    title={On the Adversarial Risk of Test Time Adaptation: An Investigation into Realistic Test-Time Data Poisoning},
    author={Yongyi Su and Yushu Li and Nanqing Liu and Kui Jia and Xulei Yang and Chuan-Sheng Foo and Xun Xu},
    booktitle={The Thirteenth International Conference on Learning Representations},
    year={2025},
    url={https://openreview.net/forum?id=7893vsQenk}
}
