FeatWalk is a method tailored for few-shot learning. It focuses on effectively mining local views to mitigate the interference caused by discriminative features during global-view pre-training. By analyzing the correlation of local views with different class prototypes, FeatWalk builds a more comprehensive class-related representation. The method was accepted at AAAI 2024, and this repository is the official implementation.
The following table compares FeatWalk with the baseline DeepBDC across few-shot learning (FSL) scenarios on MiniImageNet and TieredImageNet. FeatWalk consistently outperforms DeepBDC in every scenario.
| Method | Embedding | Mini 5-way 1-shot | Mini 5-way 5-shot | Tiered 5-way 1-shot | Tiered 5-way 5-shot |
|---|---|---|---|---|---|
| DeepBDC | BDC | 67.83 ± 0.43 | 85.45 ± 0.29 | 73.82 ± 0.47 | 89.00 ± 0.30 |
| FeatWalk | BDC | 70.21 ± 0.44 | 87.38 ± 0.27 | 75.25 ± 0.48 | 89.92 ± 0.29 |
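To make the idea sketched in the overview concrete, the snippet below illustrates one way local views could be correlated with class prototypes and aggregated into per-class scores. This is a minimal sketch, not the repository's actual code: the function name `local_view_class_representation`, the confidence-based weighting of views, and the tensor shapes are illustrative assumptions, and the paper's aggregation may differ.

```python
import torch
import torch.nn.functional as F

def local_view_class_representation(local_feats, prototypes):
    """Aggregate local-view features into per-class scores (illustrative sketch).

    local_feats: (V, D) embeddings of V local views of one image
                 (e.g. random crops or feature-map patches).
    prototypes:  (C, D) one prototype per class, e.g. the mean embedding
                 of each class's support samples.
    Returns a (C,) tensor of class scores for the image.
    """
    local_feats = F.normalize(local_feats, dim=-1)
    prototypes = F.normalize(prototypes, dim=-1)

    # Correlation of every local view with every class prototype: (V, C)
    corr = local_feats @ prototypes.t()

    # Weight each local view by how strongly it relates to any class,
    # so views dominated by background contribute less (an assumption here).
    weights = torch.softmax(corr.max(dim=1).values, dim=0)  # (V,)

    # Weighted aggregation over local views gives per-class scores: (C,)
    return (weights.unsqueeze(1) * corr).sum(dim=0)

# Example: 10 local views, a 5-way episode, 640-dim embeddings
scores = local_view_class_representation(torch.randn(10, 640), torch.randn(5, 640))
```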
Before starting with FeatWalk, please ensure the following preparations are made:
- Place the pre-trained models in the `checkpoint` directory. The pre-trained models can be obtained through the corresponding baseline methods or from the official DeepBDC implementation.
- Ensure that datasets (such as MiniImageNet) are located in the `filelist` directory, following the layout below (a quick layout check is sketched after the tree).
```
--FeatWalk
  |--filelist
     |--miniImageNet
        |--train
        |--val
        |--test
```
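As referenced in the preparation list, a small script like the following can confirm that the expected directories are in place. The paths mirror the tree above and assume you run it from the repository root; adjust them if your layout differs.

```python
import os

# Directories expected by the preparation steps above.
required_dirs = [
    "checkpoint",
    "filelist/miniImageNet/train",
    "filelist/miniImageNet/val",
    "filelist/miniImageNet/test",
]

for d in required_dirs:
    status = "ok" if os.path.isdir(d) else "MISSING"
    print(f"{d}: {status}")
```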
To run FeatWalk, use the following command:
```bash
# 5-Way 1-shot/5-shot on MiniImageNet
sh run.sh
```
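For readers unfamiliar with the 5-way 1-shot/5-shot protocol behind the numbers above, the sketch below shows a generic nearest-prototype episode evaluator. It is not the evaluation code in this repository; `embed`, the cosine-similarity classifier, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate_episode(embed, support_x, support_y, query_x, query_y, n_way=5):
    """One N-way K-shot episode with nearest-prototype classification.

    embed: any feature extractor mapping images (B, C, H, W) -> (B, D).
    support_y / query_y hold integer labels in [0, n_way).
    Returns the episode accuracy as a float.
    """
    s_feat = F.normalize(embed(support_x), dim=-1)  # (N*K, D)
    q_feat = F.normalize(embed(query_x), dim=-1)    # (Q, D)

    # Class prototypes: mean support embedding per class, shape (N, D)
    prototypes = torch.stack(
        [s_feat[support_y == c].mean(dim=0) for c in range(n_way)]
    )

    # Cosine-similarity logits and episode accuracy
    logits = q_feat @ prototypes.t()                # (Q, N)
    return (logits.argmax(dim=1) == query_y).float().mean().item()
```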
We would like to express our heartfelt gratitude to the authors of the open-source GoodEmbed and DeepBDC implementations. Our code was inspired and informed by these sources, and their contributions have been invaluable in supporting our work.