
# Consprompt: Exploiting Contrastive Samples for Few-Shot Prompt Learning

Prompting has become an effective linguistic tool for utilizing pre-trained language models. However, in few-shot scenarios, subtle changes in prompt design can produce widely different results, and prompt learning methods are also prone to overfitting the limited samples. To alleviate this, we explore suitable contrastive samples and multi-degree contrastive learning to improve the robustness of the prompt's representation. The proposed Consprompt combines a prompt encoding network, contrastive sampling modules, and contrastive scoring modules to realize differential contrastive learning.
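The full implementation lives in this repository; as a rough illustration of contrastive learning over prompt representations, here is a minimal sketch of a supervised contrastive loss in PyTorch. The function name, temperature value, and batch layout are our assumptions for illustration, not the repository's API:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(reps, labels, temperature=0.07):
    """Contrastive loss over a batch of prompt-encoded representations.

    reps:   (batch, dim) representations from the prompt encoding network
    labels: (batch,) class labels; samples sharing a label are positives
    """
    reps = F.normalize(reps, dim=-1)                 # work in cosine-similarity space
    sim = reps @ reps.t() / temperature              # (batch, batch) similarity matrix
    eye = torch.eye(len(reps), dtype=torch.bool, device=reps.device)
    sim.masked_fill_(eye, float("-inf"))             # exclude self-similarity
    log_prob = F.log_softmax(sim, dim=-1)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_counts = pos_mask.sum(dim=-1).clamp(min=1)
    # Average log-probability of the positives for each anchor.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=-1) / pos_counts
    # Only anchors that actually have positives contribute to the loss.
    return loss[pos_mask.any(dim=-1)].mean()
```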

Data examples can be found in the zip files (16-100, 21-100, ....zip); each contains its SBERT embeddings.

We use the Stanford SBERT embeddings.
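As a minimal sketch of how such sentence embeddings can be reproduced, the snippet below uses the sentence-transformers library; the specific checkpoint (`all-MiniLM-L6-v2`) is our assumption, since the README does not state which SBERT model produced the shipped embeddings:

```python
from sentence_transformers import SentenceTransformer

# Hypothetical checkpoint choice; swap in the SBERT model used for the zip files.
model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["A few-shot training example.", "Another labeled sentence."]
embeddings = model.encode(sentences)  # (n_sentences, dim) numpy array
print(embeddings.shape)
```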

If you use our work, please cite our paper:

J. Weng, Y. Deng, D. Li, H. You, Y. Hu and H. Huang, "Consprompt: Exploiting Contrastive Samples for Few-Shot Prompt Learning," ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 2024, pp. 6835-6839, doi: 10.1109/ICASSP48485.2024.10448403.


Our baseline uses the code from LM-BFF, which is also a popular prompt-based method for few-shot learning: https://gitcode.com/princeton-nlp/LM-BFF