This repository contains code for our EMNLP 2022 paper:
Discovering Low-rank Subspaces for Language-agnostic Multilingual Representations
Zhihui Xie, Handong Zhao, Tong Yu, Shuai Li
Shanghai Jiao Tong University, Adobe Research
In this work, we propose LSAR (Low-rank Subspaces for language-Agnostic Representations), a simple but effective unsupervised method for projecting away language-specific factors from a multilingual embedding space. LSAR requires no finetuning and keeps the original embedding space intact. We systematically evaluate LSAR on various tasks, including the challenging language-agnostic QA retrieval task. Empirical results show that applying LSAR consistently improves over commonly used multilingual language models (ML-LMs). Here is the poster.
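To illustrate the general idea, here is a minimal NumPy sketch (not the repository's actual implementation): language-specific structure is assumed to live in a low-rank subspace, which we approximate with the top singular directions of the mean-centered embeddings and then project out.

```python
import numpy as np

def remove_low_rank_subspace(X, r):
    """Project embeddings X (n x d) onto the orthogonal complement of the
    top-r principal directions. A sketch of the idea behind LSAR; the paper
    describes the exact procedure for identifying the subspace."""
    mu = X.mean(axis=0, keepdims=True)
    Xc = X - mu
    # Top-r right singular vectors span the (assumed language-specific) subspace.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:r].T                       # d x r orthonormal basis
    P = np.eye(X.shape[1]) - V @ V.T   # projector onto the complement
    return Xc @ P

# Toy usage: 100 vectors in 8 dimensions, remove a rank-2 subspace.
X = np.random.randn(100, 8)
Y = remove_low_rank_subspace(X, r=2)
```

After the projection, the embeddings carry no variance along the removed directions, while everything orthogonal to them is untouched.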
- Python 3.7+
- Nvidia GPU w/ CUDA
- Anaconda
To run the experiments, you also need to install several dependencies:
bash scripts/install_tools.sh
Note that running the LaBSE experiments requires a newer PyTorch version compatible with transformers and sentence-transformers. We create a separate Conda environment with Python 3.9 and torch==1.12.1, transformers==4.5.1, sentence-transformers==1.2.1.
Run the following command to download the source monolingual corpora (OSCAR and Wikipedia) that are used in the paper for extracting low-rank subspaces:
bash scripts/$source/download_$source.sh
Run the following command to download the datasets (Tatoeba, LAReQA, and Amazon Reviews) used in the paper:
bash scripts/$task/download_$task.sh
To reproduce our main results, first make sure the source corpora and datasets have been downloaded, then:
- Extract multilingual embeddings by running:
  bash scripts/$source/extract_$source.sh
  bash scripts/$task/extract_$task.sh
- Evaluate the cross-lingual performance of the language-agnostic embeddings by running:
  bash scripts/$task/evaluate_$task.sh
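For context, retrieval benchmarks such as Tatoeba score embeddings by nearest-neighbor search over aligned translation pairs. The following is a minimal sketch of such a metric (an illustration only, not the repository's evaluation code):

```python
import numpy as np

def retrieval_accuracy(src, tgt):
    """Fraction of source sentences whose nearest target embedding
    (by cosine similarity) is the aligned translation, where row i of
    `src` is a translation of row i of `tgt`. A Tatoeba-style sketch."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    nn = (src @ tgt.T).argmax(axis=1)          # nearest neighbor per source row
    return float((nn == np.arange(len(src))).mean())
```

Language-agnostic embeddings should place each sentence closest to its translation, so this accuracy is a direct measure of how well language-specific factors have been removed.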