# Adversarial-MidiBERT

Article: Zijian Zhao, "[Let Network Decide What to Learn: Symbolic Music Understanding Model Based on Large-scale Adversarial Pre-training](https://arxiv.org/abs/2407.08306)" (arXiv)

Parts of our code are based on [wazenmai/MIDI-BERT](https://github.com/wazenmai/MIDI-BERT), the official repository for the paper "MidiBERT-Piano: Large-scale Pre-training for Symbolic Music Understanding."

## 1. Dataset

The datasets we used in the paper include POP1K7, POP909, Pianist8, EMOPIA, and GiantMIDI.

You can refer to the details in our previous work, PianoBART. To run the model, you also need the dict file from that repository.

## 2. Pre-train

```bash
python pretrain.py --dict_file <the dictionary in PianoBART>
```

To run the model, you need to place your pre-training data in `./Data/output_pretrain`.
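As a minimal sketch, a full pre-training run might look like the following. The dict file name `CP.pkl` and the `.npy` data format are assumptions for illustration, not guarantees from this README; use the actual dict file shipped with PianoBART and the data produced by the repository's preprocessing scripts.

```bash
# Hypothetical walkthrough -- file names (CP.pkl, *.npy) are assumptions;
# substitute the real dict file from PianoBART and your own pre-processed data.
mkdir -p ./Data/output_pretrain
cp /path/to/preprocessed/*.npy ./Data/output_pretrain/  # pre-processed pre-training data
python pretrain.py --dict_file ./CP.pkl
```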

## 3. Fine-tune

```bash
python finetune.py --dict_file <the dictionary in PianoBART> --task <task name> --dataset <dataset name> --dataroot <dataset path> --class_num <class number> --model_path <pre-trained model path> --mask --aug
```

If you do not want to use pre-trained parameters, add `--nopretrain`. If you do not want to use mask fine-tuning or data augmentation, omit `--mask` or `--aug`, respectively. An example invocation is sketched below.
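For concreteness, here is a hypothetical fine-tuning command for 4-class emotion classification on EMOPIA (which uses four emotion quadrants). The task name `emotion`, the paths, and the checkpoint name are assumptions; check the repository's argument parser for the accepted values.

```bash
# Hypothetical values throughout -- task name, dataset path, and
# checkpoint name are assumptions, not documented defaults.
python finetune.py \
    --dict_file ./CP.pkl \
    --task emotion \
    --dataset EMOPIA \
    --dataroot ./Data/EMOPIA \
    --class_num 4 \
    --model_path ./checkpoints/pretrain_model.ckpt \
    --mask --aug
```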

## 4. Citation

```bibtex
@misc{zhao2025letnetworkdecidelearn,
      title={Let Network Decide What to Learn: Symbolic Music Understanding Model Based on Large-scale Adversarial Pre-training},
      author={Zijian Zhao},
      year={2025},
      eprint={2407.08306},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2407.08306},
}
```