Improving Video Generation with Human Feedback


📖 Introduction

This repository open-sources the VideoReward component -- our VLM-based reward model introduced in the paper Improving Video Generation with Human Feedback. VideoReward evaluates generated videos across three critical dimensions:

  • Visual Quality (VQ): The clarity, aesthetics, and single-frame reasonableness.
  • Motion Quality (MQ): The dynamic stability, dynamic reasonableness, naturalness, and dynamic degree.
  • Text Alignment (TA): The relevance between the generated video and the text prompt.

This versatile reward model can be used for data filtering, guidance, rejection sampling, DPO, and other RL methods.
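
As a concrete illustration, below is a minimal best-of-N (rejection) sampling sketch built on the three reward dimensions. The score_video function is a hypothetical placeholder, not the actual VideoReward API; in practice you would replace its body with a real call to the model (see inference.py).

# Best-of-N (rejection) sampling sketch using per-dimension rewards.
# NOTE: score_video is a hypothetical placeholder, not the VideoReward API;
# replace its body with a real call to the reward model (see inference.py).

def score_video(prompt: str, video_path: str) -> dict:
    # Placeholder VQ / MQ / TA scores; swap in the real reward model here.
    return {"VQ": 0.0, "MQ": 0.0, "TA": 0.0}

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate video with the highest mean reward over VQ/MQ/TA."""
    def overall(path: str) -> float:
        s = score_video(prompt, path)
        return (s["VQ"] + s["MQ"] + s["TA"]) / 3.0
    return max(candidates, key=overall)

if __name__ == "__main__":
    prompt = "A corgi surfing a wave at sunset"
    candidates = [f"samples/video_{i}.mp4" for i in range(4)]
    print("selected:", best_of_n(prompt, candidates))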

📝 Updates

🚀 Quick Start

1. Environment Setup

Clone this repository and install the required packages.

git clone https://github.com/KwaiVGI/VideoAlign
cd VideoAlign
conda env create -f environment.yaml

2. Download Pretrained Weights

Please download our checkpoints from Hugging Face and put them in ./checkpoints/.

cd checkpoints
git lfs install
git clone https://huggingface.co/KwaiVGI/VideoReward
cd ..

3. Score a Single Prompt-Video Pair

python inference.py

✨ Evaluate Performance on VideoGen-RewardBench

1. Download the VideoGen-RewardBench dataset and put it in ./datasets/.

cd datasets
git lfs install
git clone https://huggingface.co/datasets/KwaiVGI/VideoGen-RewardBench
cd ..

2. Start inference

python eval_videogen_rewardbench.py
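
For reference, the sketch below shows how pairwise preference accuracy is typically computed on a benchmark like VideoGen-RewardBench: for each human-annotated pair, the model's scores for the two videos are compared against the human choice. The record fields here are illustrative assumptions, not the benchmark's actual schema; eval_videogen_rewardbench.py handles the real data loading and scoring.

# Illustrative pairwise-accuracy computation for a preference benchmark.
# Each record carries precomputed reward scores for the two videos in a pair
# plus the human preference label. Field names are hypothetical, not the
# actual VideoGen-RewardBench schema (see eval_videogen_rewardbench.py).

def pairwise_accuracy(records: list[dict]) -> float:
    """Fraction of pairs where the higher-scored video matches the human choice."""
    correct = 0
    for r in records:
        predicted = "A" if r["score_a"] > r["score_b"] else "B"
        correct += int(predicted == r["label"])
    return correct / len(records)

if __name__ == "__main__":
    demo = [
        {"score_a": 0.81, "score_b": 0.37, "label": "A"},  # agrees with the human label
        {"score_a": 0.45, "score_b": 0.52, "label": "A"},  # disagrees
    ]
    print("pairwise accuracy:", pairwise_accuracy(demo))  # 0.5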

🏁 Train RM on Your Own Data

1. Prepare your own data as described in the data-preparation instructions (an illustrative record format is sketched after the training command below).

2. Start training!

sh train.sh
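
For orientation only: reward models of this kind are trained on pairwise human preferences, so a training example generally pairs two videos generated from the same prompt with per-dimension preference labels. The record below is a hypothetical illustration written as Python, not the repository's actual data format; follow the data-preparation instructions above for the real schema.

# Hypothetical pairwise-preference record for RM training, serialized to JSONL.
# Field names are assumptions, NOT the repository's actual data format;
# consult the data-preparation instructions for the real schema.
import json

example = {
    "prompt": "A corgi surfing a wave at sunset",
    "video_A": "videos/corgi_model1.mp4",
    "video_B": "videos/corgi_model2.mp4",
    # Per-dimension human preference: which video wins on each axis.
    "VQ": "A", "MQ": "B", "TA": "A",
}

with open("my_rm_data.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")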

🤗 Acknowledgments

Our reward model is based on Qwen2-VL-2B-Instruct, and our code is built upon TRL and Qwen2-VL-Finetune. Thanks to all the contributors!

⭐ Citation

Please leave us a star ⭐ if you find our work helpful.

@article{liu2025improving,
      title={Improving Video Generation with Human Feedback},
      author={Jie Liu and Gongye Liu and Jiajun Liang and Ziyang Yuan and Xiaokun Liu and Mingwu Zheng and Xiele Wu and Qiulin Wang and Wenyu Qin and Menghan Xia and Xintao Wang and Xiaohong Liu and Fei Yang and Pengfei Wan and Di Zhang and Kun Gai and Yujiu Yang and Wanli Ouyang},
      journal={arXiv preprint arXiv:2501.13918},
      year={2025}
}