
# Custom Trainers

NeMo-Aligner uses custom trainers to coordinate all aspects of training. There are currently five custom trainers:

  1. `SupervisedTrainer`: used for SFT, SteerLM, and reward modeling.
  2. `DPOTrainer`: used for DPO training.
  3. `CriticServerTrainer`: trains the RL critic via PyTriton requests. Depending on the configuration, it also runs the reward model.
  4. `PPOTrainer`: performs RLHF PPO training. Because PPO requires a critic, this trainer sends inference and training requests via PyTriton to the `CriticServerTrainer`, which runs inference on and trains the critic (see the client sketch after this list).
  5. `RSTrainer`: performs Rejection Sampling (RS) training. Because RS needs a reward model, this trainer sends inference requests via PyTriton to run inference on the reward model.
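To make the PyTriton communication concrete, here is a minimal sketch of a trainer-side request to a running critic server. The endpoint address, model name (`critic_infer`), and tensor names (`tokens`, `values`) are illustrative assumptions, not the actual NeMo-Aligner endpoints.

```python
# Minimal sketch of a trainer-side PyTriton request to a critic server.
# The endpoint address, model name, and tensor names are illustrative
# assumptions, not the actual names used by NeMo-Aligner.
import numpy as np
from pytriton.client import ModelClient


def request_critic_values(tokens: np.ndarray) -> np.ndarray:
    """Send a batch of token IDs to the critic server and return value estimates."""
    with ModelClient("localhost", "critic_infer") as client:
        # infer_batch sends named numpy arrays and returns a dict of named outputs.
        result = client.infer_batch(tokens=tokens)
    return result["values"]


if __name__ == "__main__":
    # Requires a critic server exposing this endpoint to be running.
    values = request_critic_values(np.zeros((2, 8), dtype=np.int64))
    print(values.shape)
```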

## Configuration guide

See the example configurations in the `conf` folder for an overview of the different configurations we support. Note that any configuration specified in the `.yaml` file overwrites the corresponding value in the model configuration loaded from the pretrained checkpoint.
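As a rough illustration of this override behavior, assuming the OmegaConf-style configs used throughout NeMo (the keys below are made up for the example): values present in the `.yaml` file take precedence over those restored from the checkpoint.

```python
# Rough illustration of how .yaml values override the restored model config.
# The keys shown are made up for this example; see the conf folder for real ones.
from omegaconf import OmegaConf

# Config restored from the pretrained checkpoint.
checkpoint_cfg = OmegaConf.create({"model": {"hidden_size": 4096, "micro_batch_size": 4}})

# Config specified in the .yaml file for this run.
yaml_cfg = OmegaConf.create({"model": {"micro_batch_size": 1}})

# The YAML values win wherever both define the same key.
merged = OmegaConf.merge(checkpoint_cfg, yaml_cfg)
assert merged.model.micro_batch_size == 1
assert merged.model.hidden_size == 4096
```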

## APIs

Our custom trainers will only call predefined APIs on the model passed in. These APIs are defined in `alignable_interface.py`.
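As a purely hypothetical sketch of what such a contract can look like (the class and method names below are placeholders, not the actual contents of `alignable_interface.py`), a trainer only relies on the model implementing a small, predefined set of methods:

```python
# Hypothetical sketch of a trainer-facing model interface.
# The class and method names are placeholders, NOT the actual
# definitions in alignable_interface.py.
from abc import ABC, abstractmethod


class SupervisedModelInterface(ABC):
    """Minimal contract a supervised-style trainer could rely on."""

    @abstractmethod
    def get_loss_and_metrics(self, batch, forward_only: bool):
        """Run a forward (and optionally backward) pass; return loss and logged metrics."""

    @abstractmethod
    def prepare_for_training_step(self):
        """Hook called before each optimizer step (e.g. set train mode, enable grads)."""

    @abstractmethod
    def finish_training_step(self):
        """Hook called after each optimizer step (e.g. zero grads, reduce metrics)."""
```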

## Launching Scripts

To run a full RLHF PPO job, we need to start both the `CriticServerTrainer` and the `PPOTrainer`.
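The critic server must be up before the `PPOTrainer` can connect to it. Below is a minimal sketch of the server side using PyTriton's generic bind/serve API; the model name, tensor names, and dummy inference function are assumptions for illustration, not the actual `CriticServerTrainer` endpoints.

```python
# Minimal sketch of the critic-server side of the PPO setup using PyTriton.
# The model name, tensor names, and dummy infer_fn are illustrative only;
# they are not the actual NeMo-Aligner endpoints.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton


@batch
def infer_fn(tokens):
    # Placeholder: a real critic server would run the critic model here.
    values = np.zeros(tokens.shape, dtype=np.float32)
    return {"values": values}


with Triton() as triton:
    triton.bind(
        model_name="critic_infer",
        infer_func=infer_fn,
        inputs=[Tensor(name="tokens", dtype=np.int64, shape=(-1,))],
        outputs=[Tensor(name="values", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    # Block and serve requests; the PPO trainer connects as a client (see the sketch above).
    triton.serve()
```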

## RLHF Training architecture and details

Please see `RLHFTraining.md`.