Notebooks to create an instruction-following version of Microsoft's Phi-2 LLM with Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO)
Updated Nov 27, 2024 - Jupyter Notebook
Notebooks to create an instruction-following version of Microsoft's Phi-1.5 LLM with Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO)