Apply quantization during DPO QLoRA #115

Merged 2 commits on Feb 5, 2024
Changes from all commits
8 changes: 4 additions & 4 deletions recipes/zephyr-7b-beta/dpo/config_qlora.yaml
@@ -1,12 +1,12 @@
# Model arguments
model_name_or_path: alignment-handbook/zephyr-7b-sft-qlora
-torch_dtype: float16
+torch_dtype: bfloat16
Member Author: It turns out that using bfloat16 makes a non-trivial difference to downstream perf! cc @nathan-az :)


# LoRA arguments
use_peft: true
load_in_4bit: true
-lora_r: 16
-lora_alpha: 16
+lora_r: 128
Member Author: Tuning these hparams was necessary to get close to zephyr-7b-beta perf on MT-Bench.

+lora_alpha: 128
lora_dropout: 0.05
lora_target_modules:
- q_proj
@@ -32,7 +32,7 @@ beta: 0.01
do_eval: true
evaluation_strategy: steps
eval_steps: 100
-gradient_accumulation_steps: 2
+gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
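For orientation, here is a minimal sketch of how the options changed in this recipe typically map onto `peft`/`transformers` objects for a QLoRA run. This is not the handbook's exact code path: the nf4 quant type and the full `target_modules` list are assumptions (the diff above is truncated after `q_proj`); the raised `r`/`lora_alpha` values and `lora_dropout` come from the config.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit quantization with bfloat16 compute, mirroring `torch_dtype: bfloat16` and
# `load_in_4bit: true` above. The nf4 quant type is an assumption, not part of this diff.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter with the retuned hyperparameters (r and alpha raised from 16 to 128).
# The target_modules list is assumed; the diff above is truncated after q_proj.
dpo_lora = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Raising `r` and `lora_alpha` gives the adapter more capacity, which is what the author found necessary to get close to zephyr-7b-beta performance on MT-Bench.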
14 changes: 6 additions & 8 deletions scripts/run_dpo.py
@@ -128,28 +128,26 @@ def main():

model = model_args.model_name_or_path
if is_adapter_model(model, model_args.model_revision) is True:
-# Load the base model, merge the adapter weights and unload the adapter
-# Note: to run QLoRA, you will need to merge the base model separately as the merged model in 16bit
-logger.info(f"Merging PEFT adapters for {model_args.model_name_or_path=}")

logger.info(f"Loading SFT adapter for {model_args.model_name_or_path=}")
peft_config = PeftConfig.from_pretrained(model_args.model_name_or_path, revision=model_args.model_revision)

model_kwargs = dict(
revision=model_args.base_model_revision,
trust_remote_code=model_args.trust_remote_code,
use_flash_attention_2=model_args.use_flash_attention_2,
torch_dtype=torch_dtype,
use_cache=False if training_args.gradient_checkpointing else True,
+device_map=get_kbit_device_map() if quantization_config is not None else None,
+quantization_config=quantization_config,
Member Author: Note that this approach of quantizing and then merging in DPOTrainer is what Tim Dettmers suggests: https://twitter.com/Tim_Dettmers/status/1694654191325573456

)
base_model = AutoModelForCausalLM.from_pretrained(
peft_config.base_model_name_or_path,
**model_kwargs,
)
model = PeftModel.from_pretrained(
-base_model, model_args.model_name_or_path, revision=model_args.model_revision
+base_model,
+model_args.model_name_or_path,
+revision=model_args.model_revision,
)
-model.eval()
-model = model.merge_and_unload()
model_kwargs = None

ref_model = model
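To make the review comment above concrete, here is a minimal sketch of the quantize-then-merge-in-trainer flow as one might wire it up with trl 0.7.x outside the handbook's scripts. The output directory, the toy dataset, and the exact BitsAndBytesConfig settings are illustrative assumptions; only the adapter id, `beta`, and the LoRA hyperparameters come from this PR.

```python
import torch
from datasets import Dataset
from peft import LoraConfig, PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import DPOTrainer

adapter_id = "alignment-handbook/zephyr-7b-sft-qlora"  # SFT adapter from the recipe above

# Load the base model in 4-bit and attach the SFT adapter without merging it here.
sft_peft_config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(
    sft_peft_config.base_model_name_or_path,
    torch_dtype=torch.bfloat16,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

tokenizer = AutoTokenizer.from_pretrained(sft_peft_config.base_model_name_or_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Toy preference data in the prompt/chosen/rejected format DPOTrainer expects.
train_dataset = Dataset.from_dict(
    {
        "prompt": ["What is DPO?"],
        "chosen": ["Direct Preference Optimization fine-tunes a model on preference pairs."],
        "rejected": ["No idea."],
    }
)

# Handing DPOTrainer a PeftModel plus a fresh peft_config makes it merge the SFT adapter
# into the quantized base and train a new DPO adapter on top (the flow the review comment
# attributes to Tim Dettmers). With PEFT, reference log-probs are computed by disabling
# the adapter, so no separate ref_model is needed.
trainer = DPOTrainer(
    model=model,
    ref_model=None,
    beta=0.01,  # matches the recipe above
    args=TrainingArguments(output_dir="zephyr-7b-dpo-qlora", bf16=True, remove_unused_columns=False),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(r=128, lora_alpha=128, lora_dropout=0.05, task_type="CAUSAL_LM"),
)
trainer.train()
```

Because the SFT adapter is merged inside the trainer, no separate 16-bit merge of the base model is needed any more, which is what the deleted comment in the old code described.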
2 changes: 1 addition & 1 deletion setup.py
@@ -66,7 +66,7 @@
"tensorboard",
"torch==2.1.2",
"transformers==4.36.2",
"trl==0.7.7",
"trl==0.7.10",
"jinja2>=3.0.0",
"tqdm>=4.64.1",
]