
Refactor DPO data processing #2209

Merged 72 commits on Oct 21, 2024

Commits (72)
af1dabf
in progress
qgallouedec Oct 9, 2024
60d1326
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 9, 2024
6a51014
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 10, 2024
9bcbae1
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 10, 2024
0f76eb5
refactor concatenated_inputs and concatenated_forward
qgallouedec Oct 10, 2024
fe1ed25
progress
qgallouedec Oct 10, 2024
bfa363f
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 11, 2024
a7d355f
further modif
qgallouedec Oct 12, 2024
05b30f9
padding side
qgallouedec Oct 12, 2024
e98dfda
eos prompt enc dec
qgallouedec Oct 12, 2024
85c6ecc
prompt_padding_side
qgallouedec Oct 12, 2024
3103585
drop prompt apdding side collator
qgallouedec Oct 13, 2024
b42305f
working on decoder only
qgallouedec Oct 13, 2024
97aca6a
dpo trainer
qgallouedec Oct 14, 2024
4fd879e
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 14, 2024
8e64bd5
Fix loss_mask type conversion bug
qgallouedec Oct 14, 2024
8f9e7fe
bad attention mask
qgallouedec Oct 14, 2024
182e7c4
try to get the same tokens as main
qgallouedec Oct 14, 2024
bae1cf2
fix loss mask
qgallouedec Oct 15, 2024
41618ce
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 15, 2024
d8aa817
fix unused col
qgallouedec Oct 15, 2024
1b3c5b2
added comment
qgallouedec Oct 15, 2024
f880697
raise error when paddind token not set
qgallouedec Oct 15, 2024
ddf4b72
remove private method tests
qgallouedec Oct 15, 2024
9673410
initial vlm support
qgallouedec Oct 15, 2024
fa49958
make it work for paligemma
qgallouedec Oct 15, 2024
139334e
minor test updates
qgallouedec Oct 15, 2024
a266fae
style
qgallouedec Oct 15, 2024
b95613b
improve readibility
qgallouedec Oct 15, 2024
82c1030
improve doc
qgallouedec Oct 15, 2024
ba83156
style
qgallouedec Oct 15, 2024
1aeb67e
flush left and truncate
qgallouedec Oct 15, 2024
6d02c9f
flush left in the code
qgallouedec Oct 16, 2024
c23224d
fix empty_cols and make max_length optional
qgallouedec Oct 16, 2024
2491214
always add eos token
qgallouedec Oct 16, 2024
4a545c5
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 16, 2024
03f6d88
Merge branch 'refactor-dpo-data' of https://github.com/huggingface/tr…
qgallouedec Oct 16, 2024
3984d38
minor changes and doc
qgallouedec Oct 16, 2024
fc6f5ef
style
qgallouedec Oct 16, 2024
7b09ec6
fix docstring
qgallouedec Oct 16, 2024
0091c21
preference collator in doc
qgallouedec Oct 16, 2024
27877cc
fix doc
qgallouedec Oct 16, 2024
f3a3532
optional max_completion_length
qgallouedec Oct 17, 2024
0c94b9a
Investigating CI failing
qgallouedec Oct 17, 2024
1c4a410
style
qgallouedec Oct 17, 2024
931c68e
just dpo trainer test
qgallouedec Oct 17, 2024
33cdee5
just idefics
qgallouedec Oct 17, 2024
bab8c33
paligemma
qgallouedec Oct 17, 2024
972c7ed
llava
qgallouedec Oct 17, 2024
1bb3a88
test cli
qgallouedec Oct 17, 2024
b3fc10a
dataset in test
qgallouedec Oct 17, 2024
9824f16
all tests
qgallouedec Oct 17, 2024
f7316de
Update trl/trainer/dpo_trainer.py
qgallouedec Oct 17, 2024
7a5ae0c
Update trl/trainer/dpo_trainer.py
qgallouedec Oct 17, 2024
fe9bea7
Update trl/trainer/dpo_trainer.py
qgallouedec Oct 17, 2024
4ca58d9
Update trl/trainer/dpo_trainer.py
qgallouedec Oct 17, 2024
e509549
reference to ref
qgallouedec Oct 17, 2024
1a813ef
Merge branch 'refactor-dpo-data' of https://github.com/huggingface/tr…
qgallouedec Oct 17, 2024
10cec4e
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 17, 2024
e513fa8
rich descriptions
qgallouedec Oct 17, 2024
5ce1461
fix logits reporting
qgallouedec Oct 17, 2024
5a362e3
fix truncation
qgallouedec Oct 17, 2024
6145afb
remove chat template from dpo_vlm
qgallouedec Oct 17, 2024
8c0c935
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 17, 2024
979c5c5
`get_batch_sample` -> `generate_from_model[_and_ref]`
qgallouedec Oct 18, 2024
ada53cf
add `num_items_in_batch=None`
qgallouedec Oct 18, 2024
10bffa0
`num_items_in_batch` in `training_step`
qgallouedec Oct 18, 2024
ca2d98f
Fix return type hint
qgallouedec Oct 18, 2024
3692bc0
test tokenize row
qgallouedec Oct 18, 2024
e8dfc7c
Merge branch 'rename_get_batch_sample' into refactor-dpo-data
qgallouedec Oct 18, 2024
beb7547
fix test
qgallouedec Oct 18, 2024
31d02cf
Merge branch 'main' into refactor-dpo-data
qgallouedec Oct 21, 2024
4 changes: 4 additions & 0 deletions docs/source/dpo_trainer.mdx
@@ -276,3 +276,7 @@ dpo_trainer = DPOTrainer(
## DPOConfig

[[autodoc]] DPOConfig

## PreferenceCollator

[[autodoc]] trainer.dpo_trainer.PreferenceCollator
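
The newly documented `PreferenceCollator` pads a batch of chosen/rejected token sequences to a rectangular shape. A minimal sketch of such a padding collator, using plain Python lists — the field names and padding sides are illustrative, not TRL's exact implementation:

```python
from dataclasses import dataclass


@dataclass
class PreferenceCollator:
    """Illustrative sketch: pad chosen/rejected token sequences in a batch
    to a common length so they can be stacked into rectangular tensors."""

    pad_token_id: int

    def pad(self, sequences, padding_side="right"):
        # Pad every sequence to the length of the longest one in the batch.
        max_len = max(len(seq) for seq in sequences)
        padded = []
        for seq in sequences:
            padding = [self.pad_token_id] * (max_len - len(seq))
            padded.append(seq + padding if padding_side == "right" else padding + seq)
        return padded

    def __call__(self, examples):
        # Prompts are left-padded so every completion starts at the same
        # position; completions are right-padded.
        return {
            "prompt_input_ids": self.pad([ex["prompt_input_ids"] for ex in examples], "left"),
            "chosen_input_ids": self.pad([ex["chosen_input_ids"] for ex in examples], "right"),
            "rejected_input_ids": self.pad([ex["rejected_input_ids"] for ex in examples], "right"),
        }
```

Left-padding the prompt and right-padding the completion keeps the prompt/completion boundary aligned across the batch, which simplifies masking the prompt tokens out of the loss.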
2 changes: 1 addition & 1 deletion tests/test_cli.py
@@ -32,7 +32,7 @@ def test_sft_cli():
def test_dpo_cli():
try:
subprocess.run(
-            "trl dpo --max_steps 1 --output_dir tmp-dpo --model_name_or_path trl-internal-testing/tiny-random-LlamaForCausalLM --dataset_name trl-lib/ultrafeedback_binarized --learning_rate 1e-4 --lr_scheduler_type cosine",
+            "trl dpo --max_steps 1 --output_dir tmp-dpo --model_name_or_path trl-internal-testing/tiny-random-LlamaForCausalLM --dataset_name trl-internal-testing/tiny-ultrafeedback-binarized --learning_rate 1e-4 --lr_scheduler_type cosine",
shell=True,
check=True,
)
174 changes: 7 additions & 167 deletions tests/test_dpo_trainer.py
@@ -31,170 +31,10 @@
from transformers.testing_utils import require_bitsandbytes, require_peft

from trl import DPOConfig, DPOTrainer, FDivergenceType
from trl.trainer.dpo_trainer import _build_tokenized_answer, _truncate_tokens

from .testing_utils import require_no_wandb


class TestBuildTokenizedAnswer(unittest.TestCase):
def setUp(self):
self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
self.tokenizer.pad_token = self.tokenizer.eos_token

def test_basic_functionality(self):
prompt = "Hello, how are you?"
answer = "I'm doing well, thank you!"

result = _build_tokenized_answer(prompt, answer, tokenizer=self.tokenizer)

self.assertIn("prompt_input_ids", result)
self.assertIn("prompt_attention_mask", result)
self.assertIn("input_ids", result)
self.assertIn("attention_mask", result)

self.assertEqual(len(result["prompt_input_ids"]), len(result["prompt_attention_mask"]))
self.assertEqual(len(result["input_ids"]), len(result["attention_mask"]))

decoded_prompt = self.tokenizer.decode(result["prompt_input_ids"])
self.assertTrue(prompt in decoded_prompt)

decoded_answer = self.tokenizer.decode(result["input_ids"])
self.assertTrue(answer in decoded_answer)

def test_with_processor(self):
def mock_processor(text, images=None, add_special_tokens=True):
return {"input_ids": torch.tensor([[1, 2, 3]]), "attention_mask": torch.tensor([[1, 1, 1]])}

prompt = "Describe this image:"
answer = "A beautiful sunset over the ocean."

result = _build_tokenized_answer(prompt, answer, processor=mock_processor)

self.assertIn("prompt_input_ids", result)
self.assertIn("prompt_attention_mask", result)
self.assertIn("input_ids", result)
self.assertIn("attention_mask", result)

self.assertEqual(result["prompt_input_ids"], [1, 2, 3])
self.assertEqual(result["prompt_attention_mask"], [1, 1, 1])

def test_token_merging(self):
prompt = "The quick brown"
answer = " fox jumps over the lazy dog."

result = _build_tokenized_answer(prompt, answer, tokenizer=self.tokenizer)

full_text = prompt + answer
full_tokenized = self.tokenizer(full_text, add_special_tokens=False)

self.assertEqual(result["prompt_input_ids"] + result["input_ids"], full_tokenized["input_ids"])
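
The `test_token_merging` check above encodes the invariant that `_build_tokenized_answer` must preserve: prompt tokens followed by answer tokens reproduce the tokenization of the concatenated text. Subword tokenizers can merge characters across the prompt/answer boundary, so tokenizing the two pieces separately can violate it. A sketch of the idea with a toy greedy tokenizer (hypothetical names; not TRL's implementation):

```python
# Toy greedy longest-match tokenizer over a tiny vocabulary. Because "ab" is
# a single token, tokenizing "a" and "b" separately disagrees with
# tokenizing "ab" — the same boundary-merging problem BPE tokenizers have.
VOCAB = ["ab", "a", "b", "c"]


def toy_tokenize(text):
    ids, i = [], 0
    while i < len(text):
        for tok in VOCAB:  # VOCAB is ordered longest-first for greedy matching
            if text.startswith(tok, i):
                ids.append(tok)
                i += len(tok)
                break
    return ids


def split_tokenized_answer(prompt, answer, tokenize):
    """Split the tokenization of prompt+answer into a prompt part and an
    answer part without tokenizing them separately. If a token straddles the
    boundary, walk the split point back until the prefixes agree."""
    full_ids = tokenize(prompt + answer)
    prompt_ids = tokenize(prompt)
    n = len(prompt_ids)
    while n > 0 and full_ids[:n] != prompt_ids[:n]:
        n -= 1
    return full_ids[:n], full_ids[n:]
```

By construction the two parts concatenate back to `tokenize(prompt + answer)`, which is exactly what the test asserts.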

def test_vision_model(self):
def mock_vision_processor(text, images=None, add_special_tokens=True):
return {
"input_ids": torch.tensor([[1, 2, 3]]),
"attention_mask": torch.tensor([[1, 1, 1]]),
"pixel_values": torch.rand(1, 3, 224, 224),
"pixel_attention_mask": torch.ones(1, 224, 224),
}

prompt = "Describe this image:"
answer = "A cat sitting on a windowsill."

result = _build_tokenized_answer(prompt, answer, processor=mock_vision_processor)

self.assertIn("prompt_pixel_values", result)
self.assertIn("prompt_pixel_attention_mask", result)
self.assertTrue(torch.is_tensor(result["prompt_pixel_values"]))
self.assertTrue(torch.is_tensor(result["prompt_pixel_attention_mask"]))


class TestTruncateTokens(unittest.TestCase):
def setUp(self):
with tempfile.TemporaryDirectory() as tmp_dir:
self.training_args = DPOConfig(
max_length=20, max_prompt_length=10, truncation_mode="keep_start", output_dir=tmp_dir
)

def test_truncate_tokens(self):
chosen_tokens = [
{
"prompt_input_ids": list(range(15)),
"prompt_attention_mask": [1] * 15,
"input_ids": list(range(10)),
"attention_mask": [1] * 10,
}
]
rejected_tokens = [
{
"prompt_input_ids": list(range(15)),
"prompt_attention_mask": [1] * 15,
"input_ids": list(range(12)),
"attention_mask": [1] * 12,
}
]
prompt_tokens = [{"prompt_input_ids": list(range(15)), "prompt_attention_mask": [1] * 15}]

_truncate_tokens(chosen_tokens, rejected_tokens, prompt_tokens, self.training_args)

# Check if prompt is truncated correctly
self.assertEqual(len(chosen_tokens[0]["prompt_input_ids"]), 10)
self.assertEqual(len(chosen_tokens[0]["prompt_attention_mask"]), 10)
self.assertEqual(len(rejected_tokens[0]["prompt_input_ids"]), 10)
self.assertEqual(len(rejected_tokens[0]["prompt_attention_mask"]), 10)
self.assertEqual(len(prompt_tokens[0]["prompt_input_ids"]), 10)
self.assertEqual(len(prompt_tokens[0]["prompt_attention_mask"]), 10)

# Check if responses are truncated correctly
self.assertEqual(len(chosen_tokens[0]["input_ids"]), 10)
self.assertEqual(len(chosen_tokens[0]["attention_mask"]), 10)
self.assertEqual(len(rejected_tokens[0]["input_ids"]), 10)
self.assertEqual(len(rejected_tokens[0]["attention_mask"]), 10)

def test_truncation_mode_keep_end(self):
self.training_args.truncation_mode = "keep_end"
chosen_tokens = [
{
"prompt_input_ids": list(range(15)),
"prompt_attention_mask": [1] * 15,
"input_ids": list(range(15, 25)),
"attention_mask": [1] * 10,
}
]
rejected_tokens = [
{
"prompt_input_ids": list(range(15)),
"prompt_attention_mask": [1] * 15,
"input_ids": list(range(15, 28)),
"attention_mask": [1] * 13,
}
]
prompt_tokens = [{"prompt_input_ids": list(range(15)), "prompt_attention_mask": [1] * 15}]

_truncate_tokens(chosen_tokens, rejected_tokens, prompt_tokens, self.training_args)

# Check if prompt is truncated correctly from the end
self.assertEqual(prompt_tokens[0]["prompt_input_ids"], list(range(5, 15)))
self.assertEqual(prompt_tokens[0]["prompt_attention_mask"], [1] * 10)

# Check if chosen tokens are truncated correctly
self.assertEqual(chosen_tokens[0]["prompt_input_ids"], list(range(5, 15)))
self.assertEqual(chosen_tokens[0]["prompt_attention_mask"], [1] * 10)
self.assertEqual(chosen_tokens[0]["input_ids"], list(range(15, 25)))
self.assertEqual(chosen_tokens[0]["attention_mask"], [1] * 10)

# Check if rejected tokens are truncated correctly
self.assertEqual(rejected_tokens[0]["prompt_input_ids"], list(range(5, 15)))
self.assertEqual(rejected_tokens[0]["prompt_attention_mask"], [1] * 10)
self.assertEqual(rejected_tokens[0]["input_ids"], list(range(15, 25)))
self.assertEqual(rejected_tokens[0]["attention_mask"], [1] * 10)

def test_invalid_truncation_mode(self):
self.training_args.truncation_mode = "invalid_mode"
with self.assertRaises(ValueError):
_truncate_tokens([], [], [], self.training_args)
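
The truncation behavior exercised by these tests can be sketched in a few lines: the prompt is first capped at `max_prompt_length` according to `truncation_mode`, then the completion is trimmed so the combined length fits `max_length`. Function names here are illustrative, not TRL's exact implementation:

```python
def truncate_prompt(prompt_ids, max_prompt_length, truncation_mode):
    """Sketch of the two prompt-truncation modes: keep_start drops the end
    of the prompt, keep_end drops the beginning."""
    if truncation_mode == "keep_start":
        return prompt_ids[:max_prompt_length]
    elif truncation_mode == "keep_end":
        return prompt_ids[-max_prompt_length:]
    raise ValueError(f"Unknown truncation mode: {truncation_mode}")


def truncate_completion(prompt_ids, completion_ids, max_length):
    # After prompt truncation, cap the combined sequence at max_length by
    # trimming the completion from the right.
    return completion_ids[: max_length - len(prompt_ids)]
```

With `max_length=20`, `max_prompt_length=10`, a 15-token prompt, and a 12-token completion, this yields a 10-token prompt and a 10-token completion — the lengths the assertions above expect.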


class DPOTrainerTester(unittest.TestCase):
def setUp(self):
self.model_id = "trl-internal-testing/dummy-GPT2-correct-vocab"
@@ -461,9 +301,9 @@ def test_dpo_trainer_padding_token_is_none(self):

with self.assertRaisesRegex(
ValueError,
-            expected_regex=r"Padding is enabled, but the tokenizer is not configured with a padding token."
-            r" Explicitly set `tokenizer.pad_token` \(e.g. `tokenizer.pad_token = tokenizer.eos_token`\)"
-            r" before calling the trainer.",
+            expected_regex=r"Can't find `pad_token_id` in the `processing_class`. "
+            r"Explicitly set `tokenizer.pad_token` \(e.g. `tokenizer.pad_token = tokenizer.eos_token`\) "
+            r"before instantiating the trainer.",
):
trainer = DPOTrainer(
model=self.model,
@@ -498,9 +338,9 @@ def test_dpo_trainer_w_dataset_num_proc(self):

with self.assertRaisesRegex(
ValueError,
-            expected_regex=r"Padding is enabled, but the tokenizer is not configured with a padding token."
-            r" Explicitly set `tokenizer.pad_token` \(e.g. `tokenizer.pad_token = tokenizer.eos_token`\)"
-            r" before calling the trainer.",
+            expected_regex=r"Can't find `pad_token_id` in the `processing_class`. "
+            r"Explicitly set `tokenizer.pad_token` \(e.g. `tokenizer.pad_token = tokenizer.eos_token`\) "
+            r"before instantiating the trainer.",
):
trainer = DPOTrainer(
model=self.model,
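
The updated error message points at a fail-fast check: padding needs a pad token, so the trainer refuses to start without one. A sketch of such a check (illustrative; the function name is hypothetical, but the message matches the one the tests assert on):

```python
def check_pad_token(processing_class):
    """Fail fast with an actionable message when the tokenizer/processor
    has no pad token configured."""
    if getattr(processing_class, "pad_token_id", None) is None:
        raise ValueError(
            "Can't find `pad_token_id` in the `processing_class`. "
            "Explicitly set `tokenizer.pad_token` (e.g. `tokenizer.pad_token = tokenizer.eos_token`) "
            "before instantiating the trainer."
        )
```

With a real tokenizer, the fix the message suggests is one line before building the trainer: `tokenizer.pad_token = tokenizer.eos_token`.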
@@ -1139,7 +979,7 @@ def test_vdpo_trainer(self, model_id):
output_dir=tmp_dir,
per_device_train_batch_size=2,
max_length=512,
-                max_prompt_length=128,
+                max_prompt_length=512,

Review comment (qgallouedec, Member Author): Otherwise it truncates the image tokens
remove_unused_columns=False,
report_to="none",
)