Support for unsharded parameters in state_dict APIs #2023
Conversation
Added some questions around edge cases.
torchtune/training/_distributed.py

@@ -242,7 +245,9 @@ def gather_cpu_state_dict(
        if sharded_param.is_cpu:
            # Move back to device if offloaded to CPU
            sharded_param = sharded_param.to(device)
-       if isinstance(sharded_param._local_tensor, NF4Tensor):
+       if hasattr(sharded_param, "_local_tensor") and isinstance(
If the tensor isn't sharded but is still an NF4Tensor, we still need to upcast the dtype as is done on line 271.
So maybe add another elif for the case where the tensor is unsharded but still an NF4Tensor, which would just need the upcast from line 271?
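A rough sketch of that elif, assuming the upcast on line 271 amounts to converting the NF4 weight to a regular dtype (bf16 here is only a stand-in for whatever the existing code does):

    if hasattr(sharded_param, "_local_tensor") and isinstance(
        sharded_param._local_tensor, NF4Tensor
    ):
        # existing sharded-NF4 gather path
        ...
    elif isinstance(sharded_param, NF4Tensor):
        # unsharded but still quantized: only the line-271 upcast is needed
        full_param = sharded_param.to(torch.bfloat16)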
I think this would be much cleaner if at the top of the loop we do:
if hasattr(...):
    # get full tensor (NF4 or .full_tensor)
if full_param is NF4:
    # upcast
if is_rank_zero:
    # the rest the same
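Fleshed out, that suggestion might look roughly like the sketch below. This is not the actual torchtune implementation: sharded_sd, cpu_state_dict, is_rank_zero, and the gather_nf4 helper are assumed names, and the bf16 upcast stands in for whatever the existing upcast on line 271 does; DTensor's full_tensor() and torchao's NF4Tensor are the only real APIs relied on.

    for param_name, sharded_param in sharded_sd.items():
        if sharded_param.is_cpu:
            # Move back to device if offloaded to CPU
            sharded_param = sharded_param.to(device)

        if hasattr(sharded_param, "_local_tensor"):
            # DTensor case: reconstruct the full tensor, taking the
            # NF4-aware gather path when the local shard is quantized
            if isinstance(sharded_param._local_tensor, NF4Tensor):
                full_param = gather_nf4(sharded_param)  # hypothetical helper
            else:
                full_param = sharded_param.full_tensor()
        else:
            # Unsharded case: the parameter is already the full tensor
            full_param = sharded_param

        if isinstance(full_param, NF4Tensor):
            # Upcast quantized weights before copying to CPU
            full_param = full_param.to(torch.bfloat16)

        if is_rank_zero:
            cpu_state_dict[param_name] = full_param.cpu()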
This code still seems to assume that the unsharded param is on rank 0, which isn't guaranteed.
Looks good to me
Context
What is the purpose of this PR?
Credit goes to @ifed-ucsd for the original commit; I've adapted it a bit to our latest APIs. Original summary:
"This diff adds functionality to shard the model separately from the vocabulary pruning, which allows us to run training keeping the model in bf16 and the vocab pruning in fp32"
Changelog
What are the changes made in this PR?
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these, just ask and we will happily help. We also have a contributing page for some guidance on contributing.
* pre-commit install
* pytest tests
* pytest tests -m integration_test
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example and a tutorial example.