
Fix Trainer and Args to mention AdamW, not Adam. #9685

Merged · 5 commits · Jan 20, 2021

Conversation

gchhablani (Contributor) commented on Jan 19, 2021

This PR fixes the docs and labels in the Trainer and TrainingArguments classes for AdamW; the current version mentions Adam in several places.

Fixes #9628

The Trainer class in trainer.py uses AdamW as the default optimizer, but the TrainingArguments class refers to it as Adam in the documentation, which is confusing.

I have also changed the variable names to adamw_beta1, adamw_beta2, and adamw_epsilon in trainer.py.
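For context, a rough sketch (not the verbatim transformers source) of how these TrainingArguments fields reach the optimizer created by Trainer; the existing names adam_beta1, adam_beta2, and adam_epsilon already configure an AdamW instance:

```python
# Simplified sketch of Trainer's optimizer creation, not the exact source:
# despite the "adam_*" argument names, the optimizer instantiated is AdamW.
from transformers import TrainingArguments
from transformers.optimization import AdamW

args = TrainingArguments(output_dir="out")  # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 by default

def create_optimizer(model, args):
    return AdamW(
        model.parameters(),
        lr=args.learning_rate,
        betas=(args.adam_beta1, args.adam_beta2),
        eps=args.adam_epsilon,
        weight_decay=args.weight_decay,
    )
```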

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

@LysandreJik

@LysandreJik (Member) commented:

Thanks for opening the PR. As it stands, this would break every existing script that relies on the parameters as currently defined, so renaming the parameters is probably not the way to go.

@sgugger, your insight on this would be very welcome.

sgugger (Collaborator) commented on Jan 20, 2021

There is indeed no reason to change all the parameter names, and it would be too heavy a breaking change. AdamW is not a different optimizer from Adam; it's just Adam with a different way (some might say the right way) of doing weight decay. I don't think we need to do more than add a mention at the beginning of the docstring saying that all mentions of Adam are actually about AdamW, with a link to the paper.
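To make the distinction concrete, here is a minimal sketch of the two update rules (textbook formulation with bias correction omitted; this is not the transformers implementation):

```python
# Single-parameter update step for each variant, to show where the
# weight decay term enters.

def adam_l2_step(p, grad, m, v, lr, beta1, beta2, eps, wd):
    # Classic Adam with L2 regularization: the decay is folded into the
    # gradient, so it gets rescaled by the adaptive 1/sqrt(v) term.
    grad = grad + wd * p
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    p = p - lr * m / (v ** 0.5 + eps)
    return p, m, v

def adamw_step(p, grad, m, v, lr, beta1, beta2, eps, wd):
    # AdamW (Loshchilov & Hutter): decoupled weight decay applied directly
    # to the parameter, outside the adaptive update.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    p = p - lr * m / (v ** 0.5 + eps) - lr * wd * p
    return p, m, v
```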

gchhablani (Contributor, Author) commented on Jan 20, 2021

Hi @LysandreJik @sgugger. Thanks for your comments, I'll be changing the variables back.

I apologize if this is too silly a question, but how can I build the docs and view them in a browser after making changes?

sgugger (Collaborator) commented on Jan 20, 2021

You should check this page for all the information on generating/writing the documentation :-)

gchhablani (Contributor, Author) commented on Jan 20, 2021

I have updated it and also added a note that, by default, weight decay is applied to all layers except bias and LayerNorm weights during training.
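For reference, roughly how Trainer builds its optimizer parameter groups so that bias and LayerNorm weights are excluded from weight decay (a simplified sketch, assuming model and args are already in scope; not the verbatim source):

```python
from transformers.optimization import AdamW

# Parameters whose names match the no-decay list get weight_decay=0.0;
# everything else uses the configured weight decay.
no_decay = ["bias", "LayerNorm.weight"]

optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters()
                   if not any(nd in n for nd in no_decay)],
        "weight_decay": args.weight_decay,
    },
    {
        "params": [p for n, p in model.named_parameters()
                   if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]

optimizer = AdamW(
    optimizer_grouped_parameters,
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    eps=args.adam_epsilon,
)
```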

sgugger (Collaborator) left a review comment

This looks good to me this way, thanks!

sgugger merged commit 538245b into huggingface:master on Jan 20, 2021
gchhablani (Contributor, Author) commented:

@sgugger My code passed only 3 out of 12 checks; I was unable to run CircleCI properly. Can you point out why this happened?

sgugger (Collaborator) commented on Jan 20, 2021

We are trying to get support from them to understand why, but the checks on your PR were all cancelled. I couldn't retrigger them from our interface either.

gchhablani deleted the fix-trainer-adam-to-adamw branch on January 31, 2021 at 08:07
gchhablani (Contributor, Author) commented on Jan 31, 2021

Hi @sgugger

A possible reason could be that I followed transformers on CircleCI. Maybe it runs the checks against my fork of transformers and expects to find some "resources" that aren't there.

I'm not sure how CircleCI works, so this is just a wild guess.
