[core] remove wandb dependency #92
Conversation
The documentation is not available anymore as the PR was closed or merged.
trl/trainer/ppo_trainer.py
Outdated
    wandb_logs.update(stats)
    wandb_logs["env/reward_mean"] = torch.mean(rewards).cpu().numpy()
    wandb_logs["env/reward_std"] = torch.std(rewards).cpu().numpy()
    wandb_logs["env/reward_dist"] = rewards.cpu().numpy()
-   wandb.log(wandb_logs)
+   self.accelerator.log(wandb_logs)
Since the user can select other frameworks (e.g. tensorboard), we should also log when we are not using wandb. Maybe that whole block can be made wandb-agnostic, what do you think?
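A wandb-agnostic version of that block could look roughly like the sketch below. `build_env_logs` is a hypothetical helper name, and plain-Python statistics stand in for the torch tensor ops; the resulting dict would then be handed to `self.accelerator.log(...)`, which dispatches to whichever tracker was configured:

```python
from statistics import mean, pstdev

def build_env_logs(rewards, stats):
    # Hypothetical helper: collect everything into a plain dict so any
    # tracker selected via log_with (wandb, tensorboard, ...) can consume it.
    logs = dict(stats)  # training statistics from the PPO step
    logs["env/reward_mean"] = mean(rewards)
    logs["env/reward_std"] = pstdev(rewards)  # population std deviation
    logs["env/reward_dist"] = list(rewards)
    return logs

logs = build_env_logs([1.0, 2.0, 3.0], {"ppo/loss": 0.5})
# logs would then be passed to self.accelerator.log(logs)
```

The key point is that nothing in the dict-building step touches wandb directly, so the choice of logging backend stays entirely in the accelerator configuration.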
Sounds good, based on your suggestion I proposed dc8cda2 , here is the corresponding log: https://wandb.ai/distill-bloom/trl/runs/baw84iyv?workspace=user-
Looks good to me! Have you tried some other logging framework?
Thanks! I will give it a try with tensorboard.
I can confirm everything works fine (single & multi-GPU with wandb and with tensorboard).
Just one minor comment and then we can merge I think :)
examples/scripts/ppo-sentiment.py
Outdated
    config = PPOConfig(
        model_name="lvwerra/gpt2-imdb",
        learning_rate=1.41e-5,
        log_with="wandb",
If we remove wandb as a main dependency, I think we should not use it as the default; otherwise the PPO trainer will fail out of the box. What do you think?
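One way to make the example fail-safe is to default the logger to `None` and let users opt in explicitly. The `PPOConfig` below is a simplified stand-in for illustration, not trl's actual class signature:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PPOConfig:  # simplified stand-in for trl's actual config class
    model_name: str = "lvwerra/gpt2-imdb"
    learning_rate: float = 1.41e-5
    log_with: Optional[str] = None  # no tracker required out of the box

config = PPOConfig()                        # works without wandb installed
wandb_config = PPOConfig(log_with="wandb")  # opting in to wandb explicitly
```

With `None` as the default, the example script only imports a tracker when the user asks for one, so removing wandb from the hard dependencies cannot break the out-of-the-box path.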
yes makes sense!
Should be now addressed in d3f9231 !
What does this PR do?
This PR removes the wandb dependency for trl.
wandb run of this branch: https://wandb.ai/distill-bloom/trl/runs/360f7cdc?workspace=user-
cc @lvwerra