About "Init tensors using type_as" #2585
Comments
Hi! Thanks for your contribution, great first issue!
It will work since … Just curious, does …
In the first part of the code, if labels is a tensor, it should already be on the right device, since it is passed into the training_step.
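For illustration, a rough sketch of what that means; `compute_loss` and the variable names here are made up, not from the original code:

```python
def training_step(self, batch, batch_idx):
    # Lightning moves every tensor in `batch` onto this replica's device
    # before calling this hook, so no manual .to() is needed for them.
    inputs, labels = batch                    # tensors from the DataLoader
    assert labels.device == self.device       # already on the correct GPU
    return self.compute_loss(inputs, labels)  # hypothetical helper
```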
Thanks for the prompt replies and suggestions. This is a toy experiment to try Lightning, and it is indeed a nice framework. First, the bert of … The code could run successfully after adding the … So in summary, …
Yeah, thanks very much. |
Yes, that's correct. If it is the only way, then you need to do that.
Yes! Thanks for your suggestions. |
@awaelchli
@nsarang Didn't know. Are you interested in making these doc updates?
I also think there needs to be a recommended way to do this in PL, because type_as really does not work well here.
As described in the doc: https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#init-tensors-using-type-as
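Paraphrasing the documented pattern rather than quoting it: new tensors are created with type_as on a tensor that is already on the right device, instead of hard-coding .cuda() or .to():

```python
def forward(self, x):
    # instead of z = torch.zeros(2, 3).cuda(), create the new tensor with
    # the same dtype/device as the input x, so it works unchanged on CPU,
    # a single GPU, or multiple GPUs
    z = torch.zeros(2, 3).type_as(x)
    ...
```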
However, it is not convenient when there is no proper x to call type_as on, for example:
The problems are marked in the code below.
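A minimal sketch of the kind of code in question; the BERT model, tokenizer, and field names are assumptions, not the original snippet:

```python
import torch
import pytorch_lightning as pl
from transformers import BertModel, BertTokenizer


class ToyBertClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, 2)

    def training_step(self, batch, batch_idx):
        texts, labels = batch  # raw strings and labels (assumed to be a plain list)

        # this works: self.device points at this replica's GPU
        labels = torch.LongTensor(labels).to(self.device)

        # tokenization happens inside the step, so there is no existing
        # tensor x that could be used with type_as
        ids = self.tokenizer(list(texts), padding=True, return_tensors="pt")

        # problem: on multi-GPU this ends up on the wrong device
        inputs = ids["input_ids"].to(self.device)
        # workaround that runs, but needs an explicit .to() call:
        # inputs = ids["input_ids"].to(self.bert.device)

        hidden = self.bert(inputs)[0][:, 0]  # [CLS] representation
        logits = self.classifier(hidden)
        return torch.nn.functional.cross_entropy(logits, labels)
```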
My confusion is that when I run the code on multiple GPUs:

(1) Why is `labels = torch.LongTensor(labels).to(self.device)` right while `inputs = ids['input_ids'].to(self.device)` is wrong? In other words, in the multi-GPU run, `self.device` is not consistent with `self.bert.device`.

(2) For the above case, is there any other way to send `ids['input_ids']` to the right device in the multi-GPU run, rather than `inputs = ids['input_ids'].to(self.bert.device)`, since the Lightning doc says to "Delete .cuda() or .to() calls"?
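Not from the thread itself, but one way to satisfy the "no .cuda()/.to()" rule, following the earlier comment that tensors passed into training_step are already on the right device, is to tokenize in a collate_fn so the batch arrives as tensors that Lightning moves automatically. A rough sketch with made-up names (`tokenizer` and `dataset` are assumed to exist):

```python
import torch
from torch.utils.data import DataLoader

def collate(batch):
    texts, labels = zip(*batch)
    ids = tokenizer(list(texts), padding=True, return_tensors="pt")
    # everything returned here is already a tensor, so Lightning can move
    # the whole batch to the right device before training_step is called
    return ids["input_ids"], torch.LongTensor(labels)

loader = DataLoader(dataset, batch_size=32, collate_fn=collate)
```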