Hi,
I think this is a regression bug that was introduced after merging the CUDA semantics pull request #402.
Here's an error message that comes from the test_model_integration.py integration test:

>       tag_tensor = torch.LongTensor(sentence_tags)
E       TypeError: expected torch.LongTensor (got torch.cuda.LongTensor)
I got a similar error message when I tried to test my recent ELMo transformer embeddings on GPU.
To replicate the error, just run the integration tests (pytest --runintegration) on a machine with GPU/CUDA support :)
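For context, a minimal sketch of this failure mode: the error occurs when GPU-resident data (a torch.cuda.LongTensor) meets code that hard-codes the CPU tensor type. The variable names below mirror the traceback, but the tag values and the fix shown are illustrative assumptions, not the actual patch from PR #448.

```python
import torch

# Pick the device the model would be running on; falls back to CPU when no
# GPU is present, so the sketch also runs on CPU-only machines.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

sentence_tags = [1, 4, 2, 0]  # hypothetical tag indices, not from the real test

# Hard-coding torch.LongTensor(...) always builds a CPU tensor; on a CUDA
# machine, mixing it with cuda tensors downstream triggers the reported
# TypeError. Constructing the tensor on the model's device avoids the mismatch:
tag_tensor = torch.tensor(sentence_tags, dtype=torch.long, device=device)
```

The general pattern (create tensors with an explicit device rather than a fixed CPU tensor class) is the usual way to keep CPU and GPU code paths identical.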
GH-407: fix training on cuda without CRF (commits abc05ef, 5d55ee3)
Merge pull request #448 from zalandoresearch/GH-407-cuda-no-crf (commit 980d2a8)
Thanks, it is working now :)