support GPU inference in torchscript model for v2.5 / v2.6 #188
Conversation
sru/version.py (outdated)
@@ -1 +1 @@
-__version__ = '2.5.1'
+__version__ = '2.6.0.dev2'
oh is this 2.x? for some reason I assumed it was 3.x. Though it does say 2.x in the title actually, now that I look ... :)
oh, there are two PRs :P
What pytest tests do we have for the code affected by these changes [edit: or at least an automated test script, such as bash, that I could run by hand]? I understand we have to run such tests by hand, since CCIE doesn't do that for us.
@hpasapp The existing unit tests should cover most of the code paths for non-CUDA usage. I also modified the C++ test to ensure the changes work for C++ and for CPU inference.
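Since these tests have to be run by hand, a quick sanity check is to compare an eager module against its scripted version on CPU. This is only a sketch of that kind of check; `check_scripted_matches_eager` is a hypothetical helper (not part of sru), and `nn.Linear` merely stands in for the real module under test.

```python
import torch
from torch import nn


def check_scripted_matches_eager(module: nn.Module, x: torch.Tensor) -> bool:
    # Hypothetical helper: script the module and verify the scripted
    # forward pass matches the eager forward pass on the same input.
    scripted = torch.jit.script(module)
    with torch.no_grad():
        return torch.allclose(module(x), scripted(x))


# Stand-in module; a real run would use the SRU layer being tested.
ok = check_scripted_matches_eager(nn.Linear(4, 2), torch.randn(5, 4))
```

A check like this can be dropped into the existing pytest suite so the CPU TorchScript path at least is exercised automatically.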
Also, @cdfox-asapp feels we don't have to merge this PR into master, since we need to merge the 3.0.0 dev branch into master anyway. The changes are identical to the PR for v3.0.0. I can make a release on this branch for Chris's project.
What is the advantage of not merging this branch into master? (Not merging it means an additional branch to maintain and reason about.)
🎉 thanks @hpasapp @cdfox-asapp !
This PR works for the master branch and the v2.5 and v2.6 releases.
A non-trivial PR to support GPU inference in TorchScript models.
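For context, the usage this PR enables can be sketched as follows: a scripted module is moved to GPU and run there when CUDA is available, falling back to CPU otherwise. The `Doubler` module here is a hypothetical stand-in for an SRU-based model, not code from this PR.

```python
import torch


class Doubler(torch.nn.Module):
    # Trivial stand-in for a real SRU-based model.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2


# Script the module, then move the scripted module to the target device.
scripted = torch.jit.script(Doubler())
device = "cuda" if torch.cuda.is_available() else "cpu"
scripted = scripted.to(device)

# Inference runs on whichever device the scripted module lives on.
x = torch.ones(3, device=device)
y = scripted(x)
```

The point of the PR is that the CUDA branch of this flow works for the scripted model, not just the eager one.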