
feat: TorchServe support #250

Merged
merged 4 commits into main from torchserve
Nov 15, 2022
Conversation

njhill
Copy link
Member

@njhill njhill commented Sep 22, 2022

Motivation

The Triton runtime can be used with model-mesh to serve PyTorch TorchScript models, but it does not support arbitrary PyTorch models, i.e. eager mode. KServe "classic" has integration with TorchServe, but it would be good to have integration with model-mesh too so that these kinds of models can be used in distributed multi-model serving contexts.

Modifications

The bulk of the required changes are to the adapter image, covered by PR kserve/modelmesh-runtime-adapter#34.

This PR contains the minimal controller changes needed to enable the support:

  • TorchServe ServingRuntime spec
  • Add "torchserve" to the list of supported built-in runtime types
  • Add "ID extraction" entry for TorchServe's gRPC Predictions RPC so that model-mesh will automatically extract the model name from corresponding request messages

Note the supported model format is advertised as "pytorch-mar" to distinguish it from the existing "pytorch" format, which refers to raw TorchScript .pt files as supported by Triton.
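For context, a ServingRuntime of this general shape is what the PR adds; the sketch below is illustrative only (image tag, ports, and container args are assumptions, not the exact spec in this PR):

```yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: torchserve-0.x            # illustrative name
spec:
  supportedModelFormats:
    - name: pytorch-mar           # distinct from Triton's "pytorch" (TorchScript) format
      version: "1"
      autoSelect: true
  multiModel: true                # required for model-mesh multi-model serving
  grpcDataEndpoint: port:7070     # illustrative port for TorchServe's gRPC inference API
  containers:
    - name: torchserve
      image: pytorch/torchserve:latest   # illustrative image tag
      args: ["torchserve", "--start", "--foreground"]
```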

Result

TorchServe can be used seamlessly with ModelMesh Serving to serve PyTorch models, including eager mode.

Resolves #63
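As a usage sketch, deploying a MAR model through ModelMesh might then look like the following; the name, storage key, and object path here are hypothetical:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: mnist-mar                          # hypothetical name
  annotations:
    serving.kserve.io/deploymentMode: ModelMesh
spec:
  predictor:
    model:
      modelFormat:
        name: pytorch-mar                  # selects the TorchServe runtime
      storage:
        key: localMinIO                    # hypothetical storage secret key
        path: torchserve/mnist.mar         # hypothetical object path
```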


Signed-off-by: Nick Hill <[email protected]>
@kserve-oss-bot
Copy link
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: njhill

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@njhill
Copy link
Member Author

njhill commented Sep 22, 2022

This PR should be merged after kserve/modelmesh-runtime-adapter#34.

@njhill
Copy link
Member Author

njhill commented Sep 23, 2022

One follow-on task here would be to add a torchserve-based test to the FVTs.

@njhill njhill force-pushed the torchserve branch 2 times, most recently from 65ba358 to a89172a Compare September 26, 2022 17:26
@kbumsik
Copy link

kbumsik commented Nov 6, 2022

Hi, thank you so much for your work on pushing TorchServe to ModelMesh. I am really interested in this feature :)

Do you have any plans to continue working on this? It seems there isn't much work left to do since kserve/modelmesh-runtime-adapter#34 is merged. If you need any help, I would like to do some work too.

@njhill
Copy link
Member Author

njhill commented Nov 8, 2022

Thanks @kbumsik. This should now be finished and fully functional; I have tested it manually. The reason for the delay in merging the PR is that I also wanted to include an extension to our functional verification tests to exercise the torchserve integration, but I am pretty busy with other things, so I'm not sure how soon I'll get a chance.

But I'm ok with getting this merged and labeling the torchserve support as "beta" in the meantime. I'll aim to get that done in the next couple of days.

@njhill njhill marked this pull request as ready for review November 8, 2022 00:10
@kbumsik
Copy link

kbumsik commented Nov 8, 2022

Thank you for your quick and kind response. I will start testing on my own then 👍

@njhill
Copy link
Member Author

njhill commented Nov 12, 2022

I've opened a new issue to cover the FVT additions: #280

@njhill njhill requested a review from rafvasq November 12, 2022 02:15
Copy link
Member

@rafvasq rafvasq left a comment

Tested it locally and looks good @njhill!

@rafvasq
Copy link
Member

rafvasq commented Nov 15, 2022

/lgtm

@njhill njhill merged commit bd16e5b into main Nov 15, 2022
@njhill njhill deleted the torchserve branch November 15, 2022 17:22
@njhill
Copy link
Member Author

njhill commented Nov 15, 2022

FYI @kbumsik this has now been merged.

@rafvasq rafvasq mentioned this pull request Dec 22, 2022
kserve-oss-bot pushed a commit that referenced this pull request Feb 21, 2023
#### Motivation
Support for TorchServe was added in #250 and kserve/modelmesh-runtime-adapter#34. A test should be added for it as well.

#### Modifications
- Adds basic FVT for load/inference with a TorchServe MAR model using the native TorchServe gRPC API
- Disables OVMS runtime and tests to allow TorchServe to be tested due to resource constraints

#### Result
Closes #280 

Signed-off-by: Rafael Vasquez <[email protected]>
Successfully merging this pull request may close these issues.

Add TorchServe as a built-in Serving Runtime