[2/x] Support non-OAI providers as LLM judges for GenAI metrics. #13717
Conversation
```diff
@@ -17,6 +17,7 @@ class AnthropicAdapter(ProviderAdapter):
     @classmethod
     def chat_to_model(cls, payload, config):
         key_mapping = {"stop": "stop_sequences"}
+        payload["model"] = config.model.name
```
note: the "model" key was set in the async handler before (see diff at L261-). Moving it inside the adapter because we want request transformation logic to be fully contained within the adapter.
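To illustrate the comment above, here is a hedged sketch of the adapter-side transformation shown in the diff: the unified `stop` key is renamed to Anthropic's `stop_sequences`, and the model name is injected from the endpoint config inside the adapter rather than in the async handler. `SimpleNamespace` stands in for MLflow's real config objects, and the class is a simplified stand-in, not the actual implementation.

```python
from types import SimpleNamespace


class AnthropicAdapter:
    @classmethod
    def chat_to_model(cls, payload, config):
        # Anthropic expects "stop_sequences" where the unified chat API uses "stop".
        key_mapping = {"stop": "stop_sequences"}
        payload = {key_mapping.get(k, k): v for k, v in payload.items()}
        # The "model" key is now set here, inside the adapter, not in the handler.
        payload["model"] = config.model.name
        return payload


# Minimal stand-in for the endpoint config object.
config = SimpleNamespace(model=SimpleNamespace(name="claude-3-5-sonnet"))
out = AnthropicAdapter.chat_to_model({"messages": [], "stop": ["END"]}, config)
```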
```python
self.base_url = "https://api.anthropic.com/v1/"
```

```python
@property
def base_url(self) -> str:
```
note: accessors for convenience and simplicity on the consumer side. Eventually these should be added to the base provider class and implemented in all providers, but keeping it optional in this PR to reduce the size of the changes.
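The accessor pattern described above might look like the following sketch (class and attribute names are illustrative, not the actual MLflow code): the URL is stored once at construction time and exposed as a read-only property, so consumers don't reach into provider internals.

```python
class AnthropicProvider:
    def __init__(self):
        # Stored privately; consumers read it through the property below.
        self._base_url = "https://api.anthropic.com/v1/"

    @property
    def base_url(self) -> str:
        # Read-only accessor: no setter is defined, so assignment raises.
        return self._base_url


provider = AnthropicProvider()
```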
```diff
@@ -54,6 +54,36 @@ def model_to_completions(cls, resp, config):
         ),
     )

+    @classmethod
+    def model_to_chat(cls, resp, config):
```
note: We only have completions endpoint support for Mistral now. However, their endpoint is chat format (OpenAI compatible).
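Since Mistral's endpoint is OpenAI compatible, a `model_to_chat` translation can be close to a pass-through. The sketch below assumes a plain-dict response in the OpenAI chat schema; the real MLflow adapter returns typed response objects instead, so treat this only as an outline of the mapping.

```python
class MistralAdapter:
    @classmethod
    def model_to_chat(cls, resp, config):
        # The response already follows the OpenAI chat schema, so the
        # translation mostly preserves fields and fills in defaults.
        return {
            "choices": [
                {
                    "index": choice.get("index", i),
                    "message": choice["message"],
                    "finish_reason": choice.get("finish_reason"),
                }
                for i, choice in enumerate(resp["choices"])
            ],
            "usage": resp.get("usage", {}),
        }


# Example OpenAI-compatible response body.
resp = {
    "choices": [
        {"message": {"role": "assistant", "content": "hi"}, "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 3, "completion_tokens": 1, "total_tokens": 4},
}
chat = MistralAdapter.model_to_chat(resp, config=None)
```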
```python
@property
def adapter(self):
    return TogetherAIAdapter
```
Is this an abstract property?
Nope, but can do in a follow-up. See #13717 (comment) for why I didn't add it in this PR.
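For reference, the reviewer's suggestion would look roughly like this sketch (class names are illustrative): declaring `adapter` as an abstract property on the base provider class forces every concrete provider to implement it, and instantiating a provider that doesn't raises `TypeError`.

```python
from abc import ABC, abstractmethod


class BaseProvider(ABC):
    # Abstract property: subclasses must override it before they can be
    # instantiated.
    @property
    @abstractmethod
    def adapter(self):
        ...


class TogetherAIAdapter:
    pass


class TogetherAIProvider(BaseProvider):
    @property
    def adapter(self):
        return TogetherAIAdapter
```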
LGTM, https://github.com/mlflow/mlflow/pull/13717/files#r1836413500 is not a blocker!
@B-Step62 any chance you can add documentation on how to set up the environment variables for this?
@pazevedo-hyland You can set the region using the
@B-Step62 yup, that fixes it! Thanks! Any chance you can also document which parameters can be passed for each of the providers?
Parameters are simply passed through to the provider's model endpoint within the request payload. Which parameters are accepted or rejected is not under our control, so we cannot add and maintain documentation for them; even different models from the same provider can have different requirements. The API reference provided by each provider should be a more reliable source for this information 🙂
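The pass-through behavior described above can be sketched with a hypothetical helper (`build_payload` is not a real MLflow function): extra parameters are merged into the request payload verbatim and forwarded to the provider, and any validation happens on the provider's side.

```python
def build_payload(messages, **extra_params):
    """Merge caller-supplied parameters into the request payload as-is."""
    payload = {"messages": messages}
    # Forwarded verbatim; MLflow does not validate provider-specific keys.
    payload.update(extra_params)
    return payload


payload = build_payload(
    [{"role": "user", "content": "Grade this answer."}],
    temperature=0.2,
    max_tokens=256,  # accepted or rejected by the provider, not by MLflow
)
```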
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
MLflow LLM Evaluation supports the following types of judges: (1) OpenAI models, (2) Databricks Model Serving endpoints, and (3) MLflow Gateway endpoints. This PR adds support for a few more proprietary LLM providers, such as `anthropic`, `bedrock`, and `mistral`, in response to repeated user requests.

The main challenge is how to construct the request payload and parse the response for each provider. Fortunately, we already solved this problem once in the MLflow Gateway: its adapter implementations for the different LLM providers translate each vendor-dependent format into a unified chat format.

A few key notes:
- `adapter` property so we can get the adapter class conveniently.

How is this PR tested?
- Anthropic
- Bedrock
- Mistral
Does this PR require a documentation update?
Will update LLM evaluation doc in a follow-up PR before the release.
Release Notes
Is this a user-facing change?
Enhance MLflow GenAI metrics (LLM-as-a-judge) to support more LLM providers than OpenAI, such as Anthropic, Amazon Bedrock, Mistral, etc.
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- `area/artifacts`: Artifact stores and artifact logging
- `area/build`: Build and test infrastructure for MLflow
- `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- `area/docs`: MLflow documentation pages
- `area/examples`: Example code
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- `area/projects`: MLproject format, project running backends
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/server-infra`: MLflow Tracking server backend
- `area/tracking`: Tracking Service, tracking client APIs, autologging

Interface
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- `area/windows`: Windows support

Language
- `language/r`: R APIs and clients
- `language/java`: Java APIs and clients
- `language/new`: Proposals for new client languages

Integrations
- `integrations/azure`: Azure and Azure ML integrations
- `integrations/sagemaker`: SageMaker integrations
- `integrations/databricks`: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- `rn/none` - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- `rn/breaking-change` - The PR will be mentioned in the "Breaking Changes" section
- `rn/feature` - A new user-facing feature worth mentioning in the release notes
- `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
- `rn/documentation` - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?
- `Yes` should be selected for bug fixes, documentation updates, and other small changes.
- `No` should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
Bug fixes, doc updates and new features usually go into minor releases.
Bug fixes and doc updates usually go into patch releases.