Add set_destination API #14249
Conversation
Signed-off-by: B-Step62 <[email protected]>
Documentation preview for 15a442a will be available when this CircleCI job completes.
mlflow/tracing/processor/local.py (Outdated)

_logger = logging.getLogger(__name__)

class LocalSpanProcessor(SimpleSpanProcessor):
note: This class has large logic overlap with InferenceTableSpanProcessor. I will refactor it in a follow-up to minimize the blast radius of shipping this change between the RC and the stable release.
mlflow/tracing/provider.py (Outdated)

The exporter is responsible for implementing the logic to send generated
traces to the desired destination, such as a trace collector endpoint.
"""
# The destination needs to be persisted because the tracer setup can be re-initialized sometimes
Can we be more specific about when it's reinitialized?
mlflow/tracing/provider.py (Outdated)

@@ -116,6 +122,28 @@ def detach_span_from_context(token: contextvars.Token):
    context_api.detach(token)


@experimental
def set_destination(destination: SpanExporter):
What's the UX if a user wants to set a particular experiment as the destination for their traces? They should just be able to pass the experiment info in, right?
I'm wondering if making users reason about constructing SpanExporter instances is the right API here.
It would be nice if developers defined a "destination" object or URI that users passed in here. Developers could then define the appropriate exporter to be used for sending the traces to that destination. This would also resolve a small terminology gripe I have: an exporter is different from a destination. An exporter is a tool used to send traces to a destination.
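A minimal sketch of this terminology distinction (all names here are illustrative, not the PR's actual API): the destination describes *where* traces go, while the exporter is the tool developers wire up to send traces there.

```python
from dataclasses import dataclass


@dataclass
class TraceDestination:
    """Describes *where* traces should go (hypothetical base class)."""


@dataclass
class ExperimentDestination(TraceDestination):
    # Users only pass experiment info; no SpanExporter construction needed.
    experiment_id: str
    tracking_uri: str = "http://localhost:5000"


def resolve_exporter(destination: TraceDestination) -> str:
    # Developers, not users, map each destination type to the exporter
    # (the tool) appropriate for sending traces to that destination.
    if isinstance(destination, ExperimentDestination):
        return f"MlflowSpanExporter -> experiment {destination.experiment_id}"
    raise TypeError(f"No exporter registered for {type(destination).__name__}")


print(resolve_exporter(ExperimentDestination(experiment_id="123")))
# → MlflowSpanExporter -> experiment 123
```

This keeps the user-facing surface declarative while leaving exporter selection to the library.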
As discussed offline, I've updated the logic as follows:
- Added a TraceDestination base class (the "destination" object) and an MlflowExperiment destination that takes an experiment ID and tracking URI.
- Updated the set_destination API to take the destination object.
- The span processor/exporter is hardcoded right now, but we will update it to a pluggable registry after the 2.20 release. (Not doing it here to avoid regression.)
mlflow/tracing/provider.py
Outdated
from mlflow.tracing.processor.local import LocalSpanProcessor | ||
|
||
processor = LocalSpanProcessor(_MLFLOW_TRACE_CUSTOM_EXPORTER) |
Following from https://github.com/mlflow/mlflow/pull/14249/files#r1916019408, what happens if we want to support setting a particular experiment as a destination? Is LocalSpanProcessor still appropriate for that?
With the change mentioned above, when a user passes an MlflowExperiment destination, MLflow selects the MlflowSpanProcessor and the corresponding exporter.
if isinstance(_MLFLOW_TRACE_USER_DESTINATION, MlflowExperiment):
    from mlflow import MlflowClient
    from mlflow.tracing.export.mlflow import MlflowSpanExporter
    from mlflow.tracing.processor.mlflow import MlflowSpanProcessor

    client = MlflowClient(tracking_uri=_MLFLOW_TRACE_USER_DESTINATION.tracking_uri)
    exporter = MlflowSpanExporter(client)
    processor = MlflowSpanProcessor(
        exporter, client, _MLFLOW_TRACE_USER_DESTINATION.experiment_id
    )
else:
    from mlflow.tracing.export.databricks_agent import DatabricksAgentSpanExporter
    from mlflow.tracing.processor.databricks_agent import DatabricksAgentSpanProcessor

    exporter = DatabricksAgentSpanExporter(_MLFLOW_TRACE_USER_DESTINATION)
    processor = DatabricksAgentSpanProcessor(exporter)
It would be nice if the Destination implementation defined this logic within the class, e.g. via get_exporter() and get_processor() developer-facing (not user-facing) methods.
Otherwise, it's hard for other developers to add new destinations without modifying provider.py. Eventually, it would be nice to enable developers to define destinations in other packages (e.g. via a plugin system); having the destination define the exporter & processor itself would be a good step towards this goal.
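A sketch of that suggestion, assuming hypothetical get_exporter()/get_processor() hooks (none of these class names are from the PR): each destination supplies its own exporter and processor, so provider.py needs no per-destination branching.

```python
from abc import ABC, abstractmethod


class SimpleProcessor:
    """Stand-in for a span processor that delegates to an exporter."""

    def __init__(self, exporter):
        self.exporter = exporter


class ConsoleExporter:
    """Stand-in exporter that just prints spans."""

    def export(self, spans):
        print(spans)


class TraceDestination(ABC):
    """Hypothetical base: each destination defines its own exporter/processor."""

    @abstractmethod
    def get_exporter(self):
        ...

    def get_processor(self):
        # Default: wrap this destination's exporter in a generic processor.
        return SimpleProcessor(self.get_exporter())


class ConsoleDestination(TraceDestination):
    def get_exporter(self):
        return ConsoleExporter()


# provider.py would then just call the hooks, with no isinstance() dispatch,
# and destinations defined in other packages would work unchanged:
processor = ConsoleDestination().get_processor()
print(type(processor.exporter).__name__)  # → ConsoleExporter
```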
Absolutely! @dbczumar, can I do that in a follow-up? This part will be entirely replaced by the new pluggable registry implementation, and I can make sure the destination has exporter/processor control there. The intention in this PR is to keep the blast radius minimal because we are bypassing the RC.
LGTM!
Signed-off-by: B-Step62 <[email protected]> Signed-off-by: k99kurella <[email protected]>
What changes are proposed in this pull request?
Add a new set_destination experimental API to support setting the external span exporter provided by the databricks-agents package.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging
Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support
Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages
Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations
How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes
Should this PR be included in the next patch release?
- Yes should be selected for bug fixes, documentation updates, and other small changes.
- No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.
What is a minor/patch release?
- Bug fixes, doc updates and new features usually go into minor releases.
- Bug fixes and doc updates usually go into patch releases.