Releases: mlflow/mlflow
MLflow 2.11.1
MLflow 2.11.1 is a patch release, containing fixes for some Databricks integrations and various other issues.
Bug fixes:
- [UI] Add git commit hash back to the run page UI (#11324, @daniellok-db)
- [Databricks Integration] Explicitly import vectorstores and embeddings in databricks_dependencies (#11334, @daniellok-db)
- [Databricks Integration] Modify DBR version parsing logic (#11328, @daniellok-db)
Small bug fixes and documentation updates:
#11336, #11335, @harupy; #11303, @B-Step62; #11319, @BenWilson2; #11306, @daniellok-db
MLflow 2.11.0
MLflow 2.11.0 includes several major features and improvements
With the MLflow 2.11.0 release, we're excited to bring a series of large and impactful features that span both GenAI and Deep Learning use cases.

- The MLflow Tracking UI has been overhauled to better support the review and comparison of training runs for Deep Learning workloads. From run grouping to large-scale metric plotting across the iterations of a DL model's training cycle, a large number of quality-of-life improvements enhance your Deep Learning MLOps workflow.
- Support for the popular PEFT library from HuggingFace is now available in the `mlflow.transformers` flavor. In addition to PEFT support, we've removed the restrictions on Pipeline types that can be logged to MLflow and added the ability to log a transformers pipeline without copying foundational model weights while developing and testing models. These enhancements make the transformers flavor more useful for cutting-edge GenAI models and new pipeline types, simplify prompt engineering and fine-tuning, and make iterative development faster and cheaper. Give the updated flavor a try today! (#11240, @B-Step62)
- We've added support to both PyTorch and TensorFlow for automatic model weights checkpointing (including resumption from a previous state) within the autologging implementations of both flavors. This highly requested feature lets you configure long-running Deep Learning training runs to automatically keep a safe copy of your best epoch, eliminating the risk that a failure late in training loses the state of the model optimization (a minimal sketch follows this list). (#11197, #10935, @WeichenXu123)
- We've added a new interface to Pyfunc for GenAI workloads. The new `ChatModel` interface allows you to interact with a deployed GenAI chat model as you would with any other provider. The simplified interface (which no longer requires conformance to a Pandas DataFrame input type) strives to unify the API interface experience. (#10820, @daniellok-db)
- We now support Keras 3. This large overhaul of the Keras library introduced fundamental changes to how Keras integrates with different DL frameworks, bringing with it a host of new functionality and interoperability. To learn more, see the Keras 3.0 Tutorial and start using the updated model flavor today! (#10830, @chenmoneygithub)
- Mistral AI has been added as a native provider for the MLflow Deployments Server. You can now create proxied connections to the Mistral AI services for completions and embeddings with their powerful GenAI models. (#11020, @thnguyendn)
- We've added compatibility support for the OpenAI 1.x SDK. Whether you're using an OpenAI LLM for model evaluation or calling OpenAI within a LangChain model, you can now use the 1.x family of the OpenAI SDK without having to point to deprecated legacy APIs. (#11123, @harupy)
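As a minimal sketch of the new checkpointing support, the snippet below enables checkpointing through PyTorch autologging. The `checkpoint_*` parameter names are assumptions based on the feature description above, so verify them against the `mlflow.pytorch.autolog` API reference before relying on them.

```python
import mlflow

# Hedged sketch: enable automatic checkpointing in PyTorch autologging.
# The checkpoint_* parameter names are assumptions drawn from the release
# description; check the mlflow.pytorch.autolog API reference.
mlflow.pytorch.autolog(
    checkpoint=True,                 # persist model weight checkpoints while training
    checkpoint_monitor="val_loss",   # metric used to pick the best epoch
    checkpoint_save_best_only=True,  # retain only the best checkpoint
)

# ... run your usual Lightning / PyTorch training under an active run:
# with mlflow.start_run():
#     trainer.fit(model, datamodule=dm)
```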
Features:
- [UI] Revamp the MLflow Tracking UI for Deep Learning workflows, offering a more intuitive and efficient user experience (#11233, @daniellok-db)
- [Data] Introduce the ability to log datasets without loading them into memory, optimizing resource usage and processing time (#11172, @chenmoneygithub)
- [Models] Introduce logging frequency controls for TensorFlow, aligning it with Keras for consistent performance monitoring (#11094, @chenmoneygithub)
- [Models] Add PySpark DataFrame support in `mlflow.pyfunc.predict`, enhancing data compatibility and analysis options for batch inference (#10939, @ernestwong-db)
- [Models] Introduce new CLI commands for updating model requirements, facilitating easier maintenance, validation and updating of models without having to re-log (#11061, @daniellok-db)
- [Models] Update Embedding API for sentence transformers to ensure compatibility with OpenAI format, broadening model application scopes (#11019, @lu-wang-dl)
- [Models] Improve input and signature support for text-generation models, optimizing for Chat and Completions tasks (#11027, @es94129)
- [Models] Enable chat and completions task outputs in the text-generation pipeline, expanding interactive capabilities (#10872, @es94129)
- [Tracking] Add node id to system metrics for enhanced logging in multi-node setups, improving diagnostics and monitoring (#11021, @chenmoneygithub)
- [Tracking] Implement `mlflow.config.enable_async_logging` for asynchronous logging, improving log handling and system performance (#11138, @chenmoneygithub)
- [Evaluate] Enhance model evaluation with endpoint URL support, streamlining performance assessments and integrations (#11262, @B-Step62)
- [Deployments] Implement chat & chat streaming support for Cohere, enhancing interactive model deployment capabilities (#10976, @gabrielfu)
- [Deployments] Enable Cohere streaming support, allowing real-time interaction functionalities for the MLflow Deployments server with the Cohere provider (#10856, @gabrielfu)
- [Docker / Scoring] Optimize Docker images for model serving, ensuring more efficient deployment and scalability (#10954, @B-Step62)
- [Scoring] Support completions (`prompt`) and embeddings (`input`) format inputs in the scoring server, increasing model interaction flexibility (see the example payloads after this list) (#10958, @es94129)
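To illustrate the new scoring-server input formats, here is a hedged example of posting a completions-style (`prompt`) and an embeddings-style (`input`) payload to a locally served model; the host, port, and payload contents are placeholders.

```python
import json
import requests

# Hedged example: invoke a locally served model via the scoring server's
# /invocations endpoint. The URL is a placeholder; the payload keys follow
# the completions and embeddings formats named in the release note.
url = "http://127.0.0.1:5000/invocations"

completions_payload = {"prompt": "What is MLflow?"}        # completions format
embeddings_payload = {"input": ["What is MLflow?", "Hi"]}  # embeddings format

resp = requests.post(
    url,
    data=json.dumps(completions_payload),
    headers={"Content-Type": "application/json"},
)
print(resp.json())
```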
Bug Fixes:
- [Model Registry] Correct the oversight of not utilizing the default credential file in model registry setups (#11261, @B-Step62)
- [Model Registry] Address the visibility issue of aliases in the model versions table within the registered model detail page (#11223, @smurching)
- [Models] Ensure `load_context()` is called when enforcing `ChatModel` outputs so that all required external references are included in the model object instance (#11150, @daniellok-db)
- [Models] Rectify the keras output dtype in signature mismatches, ensuring data consistency and accuracy (#11230, @chenmoneygithub)
- [Models] Resolve spark model loading failures, enhancing model reliability and accessibility (#11227, @WeichenXu123)
- [Models] Eliminate false warnings for missing signatures in Databricks, improving the user experience and model validation processes (#11181, @B-Step62)
- [Models] Implement a timeout for signature/requirement inference during Transformer model logging, optimizing the logging process and avoiding delays (#11037, @B-Step62)
- [Models] Address the missing dtype issue for transformer pipelines, ensuring data integrity and model accuracy (#10979, @B-Step62)
- [Models] Correct non-idempotent predictions due to in-place updates to model-config, stabilizing model outputs (#11014, @B-Step62)
- [Models] Fix an issue where specifying `torch.dtype` as a string was not being applied correctly to the underlying transformers model (#11297, #11295, @harupy)
- [Tracking] Fix a `mlflow.evaluate` `col_mapping` bug for non-LLM/custom metrics, ensuring accurate evaluation and metric calculation (#11156, @sunishsheth2009)
- [Tracking] Resolve the `TensorInfo` TypeError exception message issue, ensuring clarity and accuracy in error reporting for users (#10953, @leecs0503)
- [Tracking] Enhance `RestException` objects to be picklable, improving their usability in distributed computing scenarios where serialization is essential (#10936, @WeichenXu123)
- [Tracking] Address the handling of unrecognized response error codes, ensuring robust error processing and improved user feedback in edge cases (#10918, @chenmoneygithub)
- [Spark] Update the hardcoded `io.delta:delta-spark_2.12:3.0.0` dependency to the correct Scala version, aligning dependencies with project requirements (#11149, @WeichenXu123)
- [Server-infra] Adapt to newer versions of Python by avoiding `importlib.metadata.entry_points().get`, enhancing compatibility and stability (#10752, @raphaelauv)
- [Server-infra / Tracking] Introduce an environment variable to disable MLflow configuring logging on import, improving configurability and user control (#11137, @jmahlik)
- [Auth] Enhance auth validation for `mlflow.login()`, streamlining the authentication process and improving security (#11039, @chenmoneygithub)
Documentation Updates:
- [Docs] Introduce a comprehensive notebook demonstrating the use of ChatModel with Transformers and Pyfunc, providing users with practical insights and guidelines for leveraging these models (#11239, @daniellok-db)
- [Tracking / Docs] Stabilize the dataset logging APIs, removing the experimental status (#11229, @dbczumar)
- [Docs] Revise and update the documentation on authentication database configuration, offering clearer instructions and better support for setting up secure authentication mechanisms (#11176, @gabrielfu)
- [Docs] Publish a new guide and tutorial for MLflow data logging and `log_input`, enriching the documentation with actionable advice and examples for effective data handling (a short sketch follows this list) (#10956, @BenWilson2)
- [Docs] Upgrade the documentation visuals by replacing low-resolution and poorly dithered GIFs with high-quality HTML5 videos, significantly enhancing the learning experience (#11051, @BenWilson2)
- [Docs / Examples] Correct the compatibility matrix for OpenAI in MLflow Deployments Server documentation, providing users with accurate integration details and supporting smoother deployments (#11015, @BenWilson2)
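As a quick, hedged sketch of the dataset logging workflow covered by the new guide, the snippet below builds a dataset from a pandas DataFrame and records it with `log_input`; the source URL and dataset name are placeholders.

```python
import mlflow
import pandas as pd

# Hedged sketch: construct a dataset object and attach it to a run.
# The source URL and dataset name below are placeholders.
df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
dataset = mlflow.data.from_pandas(
    df, source="https://example.com/data.csv", name="toy-data"
)

with mlflow.start_run():
    mlflow.log_input(dataset, context="training")
```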
Small bug fixes and documentation updates:
#11284, #11096, #11285, #11245, #11254, #11252, #11250, #11249, #11234, #11248, #11242, #11244, #11236, #11208, #11220, #11222, #11221, #11219, #1...
MLflow 2.10.2
MLflow 2.10.1
MLflow 2.10.1 is a patch release, containing fixes for various bugs in the `transformers` and `langchain` flavors, the MLflow UI, and the S3 artifact store. More details can be found in the patch notes below.
Bug fixes:
- [UI] Fixed a bug that prevented datasets from showing up in the MLflow UI (#10992, @daniellok-db)
- [Artifact Store] Fixed directory bucket region name retrieval (#10967, @kriscon-db)
- Bug fixes for Transformers flavor
- [Models] Fix an issue with transformer pipelines not inheriting the torch dtype specified on the model, causing pipeline inference to consume more resources than expected. (#10979, @B-Step62)
- [Models] Fix non-idempotent prediction due to in-place update to model-config (#11014, @B-Step62)
- [Models] Fixed a bug affecting prompt templating with Text2TextGeneration pipelines. Previously, calling `predict()` on a pyfunc-loaded Text2TextGeneration pipeline would fail for `string` and `List[string]` inputs. (#10960, @B-Step62)
- Bug fixes for Langchain flavor
- Fixed errors that occur when logging inputs and outputs with different lengths (#10952, @serena-ruan)
Documentation updates:
- [Docs] Add indications of DL UI capabilities to the DL landing page (#10991, @BenWilson2)
- [Docs] Fix incorrect logo on LLMs landing page (#11017, @BenWilson2)
Small bug fixes and documentation updates:
MLflow 2.10.0
In MLflow 2.10, we're introducing a number of significant new features that prepare the way for enhanced Deep Learning support now and in the future, broaden support for GenAI applications, and add some quality-of-life improvements to the MLflow Deployments Server (formerly the AI Gateway).
New MLflow Website
We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.
Model Signature Supports Objects and Arrays (#9936, @serena-ruan)
Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.
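For illustration, here is a hedged sketch of inferring a signature that contains Object and Array elements from nested Python dictionaries and lists; the chat-style payload is purely an example.

```python
from mlflow.models import infer_signature

# Hedged sketch: nested dicts and lists in the sample input/output can be
# inferred as Object and Array schema elements; the payload is illustrative.
signature = infer_signature(
    model_input={"messages": [{"role": "user", "content": "What is MLflow?"}]},
    model_output={"choices": [{"message": {"role": "assistant", "content": "..."}}]},
)
print(signature)
```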
Langchain Autologging (#10801, @serena-ruan)
LangChain has autologging support now! When you invoke a chain, with autologging enabled, we will automatically log most chain implementations, recording and storing your configured LLM application for you. See the new Langchain documentation to learn more about how to use this feature.
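A minimal, hedged sketch of enabling LangChain autologging is shown below; the chain construction is left as a placeholder comment, since the exact artifacts logged depend on the chain implementation.

```python
import mlflow

# Hedged sketch: turn on LangChain autologging before invoking a chain, so
# the configured LLM application is recorded automatically when it runs.
mlflow.langchain.autolog()

# ... build and invoke your chain as usual, e.g.:
# chain = prompt | llm | output_parser
# chain.invoke({"question": "What is MLflow?"})
```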
Prompt Templating for Transformers Models (#10791, @daniellok-db)
The MLflow `transformers` flavor now supports prompt templates. You can now specify an application-specific set of instructions to submit to your GenAI pipeline in order to simplify, streamline, and integrate sets of system prompts to be supplied with each input request. Check out the updated guide to transformers to learn more and see examples!
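As a hedged sketch of prompt templating, the snippet below logs a text-generation pipeline with a template; the model name and template text are illustrative, and the assumption here is that the template carries a `{prompt}` placeholder for the user input.

```python
import mlflow
from transformers import pipeline

# Hedged sketch: attach a prompt template when logging a text-generation
# pipeline. The model and template below are illustrative only; the template
# is assumed to need a {prompt} placeholder for the user input.
generator = pipeline("text-generation", model="gpt2")

with mlflow.start_run():
    model_info = mlflow.transformers.log_model(
        transformers_model=generator,
        artifact_path="model",
        prompt_template="Answer the following question concisely: {prompt}",
    )

# At inference time, the pyfunc wrapper formats inputs with the template:
# loaded = mlflow.pyfunc.load_model(model_info.model_uri)
# loaded.predict("What is MLflow?")
```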
MLflow Deployments Server Enhancement (#10765, @gabrielfu; #10779, @TomeHirata)
The MLflow Deployments Server now supports two new requested features: (1) OpenAI endpoints that support streaming responses. You can now configure an endpoint to return realtime responses for Chat and Completions instead of waiting for the entire text contents to be completed. (2) Rate limits can now be set per endpoint in order to help control cost overrun when using SaaS models.
Further Document Improvements
Continued the push for enhanced documentation, guides, tutorials, and examples by expanding on core MLflow functionality (Deployments, Signatures, and Model Dependency management), as well as entirely new pages for GenAI flavors. Check them out today!
Other Features:
- [Models] Enhance the MLflow Models `predict` API to serve as a pre-logging validator of environment compatibility (#10759, @B-Step62)
- [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @KonakanchiSwathi)
- [Models] Add support for retrieving and storing license files for transformers models (#10871, @BenWilson2)
- [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @C-K-Loan)
- [Models] Automatically convert OpenAI input messages to LangChain chat messages for `pyfunc` predict (#10758, @dbczumar)
- [Tracking] Enhance async logging functionality by ensuring flush is called on `Futures` objects (#10715, @chenmoneygithub)
- [Tracking] Add support for a non-interactive mode for the `login()` API (see the sketch after this list) (#10623, @henxing)
- [Scoring] Allow MLflow model serving to support direct `dict` inputs with the `messages` key (#10742, @daniellok-db, @B-Step62)
- [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @gabrielfu)
- [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @prithvikannan)
- [UI] Introduce a number of new features for the MLflow UI (#10864, @daniellok-db)
- [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @daniellok-db)
- [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @gabrielfu)
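As referenced in the `login()` item above, here is a hedged sketch of non-interactive authentication; the `interactive` flag is an assumption based on the release note, and credentials are expected to already be available in the environment.

```python
import mlflow

# Hedged sketch: authenticate without interactive prompts. The `interactive`
# flag is an assumption based on the release note; credentials are expected
# to already be present (e.g. DATABRICKS_HOST / DATABRICKS_TOKEN environment
# variables or an existing ~/.databrickscfg profile).
mlflow.login(backend="databricks", interactive=False)
```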
Bug fixes
- [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @BenWilson2)
- [Models] Add support for paddle 2.6.0 release (#10757, @WeichenXu123)
- [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @BenWilson2)
- [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @WeichenXu123)
- [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @daniellok-db)
- [Scoring] Fix an issue where SageMaker tags were not propagating correctly (#9310, @clarkh-ncino)
- [Windows / Databricks] Fix an issue with executing Databricks run commands from within a Window environment (#10811, @wolpl)
- [Models / Databricks] Disable `mlflowdbfs` mounts for the JohnSnowLabs flavor due to flakiness (#9872, @C-K-Loan)
Documentation updates:
- [Docs] Fixed the `KeyError: 'loss'` bug for the Quickstart guideline (#10886, @yanmxa)
- [Docs] Relocate and supplement Model Signature and Input Example docs (#10838, @BenWilson2)
- [Docs] Add the HuggingFace Model Evaluation Notebook to the website (#10789, @BenWilson2)
- [Docs] Rewrite the search run documentation (#10863, @chenmoneygithub)
- [Docs] Create documentation for transformers prompt templates (#10836, @daniellok-db)
- [Docs] Refactoring of the Getting Started page (#10798, @BenWilson2)
- [Docs] Add a guide for model dependency management (#10807, @B-Step62)
- [Docs] Add tutorials and guides for LangChain (#10770, @BenWilson2)
- [Docs] Refactor portions of the Deep Learning documentation landing page (#10736, @chenmoneygithub)
- [Docs] Refactor and overhaul the Deployment documentation and add new tutorials (#10726, @B-Step62)
- [Docs] Add a PyTorch landing page, quick start, and guide (#10687, #10737 @chenmoneygithub)
- [Docs] Add additional tutorials to OpenAI flavor docs (#10700, @BenWilson2)
- [Docs] Enhance the guides on quickly getting started with MLflow by demonstrating how to use Databricks Community Edition (#10663, @BenWilson2)
- [Docs] Create the OpenAI Flavor landing page and intro notebooks (#10622, @BenWilson2)
- [Docs] Refactor the Tensorflow flavor API docs (#10662, @chenmoneygithub)
Small bug fixes and documentation updates:
#10538, #10901, #10903, #10876, #10833, #10859, #10867, #10843, #10857, #10834, #10814, #10805, #10764, #10771, #10733, #10724, #10703, #10710, #10696, #10691, #10692, @B-Step62; #10882, #10854, #10395, #10725, #10695, #10712, #10707, #10667, #10665, #10654, #10638, #10628, @harupy; #10881, #10875, #10835, #10845, #10844, #10651, #10806, #10786, #10785, #10781, #10741, #10772, #10727, @serena-ruan; #10873, #10755, #10750, #10749, #10619, @WeichenXu123; #10877, @amueller; #10852, @QuentinAmbard; #10822, #10858, @gabrielfu; #10862, @jerrylian-db; #10840, @ernestwong-db; #10841, #10795, #10792, #10774, #10776, #10672, @BenWilson2; #10827, #10826, #10825, #10732, #10481, @michael-berk; #10828, #10680, #10629, @daniellok-db; #10799, #10800, #10578, #10782, #10783, #10723, #10464, @annzhang-db; #10803, #10731, #10708, @kriscon-db; #10797, @dbczumar; #10756, #10751, @Ankit8848; #10784, @AveshCSingh; #10769, #10763, #10717, @chenmoneygithub; #10698, @rmalani-db; #10767, @liangz1; #10682, @cdreetz; #10659, @prithvikannan; #10639, #10609, @TomeHirata
MLflow 2.9.2
MLflow 2.9.2 is a patch release, containing several critical security fixes and configuration updates to support extremely large model artifacts.
Features:
- [Deployments] Add the `mlflow.deployments.openai` API to simplify direct access to OpenAI services through the deployments API (#10473, @prithvikannan)
- [Server-infra] Add a new environment variable that permits disabling http redirects within the Tracking Server for enhanced security in publicly accessible tracking server deployments (#10673, @daniellok-db)
- [Artifacts] Add environment variable configurations for both Multi-part upload and Multi-part download that permits modifying the per-chunk size to support extremely large model artifacts (#10648, @harupy)
Security fixes:
- [Server-infra] Disable the ability to inject malicious code via manipulated YAML files by forcing YAML rendering to be performed in a secure Sandboxed mode (#10676, @BenWilson2, #10640, @harupy)
- [Artifacts] Prevent path traversal attacks when querying artifact URI locations by disallowing `..` path traversal queries (#10653, @B-Step62)
- [Data] Prevent a mechanism for conducting a malicious file traversal attack on Windows when using tracking APIs that interface with `HTTPDatasetSource` (#10647, @BenWilson2)
- [Artifacts] Prevent a potential path traversal attack vector via encoded url traversal paths by decoding paths prior to evaluation (#10650, @B-Step62)
- [Artifacts] Prevent the ability to conduct path traversal attacks by enforcing the use of sanitized paths with the tracking server (#10666, @harupy)
- [Artifacts] Prevent path traversal attacks when using an FTP server as a backend store by enforcing base path declarations prior to accessing user-supplied paths (#10657, @harupy)
Documentation updates:
- [Docs] Add an end-to-end tutorial for RAG creation and evaluation (#10661, @AbeOmor)
- [Docs] Add Tensorflow landing page (#10646, @chenmoneygithub)
- [Deployments / Tracking] Add endpoints to LLM evaluation docs (#10660, @prithvikannan)
- [Examples] Add retriever evaluation tutorial for LangChain and improve the Question Generation tutorial notebook (#10419, @liangz1)
Small bug fixes and documentation updates:
#10677, #10636, @serena-ruan; #10652, #10649, #10641, @harupy; #10643, #10632, @BenWilson2
MLflow 2.9.1
MLflow 2.9.1 is a patch release, containing a critical bug fix related to loading `pyfunc` models that were saved in previous versions of MLflow.
Bug fixes:
- [Models] Revert Changes to PythonModel that introduced loading issues for models saved in earlier versions of MLflow (#10626, @BenWilson2)
Small bug fixes and documentation updates:
MLflow 2.9.0
MLflow 2.9.0 includes several major features and improvements.
MLflow AI Gateway deprecation (#10420, @harupy)
The feature previously known as MLflow AI Gateway has been moved to utilize the MLflow deployments API.
For guidance on migrating from the AI Gateway to the new deployments API, please see the MLflow AI Gateway Migration Guide.
MLflow Tracking docs overhaul (#10471, @B-Step62)
The MLflow tracking docs have been overhauled. We'd like your feedback on the new tracking docs!
Security fixes
Three security patches have been filed with this release, and CVEs have been issued detailing the vulnerabilities and potential attack vectors addressed by each patch. Please review and update your tracking server deployments if your tracking server is not securely deployed and has open access to the internet.
- Sanitize `path` in `HttpArtifactRepository.list_artifacts` (#10585, @harupy)
- Sanitize `filename` in the `Content-Disposition` header for `HTTPDatasetSource` (#10584, @harupy)
- Validate the `Content-Type` header to prevent POST XSS (#10526, @B-Step62)
Features
- [Tracking] Use `backoff_jitter` when making HTTP requests (#10486, @ajinkyavbhandare)
- [Tracking] Add a default `aggregate_results` if the score type is numeric in the `make_metric` API (#10490, @sunishsheth2009)
- [Tracking] Add support for string score types for metric values for GenAI (#10307, @sunishsheth2009)
- [Artifacts] Support multipart upload for proxy artifact access (#9521, @harupy)
- [Models] Support saving `torch_dtype` for transformers models (#10586, @serena-ruan)
- [Models] Add built-in metric `ndcg_at_k` to retriever evaluation (#10284, @liangz1)
- [Model Registry] Implement universal `copy_model_version` (see the example after this list) (#10308, @jerrylian-db)
- [Models] Support saving/loading `RunnableSequence`, `RunnableParallel`, and `RunnableBranch` (#10521, #10611, @serena-ruan)
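As referenced in the `copy_model_version` item above, here is a hedged sketch of copying a model version between registered models with the client API; the model names are placeholders.

```python
from mlflow import MlflowClient

client = MlflowClient()

# Hedged sketch: copy version 1 of "staging-model" into "prod-model".
# Both registered model names here are placeholders.
copied = client.copy_model_version(
    src_model_uri="models:/staging-model/1",
    dst_name="prod-model",
)
print(copied.version)
```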
Bug fixes
- [Tracking] Resume system metrics logging when resuming an existing run (#10312, @chenmoneygithub)
- [UI] Fix incorrect sorting order in line chart (#10553, @B-Step62)
- [UI] Remove extra whitespace in git URLs (#10506, @mrplants)
- [Models] Make spark_udf use NFS to broadcast model to spark executor on databricks runtime and spark connect mode (#10463, @WeichenXu123)
- [Models] Fix promptlab pyfunc models not working for chat routes (#10346, @daniellok-db)
Documentation updates
- [Docs] Add a quickstart guide for Tensorflow (#10398, @chenmoneygithub)
- [Docs] Improve the parameter tuning guide (#10344, @chenmoneygithub)
- [Docs] Add a guide for system metrics logging (#10429, @chenmoneygithub)
- [Docs] Add instructions on how to configure credentials for Azure OpenAI (#10560, @BenWilson2)
- [Docs] Add docs and tutorials for Sentence Transformers flavor (#10476, @BenWilson2)
- [Docs] Add tutorials, examples, and guides for Transformers Flavor (#10360, @BenWilson2)
Small bug fixes and documentation updates
#10567, #10559, #10348, #10342, #10264, #10265, @B-Step62; #10595, #10401, #10418, #10394, @chenmoneygithub; #10557, @dan-licht; #10584, #10462, #10445, #10434, #10432, #10412, #10411, #10408, #10407, #10403, #10361, #10340, #10339, #10310, #10276, #10268, #10260, #10224, #10214, @harupy; #10415, @jessechancy; #10579, #10555, @annzhang-db; #10540, @wllgrnt; #10556, @smurching; #10546, @mbenoit29; #10534, @gabrielfu; #10532, #10485, #10444, #10433, #10375, #10343, #10192, @serena-ruan; #10480, #10416, #10173, @jerrylian-db; #10527, #10448, #10443, #10442, #10441, #10440, #10439, #10381, @prithvikannan; #10509, @keenranger; #10508, #10494, @WeichenXu123; #10489, #10266, #10210, #10103, @TomeHirata; #10495, #10435, #10185, @daniellok-db; #10319, @michael-berk; #10417, @bbqiu; #10379, #10372, #10282, @BenWilson2; #10297, @KonakanchiSwathi; #10226, #10223, #10221, @milinddethe15; #10222, @flooxo; #10590, @letian-w;
MLflow 2.8.1
MLflow 2.8.1 is a patch release, containing some critical bug fixes and an update to our continued work on reworking our docs.
Notable details:
- The API `mlflow.llm.log_predictions` is being marked as deprecated, as its functionality has been incorporated into `mlflow.log_table`. This API will be removed in the 2.9.0 release (a short example of the replacement API follows). (#10414, @dbczumar)
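As a hedged example of the replacement API, the snippet below logs a table of LLM inputs and outputs with `mlflow.log_table`; the column names and artifact file name are placeholders.

```python
import mlflow

# Hedged sketch: log a table of LLM predictions with mlflow.log_table
# instead of the deprecated mlflow.llm.log_predictions.
with mlflow.start_run():
    mlflow.log_table(
        data={
            "inputs": ["What is MLflow?"],
            "outputs": ["MLflow is an open source MLOps platform."],
        },
        artifact_file="llm_predictions.json",
    )
```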
Bug fixes:
- [Artifacts] Fix a regression in 2.8.0 where downloading a single file from a registered model would fail (#10362, @BenWilson2)
- [Evaluate] Fix the `Azure OpenAI` integration for `mlflow.evaluate` when using LLM `judge` metrics (#10291, @prithvikannan)
- [Evaluate] Change `Examples` to optional for the `make_genai_metric` API (#10353, @prithvikannan)
- [Evaluate] Remove the `fastapi` dependency when using `mlflow.evaluate` for LLM results (#10354, @prithvikannan)
- [Evaluate] Fix syntax issues and improve the formatting for generated prompt templates (#10402, @annzhang-db)
- [Gateway] Fix the Gateway configuration validator pre-check for OpenAI to perform instance type validation (#10379, @BenWilson2)
- [Tracking] Fix an intermittent issue with hanging threads when using asynchronous logging (#10374, @chenmoneygithub)
- [Tracking] Add a timeout for the `mlflow.login()` API to catch invalid hostname configuration input errors (#10239, @chenmoneygithub)
- [Tracking] Add a `flush` operation at the conclusion of logging system metrics (#10320, @chenmoneygithub)
- [Models] Correct the prompt template generation logic within the Prompt Engineering UI so that the prompts can be used in the Python API (#10341, @daniellok-db)
- [Models] Fix an issue in the `SHAP` model explainability functionality within `mlflow.shap.log_explanation` so that duplicate or conflicting dependencies are not registered when logging (#10305, @BenWilson2)
Documentation updates:
- [Docs] Add MLflow Tracking Quickstart (#10285, @BenWilson2)
- [Docs] Add tracking server configuration guide (#10241, @chenmoneygithub)
- [Docs] Refactor and improve the model deployment quickstart guide (#10322, @prithvikannan)
- [Docs] Add documentation for system metrics logging (#10261, @chenmoneygithub)
Small bug fixes and documentation updates:
#10367, #10359, #10358, #10340, #10310, #10276, #10277, #10247, #10260, #10220, #10263, #10259, #10219, @harupy; #10313, #10303, #10213, #10272, #10282, #10283, #10231, #10256, #10242, #10237, #10238, #10233, #10229, #10211, #10231, #10256, #10242, #10238, #10237, #10229, #10233, #10211, @BenWilson2; #10375, @serena-ruan; #10330, @Haxatron; #10342, #10249, #10249, @B-Step62; #10355, #10301, #10286, #10257, #10236, #10270, #10236, @prithvikannan; #10321, #10258, @jerrylian-db; #10245, @jessechancy; #10278, @daniellok-db; #10244, @gabrielfu; #10226, @milinddethe15; #10390, @bbqiu; #10232, @sunishsheth2009
MLflow 2.8.0
MLflow 2.8.0 includes several notable new features and improvements
- The MLflow Evaluate API has had extensive feature development in this release to support LLM workflows and multiple new evaluation modalities. See the new documentation, guides, and tutorials for MLflow LLM Evaluate to learn more. A brief evaluation example is shown after this list.
- The MLflow Docs modernization effort has started. You will see a very different look and feel to the docs when visiting them, along with a batch of new tutorials and guides. More changes will be coming soon to the docs!
- 4 new LLM providers have been added! Google PaLM 2, AWS Bedrock, AI21 Labs, and HuggingFace TGI can now be configured and used within the AI Gateway. Learn more in the new AI Gateway docs!
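As referenced above, here is a hedged sketch of the expanded MLflow LLM Evaluate support, evaluating a plain single-argument function; the column names and the canned "model" are placeholders, and some default metrics for this model type may require optional dependencies.

```python
import mlflow
import pandas as pd

# Hedged sketch: evaluate a single-argument function with LLM evaluation.
# Column names and the canned model are placeholders; some default metrics
# for this model type may need optional packages installed.
eval_data = pd.DataFrame(
    {
        "inputs": ["What is MLflow?"],
        "ground_truth": ["MLflow is an open source MLOps platform."],
    }
)

def qa_model(inputs: pd.DataFrame):
    # Stand-in for an LLM call; returns one canned answer per input row.
    return ["MLflow is an open source MLOps platform."] * len(inputs)

with mlflow.start_run():
    results = mlflow.evaluate(
        model=qa_model,
        data=eval_data,
        targets="ground_truth",
        model_type="question-answering",
    )
    print(results.metrics)
```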
Features:
- [Gateway] Add support for AWS Bedrock as a provider in the AI Gateway (#9598, @andrew-christianson)
- [Gateway] Add support for Huggingface Text Generation Inference as a provider in the AI Gateway (#10072, @SDonkelaarGDD)
- [Gateway] Add support for Google PaLM 2 as a provider in the AI Gateway (#9797, @arpitjasa-db)
- [Gateway] Add support for AI21labs as a provider in the AI Gateway (#9828, #10168, @zhe-db)
- [Gateway] Introduce a simplified method for setting the configuration file location for the AI Gateway via environment variable (#9822, @danilopeixoto)
- [Evaluate] Introduce default provided LLM evaluation metrics for MLflow evaluate (#9913, @prithvikannan)
- [Evaluate] Add support for evaluating inference datasets in MLflow evaluate (#9830, @liangz1)
- [Evaluate] Add support for evaluating single argument functions in MLflow evaluate (#9718, @liangz1)
- [Evaluate] Add support for Retriever LLM model type evaluation within MLflow evaluate (#10079, @liangz1)
- [Models] Add configurable parameter for external model saving in the ONNX flavor to address a regression (#10152, @daniellok-db)
- [Models] Add support for saving inference parameters in a logged model's input example (#9655, @serena-ruan)
- [Models] Add support for `completions` in the OpenAI flavor (#9838, @santiagxf)
- [Models] Add support for inference parameters for the OpenAI flavor (#9909, @santiagxf)
- [Models] Introduce support for configuration arguments to be specified when loading a model (#9251, @santiagxf)
- [Models] Add support for integrated Azure AD authentication for the OpenAI flavor (#9704, @santiagxf)
- [Models / Scoring] Introduce support for model training lineage in model serving (#9402, @M4nouel)
- [Model Registry] Introduce the `copy_model_version` client API for copying model versions across registered models (#9946, #10078, #10140, @jerrylian-db)
- [Tracking] Expand the limits of parameter value length from 500 to 6000 (#9709, @serena-ruan)
- [Tracking] Introduce support for Spark 3.5's SparkConnect mode within MLflow to allow logging models created using this operation mode of Spark (#9534, @WeichenXu123)
- [Tracking] Add support for logging system metrics to the MLflow fluent API (see the sketch after this list) (#9557, #9712, #9714, @chenmoneygithub)
- [Tracking] Add callbacks within MLflow for Keras and Tensorflow (#9454, #9637, #9579, @chenmoneygithub)
- [Tracking] Introduce a fluent login API for Databricks within MLflow (#9665, #10180, @chenmoneygithub)
- [Tracking] Add support for customizing auth for http requests from the MLflow client via a plugin extension (#10049, @lu-ohai)
- [Tracking] Introduce experimental asynchronous logging support for metrics, params, and tags (#9705, @sagarsumant)
- [Auth] Modify the behavior of user creation in MLflow Authentication so that only admins can create new users (#9700, @gabrielfu)
- [Artifacts] Add support for using `xethub` as an artifact store via a plugin extension (#9957, @Kelton8Z)
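As referenced in the system metrics item above, here is a hedged sketch of enabling system metrics logging; both entry points shown are assumptions drawn from the release note and should be checked against the system metrics guide.

```python
import mlflow

# Hedged sketch: both entry points below are assumptions based on the release
# note about system metrics support in the fluent API; verify them against
# the system metrics logging guide.
mlflow.enable_system_metrics_logging()            # turn on globally

with mlflow.start_run(log_system_metrics=True):   # or opt in per run
    pass  # CPU / memory / GPU utilization is sampled while the run is active
```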
Bug fixes:
- [Evaluate] Fix a bug with Azure OpenAI configuration usage within MLflow evaluate (#9982, @sunishsheth2009)
- [Models] Fix a data consistency issue when saving models that have been loaded in heterogeneous memory configuration within the transformers flavor (#10087, @BenWilson2)
- [Models] Fix an issue in the transformers flavor for complex input types by adding dynamic dataframe typing (#9044, @wamartin-aml)
- [Models] Fix an issue in the langchain flavor to provide support for chains with multiple outputs (#9497, @bbqiu)
- [Docker] Fix an issue with Docker image generation by changing the default env-manager to virtualenv (#9938, @Beramos)
- [Auth] Fix an issue with complex passwords in MLflow Auth to support a richer character set range (#9760, @dotdothu)
- [R] Fix a bug with configuration access when running MLflow R in Databricks (#10117, @zacdav-db)
Documentation updates:
- [Docs] Introduce the first phase of a larger documentation overhaul (#10197, @BenWilson2)
- [Docs] Add guide for LLM eval (#10058, #10199, @chenmoneygithub)
- [Docs] Add instructions on how to force single file serialization within the onnx flavor's save and log functions (#10178, @BenWilson2)
- [Docs] Add documentation for the relevance metric for MLflow evaluate (#10170, @sunishsheth2009)
- [Docs] Add a style guide for the contributing guide for how to structure pydoc strings (#9907, @mberk06)
- [Docs] Fix issues with the pytorch lightning autolog code example (#9964, @chenmoneygithub)
- [Docs] Update the example for `mlflow.data.from_numpy()` (#9885, @chenmoneygithub)
- [Docs] Add clear instructions for installing MLflow within R (#9835, @darshan8850)
- [Docs] Update model registry documentation to add content regarding support for model aliases (#9721, @jerrylian-db)
Small bug fixes and documentation updates:
#10202, #10189, #10188, #10159, #10175, #10165, #10154, #10083, #10082, #10081, #10071, #10077, #10070, #10053, #10057, #10055, #10020, #9928, #9929, #9944, #9979, #9923, #9842, @annzhang-db; #10203, #10196, #10172, #10176, #10145, #10115, #10107, #10054, #10056, #10018, #9976, #9999, #9998, #9995, #9978, #9973, #9975, #9972, #9974, #9960, #9925, #9920, @prithvikannan; #10144, #10166, #10143, #10129, #10059, #10123, #9555, #9619, @bbqiu; #10187, #10191, #10181, #10179, #10151, #10148, #10126, #10119, #10099, #10100, #10097, #10089, #10096, #10091, #10085, #10068, #10065, #10064, #10060, #10023, #10030, #10028, #10022, #10007, #10006, #9988, #9961, #9963, #9954, #9953, #9937, #9932, #9931, #9910, #9901, #9852, #9851, #9848, #9847, #9841, #9844, #9825, #9820, #9806, #9802, #9800, #9799, #9790, #9787, #9791, #9788, #9785, #9786, #9784, #9754, #9768, #9770, #9753, #9697, #9749, #9747, #9748, #9751, #9750, #9729, #9745, #9735, #9728, #9725, #9716, #9694, #9681, #9666, #9643, #9641, #9621, #9607, @harupy; #10200, #10201, #10142, #10139, #10133, #10090, #10086, #9934, #9933, #9845, #9831, #9794, #9692, #9627, #9626, @chenmoneygithub; #10110, @wenfeiy-db; #10195, #9895, #9880, #9679, @BenWilson2; #10174, #10177, #10109, #9706, @jerrylian-db; #10113, #9765, @smurching; #10150, #10138, #10136, @dbczumar; #10153, #10032, #9986, #9874, #9727, #9707, @serena-ruan; #10155, @shaotong-db; #10160, #10131, #10048, #10024, #10017, #10016, #10002, #9966, #9924, @sunishsheth2009; #10121, #10116, #10114, #10102, #10098, @B-Step62; #10095, #10026, #9991, @daniellok-db; #10050, @Dennis40816; #10062, #9868, @Gekko0114; #10033, @Anushka-Bhowmick; #9983, #10004, #9958, #9926, #9690, @liangz1; #9997, #9940, #9922, #9919, #9890, #9888, #9889, #9810, @TomeHirata; #9994, #9970, #9950, @lightnessofbein; #9965, #9677, @ShorthillsAI; #9906, @jessechancy; #9942, #9771, @Sai-Suraj-27; #9902, @remyleone; #9892, #9865, #9866, #9853, @montanarograziano; #9875, @Raghavan-B; #9858, @Salz0; #9878, @maksboyarin; #9882, @lukasz-gawron; #9827, @Bncer; #9819, @gabrielfu; #9792, @harshk461; #9726, @Chiragasourabh; #9663, @Abhishek-TyRnT; #9670, @mberk06; #9755, @simonlsk; #9757, #9775, #9776, #9774, @AmirAflak; #9782, @garymm; #9756, @issamarabi; #9645, @shichengzhou-db; #9671, @zhe-db; #9660, @mingyu89; #9575, @akshaya-a; #9629, @pnacht; #9876, @C-K-Loan