@github/mlops-with-vertexai, gcloud-vertexai
# Client libraries used throughout this guide
from google.cloud import bigquery
from google.cloud import aiplatform as vertex_ai
A step-by-step code guide in a Jupyter notebook can be found here : mlops-with-vertex-ai
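The SDK has to be initialized with a project and region before any of the services below are used. A minimal sketch, assuming a hypothetical project `my-project`, region `us-central1`, and staging bucket `gs://my-bucket`:

```python
from google.cloud import bigquery
from google.cloud import aiplatform as vertex_ai

# One-time SDK setup; later calls (pipelines, experiments, registry, ...)
# inherit these defaults.
vertex_ai.init(
    project="my-project",              # hypothetical project ID
    location="us-central1",            # hypothetical region
    staging_bucket="gs://my-bucket",   # hypothetical staging bucket
)

# BigQuery client for reading training data from the warehouse.
bq = bigquery.Client(project="my-project")
```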
Orchestrate Workflows : Manually training and deploying your models can be time-consuming and error-prone, especially if you have to repeat the process often.
- Vertex AI Pipelines helps us automate, monitor, and govern our ML workflows.
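A minimal sketch of defining a pipeline with the KFP SDK and running it on Vertex AI Pipelines; the component body, pipeline name, and file path are hypothetical placeholders:

```python
from kfp import compiler, dsl
from google.cloud import aiplatform as vertex_ai

@dsl.component
def train(learning_rate: float) -> str:
    # Placeholder step; a real component would fit and persist a model.
    return f"trained with lr={learning_rate}"

@dsl.pipeline(name="hello-pipeline")  # hypothetical name
def pipeline(learning_rate: float = 0.01):
    train(learning_rate=learning_rate)

# Compile to a job spec, then hand it to Vertex AI Pipelines to run serverlessly.
compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
vertex_ai.PipelineJob(
    display_name="hello-pipeline",
    template_path="pipeline.json",
).run()
```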
Track metadata used in your ML system : In data science, it's important to track the parameters, artifacts, and metrics used in your ML workflow, especially if you repeat the workflow often.
- Vertex ML Metadata allows us to record the metadata, parameters, and artifacts used in our ML system. We can then query this metadata to analyze, debug, and audit the performance of our ML system or the artifacts it generates.
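A minimal sketch of recording and querying lineage with the SDK's metadata calls (display names and the GCS URI are hypothetical):

```python
from google.cloud import aiplatform as vertex_ai

# Record an execution together with the artifact it produced, so the
# lineage can later be queried for debugging and auditing.
with vertex_ai.start_execution(
    schema_title="system.ContainerExecution",
    display_name="preprocess-run",               # hypothetical
) as execution:
    dataset = vertex_ai.Artifact.create(
        schema_title="system.Dataset",
        uri="gs://my-bucket/clean.csv",          # hypothetical
        display_name="clean-dataset",            # hypothetical
    )
    execution.assign_output_artifacts([dataset])

# Query the recorded artifacts back.
for artifact in vertex_ai.Artifact.list():
    print(artifact.display_name, artifact.uri)
```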
Identifying the best model for a use case : When trying out new training algorithms, you need to know which trained model performs best.
- With Vertex AI Experiments, we can track and analyze different model architectures, hyperparameters, and training environments to determine the best model for our use case.
- With Vertex AI TensorBoard, we can track, visualize, and compare ML experiments to measure the performance of our models.
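A minimal sketch covering both services: runs are logged to a hypothetical experiment `fraud-detection`, and if the experiment is backed by a TensorBoard instance the metrics can also be visualized and compared there:

```python
from google.cloud import aiplatform as vertex_ai

vertex_ai.init(
    project="my-project", location="us-central1",   # hypothetical
    experiment="fraud-detection",                   # hypothetical experiment
)

for i, lr in enumerate((0.1, 0.01)):
    vertex_ai.start_run(run=f"run-{i}")
    vertex_ai.log_params({"learning_rate": lr, "epochs": 10})
    # ... train and evaluate the model here ...
    vertex_ai.log_metrics({"accuracy": 0.93})       # illustrative value
    vertex_ai.end_run()

# Compare all runs of the experiment side by side.
print(vertex_ai.get_experiment_df())
```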
Manage Model Versions : By adding models to a central repository, you can keep track of model versions.
- Vertex AI Model Registry provides an overview of our models so we can better organize, track, and train new versions. With Model Registry, we can evaluate models, deploy models to an endpoint, create batch predictions, and view details about specific models and model versions.
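A minimal sketch of registering two versions of the same model and deploying the default one; the display name, artifact URIs, and the prebuilt sklearn serving image are placeholders for whatever the use case needs:

```python
from google.cloud import aiplatform as vertex_ai

SERVING_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"

# First upload creates the registry entry as version 1.
model_v1 = vertex_ai.Model.upload(
    display_name="fraud-model",                  # hypothetical
    artifact_uri="gs://my-bucket/model/v1/",     # hypothetical
    serving_container_image_uri=SERVING_IMAGE,
)

# Uploading with parent_model adds version 2 under the same entry.
model_v2 = vertex_ai.Model.upload(
    display_name="fraud-model",
    artifact_uri="gs://my-bucket/model/v2/",     # hypothetical
    serving_container_image_uri=SERVING_IMAGE,
    parent_model=model_v1.resource_name,
    is_default_version=True,
)

# Deploy the new default version to an endpoint for online prediction.
endpoint = model_v2.deploy(machine_type="n1-standard-2")
```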
Manage features : When you reuse ML features across teams, you need a fast and efficient way to share and deploy the features.
- Vertex AI Feature Store provides a central repository for organizing, storing, and deploying ML features. By using a central feature store, an organization can reuse ML features at scale and increase the speed of development and deployment of new ML applications.
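A minimal sketch of creating a feature store, batch-ingesting values from BigQuery, and reading them back online (IDs, the BigQuery table, and column names are hypothetical):

```python
from google.cloud import aiplatform as vertex_ai

# Central store -> entity type -> feature hierarchy.
fs = vertex_ai.Featurestore.create(
    featurestore_id="transactions",              # hypothetical
    online_store_fixed_node_count=1,
)
users = fs.create_entity_type(entity_type_id="user")
users.create_feature(feature_id="avg_purchase", value_type="DOUBLE")

# Batch-ingest feature values from BigQuery so other teams can reuse them.
users.ingest_from_bq(
    feature_ids=["avg_purchase"],
    feature_time="event_time",                                  # timestamp column
    bq_source_uri="bq://my-project.my_dataset.user_features",   # hypothetical table
    entity_id_field="user_id",
)

# Low-latency online read for serving.
df = users.read(entity_ids=["user_123"])
```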
Monitor model quality : A model deployed to production works best with prediction input data that is similar to the training data. If the input data differs from the data used to train the model, the model's performance can degrade, even if the model itself has not changed.
- Vertex AI Model Monitoring watches deployed models for training-serving skew and prediction drift, and sends us alerts when the incoming prediction data deviates too far from the training baseline. We can use the alerts and feature distributions to assess whether we need to retrain our model.
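A minimal sketch of attaching a skew-detection monitoring job to a deployed endpoint; the endpoint resource name, training table, threshold, target column, and alert email are hypothetical:

```python
from google.cloud import aiplatform as vertex_ai
from google.cloud.aiplatform import model_monitoring

# Compare incoming prediction data against the training baseline.
skew = model_monitoring.SkewDetectionConfig(
    data_source="bq://my-project.my_dataset.training_data",  # hypothetical
    target_field="is_fraud",                                 # hypothetical label
    skew_thresholds={"amount": 0.3},                         # hypothetical threshold
)

job = vertex_ai.ModelDeploymentMonitoringJob.create(
    display_name="fraud-model-monitoring",
    endpoint="projects/my-project/locations/us-central1/endpoints/123",  # hypothetical
    logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.8),
    schedule_config=model_monitoring.ScheduleConfig(monitor_interval=1),  # hours
    alert_config=model_monitoring.EmailAlertConfig(user_emails=["me@example.com"]),
    objective_configs=model_monitoring.ObjectiveConfig(skew_detection_config=skew),
)
```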
Resources :
- How we optimised our Vertex AI Pipelines environments at VMO2 for scale
- Practitioners Guide to MLOps
- Serverless MLOps with Vertex AI
- class : Machine Learning Operations With Vertex AI on Google Cloud Platform (StatMike)
- Serverless MLOps with Vertex AI and ZenML
- Vertex AI Model Garden (Google Cloud Tech)
- Google Cloud : Vertex AI
- Vertex AI Pipelines - The Easiest Way to Run ML Pipelines
- Learn Vertex AI while building a fraud detection system