What is Vertex AI?
Summary
TL;DR: This video introduces Vertex AI, a platform that streamlines the entire machine learning workflow, from data preparation to model deployment and prediction. It highlights Vertex AI's ability to serve teams with varying levels of expertise, offering AutoML for simpler use cases and custom models for more advanced requirements. The platform guides users through data ingestion, transformation, model training (AutoML or custom), evaluation, optimization, Explainable AI insights, and deployment for online or batch predictions. The console tour provides an overview of the key features and steps involved in the end-to-end machine learning process with Vertex AI.
Takeaways
- 🌐 Vertex AI is a platform that provides tools for every step of the machine learning workflow, catering to varying levels of expertise.
- 📊 The typical machine learning workflow involves data ingestion, analysis, transformation, model creation and training, evaluation, optimization, and deployment.
- 🔀 Vertex AI simplifies the workflow by providing managed datasets, AutoML for automated model training, and custom model training options.
- 🧠 AutoML is suitable for use cases like images, videos, text files, and tabular data, and doesn't require writing model code.
- 💻 Custom models allow more control over model architecture and work well for frameworks like TensorFlow and PyTorch.
- 🔍 Explainable AI helps understand the factors influencing a model's predictions.
- 🚀 Trained models can be deployed to endpoints for online predictions, with scalable resources and low latency.
- 📈 Undeployed models can be used for batch predictions.
- 🖥️ The Vertex AI console provides a central dashboard to manage the entire workflow, from datasets to predictions.
- 🔑 Key steps in the console include creating datasets, training models (AutoML or custom), managing models, creating endpoints, and making predictions.
Q & A
What is the main purpose of Vertex AI as described in the script?
-Vertex AI is a platform that provides tools for every step of the machine learning workflow, catering to varying levels of machine learning expertise, from novice to expert. It aims to accelerate AI innovation by simplifying the machine learning process.
What are the typical steps involved in a machine learning workflow?
-The typical machine learning workflow involves: 1) defining the prediction task, 2) ingesting, analyzing, and transforming data, 3) creating and training the model, 4) evaluating the model for efficiency and optimization, and 5) deploying the model to make predictions.
How does Vertex AI simplify the machine learning workflow?
-Vertex AI provides a simplified machine learning workflow in one central place, covering data preparation (ingestion, analysis, and transformation) through managed datasets, model training (AutoML or custom), model evaluation and optimization (including explainable AI), and model deployment for online and batch predictions.
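The data-preparation step described above can be sketched with the Vertex AI Python SDK (`google-cloud-aiplatform`). This is a minimal illustration, not the only way to do it: the project ID, region, bucket, and file paths below are placeholders, and the import is deferred so the sketch stays importable without the SDK installed.

```python
def create_image_dataset(project: str, bucket: str):
    """Create a managed image dataset from a CSV import file in Cloud Storage."""
    # Deferred import so the sketch stays importable without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(
        project=project,
        location="us-central1",           # placeholder region
        staging_bucket=f"gs://{bucket}",
    )
    # The CSV lists image URIs and labels; the schema URI tells Vertex AI
    # to treat this as a single-label image classification dataset.
    return aiplatform.ImageDataset.create(
        display_name="flowers",
        gcs_source=f"gs://{bucket}/flowers/import.csv",
        import_schema_uri=(
            aiplatform.schema.dataset.ioformat.image.single_label_classification
        ),
    )
```

The same pattern applies to the other managed dataset types (`TabularDataset`, `TextDataset`, `VideoDataset`), each with its own import schemas.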
What is the difference between AutoML and custom models in Vertex AI?
-AutoML is a no-code solution for tasks like image, video, text, and tabular data, where Vertex AI automatically finds the best model. Custom models allow more control over model architecture, enabling users to write their own code using frameworks like TensorFlow or PyTorch.
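As a sketch of the AutoML path, the snippet below trains an image classification model without any model code; the display names and the 8,000 milli-node-hour training budget (roughly 8 node hours) are illustrative values, not recommendations.

```python
def train_automl_image_model(dataset):
    """Train an image classification model with AutoML -- no model code needed.

    `dataset` is a managed aiplatform.ImageDataset.
    """
    # Deferred import so the sketch stays importable without the SDK installed.
    from google.cloud import aiplatform

    job = aiplatform.AutoMLImageTrainingJob(
        display_name="flowers-automl",
        prediction_type="classification",
    )
    # Vertex AI searches for the best model within the given training budget.
    return job.run(
        dataset=dataset,
        model_display_name="flowers-model",
        budget_milli_node_hours=8000,
    )
```

For the custom-model path, you supply your own TensorFlow or PyTorch code instead (see the custom training sketch later in this section).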
How does Vertex AI handle model evaluation and optimization?
-Vertex AI provides tools for assessing and optimizing trained models, as well as explainable AI capabilities that allow users to dive deeper and understand which factors influence the model's predictions.
How are models deployed in Vertex AI for online predictions?
-Models are deployed to an endpoint, which includes all the necessary physical resources and scalable hardware to serve the model for online predictions via API or console. Endpoints can be configured for auto-scaling based on traffic and split traffic across multiple endpoints.
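The deploy-then-predict flow can be sketched as follows; `model` is a trained `aiplatform.Model`, and the machine type and replica counts are illustrative. The min/max replica counts are what give the endpoint its traffic-based auto-scaling.

```python
def deploy_and_predict(model, instance: dict) -> list:
    """Deploy a trained model to an endpoint, then request an online prediction."""
    endpoint = model.deploy(
        machine_type="n1-standard-4",   # illustrative serving hardware
        min_replica_count=1,            # endpoint autoscales between min and max
        max_replica_count=3,
        traffic_percentage=100,         # send all traffic to this deployed model
    )
    # Online prediction: a list of instances in, a list of predictions out.
    response = endpoint.predict(instances=[instance])
    return response.predictions
```

The same endpoint can later host additional model versions with traffic split between them.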
How can users make batch predictions in Vertex AI?
-For batch predictions, users can leverage undeployed models and make predictions on batches of data stored in Cloud Storage.
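One common input format for batch prediction is JSON Lines (one JSON instance per line) staged in Cloud Storage. The sketch below shows both halves: writing such a file, and starting a batch prediction job on an undeployed model. The job name, bucket, and machine type are placeholders.

```python
import json


def write_jsonl(path: str, instances: list) -> int:
    """Write prediction instances in the JSON Lines format batch prediction reads."""
    with open(path, "w") as f:
        for inst in instances:
            f.write(json.dumps(inst) + "\n")
    return len(instances)


def run_batch_prediction(model, bucket: str):
    """Start a batch prediction job on an (undeployed) aiplatform.Model.

    Assumes the JSONL file has already been uploaded to the given bucket.
    """
    return model.batch_predict(
        job_display_name="nightly-batch",
        gcs_source=f"gs://{bucket}/batch/input.jsonl",
        gcs_destination_prefix=f"gs://{bucket}/batch/output/",
        instances_format="jsonl",
        machine_type="n1-standard-4",   # illustrative batch hardware
    )
```

Results land as files under the destination prefix, so no endpoint (and no always-on serving hardware) is needed.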
What are the different components of the Vertex AI console dashboard?
-The Vertex AI console dashboard includes sections for managing datasets, notebook instances, training jobs (AutoML, AutoML Edge, and custom training), models, endpoints, and batch predictions, providing a centralized interface for the entire machine learning workflow.
What types of data are supported by Vertex AI datasets?
-Vertex AI supports datasets for images, tabular data, text, and videos. For other use cases not falling into these categories, users can still leverage Vertex AI for custom model training and predictions.
How does Vertex AI handle custom model training with different frameworks?
-For custom model training, Vertex AI provides pre-built containers for supported frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost, where users can provide their code as a Python package. Additionally, users can build custom containers with any framework or language and run training on Vertex AI.
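A custom training job with a pre-built container can be sketched like this. The container image URI is a placeholder (pick the current pre-built training image for your framework and version), and the script path, machine type, and GPU settings are illustrative.

```python
def run_custom_training(project: str, bucket: str):
    """Launch a custom training job from a local script using a pre-built container."""
    # Deferred import so the sketch stays importable without the SDK installed.
    from google.cloud import aiplatform

    aiplatform.init(
        project=project,
        location="us-central1",
        staging_bucket=f"gs://{bucket}",
    )
    job = aiplatform.CustomTrainingJob(
        display_name="pytorch-custom",
        script_path="trainer/task.py",      # your own training code
        # Placeholder: use a current pre-built training image for your framework.
        container_uri="<pre-built PyTorch training image URI>",
        requirements=["torchvision"],       # extra pip packages for the script
    )
    return job.run(
        replica_count=1,
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4", # optional GPU acceleration
        accelerator_count=1,
    )
```

For the fully custom-container path, you would instead build and push your own Docker image and reference it as the `container_uri`.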
Outlines
🚀 An Introduction to Vertex AI and the Machine Learning Workflow
This paragraph introduces Vertex AI, a platform that provides tools for every step of the machine learning workflow across different model types and varying levels of expertise. It explains the typical machine learning workflow, which includes ingesting and transforming data, creating and training models, evaluating and optimizing models, and deploying models for predictions. Vertex AI simplifies this workflow by offering tools for data preparation, model training (AutoML or custom), model evaluation and optimization, and model deployment for online and batch predictions.
📚 Exploring Vertex AI's Features and Console
This paragraph delves into the specific features and capabilities of Vertex AI's console. It discusses the different training options available, including AutoML, AutoML Edge, and Custom Training with pre-built or custom containers for various frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost. It also describes the process of creating endpoints for serving models, managing resources, and making online and batch predictions. Additionally, it provides an overview of the Vertex AI console's dashboard, highlighting sections for data sets, notebooks, training jobs, models, endpoints, and predictions.
Keywords
💡Machine Learning Workflow
💡Vertex AI
💡Data Ingestion and Preparation
💡AutoML
💡Custom Models
💡Model Training
💡Model Evaluation and Optimization
💡Model Deployment
💡Predictions
💡Vertex AI Console
Highlights
Vertex AI provides tools for every step of the machine learning workflow across different model types, for varying levels of machine learning expertise.
The typical machine learning workflow involves ingesting data, analyzing and transforming it, creating and training the model, evaluating and optimizing the model, and finally deploying it to make predictions.
With Vertex AI, you get a simplified machine learning workflow in one central place, covering data preparation, model training, evaluation, optimization, deployment, and prediction.
For data preparation, you can create managed datasets within Vertex AI, import data, and label or annotate the data.
For model training, you have the option of AutoML or custom models, depending on your expertise and level of control required.
AutoML works well for use cases like images, videos, text files, and tabular data, where Vertex AI finds the best model for the task without requiring you to write code.
Custom models allow you to have more control over the model's architecture, and are suitable for using frameworks like TensorFlow or PyTorch with your own code.
After training, you can assess, optimize, and understand the factors behind your model's predictions using explainable AI.
Once you're satisfied with the model, you can deploy it to an endpoint to serve online predictions, which includes scalable hardware resources.
You can also use the undeployed model for batch predictions.
The Vertex AI console provides a dashboard to manage the machine learning workflow, including data sets, notebooks, training jobs, models, endpoints, and predictions.
In the console, you can create data sets for different data types like images, tabular data, text, and videos.
You can create custom notebook instances with specific environments and GPUs.
For training, you can use AutoML, AutoML Edge (for edge devices), or custom training with pre-built or custom containers for various frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost.
You can import models trained outside of Google Cloud to serve for online and batch predictions.
Transcripts
PRIYANKA: Hi, I'm Priyanka [INAUDIBLE],
and this is AI Simplified, where we
will take the journey from data sets
all the way to deployed machine learning models.
No matter which service you offer today,
it is crucial to use the data you have to make predictions
so you can improve your apps and the user experience.
But most teams have varying levels of machine
learning expertise, ranging from novice all the way to experts.
To accelerate AI innovation, you need a platform
that can help you build expertise for those novice
users, and provide a seamless and flexible environment
for those experts.
This is where Vertex AI comes in.
It provides tools for every step of the machine learning
workflow across different model types,
for varying levels of machine learning expertise.
Before we take a look at Vertex AI,
though, let's understand the typical machine
learning workflow.
After defining your prediction task, the first thing you do
is ingest the data, analyze it, and then transform it.
Then you create and train the model.
Evaluate that model for efficiency and optimization.
And then deploy it to make predictions.
Now, with Vertex AI, you get a simplified machine learning
workflow in one central place.
Ingestion, analysis, and transforming
is really all about data preparation.
And you do that using managed data sets within Vertex AI.
You have tools to create the data set
by importing your data using the console, or just the API.
You can also label and annotate the data right
from within the console.
For model training, you have two options--
AutoML or custom.
With varying machine learning expertise on the team,
for some use cases such as images or videos, text files,
and tabular data, AutoML works great.
With AutoML, you don't need to write any of the model code.
Vertex AI will take care of finding
the best model for that task.
And for other use cases where you
would like more control over your model's architecture,
use custom models.
Now, custom models are great for frameworks and architectures
and code that you want to write yourself.
So this works great for TensorFlow or PyTorch code.
Once that model is trained, you have the ability
to assess that model, optimize it, and even understand
the signals behind your model's predictions.
And you do that with explainable AI.
And explainable AI lets you dive deeper into your model
and understand which factors are playing a role in defining
what that model is predicting.
Once you're happy with the model,
you deploy it to an endpoint to serve it
for online predictions using the API or the console.
Now, this deployment includes all the physical resources
and the scalable hardware that's needed
to scale that model for lower latency and online predictions.
You can, of course, use the undeployed model
for batch predictions.
Once the model is deployed, you can
get the predictions using either the command line
interface, or the console UI, or the SDK and the APIs.
At this point, you might be wondering
how this looks in the console and where to find it.
So let me give you a little tour of the dashboard.
In the console, when we click on Vertex AI,
we land on the dashboard, where you
can see the recent data sets, recent models,
and get predictions.
On the left, you have all the steps that
are involved in the machine learning workflow,
all the way from data sets to predictions.
In the data sets, you can create your data sets,
depending on the type of data and your prediction task.
Supported data types include image, tabular, text,
or videos.
If your prediction task doesn't fall into one of these use
cases, don't worry.
You can still use Vertex AI for your custom model training
and prediction.
Once you have created a data set,
you can see it listed in the data set list.
In the Notebook section of the console,
you can create your customized notebook instances
with the type of environment and GPUs you want.
In the Training tab, you can see and create your training jobs.
The beauty is that you can have one data set,
but you can train it in different ways.
With AutoML, you can train a high quality model
with minimal effort.
AutoML Edge for models that are optimized for Edge devices.
And with the Custom Training option,
you can train models built with any framework,
via pre-built or custom containers.
Pre-built containers are available
for supported frameworks such as TensorFlow, PyTorch,
scikit-learn, and XGBoost.
You provide your code as a Python package.
The Custom Containers option allows
you to train models built with any framework or language
by putting your training application code in a Docker
container, pushing it to Container Registry,
and running the training on Vertex AI.
You can accelerate the training with GPUs,
and also apply hyperparameter tuning.
In the Models tab, you can see all the models
you have created.
Here, you can also import models trained outside of Google Cloud
to serve it for online and batch predictions.
In order to use a model, you create an endpoint, which
brings us to our next step.
Creating an endpoint is how you serve your models
for online predictions.
Each model can have multiple endpoints.
Enter the compute resources so your endpoint can
auto scale the resources based on your traffic.
You can even split traffic across endpoints
and send model logs to Cloud Logging.
Once the model is live, we can make predictions in the UI
or through the SDK.
In the Batch Predictions tab, you
can make predictions on the batch
of data from Cloud Storage.
This was a pretty high-level overview
where we saw that Vertex AI provides the tools to support
your entire machine learning workflow from data management
all the way to predictions.
In the next episodes, we will dive much deeper
into all of these steps and build an end-to-end machine
learning workflow.
In the meantime, let's continue our discussion
in the comments below.
I'm excited to hear all about your machine
learning use case and workflow.