What is Vertex AI?

Google Cloud Tech
22 May 2021 · 07:16

Summary

TL;DR: This script introduces Vertex AI, a platform that streamlines the entire machine learning workflow from data preparation to model deployment and prediction. It highlights Vertex AI's ability to cater to teams with varying levels of expertise, offering AutoML for simpler use cases and custom models for more advanced requirements. The platform guides users through data ingestion, transformation, model training (AutoML or custom), evaluation, optimization, explainable AI insights, and deployment for online or batch predictions. The console tour provides an overview of the key features and steps involved in the end-to-end machine learning process with Vertex AI.

Takeaways

  • 🌐 Vertex AI is a platform that provides tools for every step of the machine learning workflow, catering to varying levels of expertise.
  • 📊 The typical machine learning workflow involves data ingestion, analysis, transformation, model creation and training, evaluation, optimization, and deployment.
  • 🔀 Vertex AI simplifies the workflow by providing managed datasets, AutoML for automated model training, and custom model training options.
  • 🧠 AutoML is suitable for use cases like images, videos, text files, and tabular data, and doesn't require writing model code.
  • 💻 Custom models allow more control over model architecture and work well for frameworks like TensorFlow and PyTorch.
  • 🔍 Explainable AI helps understand the factors influencing a model's predictions.
  • 🚀 Trained models can be deployed to endpoints for online predictions, with scalable resources and low latency.
  • 📈 Undeployed models can be used for batch predictions.
  • 🖥️ The Vertex AI console provides a central dashboard to manage the entire workflow, from datasets to predictions.
  • 🔑 Key steps in the console include creating datasets, training models (AutoML or custom), managing models, creating endpoints, and making predictions.

Q & A

  • What is the main purpose of Vertex AI as described in the script?

    -Vertex AI is a platform that provides tools for every step of the machine learning workflow, catering to varying levels of machine learning expertise, from novice to expert. It aims to accelerate AI innovation by simplifying the machine learning process.

  • What are the typical steps involved in a machine learning workflow?

    -The typical machine learning workflow involves: 1) defining the prediction task, 2) ingesting, analyzing, and transforming data, 3) creating and training the model, 4) evaluating the model for efficiency and optimization, and 5) deploying the model to make predictions.

  • How does Vertex AI simplify the machine learning workflow?

    -Vertex AI provides a simplified machine learning workflow in one central place, covering data preparation (ingestion, analysis, and transformation) through managed datasets, model training (AutoML or custom), model evaluation and optimization (including explainable AI), and model deployment for online and batch predictions.
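
    As a rough illustration of what working in "one central place" looks like programmatically, the sketch below initializes the Vertex AI Python SDK (the google-cloud-aiplatform package), which the later sketches in this Q&A also assume. The project ID, region, and bucket name are placeholders, not values from the video.

```python
# Minimal setup sketch, assuming `pip install google-cloud-aiplatform` and
# Google Cloud authentication (e.g. `gcloud auth application-default login`).
# The project ID, region, and staging bucket below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="my-project-id",          # placeholder project ID
    location="us-central1",           # region where Vertex AI resources live
    staging_bucket="gs://my-bucket",  # used to stage training artifacts
)
```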

  • What is the difference between AutoML and custom models in Vertex AI?

    -AutoML is a no-code solution for tasks like image, video, text, and tabular data, where Vertex AI automatically finds the best model. Custom models allow more control over model architecture, enabling users to write their own code using frameworks like TensorFlow or PyTorch.
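
    As a hedged sketch of the AutoML path, the snippet below trains an AutoML image classification model with the SDK. It assumes the SDK has been initialized as above and that a managed image dataset already exists (dataset creation is sketched further down); the dataset ID and display names are placeholders.

```python
# Hedged sketch: AutoML image classification with the Vertex AI SDK.
# Assumes aiplatform.init(...) has already been called.
from google.cloud import aiplatform

dataset = aiplatform.ImageDataset("1234567890")  # placeholder dataset ID

job = aiplatform.AutoMLImageTrainingJob(
    display_name="flowers-automl",     # placeholder display name
    prediction_type="classification",
)
model = job.run(
    dataset=dataset,
    model_display_name="flowers-model",
    budget_milli_node_hours=8000,      # roughly 8 node hours of training budget
)
```

    The custom-model path is sketched under the custom training question later in this Q&A.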

  • How does Vertex AI handle model evaluation and optimization?

    -Vertex AI provides tools for assessing and optimizing trained models, as well as explainable AI capabilities that allow users to dive deeper and understand which factors influence the model's predictions.
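
    As a rough sketch only, feature attributions can be requested from a deployed model along these lines; it assumes the model was deployed with explanation metadata and parameters configured, and the endpoint ID and instance fields are invented placeholders.

```python
# Rough sketch: requesting feature attributions (explainable AI).
# Assumes a model deployed with explanation metadata/parameters configured;
# the endpoint ID and the instance payload are placeholders.
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint("1234567890")  # placeholder endpoint ID

response = endpoint.explain(
    instances=[{"feature_a": 1.0, "feature_b": "red"}]  # hypothetical features
)
for explanation in response.explanations:
    for attribution in explanation.attributions:
        print(attribution.feature_attributions)  # per-feature contribution scores
```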

  • How are models deployed in Vertex AI for online predictions?

    -Models are deployed to an endpoint, which includes all the necessary physical resources and scalable hardware to serve the model for online predictions via the API or the console. Endpoints can be configured to auto-scale compute resources based on traffic, and traffic can be split across multiple endpoints.
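
    A minimal sketch of that deployment flow with the SDK follows; the machine type, replica counts, and prediction payload are placeholders, and the exact instance format depends on the model.

```python
# Hedged sketch: deploy a trained model to an endpoint, then query it online.
# Replica counts control autoscaling under traffic; all values are placeholders.
from google.cloud import aiplatform

model = aiplatform.Model("1234567890")  # placeholder model ID

endpoint = model.deploy(
    deployed_model_display_name="flowers-v1",
    machine_type="n1-standard-4",
    min_replica_count=1,     # keep at least one replica warm
    max_replica_count=3,     # scale out as traffic grows
    traffic_percentage=100,  # send all endpoint traffic to this model
)

prediction = endpoint.predict(instances=[{"feature_a": 1.0}])  # payload depends on the model
print(prediction.predictions)
```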

  • How can users make batch predictions in Vertex AI?

    -For batch predictions, users can leverage undeployed models and make predictions on batches of data stored in Cloud Storage.
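
    A hedged sketch of a batch prediction job over files in Cloud Storage; the bucket paths, display name, and machine type are placeholders.

```python
# Hedged sketch: batch prediction with an undeployed model, reading inputs
# from Cloud Storage and writing results back to Cloud Storage.
from google.cloud import aiplatform

model = aiplatform.Model("1234567890")  # placeholder model ID

batch_job = model.batch_predict(
    job_display_name="flowers-batch",
    gcs_source="gs://my-bucket/batch_inputs.jsonl",          # input instances (placeholder path)
    gcs_destination_prefix="gs://my-bucket/batch_outputs/",  # where results are written
    machine_type="n1-standard-4",
    sync=True,  # block until the job finishes
)
print(batch_job.state)
```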

  • What are the different components of the Vertex AI console dashboard?

    -The Vertex AI console dashboard includes sections for managing datasets, notebook instances, training jobs (AutoML, AutoML Edge, and custom training), models, endpoints, and batch predictions, providing a centralized interface for the entire machine learning workflow.

  • What types of data are supported by Vertex AI datasets?

    -Vertex AI supports datasets for images, tabular data, text, and videos. For other use cases not falling into these categories, users can still leverage Vertex AI for custom model training and predictions.
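
    As a hedged example of creating one of these managed datasets with the SDK, the sketch below builds an image dataset from an import file in Cloud Storage; the display name, bucket path, and single-label classification schema are assumptions for illustration.

```python
# Hedged sketch: creating a managed image dataset from an import file.
# The display name and Cloud Storage path are placeholders; the schema shown
# is for single-label image classification.
from google.cloud import aiplatform

dataset = aiplatform.ImageDataset.create(
    display_name="flowers",
    gcs_source="gs://my-bucket/flowers_import.csv",  # image URIs plus labels
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)
print(dataset.resource_name)
```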

  • How does Vertex AI handle custom model training with different frameworks?

    -For custom model training, Vertex AI provides pre-built containers for supported frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost, where users can provide their code as a Python package. Additionally, users can build custom containers with any framework or language and run training on Vertex AI.
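
    A hedged sketch of the pre-built-container path follows: a local training script is packaged and run on a pre-built TensorFlow training container, with a pre-built serving container attached so the run produces a deployable model. The script path, display name, and container image URIs are illustrative placeholders; check the Vertex AI documentation for current image URIs.

```python
# Hedged sketch: custom training with pre-built containers.
# script_path points at your own training code; image URIs are examples only.
from google.cloud import aiplatform

job = aiplatform.CustomTrainingJob(
    display_name="my-custom-training",
    script_path="trainer/task.py",  # your local training script
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",  # example pre-built training image
    model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",  # example serving image
)
model = job.run(
    replica_count=1,
    machine_type="n1-standard-4",  # GPUs and hyperparameter tuning can also be configured
)
```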

Outlines

00:00

🚀 An Introduction to Vertex AI and the Machine Learning Workflow

This paragraph introduces Vertex AI, a platform that provides tools for every step of the machine learning workflow across different model types and varying levels of expertise. It explains the typical machine learning workflow, which includes ingesting and transforming data, creating and training models, evaluating and optimizing models, and deploying models for predictions. Vertex AI simplifies this workflow by offering tools for data preparation, model training (AutoML or custom), model evaluation and optimization, and model deployment for online and batch predictions.

05:03

📚 Exploring Vertex AI's Features and Console

This paragraph delves into the specific features and capabilities of Vertex AI's console. It discusses the different training options available, including AutoML, AutoML Edge, and Custom Training with pre-built or custom containers for various frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost. It also describes the process of creating endpoints for serving models, managing resources, and making online and batch predictions. Additionally, it provides an overview of the Vertex AI console's dashboard, highlighting sections for data sets, notebooks, training jobs, models, endpoints, and predictions.

Keywords

💡Machine Learning Workflow

The machine learning workflow refers to the end-to-end process of developing a machine learning model, from data ingestion and preparation to model training, evaluation, deployment, and making predictions. The video script outlines the typical machine learning workflow steps: defining the prediction task, ingesting and transforming data, creating and training the model, evaluating and optimizing the model, deploying the model, and then making predictions using the deployed model.

💡Vertex AI

Vertex AI is a unified platform provided by Google Cloud that simplifies and streamlines the machine learning workflow. It provides tools and services for every step of the machine learning process, catering to varying levels of expertise, from novice to expert users. The video script introduces Vertex AI as a central place that offers a simplified machine learning workflow, including data ingestion, analysis, transformation, model training, evaluation, optimization, deployment, and serving predictions.

💡Data Ingestion and Preparation

Data ingestion and preparation are crucial initial steps in the machine learning workflow. It involves acquiring and importing data from various sources into the platform, followed by analyzing, labeling, and transforming the data to prepare it for model training. The video script mentions that in Vertex AI, data ingestion, analysis, and transformation are carried out using managed data sets, where users can import data, label it, and annotate it within the console or via APIs.

💡AutoML

AutoML (Automatic Machine Learning) is a feature of Vertex AI that automates the process of training machine learning models for specific types of data and use cases, such as images, videos, text files, and tabular data. As explained in the video script, with AutoML, users do not need to write any model code; Vertex AI will automatically find and train the best model for the given task, making it suitable for users with varying levels of machine learning expertise.

💡Custom Models

Custom models refer to machine learning models that are built using custom code and architectures, often written in frameworks like TensorFlow or PyTorch. The video script explains that Vertex AI supports custom models, allowing users to have more control over the model's architecture and write their own code. This option is suitable for use cases where more customization and flexibility are required, catering to users with advanced machine learning expertise.

💡Model Training

Model training is the process of feeding data into a machine learning algorithm to learn patterns and relationships, thereby building a model that can make predictions or decisions. The video script highlights that Vertex AI offers two options for model training: AutoML for automating the training process, and custom models for training models built with custom code and frameworks like TensorFlow or PyTorch.

💡Model Evaluation and Optimization

Model evaluation and optimization involve assessing the performance and efficiency of a trained machine learning model and making necessary adjustments or improvements. As mentioned in the video script, Vertex AI provides tools to evaluate and optimize models, including explainable AI (XAI) techniques that allow users to understand the factors influencing the model's predictions, enabling them to fine-tune and enhance the model's performance.

💡Model Deployment

Model deployment refers to the process of making a trained machine learning model available for serving predictions or inferences. The video script explains that in Vertex AI, once a model is satisfactory, it can be deployed to an endpoint, which includes provisioning the necessary physical resources and scalable hardware to serve online predictions with low latency. The deployed model can be accessed through various interfaces, such as the command line, console UI, SDK, or APIs.

💡Predictions

Predictions (or inferences) are the outputs generated by a deployed machine learning model when given new input data. The video script mentions that Vertex AI supports both online predictions, where the deployed model is queried via API calls for real-time predictions, and batch predictions, where the undeployed model is used to make predictions on a batch of data stored in Cloud Storage.

💡Vertex AI Console

The Vertex AI Console is the user interface provided by Google Cloud that allows users to manage and interact with various components of the Vertex AI platform. The video script provides an overview of the Vertex AI Console dashboard, highlighting sections for managing data sets, notebooks, training jobs, models, endpoints, and making predictions, enabling users to navigate and control the entire machine learning workflow from a centralized location.

Highlights

Vertex AI provides tools for every step of the machine learning workflow across different model types, for varying levels of machine learning expertise.

The typical machine learning workflow involves ingesting data, analyzing and transforming it, creating and training the model, evaluating and optimizing the model, and finally deploying it to make predictions.

With Vertex AI, you get a simplified machine learning workflow in one central place, covering data preparation, model training, evaluation, optimization, deployment, and prediction.

For data preparation, you can create managed datasets within Vertex AI, import data, and label or annotate the data.

For model training, you have the option of AutoML or custom models, depending on your expertise and level of control required.

AutoML works well for use cases like images, videos, text files, and tabular data, where Vertex AI finds the best model for the task without requiring you to write code.

Custom models allow you to have more control over the model's architecture, and are suitable for using frameworks like TensorFlow or PyTorch with your own code.

After training, you can assess, optimize, and understand the factors behind your model's predictions using explainable AI.

Once you're satisfied with the model, you can deploy it to an endpoint to serve online predictions, which includes scalable hardware resources.

You can also use the undeployed model for batch predictions.

The Vertex AI console provides a dashboard to manage the machine learning workflow, including data sets, notebooks, training jobs, models, endpoints, and predictions.

In the console, you can create data sets for different data types like images, tabular data, text, and videos.

You can create custom notebook instances with specific environments and GPUs.

For training, you can use AutoML, AutoML Edge (for edge devices), or custom training with pre-built or custom containers for various frameworks like TensorFlow, PyTorch, scikit-learn, and XGBoost.

You can import models trained outside of Google Cloud to serve for online and batch predictions.
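
A hedged sketch of importing an externally trained model: the model artifacts are uploaded from Cloud Storage together with a serving container, after which the model can be deployed or used for batch prediction. The artifact path and serving image URI are placeholders and must match how the model was exported.

```python
# Hedged sketch: importing (uploading) a model trained outside Google Cloud.
# The artifact path and serving container are placeholders; the container
# must be able to load the exported artifacts.
from google.cloud import aiplatform

model = aiplatform.Model.upload(
    display_name="external-model",
    artifact_uri="gs://my-bucket/saved_model/",  # exported model artifacts
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",  # example pre-built serving image
)
print(model.resource_name)
```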

Transcripts

[00:00] PRIYANKA: Hi, I'm Priyanka [INAUDIBLE], and this is AI Simplified, where we will take the journey from datasets all the way to deployed machine learning models. No matter which service you offer today, it is crucial to use the data you have to make predictions so you can improve your apps and the user experience. But most teams have varying levels of machine learning expertise, ranging from novice all the way to expert. To accelerate AI innovation, you need a platform that can help build expertise for those novice users and provide a seamless and flexible environment for those experts. This is where Vertex AI comes in. It provides tools for every step of the machine learning workflow, across different model types, for varying levels of machine learning expertise.

[00:53] Before we take a look at Vertex AI, though, let's understand the typical machine learning workflow. After defining your prediction task, the first thing you do is ingest the data, analyze it, and then transform it. Then you create and train the model, evaluate that model for efficiency and optimization, and then deploy it to make predictions.

[01:15] Now, with Vertex AI, you get a simplified machine learning workflow in one central place. Ingestion, analysis, and transformation are really all about data preparation, and you do that using managed datasets within Vertex AI. You have tools to create the dataset by importing your data using the console, or just the API. You can also label and annotate the data right from within the console.

[01:44] For model training, you have two options: AutoML or custom. With varying machine learning expertise on the team, for some use cases such as images, videos, text files, and tabular data, AutoML works great. With AutoML, you don't need to write any of the model code; Vertex AI will take care of finding the best model for that task. For other use cases where you would like more control over your model's architecture, use custom models. Custom models are great for frameworks, architectures, and code that you want to write yourself, so this works great for TensorFlow or PyTorch code.

[02:30] Once that model is trained, you have the ability to assess that model, optimize it, and even understand the signals behind your model's predictions. You do that with explainable AI, which lets you dive deeper into your model and understand which factors are playing a role in defining what that model is predicting.

[02:54] Once you're happy with the model, you deploy it to an endpoint to serve it for online predictions using the API or the console. This deployment includes all the physical resources and the scalable hardware needed to serve that model for low-latency online predictions. You can, of course, use the undeployed model for batch predictions. Once the model is deployed, you can get predictions using either the command-line interface, the console UI, or the SDK and the APIs.

[03:31] At this point, you might be wondering how this looks in the console and where to find it, so let me give you a little tour of the dashboard. In the console, when we click on Vertex AI, we land on the dashboard, where you can see the recent datasets, recent models, and get predictions. On the left, you have all the steps that are involved in the machine learning workflow, all the way from datasets to predictions.

[03:59] In Datasets, you can create your datasets, depending on the type of data and your prediction task. Supported data types include image, tabular, text, and video. If your prediction task doesn't fall into one of these use cases, don't worry: you can still use Vertex AI for your custom model training and prediction. Once you have created a dataset, you can see it listed in the dataset list.

[04:24] In the Notebooks section of the console, you can create your customized notebook instances with the type of environment and GPUs you want.

[04:43] In the Training tab, you can see and create your training jobs. The beauty is that you can have one dataset, but you can train it in different ways. With AutoML, you can train a high-quality model with minimal effort. AutoML Edge is for models that are optimized for edge devices. And with the Custom Training option, you can train models built with any framework, via pre-built or custom containers. Pre-built containers are available for supported frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost; you provide your code as a Python package. The Custom Containers option allows you to train models built with any framework or language by putting your training application code in a Docker container, pushing it to Container Registry, and running the training on Vertex AI. You can accelerate the training with GPUs, and also apply hyperparameter tuning.

[05:45] In the Models tab, you can see all the models you have created. Here, you can also import models trained outside of Google Cloud to serve them for online and batch predictions. In order to use a model, you create an endpoint, which brings us to our next step.

[06:03] Creating an endpoint is how you serve your models for online predictions. Each model can have multiple endpoints. You enter the compute resources so your endpoint can auto-scale the resources based on your traffic. You can even split traffic across endpoints and send model logs to Cloud Logging. Once the model is live, we can make predictions in the UI or through the SDK.

[06:30] In the Batch Predictions tab, you can make predictions on a batch of data from Cloud Storage.

[06:37] This was a pretty high-level overview where we saw that Vertex AI provides the tools to support your entire machine learning workflow, from data management all the way to predictions. In the next episodes, we will dive much deeper into all of these steps and build an end-to-end machine learning workflow. In the meantime, let's continue our discussion in the comments below. I'm excited to hear all about your machine learning use cases and workflows.