Introduction to Generative AI

Google Cloud Tech
8 May 2023 · 22:07

Summary

TL;DR: Dr. Gwendolyn Stripling introduces Google's course on Generative AI, covering key concepts like AI, machine learning, and deep learning. The course explains the differences between supervised and unsupervised learning, the role of neural networks, and the emergence of generative AI models. It highlights practical applications, such as text-to-image and text-to-video generation, and tools like Bard, Vertex AI, and PaLM API for developing and deploying AI solutions. The course aims to provide a comprehensive understanding of generative AI and its potential to revolutionize various industries.

Takeaways

  • AI is a branch of computer science that deals with creating intelligent agents capable of reasoning, learning, and acting autonomously.
  • Machine learning is a subfield of AI where models learn from input data to make predictions on new, unseen data.
  • Supervised learning involves models trained on labeled data, whereas unsupervised learning deals with unlabeled data.
  • Deep learning is a subset of machine learning that uses artificial neural networks to process complex patterns, often with many layers.
  • Generative AI is a subset of deep learning that uses neural networks to generate new content based on learned patterns from existing data.
  • Discriminative models predict labels for data points, while generative models create new data instances based on learned distributions.
  • Generative models can produce various types of content, including text, images, audio, and synthetic data.
  • Foundation models are pre-trained on vast amounts of data and can be adapted for numerous downstream tasks, impacting industries like healthcare and finance.
  • Prompt design is crucial for controlling the output of large language models, which can generate human-like text in response to a wide range of prompts.
  • Transformers drove a 2018 revolution in natural language processing with their encoder-decoder architecture, enabling more complex pattern recognition.
  • Hallucinations in AI refer to nonsensical or incorrect text generated by models, often due to insufficient training data or context.
  • Generative AI Studio and Gen AI App Builder provide tools for developers to create and deploy AI models and applications without extensive coding.

Q & A

  • What is the main focus of the course 'Introduction to Generative AI'?

    -The course 'Introduction to Generative AI' focuses on teaching students to define generative AI, explain its working principles, describe different types of generative AI models, and discuss various applications of generative AI.

  • How is Generative AI defined in the context of this course?

    -Generative AI is defined as a type of artificial intelligence technology capable of producing various types of content, including text, imagery, audio, and synthetic data.

  • What is the relationship between AI and machine learning according to the script?

    -AI is a broader discipline, like physics, dealing with the creation of intelligent agents that can reason, learn, and act autonomously. Machine learning is a subfield of AI that involves training a model from input data to make predictions on new, unseen data.

  • What are the two main classes of machine learning models mentioned in the script?

    -The two main classes of machine learning models are supervised and unsupervised ML models. Supervised models use labeled data, while unsupervised models work with unlabeled data.

  • How does a supervised learning model differ from an unsupervised learning model in terms of data usage?

    -In supervised learning, models are trained on labeled data, which includes tags like names or numbers. In contrast, unsupervised learning involves working with unlabeled data that has no tags, focusing on discovering natural groupings within the data.

  • What is deep learning in relation to machine learning methods?

    -Deep learning is a subset of machine learning that uses artificial neural networks to process more complex patterns than traditional machine learning models. It typically involves many layers of neurons, allowing the models to learn from both labeled and unlabeled data.

  • How does a generative AI model differ from a discriminative model?

    -A generative model generates new data instances based on a learned probability distribution of existing data, creating new content. A discriminative model, on the other hand, is used to classify or predict labels for data points based on learned relationships between data features and labels.

  • What is the role of a prompt in the context of generative AI?

    -A prompt is a short piece of text given to a large language model as input. It is used to control the output of the model, guiding it to generate the desired response based on the patterns and structures learned from the training data.

  • What are some of the potential applications of generative AI mentioned in the script?

    -Potential applications of generative AI include code generation, sentiment analysis, image and video generation, question answering, and creating digital assistants, custom search engines, knowledge bases, and training applications.

  • What is the significance of transformers in the power of generative AI?

    -Transformers drove a 2018 revolution in natural language processing. They consist of an encoder and a decoder, allowing the model to effectively process and generate human-like text in response to a wide range of prompts and questions.

  • How can Generative AI Studio and Generative AI App Builder assist developers?

    -Generative AI Studio provides a variety of tools and resources, including a library of pre-trained models, tools for fine-tuning and deploying models, and a community forum for developers. Generative AI App Builder allows developers to create gen AI apps without writing code, offering a drag-and-drop interface, a visual editor, a built-in search engine, and a conversational AI engine.

Outlines

00:00

Introduction to Generative AI and AI Concepts

Dr. Gwendolyn Stripling introduces the Generative AI course, explaining its aim to teach the definition, workings, models, types, and applications of Generative AI. She clarifies AI as a discipline akin to physics, focusing on creating autonomous systems capable of reasoning and learning. Machine learning, a subset of AI, is explored, with a distinction made between supervised and unsupervised learning. The importance of labeled and unlabeled data is highlighted, along with examples of how these models work in different scenarios, such as predicting tips in a restaurant or clustering employees based on tenure and income.

05:01

Deep Learning and Generative AI's Role

The script delves into deep learning as a subset of machine learning, utilizing artificial neural networks to process complex patterns. It explains semi-supervised learning, where neural networks leverage both labeled and unlabeled data. Generative AI is introduced as a subset of deep learning, using neural networks to generate new content. The difference between generative and discriminative models is outlined, with the former creating new data instances and the latter classifying or predicting labels. The script also illustrates how generative AI can produce various types of content, including natural language, images, and audio, based on learned patterns.
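
As a minimal sketch of the semi-supervised idea described above, the snippet below (not from the course) uses scikit-learn's self-training wrapper on synthetic data: a small labeled set teaches the basic concept and a larger unlabeled set helps the model generalize. The data and model choice are illustrative assumptions, not the neural-network setup the course has in mind.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic data: 500 examples, but only 50 keep their labels.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
y_partial = y.copy()
rng = np.random.default_rng(0)
unlabeled_idx = rng.choice(len(y), size=450, replace=False)
y_partial[unlabeled_idx] = -1  # -1 marks "unlabeled" for scikit-learn

# The labeled points teach the basic concept; the unlabeled points are
# pseudo-labeled iteratively so the model can generalize to new examples.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_partial)
print(model.score(X, y))  # accuracy against the full (held-back) labels
```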

10:03

Generative AI's Functionality and Transformer Models

This paragraph discusses the capabilities of generative AI, including its ability to generate new content based on learned structures from training data. It introduces large language models as a type of generative AI that creates novel text combinations. The script also covers different types of generative models, such as text-to-text, text-to-image, text-to-video, and text-to-3D, each with specific applications. The transformative impact of transformer models in natural language processing is highlighted, along with the concept of hallucinations and their potential issues. The importance of prompt design in controlling model output is also emphasized.

15:05

Training Data's Impact and Model Types in Generative AI

The role of training data in shaping generative AI's capabilities is examined, with an explanation of how models learn from input data patterns. Various model types are introduced, including text-to-text for translations, text-to-image for generating images from descriptions, and text-to-video and text-to-3D for creating videos and 3D objects from text. The paragraph also discusses text-to-task models that perform actions based on text input. Foundation models, which are pre-trained on large datasets for adaptation to various tasks, are highlighted for their potential to revolutionize industries. Examples of applications and models available in Vertex AI's model garden are provided.

20:06

Tools and Applications of Generative AI

The script concludes with an overview of tools and applications for leveraging generative AI. It introduces Generative AI Studio, a platform for exploring and customizing AI models, and Generative AI App Builder, which allows for code-free app creation with a visual interface. The capabilities of PaLM API for experimenting with Google's language models are discussed, along with the Maker suite's tools for model training, deployment, and monitoring. The paragraph showcases the versatility of generative AI in tasks like code generation, sentiment analysis, and occupancy analytics, emphasizing the ease of prototyping and the community support for developers.

Keywords

Generative AI

Generative AI refers to artificial intelligence systems capable of creating new content, such as text, images, audio, and synthetic data. It is a key theme of the video, as it discusses the capabilities and applications of this technology. The script provides examples of generative AI in action, including generating text, images, and audio based on learned patterns from existing data.

Artificial Intelligence (AI)

Artificial Intelligence, or AI, is a branch of computer science that focuses on creating intelligent agents capable of reasoning, learning, and acting autonomously. The video script defines AI and distinguishes it from machine learning, highlighting its role in the broader field of generative AI.

Machine Learning

Machine learning is a subset of AI that involves training models on input data so they can make predictions on new, unseen data. The script explains the concept of machine learning and its importance in the development of generative AI, including the distinction between supervised and unsupervised learning models.

Supervised Learning

Supervised learning is a type of machine learning where models are trained on labeled data, which includes tags like names or numbers. The script uses an example of a restaurant owner using historical data to predict tips, illustrating how supervised learning models learn from past examples to make future predictions.
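
A minimal sketch of the tip-prediction scenario, assuming made-up bills, tips, and a hypothetical pickup/delivery flag (the course only describes the idea, so the numbers and feature encoding here are invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical labeled data: each row is [bill amount, delivered (1) or picked up (0)],
# and the label is the tip that was actually paid.
X = np.array([[20.0, 0], [35.0, 1], [50.0, 0], [15.0, 1], [60.0, 0]])
y = np.array([3.0, 4.5, 8.0, 1.5, 10.0])

model = LinearRegression().fit(X, y)

# Predict the tip for a new, unseen $40 delivered order.
print(model.predict(np.array([[40.0, 1]])))
```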

Unsupervised Learning

Unsupervised learning involves training models on unlabeled data, where the data does not come with any tags. The script describes unsupervised learning as a discovery process, where the model looks for natural groupings within the data, such as clustering employees based on tenure and income.
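
A minimal sketch of the employee-clustering scenario with invented tenure and income values; k-means is one common way to look for natural groupings in unlabeled data, though the course does not name a specific algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: each row is [tenure in years, income in $ thousands]; no tags attached.
X = np.array([[1, 40], [2, 45], [3, 50], [8, 120], [9, 130], [10, 140]])

# Ask for two clusters and see which natural grouping each employee falls into.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)  # the center of each discovered group
```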

Deep Learning

Deep learning is a subset of machine learning that uses artificial neural networks to process complex patterns. The script positions deep learning as a foundational technology for generative AI, with its ability to learn from both labeled and unlabeled data through various learning methods.

Neural Networks

Neural networks are computing systems inspired by the human brain, consisting of interconnected nodes or neurons that process data and make predictions. The script explains that deep learning models, which include neural networks, have many layers allowing them to learn complex patterns, essential for generative AI.
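
To make "many layers of interconnected neurons" concrete, here is a tiny hand-rolled forward pass in NumPy; the weights are random and nothing is trained, so this sketches only the structure of a deep network, not the learning process:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer of 'neurons': weighted sum plus a nonlinearity."""
    w = rng.normal(size=(x.shape[-1], n_out))  # connection weights between neurons
    b = np.zeros(n_out)                        # biases
    return np.maximum(0, x @ w + b)            # ReLU activation

x = rng.normal(size=(1, 4))   # one input example with 4 features
h1 = layer(x, 8)              # hidden layer 1
h2 = layer(h1, 8)             # hidden layer 2 -- "deep" means stacking many such layers
prediction = layer(h2, 1)     # output layer
print(prediction)
```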

Generative Model

A generative model is a type of machine learning model that learns the probability distribution of data and generates new data instances. The script contrasts generative models with discriminative models, emphasizing that generative models create new content based on learned patterns, such as generating images or text.

Discriminative Model

Discriminative models are used to classify or predict labels for data points based on learned relationships between features and labels. The script uses the example of a model classifying whether an image is a cat or a dog, illustrating the difference between generating new content (generative models) and classifying existing content (discriminative models).
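
A minimal sketch contrasting the two families on the same toy data (numeric stand-ins for the cat/dog images in the course): logistic regression learns P(y | x) and can only assign labels, while Gaussian Naive Bayes fits a per-class distribution of the features, so it can also be used to sample a new, synthetic data point. The data and model choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Toy 2-D features for two classes (stand-ins for "cat" and "dog").
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Discriminative: learns the conditional probability P(y | x) and predicts a label.
disc = LogisticRegression().fit(X, y)
print(disc.predict(np.array([[2.5, 2.5]])))

# Generative: learns per-class feature distributions (part of the joint P(x, y)),
# so we can draw a brand-new sample that resembles class 1.
gen = GaussianNB().fit(X, y)
new_instance = rng.normal(gen.theta_[1], np.sqrt(gen.var_[1]))
print(new_instance)
```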

Transformers

Transformers are a type of deep learning architecture that revolutionized natural language processing. The script describes transformers as consisting of encoders and decoders, which process input sequences and generate outputs for tasks. Transformers are crucial for the power of generative AI, especially in processing and generating natural language.
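
A minimal sketch of the encoder-decoder idea using PyTorch's built-in transformer module, with random tensors standing in for embedded input and target sequences; it illustrates the architecture only, not a trained language model:

```python
import torch
import torch.nn as nn

# A small encoder-decoder transformer: the encoder encodes the input sequence
# and passes its representation to the decoder, which decodes it for the task.
model = nn.Transformer(d_model=64, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)

src = torch.rand(1, 10, 64)  # embedded input sequence: (batch, tokens, d_model)
tgt = torch.rand(1, 7, 64)   # embedded target sequence generated so far

out = model(src, tgt)        # one output vector per target position
print(out.shape)             # torch.Size([1, 7, 64])
```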

Prompt

A prompt is a short text input given to a large language model to guide its output. The script discusses prompt design as a way to control the generation of content from generative AI models, with examples of how different prompts can elicit specific responses or content creation.
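
A minimal sketch of prompt design as plain string templating; the instruction, the few-shot examples, and the `send_to_llm` call are all invented for illustration and stand in for whichever large language model API is actually used:

```python
def build_prompt(review: str) -> str:
    """Assemble a short few-shot prompt that steers the model toward a desired output."""
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        "Review: The food was cold and the service was slow.\n"
        "Sentiment: negative\n\n"
        "Review: Great atmosphere and the staff were lovely.\n"
        "Sentiment: positive\n\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

prompt = build_prompt("The pasta was delicious but the wait was long.")
print(prompt)
# response = send_to_llm(prompt)  # hypothetical call to a large language model
```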

Foundation Models

Foundation models are large AI models pre-trained on vast amounts of data and designed to be adapted to various tasks. The script mentions foundation models as having the potential to revolutionize industries and provides examples of their applications, such as sentiment analysis and object recognition.

Generative AI Studio

Generative AI Studio is a tool mentioned in the script that allows developers to explore and customize generative AI models for their applications on Google Cloud. It provides resources and tools for developers to create and deploy generative AI models, emphasizing its role in facilitating the development process.

PaLM API

PaLM API, as discussed in the script, allows developers to test and experiment with Google's large language models and generative AI tools. It is part of the toolset that enables quick prototyping and integration with the Maker suite for a graphical user interface experience.
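
At the time of the course, the PaLM text model could be called from Python through the Vertex AI SDK roughly as sketched below; the package, model name, project, and parameters are assumptions to verify against current Google Cloud documentation, since this API has since evolved:

```python
# pip install google-cloud-aiplatform   (assumed package; verify in the current docs)
import vertexai
from vertexai.language_models import TextGenerationModel

# Hypothetical project and region; substitute your own.
vertexai.init(project="my-gcp-project", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison@001")  # PaLM-era model name
response = model.predict(
    "Summarize generative AI in one sentence.",
    temperature=0.2,        # lower values give more deterministic output
    max_output_tokens=64,
)
print(response.text)
```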

Highlights

Introduction to Generative AI course by Dr. Gwendolyn Stripling, focusing on defining, explaining, and describing generative AI.

Generative AI's capability to produce diverse content such as text, imagery, audio, and synthetic data.

AI as a discipline akin to physics, dealing with the creation of intelligent agents capable of autonomous reasoning and learning.

Machine learning as a subfield of AI, enabling computers to learn from input data without explicit programming.

Differentiation between supervised and unsupervised machine learning models based on the presence or absence of labeled data.

Supervised learning's application in predicting future values, exemplified by a restaurant's tip prediction model.

Unsupervised learning's focus on data discovery and grouping, illustrated with employee clustering based on tenure and income.

Deep learning as a subset of machine learning, utilizing artificial neural networks to process complex patterns.

The concept of semi-supervised learning, combining a small amount of labeled data with a large amount of unlabeled data.

Generative AI's position as a subset of deep learning, employing neural networks for both labeled and unlabeled data.

Discriminative vs. generative models in deep learning, with the latter creating new data instances based on learned distributions.

Generative AI's ability to generate new content, distinguishing it from traditional AI which predicts or classifies existing data.

The importance of prompt design in controlling the output of large language models in generative AI.

The role of transformers in the 2018 revolution of natural language processing, influencing generative AI capabilities.

Challenges of hallucinations in transformers, where nonsensical or incorrect phrases are generated.

Foundation models as pre-trained AI models adaptable for various tasks, with potential to revolutionize industries.

Google's Vertex AI model garden offering foundation models for tasks like sentiment analysis and object recognition.

Generative AI Studio's tools and resources for developers to create and deploy Gen AI models on Google Cloud.

Generative AI App Builder, enabling code-free creation of gen AI apps with a visual editor and conversational AI engine.

PaLM API's accessibility for developers to experiment with Google's large language models through a graphical user interface.

Transcripts

play00:00

GWENDOLYN STRIPLING: Hello.

play00:01

And welcome to Introduction to Generative AI.

play00:04

My name is Dr. Gwendolyn Stripling.

play00:06

And I am the artificial intelligence

play00:09

technical curriculum developer here at Google Cloud.

play00:14

In this course, you learn to define generative AI,

play00:18

explain how generative AI works, describe generative AI model

play00:23

types, and describe generative AI applications.

play00:28

Generative AI is a type of artificial intelligence

play00:31

technology that can produce various types of content,

play00:36

including text, imagery, audio, and synthetic data.

play00:41

But what is artificial intelligence?

play00:44

Well, since we are going to explore

play00:46

generative artificial intelligence,

play00:48

let's provide a bit of context.

play00:51

So two very common questions asked

play00:53

are what is artificial intelligence

play00:55

and what is the difference between AI and machine

play01:00

learning.

play01:01

One way to think about it is that AI is a discipline,

play01:05

like physics for example.

play01:08

AI is a branch of computer science

play01:11

that deals with the creation of intelligence agents, which

play01:15

are systems that can reason, and learn, and act autonomously.

play01:20

Essentially, AI has to do with the theory and methods

play01:24

to build machines that think and act like humans.

play01:30

In this discipline, we have machine learning,

play01:33

which is a subfield of AI.

play01:35

It is a program or system that trains a model from input data.

play01:40

That trained model can make useful predictions

play01:42

from new or never before seen data

play01:45

drawn from the same one used to train the model.

play01:49

Machine learning gives the computer

play01:51

the ability to learn without explicit programming.

play01:56

Two of the most common classes of machine learning models

play01:59

are unsupervised and supervised ML models.

play02:03

The key difference between the two

play02:05

is that, with supervised models, we have labels.

play02:09

Labeled data is data that comes with a tag like a name, a type,

play02:14

or a number.

play02:16

Unlabeled data is data that comes with no tag.

play02:20

This graph is an example of the problem

play02:23

that a supervised model might try to solve.

play02:26

For example, let's say you are the owner of a restaurant.

play02:29

You have historical data of the bill amount

play02:32

and how much different people tipped based on order type

play02:36

and whether it was picked up or delivered.

play02:39

In supervised learning, the model learns from past examples

play02:42

to predict future values, in this case tips.

play02:47

So here the model uses the total bill amount

play02:49

to predict the future tip amount based on whether an order was

play02:54

picked up or delivered.

play02:57

This is an example of the problem

play02:58

that an unsupervised model might try to solve.

play03:02

So here you want to look at tenure and income

play03:05

and then group or cluster employees

play03:08

to see whether someone is on the fast track.

play03:11

Unsupervised problems are all about discovery,

play03:14

about looking at the raw data and seeing if it naturally

play03:18

falls into groups.

play03:21

Let's get a little deeper and show this graphically

play03:24

as understanding these concepts is

play03:27

the foundation for your understanding of generative AI.

play03:31

In supervised learning, testing data values or x

play03:35

are input into the model.

play03:37

The model outputs a prediction and compares that prediction

play03:42

to the training data used to train the model.

play03:45

If the predicted test data values and actual training data

play03:50

values are far apart, that's called error.

play03:54

And the model tries to reduce this error

play03:56

until the predicted and actual values are closer together.

play04:01

This is a classic optimization problem.
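
As a minimal sketch (not from the course) of the error-reduction loop just described, here is plain gradient descent on a squared-error loss with made-up bill and tip values; each step nudges the parameter so predicted and actual values move closer together:

```python
# Hypothetical training data: bill amounts (x) and the tips actually paid (y).
x = [20.0, 35.0, 50.0, 15.0, 60.0]
y = [3.0, 4.5, 8.0, 1.5, 10.0]

w = 0.0               # model parameter: predicted tip = w * bill
learning_rate = 1e-4

for step in range(1000):
    # Gradient of the mean squared error between predictions and training labels.
    grad = sum(2 * (w * xi - yi) * xi for xi, yi in zip(x, y)) / len(x)
    w -= learning_rate * grad  # adjust w to reduce the error

print(round(w, 3), [round(w * xi, 2) for xi in x])  # learned tip rate and fitted tips
```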

play04:05

Now that we've explored the difference

play04:07

between artificial intelligence and machine learning,

play04:10

and supervised and unsupervised learning,

play04:13

let's briefly explore where deep learning

play04:16

fits as a subset of machine learning methods.

play04:20

While machine learning is a broad field that

play04:22

encompasses many different techniques,

play04:25

deep learning is a type of machine learning

play04:27

that uses artificial neural networks,

play04:29

allowing them to process more complex patterns than machine

play04:32

learning.

play04:34

Artificial neural networks are inspired by the human brain.

play04:37

They are made up of many interconnected nodes or neurons

play04:41

that can learn to perform tasks by processing data and making

play04:46

predictions.

play04:47

Deep learning models typically have many layers

play04:49

of neurons, which allows them to learn

play04:52

more complex patterns than traditional machine learning

play04:55

models.

play04:56

And neural networks can use both labeled and unlabeled data.

play05:00

This is called semi-supervised learning.

play05:03

In semi-supervised learning, a neural network

play05:06

is trained on a small amount of labeled data

play05:09

and a large amount of unlabeled data.

play05:12

The labeled data helps the neural network

play05:15

to learn the basic concepts of the task

play05:17

while the unlabeled data helps the neural network

play05:20

to generalize to new examples.

play05:24

Now we finally get to where generative AI

play05:27

fits into this AI discipline.

play05:30

Gen AI is a subset of deep learning, which

play05:33

means it uses artificial neural networks,

play05:36

and can process both labeled and unlabeled data using

play05:40

supervised, unsupervised, and semi-supervised methods.

play05:45

Large language models are also a subset of deep learning.

play05:51

Deep learning models, or machine learning models in general,

play05:54

can be divided into two types, generative and discriminative.

play05:59

A discriminative model is a type of model

play06:02

that is used to classify or predict labels for data points.

play06:06

Discriminative models are typically

play06:08

trained on a data set of labeled data points.

play06:10

And they learn the relationship between the features

play06:14

of the data points and the labels.

play06:17

Once a discriminative model is trained,

play06:20

it can be used to predict the label for new data points.

play06:25

A generative model generates new data instances

play06:28

based on a learned probability distribution of existing data.

play06:33

Thus generative models generate new content.

play06:38

Take this example here.

play06:40

The discriminative model learns the conditional probability

play06:42

distribution or the probability of y,

play06:45

our output, given x, our input, that this is a dog

play06:50

and classifies it as a dog and not a cat.

play06:54

The generative model learns the joint probability distribution

play06:58

or the probability of x and y and predicts

play07:02

the conditional probability that this is a dog

play07:05

and can then generate a picture of a dog.

play07:09

So to summarize, generative models

play07:11

can generate new data instances while discriminative models

play07:16

discriminate between different kinds of data instances.

play07:21

The top image shows a traditional machine

play07:23

learning model which attempts to learn

play07:25

the relationship between the data and the label,

play07:28

or what you want to predict.

play07:30

The bottom image shows a generative AI model

play07:33

which attempts to learn patterns on content so that it

play07:36

can generate new content.

play07:40

A good way to distinguish what is gen AI and what is not

play07:43

is shown in this illustration.

play07:46

It is not gen AI when the output, or y, or label is

play07:51

a number or a class, for example spam or not spam,

play07:55

or a probability.

play07:57

It is gen AI when the output is natural language, like speech

play08:03

or text, an image or audio, for example.

play08:08

Visualizing this mathematically would look like this.

play08:12

If you haven't seen this for a while,

play08:14

the y is equal to f of x equation calculates

play08:18

the dependent output of a process given different inputs.

play08:23

The y stands for the model output.

play08:25

The f embodies the function used in the calculation.

play08:29

And the x represents the input or inputs used for the formula.

play08:33

So the model output is a function of all the inputs.

play08:36

If the y is the number, like predicted sales,

play08:41

it is not gen AI.

play08:43

If y is a sentence, like define sales,

play08:46

it is generative as the question would elicit a text response.

play08:51

The response would be based on all the massive large data

play08:55

the model was already trained on.

play08:59

To summarize at a high level, the traditional, classical

play09:03

supervised and unsupervised learning process

play09:06

takes training code and labeled data to build a model.

play09:09

Depending on the use case or problem,

play09:12

the model can give you a prediction.

play09:15

It can classify something or cluster something.

play09:18

We use this example to show you how much more robust

play09:22

the gen AI process is.

play09:25

The gen AI process can take training code, labeled data,

play09:29

and unlabeled data of all data types

play09:31

and build a foundation model.

play09:33

The foundation model can then generate new content.

play09:36

For example, text, code, images, audio, video, et cetera.

play09:42

We've come a long way from traditional programming

play09:45

to neural networks to generative models.

play09:48

In traditional programming, we used

play09:50

to have to hard code the rules for distinguishing a cat--

play09:53

the type, animal; legs, four; ears, two; fur, yes;

play10:00

likes yarn and catnip.

play10:03

In the wave of neural networks, we

play10:05

could give the network pictures of cats and dogs

play10:07

and ask is this a cat and it would predict a cat.

play10:12

In the generative wave, we as users

play10:15

can generate our own content, whether it

play10:18

be text, images, audio, video, et cetera, for example

play10:23

models like PaLM or Pathways Language Model,

play10:26

or LaMDA, Language Model for Dialogue Applications,

play10:30

ingest very, very large data from the multiple sources

play10:33

across the internet and build foundation language

play10:36

models we can use simply by asking a question,

play10:40

whether typing it into a prompt or verbally

play10:43

talking into the prompt itself.

play10:45

So when you ask it what's a cat, it

play10:48

can give you everything it has learned about a cat.

play10:52

Now we come to our formal definition.

play10:55

What is generative AI?

play10:57

Gen AI is a type of artificial intelligence

play11:00

that creates new content based on what it has

play11:02

learned from existing content.

play11:05

The process of learning from existing content

play11:07

is called training and results in the creation

play11:10

of a statistical model.

play11:13

When given a prompt, AI uses the model to predict what an expected response might

play11:18

be and this generates new content.

play11:21

Essentially, it learns the underlying structure

play11:24

of the data and can then generate

play11:26

new samples that are similar to the data it was trained on.

play11:31

As previously mentioned, a generative language model

play11:35

can take what it has learned from the examples it's

play11:38

been shown and create something entirely new

play11:41

based on that information.

play11:43

Large language models are one type of generative AI

play11:47

since they generate novel combinations of text

play11:52

in the form of natural sounding language.

play11:56

A generative image model takes an image

play11:59

as input and can output text, another image, or video.

play12:04

For example, under the output text,

play12:07

you can get visual question answering

play12:09

while under output image, an image completion is generated.

play12:14

And under output video, animation is generated.

play12:19

A generative language model takes text as input

play12:22

and can output more text, an image, audio, or decisions.

play12:27

For example, under the output text,

play12:29

question answering is generated.

play12:31

And under output image, a video is generated.

play12:35

We've stated that generative language models learn

play12:38

about patterns and language through training data,

play12:41

then, given some text, they predict what comes next.

play12:46

Thus generative language models are pattern matching systems.

play12:50

They learn about patterns based on the data you provide.

play12:54

Here is an example.

play12:57

Based on things it's learned from its training data,

play12:59

it offers predictions of how to complete this sentence,

play13:03

I'm making a sandwich with peanut butter and jelly.

play13:09

Here is the same example using Bard,

play13:12

which is trained on a massive amount of text data

play13:15

and is able to communicate and generate

play13:17

humanlike text in response to a wide range of prompts

play13:21

and questions.

play13:23

Here is another example.

play13:25

The meaning of life is--

play13:29

and Bard gives you a contextual answer

play13:32

and then shows the highest probability response.

play13:35

The power of generative AI comes from the use of transformers.

play13:40

Transformers produced a 2018 revolution

play13:43

in natural language processing.

play13:45

At a high level, a transformer model

play13:47

consists of an encoder and decoder.

play13:50

The encoder encodes the input sequence

play13:53

and passes it to the decoder, which

play13:55

learns how to decode the representation

play13:58

for a relevant task.

play14:01

In transformers, hallucinations are words or phrases

play14:06

that are generated by the model that

play14:09

are often nonsensical or grammatically incorrect.

play14:13

Hallucinations can be caused by a number of factors,

play14:17

including the model is not trained on enough data,

play14:21

or the model is trained on noisy or dirty data,

play14:25

or the model is not given enough context,

play14:29

or the model is not given enough constraints.

play14:33

Hallucinations can be a problem for transformers

play14:35

because they can make the output text difficult to understand.

play14:40

They can also make the model more

play14:41

likely to generate incorrect or misleading information.

play14:46

A prompt is a short piece of text

play14:49

that is given to the large language model as input.

play14:53

And it can be used to control the output of the model

play14:57

in a variety of ways.

play14:59

Prompt design is the process of creating

play15:01

a prompt that will generate the desired output

play15:04

from a large language model.

play15:07

As previously mentioned, gen AI depends a lot

play15:11

on the training data that you have fed into it.

play15:14

And it analyzes the patterns and structures of the input data

play15:18

and thus learns.

play15:20

But with access to a browser based prompt, you, the user,

play15:23

can generate your own content.

play15:27

We've shown illustrations of the types of input based upon data.

play15:31

Here are the associated model types.

play15:33

Text-to-text.

play15:35

Text-to-text models take a natural language input

play15:38

and produce a text output.

play15:40

These models are trained to learn the mapping

play15:43

between a pair of text, for example,

play15:45

translation from one language to another.

play15:49

Text-to-image.

play15:50

Text-to-image models are trained on a large set of images,

play15:54

each captioned with a short text description.

play15:58

Diffusion is one method used to achieve this.

play16:01

Text-to-video and text-to-3D.

play16:04

Text-to-video models aim to generate a video representation

play16:08

from text input.

play16:09

The input text can be anything from a single sentence

play16:13

to a full script.

play16:15

And the output is a video that corresponds to the input text.

play16:20

Similarly, text-to-3D models generate

play16:23

three dimensional objects that correspond to a user's text

play16:28

description.

play16:29

For example, this can be used in games or other 3D worlds.

play16:34

Text-to-task.

play16:36

Text-to-task models are trained to perform a defined task

play16:41

or action based on text input.

play16:44

This task can be a wide range of actions

play16:46

such as answering a question, performing a search,

play16:50

making a prediction, or taking some sort of action.

play16:55

For example, a text-to-task model

play16:58

could be trained to navigate a web UI or make changes to a doc

play17:03

through the GUI.

play17:05

A foundation model is a large AI model pre-trained

play17:08

on a vast quantity of data designed to be adapted or fine

play17:13

tuned to a wide range of downstream tasks,

play17:17

such as sentiment analysis, image captioning, and object

play17:22

recognition.

play17:23

Foundation models have the potential

play17:26

to revolutionize many industries, including

play17:29

health care, finance, and customer service.

play17:32

They can be used to detect fraud and provide

play17:36

personalized customer support.

play17:38

Vertex AI offers a model garden that

play17:41

includes foundation models.

play17:43

The language foundation models include

play17:45

PaLM API for chat and text.

play17:48

The vision foundation models include Stable Diffusion,

play17:52

which has been shown to be effective at generating

play17:55

high quality images from text descriptions.

play18:00

Let's say you have a use case where

play18:01

you need to gather sentiments about how your customers are

play18:05

feeling about your product or service.

play18:07

You can use the sentiment analysis classification task

play18:12

model for just that purpose.

play18:15

And what if you needed to perform occupancy analytics?

play18:19

There is a task model for your use case.

play18:23

Shown here are gen AI applications.

play18:27

Let's look at an example of code generation

play18:30

shown in the second block under code at the top.

play18:34

In this example, I've input a code file conversion problem,

play18:39

converting from Python to JSON.

play18:41

I use Bard.

play18:42

And I insert into the prompt box the following.

play18:46

I have a Pandas DataFrame with two columns, one with the file

play18:50

name and one with the hour in which it is generated.

play18:54

I'm trying to convert this into a JSON file

play18:57

in the format shown onscreen.

play19:00

Bard returns the steps I need to do this and the code snippet.

play19:06

And here my output is in a JSON format.
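
The exact target format is only shown onscreen in the video, so the sketch below assumes one plausible shape (a JSON array of records) with invented column names and values; the general pattern Bard produces for this kind of request looks something like this:

```python
import pandas as pd

# Hypothetical DataFrame standing in for the one described in the prompt:
# one column with the file name, one with the hour in which it was generated.
df = pd.DataFrame({
    "file_name": ["report_a.csv", "report_b.csv", "report_c.csv"],
    "hour_generated": [9, 13, 17],
})

# One plausible output format: a JSON array of {file_name, hour_generated} records.
df.to_json("files.json", orient="records", indent=2)
print(df.to_json(orient="records", indent=2))
```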

play19:10

It gets better.

play19:11

I happen to be using Google's free, browser-based Jupyter

play19:16

Notebook, known as Colab.

play19:18

And I simply export the Python code to Google's Colab.

play19:23

To summarize, Bard code generation

play19:26

can help you debug your lines of source code,

play19:29

explain your code to you line by line,

play19:31

craft SQL queries for your database,

play19:34

translate code from one language to another,

play19:37

and generate documentation and tutorials for source code.

play19:42

Generative AI Studio lets you quickly explore and customize

play19:47

gen AI models that you can leverage in your applications

play19:51

on Google Cloud.

play19:53

Generative AI Studio helps developers create and deploy

play19:57

Gen AI models by providing a variety of tools and resources

play20:02

that make it easy to get started.

play20:05

For example, there's a library of pre-trained models.

play20:09

There is a tool for fine tuning models.

play20:12

There is a tool for deploying models to production.

play20:15

And there is a community forum for developers

play20:18

to share ideas and collaborate.

play20:21

Generative AI App Builder lets you

play20:24

create gen AI apps without having to write any code.

play20:28

Gen AI App Builder has a drag and drop interface

play20:31

that makes it easy to design and build apps.

play20:35

It has a visual editor that makes

play20:36

it easy to create and edit app content.

play20:39

It has a built-in search engine that

play20:40

allows users to search for information within the app.

play20:43

And it has a conversational AI Engine

play20:46

that helps users to interact with the app using

play20:49

natural language.

play20:51

You can create your own digital assistants, custom search

play20:55

engines, knowledge bases, training applications,

play20:59

and much more.

play21:01

PaLM API lets you test and experiment

play21:05

with Google's large language models and gen AI tools.

play21:09

To make prototyping quick and more accessible,

play21:11

developers can integrate PaLM API with Maker suite

play21:15

and use it to access the API using a graphical user

play21:20

interface.

play21:21

The suite includes a number of different tools such as a model

play21:25

training tool, a model deployment tool, and a model

play21:29

monitoring tool.

play21:31

The model training tool helps developers train ML models

play21:35

on their data using different algorithms.

play21:38

The model deployment tool helps developers deploy ML models

play21:42

to production with a number of different deployment options.

play21:48

The model monitoring tool helps developers

play21:51

monitor the performance of their ML models

play21:54

in production using a dashboard and a number

play21:58

of different metrics.

play22:01

Thank you for watching our course, Introduction

play22:04

to Generative AI.


Related Tags
Generative AI, Artificial Intelligence, Machine Learning, Neural Networks, AI Applications, Google Cloud, Technical Curriculum, Content Creation, Data Science, AI Education, Model Training