Discover Prompt Engineering | Google AI Essentials

Google Career Certificates
13 May 2024 · 30:29

Summary

TLDR: The video script delves into the art of prompt engineering for AI, emphasizing its importance in eliciting useful responses from Large Language Models (LLMs). It discusses the process of designing clear and specific prompts, the iterative nature of refining prompts for better AI output, and the technique of few-shot prompting using examples. The script also addresses potential LLM limitations, such as biases and inaccuracies, and stresses the need for critical evaluation of AI-generated content. Yufeng, a Google engineer, shares insights on making AI tools more efficient through effective prompting, ultimately aiming to enhance productivity and creativity in the workplace.

Takeaways

  • 📝 Prompt engineering is about crafting text inputs that guide AI models to generate desired outputs.
  • 🌐 Language serves multiple purposes, including prompting responses in specific ways, similar to how we use it in daily life.
  • 🛠️ Clear and specific prompts are crucial for eliciting useful output from AI, as they provide necessary context and instructions.
  • 🔄 Iteration is key in prompt engineering; evaluating output and revising prompts can lead to better results.
  • 🧠 Large Language Models (LLMs) are trained on vast amounts of text to identify patterns and generate responses, but they have limitations.
  • 🎯 LLMs can sometimes produce biased or inaccurate outputs due to the nature of their training data or inherent tendencies to 'hallucinate'.
  • ⚖️ It's important to critically evaluate AI output for accuracy, bias, relevance, and sufficiency before using it.
  • 📈 The quality of the initial prompt significantly affects the quality of AI-generated content, akin to the impact of quality ingredients in cooking.
  • 📚 LLMs can be used for various tasks, including content creation, summarization, classification, extraction, translation, editing, and problem-solving.
  • 🔄 Iterative processes in prompt engineering involve multiple attempts and refinements to achieve optimal AI output.
  • 💡 Few-shot prompting, which includes providing two or more examples in a prompt, can improve an LLM's performance by offering additional context and clarity.

Q & A

  • What is prompt engineering and why is it important?

    - Prompt engineering is the practice of developing effective prompts that elicit useful output from generative AI. It is important because it helps to guide AI models to provide more accurate, relevant, and useful responses to inquiries or tasks.

  • How does language play a role in prompting AI?

    - Language is crucial in prompting AI as it is used to build connections, express opinions, explain ideas, and prompt others to respond in a particular way. The phrasing of the words in a prompt can significantly affect the AI's response.

  • What is a Large Language Model (LLM) and how does it learn to generate responses?

    - A Large Language Model (LLM) is an AI model trained on vast amounts of text to identify patterns between words, concepts, and phrases, enabling it to generate responses to prompts. It learns by analyzing millions of text sources, which helps it understand the relationships and patterns in human language.

  • How can biases in an LLM's training data affect its output?

    - Biases in an LLM's training data can lead to biased output, reflecting unfair biases present in society. For example, an LLM might associate certain professional occupations with specific gender roles due to the data it was trained on.

  • What is the concept of 'hallucination' in the context of LLMs?

    - In the context of LLMs, 'hallucination' refers to AI outputs that are factually inaccurate. Despite their ability to respond to many types of questions and instructions, LLMs can sometimes generate text that contains incorrect information.

  • Why is it necessary to critically evaluate LLM output?

    - It is necessary to critically evaluate LLM output to ensure it is factually accurate, unbiased, relevant to the specific request, and provides sufficient information. This is due to the potential limitations and inaccuracies that can arise from the LLM's training data or predictive processes.

  • What is the role of iteration in prompt engineering?

    - Iteration plays a key role in prompt engineering as it involves evaluating the output and revising the prompts to improve results. It is an essential process to achieve the desired output from an LLM, especially when the initial prompts do not yield satisfactory results.

  • How can providing examples, or 'shots', in a prompt improve an LLM's performance?

    - Providing examples, or 'shots', in a prompt can improve an LLM's performance by offering additional context and patterns for the model to follow. This can help clarify the desired format, phrasing, or general pattern, leading to more accurate and relevant responses.
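
    The zero-shot/one-shot/few-shot distinction can be sketched as a small prompt-assembly helper: the same function produces a zero-shot prompt when given no examples and a few-shot prompt when given two or more. This is an illustrative sketch only; the helper and the example texts are hypothetical, not part of the course.

    ```python
    def build_prompt(task, examples, new_input):
        """Assemble a prompt from a task description, optional
        (input, output) example pairs, and the new input to label.
        No examples = zero-shot; one = one-shot; two or more = few-shot."""
        parts = [task]
        for source, target in examples:
            parts.append(f"Input: {source}\nOutput: {target}")
        parts.append(f"Input: {new_input}\nOutput:")
        return "\n\n".join(parts)

    # Zero-shot: the model sees only the task description.
    zero = build_prompt("Classify the sentiment of the review.", [], "Great service!")

    # Few-shot: two examples give the model a pattern to imitate.
    few = build_prompt(
        "Classify the sentiment of the review.",
        [("Loved it.", "positive"), ("Terrible quality.", "negative")],
        "Great service!",
    )
    ```

    Ending the prompt with a dangling `Output:` nudges the model to continue the established pattern rather than restate the task.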

  • What are some common uses of LLMs in a professional setting?

    - Common uses of LLMs in a professional setting include content creation, summarization of lengthy documents, classification of sentiments in customer reviews, extraction of data from text, translation between languages, editing of documents to fit a specific tone or audience, and problem-solving for various workplace challenges.

  • How can the iterative process in prompt engineering be compared to other creative processes?

    - The iterative process in prompt engineering can be compared to other creative processes, such as developing a proposal or designing a website, where a first version is created, evaluated, and improved upon for subsequent versions until the desired outcome is achieved.

  • What is the significance of including a verb in prompts when using an LLM?

    - Including a verb in prompts helps guide the LLM to understand the intended action or task, such as 'create', 'summarize', 'classify', or 'edit'. This clarity aids the model in producing output that is more aligned with the user's request.
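
    One way to make verb-led prompts habitual is to keep a template per task verb. The templates below are a hypothetical sketch built around the four verbs the course mentions; the field names (`topic`, `audience`, `text`) are assumptions for illustration.

    ```python
    # Hypothetical prompt templates keyed by the course's task verbs.
    PROMPT_TEMPLATES = {
        "create": "Create an outline for an article about {topic} for {audience}.",
        "summarize": "Summarize the following text in one sentence:\n{text}",
        "classify": ("Classify the sentiment of each review as positive, "
                     "negative, or neutral:\n{text}"),
        "edit": "Edit the following text so it is easy for {audience} to understand:\n{text}",
    }

    def make_prompt(verb, **fields):
        # Leading with a verb makes the intended task explicit to the model.
        return PROMPT_TEMPLATES[verb].format(**fields)

    prompt = make_prompt("create",
                         topic="data visualization best practices",
                         audience="entry-level business analysts")
    ```

    Here `make_prompt("create", ...)` yields a prompt that opens with the action verb, mirroring the video's "Create an outline for an article..." example.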

Outlines

00:00

🧠 Introduction to Prompt Engineering

The first paragraph introduces the concept of prompt engineering, which is the design of effective prompts to elicit desired responses from AI models. It emphasizes the importance of clear and specific language in communication, both in daily life and when interacting with AI. The speaker, Yufeng, shares personal motivation for improving prompt efficiency and outlines the course's focus on understanding how Large Language Models (LLMs) generate output, the role of prompt engineering in improving output quality, and the iterative process of refining prompts for better results.

05:01

🤖 Understanding LLMs and Their Limitations

This paragraph delves into how Large Language Models work, including their training on vast amounts of text to identify patterns and generate responses. It discusses the potential issues with LLM output, such as bias, inaccuracies, and 'hallucinations,' which are factual inaccuracies in the generated text. The importance of critically evaluating LLM output is stressed, along with the acknowledgment that LLMs require high-quality prompts to produce useful results.

10:01

📝 The Art of Crafting Effective Prompts

The third paragraph focuses on the process of writing effective prompts for LLMs. It uses the analogy of cooking to illustrate the importance of starting with a high-quality prompt. The paragraph provides examples of how to improve prompts for better AI output, such as specifying the need for vegetarian options in restaurant recommendations. It also discusses the iterative nature of prompt engineering and the need for clear, specific instructions to guide the LLM.

15:03

🚀 Leveraging LLMs for Workplace Productivity

This paragraph explores various ways LLMs can be used to enhance productivity and creativity in the workplace. It provides examples of using LLMs for content creation, summarization, classification, extraction, translation, editing, and problem-solving. The paragraph demonstrates the versatility of LLMs in assisting with different tasks and the potential for customized solutions to workplace challenges.

20:04

🔄 The Iterative Process of Prompt Engineering

The fifth paragraph emphasizes the iterative process in prompt engineering, comparing it to creating presentations or designing websites where multiple drafts are produced and refined. It discusses the need for multiple attempts to achieve optimal AI output and the importance of evaluating and revising prompts based on the output's accuracy, bias, sufficiency, relevance, and consistency.

25:07

🌟 Harnessing the Power of Few-Shot Prompting

The final paragraph introduces the technique of few-shot prompting, which involves providing two or more examples in a prompt to guide the LLM. It explains the concept of 'shot' in prompting and contrasts zero-shot, one-shot, and few-shot prompting. The paragraph demonstrates how few-shot prompting can improve LLM performance by offering examples that clarify the desired format or pattern, using the task of writing a product description in a specific style as an illustration.

Keywords

💡Prompt Engineering

Prompt engineering is the practice of crafting effective prompts to elicit useful output from generative AI, such as Large Language Models (LLMs). It is central to the video's theme, as it discusses how to design prompts that guide AI in generating desired responses. The script uses the example of a clothing store owner seeking marketing ideas, illustrating how a specific prompt can lead to more relevant AI-generated suggestions.

💡Large Language Model (LLM)

An LLM, as described in the script, is an AI model trained on vast amounts of text to identify patterns and generate responses. The script emphasizes the importance of understanding how LLMs work and their limitations when it comes to generating output. The video's narrative revolves around leveraging LLMs effectively through prompt engineering, showcasing their predictive capabilities and potential for factual inaccuracies.

💡Output

In the context of the video, 'output' refers to the responses generated by an LLM in reaction to a prompt. The script discusses the importance of clear and specific prompts to produce useful output, such as marketing ideas for a clothing store. It also addresses the need for iteration and evaluation of output to refine prompts and achieve better results.

💡Context

Context is vital in prompt engineering as it provides the necessary background for the LLM to generate relevant responses. The script illustrates this with the example of a vegetarian seeking restaurant recommendations, where the lack of context in the prompt leads to unsuitable suggestions. The term is integral to the video's message on crafting effective prompts.

💡Iteration

Iteration is the process of refining prompts based on the evaluation of the AI's output. The script highlights the iterative nature of prompt engineering, emphasizing that initial prompts may not yield the desired results and require adjustment. The example of planning a company event and refining the prompt for themes demonstrates the iterative process in action.

💡Few-shot Prompting

Few-shot prompting is a technique mentioned in the script where two or more examples are provided in a prompt to guide the LLM. It is a key concept in the video, as it shows how including examples can improve the LLM's performance in generating responses that match a specific style or format, such as product descriptions.

💡Bias

Bias in LLMs is a significant issue discussed in the script, referring to the model's tendency to reflect unfair biases present in the training data, such as associating certain occupations with specific genders. The term is crucial for understanding the limitations of LLMs and the need for critical evaluation of their output.

💡Hallucination

In the script, 'hallucination' refers to AI-generated text that is factually inaccurate, despite appearing plausible. The term is used to describe a limitation of LLMs, where they might provide incorrect information, such as wrong dates or statistics, when summarizing or generating content.

💡Content Creation

Content creation is one of the applications of LLMs highlighted in the video, where the AI can assist in generating various types of written content, such as emails, plans, and ideas. The script provides an example of using an LLM to create an outline for an article on data visualization best practices, demonstrating the practical use of LLMs in workplace tasks.

💡Summarization

Summarization is the process of condensing the main points of a lengthy document into a shorter form, as discussed in the script. It is a key use case for LLMs, where the script illustrates how an LLM can be prompted to summarize a paragraph about project management strategies into a single sentence.
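
The summarization pattern from the video, "start with the verb and state the desired length", can be captured in a small helper. The function is illustrative; the commented-out model call shows roughly how the prompt might be sent to Gemini, but the `google-generativeai` package usage and model name are assumptions, not something the course prescribes.

```python
def summarization_prompt(text):
    # Start with the verb "summarize" and pin down the output length,
    # as in the video's one-sentence-summary example.
    return ("Summarize the following paragraph in a single sentence:\n\n"
            + text)

prompt = summarization_prompt(
    "Effective project management balances scope, schedule, and budget "
    "while keeping stakeholders informed at every stage."
)

# Hypothetical sketch of sending the prompt to a model (assumed SDK/model):
# import google.generativeai as genai
# genai.configure(api_key="YOUR_KEY")
# model = genai.GenerativeModel("gemini-1.5-flash")
# print(model.generate_content(prompt).text)
```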

💡Classification

Classification, as mentioned in the script, is the task of categorizing data, such as customer reviews, into predefined categories like positive, negative, or neutral. The script uses this term to demonstrate another practical application of LLMs in analyzing and sorting information, which can be valuable in business settings.
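
Once a model replies to a classification prompt with one label per line, the labels still need to be read back into structured data. A minimal sketch, assuming the reply format shown below; the parsing helper and the hard-coded stand-in reply are hypothetical.

```python
VALID_LABELS = {"positive", "negative", "neutral"}

def parse_labels(reply, n_reviews):
    """Pull one sentiment label per line out of a model reply,
    tolerating numbering like '1. negative'."""
    labels = []
    for line in reply.splitlines():
        line = line.strip()
        if not line:
            continue
        word = line.rstrip(".").split()[-1].lower()
        if word in VALID_LABELS:
            labels.append(word)
    if len(labels) != n_reviews:
        raise ValueError("model reply did not label every review")
    return labels

# Stand-in for a model reply to the video's four-review prompt.
fake_reply = "1. negative\n2. negative\n3. positive\n4. neutral"
print(parse_labels(fake_reply, 4))  # → ['negative', 'negative', 'positive', 'neutral']
```

Validating the reply against the expected label set and review count is a cheap guard against the model drifting from the requested format.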

Highlights

Prompt engineering is the practice of creating effective prompts to elicit useful output from generative AI.

Clear and specific prompts are crucial for achieving useful results from conversational AI tools.

Iteration is key in prompt engineering, involving evaluating output and revising prompts to refine results.

Few-shot prompting is a technique that uses two or more examples in a prompt to guide AI output.

LLMs (Large Language Models) generate responses based on patterns learned from training on large text datasets.

LLMs can sometimes produce biased or factually inaccurate outputs due to limitations in their training data.

Examples of effective prompting include using LLMs for content creation, summarization, classification, extraction, translation, and editing.

Problem-solving with LLMs can involve generating solutions for workplace challenges and brainstorming ideas.

The importance of critically evaluating LLM output for accuracy, bias, relevance, and sufficiency is emphasized.

LLMs may hallucinate, producing outputs that are not factually true, which underscores the need for careful evaluation.

The iterative process in prompt engineering involves multiple attempts and refinements to achieve optimal output.

Using verbs in prompts can guide the LLM to produce useful output for specific tasks like creating, summarizing, or classifying.

Including examples in prompts can help LLMs understand the desired format, phrasing, or pattern for the task at hand.

Zero-shot prompting provides no examples, relying solely on the LLM's training and the task description in the prompt.

The number of examples in a prompt can affect the flexibility and creativity of LLM responses, requiring experimentation to find the optimal balance.

Prompt engineering skills are applicable to various AI models beyond LLMs, including those for image generation.

The course encourages further exploration of using AI responsibly as part of Google AI Essentials.

Transcripts

00:00

- Prompt engineering involves designing the best prompt you can to get the output you want. Think about how you use language in your daily life. Language is used for so many purposes, to build connections, express opinions, or explain ideas, and sometimes, you might wanna use language to prompt others to respond in a particular way. Maybe you want someone to give you a recommendation, or clarify something. In those cases, the way you phrase your words can affect how others respond.

The same is true when prompting a conversational AI tool with a question, or request. A prompt is text input that provides instructions to the AI model on how to generate output. For example, someone who owns a clothing store might want an AI model to output new ideas for how to market their clothing. This business owner might write the prompt, "I own a clothing store. We sell high fashion womenswear. Help me brainstorm marketing ideas."

In this section of the course, you'll focus on how to design, or engineer effective prompts to achieve more useful results from a conversational AI tool. My name is Yufeng, and I'm an engineer at Google. I first became interested in prompting because getting useful responses from language models was time-consuming. Sometimes, it was even quicker for us to do the work without the use of AI. I was inspired to help our tools be more efficient, not less. I'm excited to help you learn more about developing effective prompts.

First, you'll discover how LLMs generate output in response to prompts, and then you'll explore the role of prompt engineering in improving the quality of the output. Prompt engineering is the practice of developing effective prompts that elicit useful output from generative AI. You'll learn to create clear and specific prompts, one of the most important parts of prompt engineering. The more clear and specific your prompt, the more likely you are to get useful output. Another important part of prompt engineering is iteration. You'll learn about evaluating output and revising your prompts. This will also help you get the results you need when leveraging conversational AI tools in the workplace. We'll also explore a specific prompting technique called few-shot prompting.

Writing effective prompts involves critical thinking and creativity. It can also be a fun process, and it's a very important skill to practice if you wanna use AI effectively in the workplace. Are you excited to get started on prompt engineering? Let's go.

03:06

It's helpful to understand how LLMs work and to be aware of their limitations. A Large Language Model, or LLM, is an AI model that is trained on large amounts of text to identify patterns between words, concepts, and phrases, so that it can generate responses to prompts. So how do LLMs learn to generate useful responses to prompts? An LLM is trained on millions of sources of text, including books, articles, websites, and more. This training helps the model learn the patterns and relationships that exist in human language. In general, the more high quality data the model receives, the better its performance will be.

Because LLMs can identify so many patterns in language, they can also predict what word is most likely to come next in a sequence of words. Consider a simple example to get a basic understanding of how LLMs predict the next word in a sequence. Take the incomplete sentence, "After it rained the street was." An LLM can predict what word comes next by computing the probabilities for different possible words. Based on the available data, the word wet might have a high probability of being the next word, the word clean, a lower probability, and the word dry, an extremely low probability. In this case, the LLM might complete the sentence by inserting the word with the highest probability of coming next in the sequence, wet, or it might be another high probability word, like damp. An LLM may vary in its response to the same prompt each time you use it. LLMs use statistics to analyze the relationships between all the words in a given sequence and compute the probabilities for thousands of possible words to come next in that sequence.
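
The next-word prediction described here can be imitated with a toy bigram model: count which word follows each word in a small corpus, then turn the counts into probabilities. Real LLMs use neural networks over far longer context, so this is only a conceptual sketch, and the tiny corpus is made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model trains on millions of documents.
corpus = [
    "after it rained the street was wet",
    "after it rained the street was wet",
    "after it rained the street was damp",
    "the street was clean",
]

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each candidate next word, given the previous word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("was"))  # "wet" is most probable; "damp" and "clean" less so
```

Sampling from these probabilities, rather than always taking the top word, is one reason an LLM may vary in its response to the same prompt.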

05:07

This predictive power enables LLMs to respond to questions and requests, whether the prompt is to complete a simple sentence, or to develop a compelling story for a new product launch, or ad campaign.

Although LLMs are powerful, you may not always get the output you want. Sometimes, this is because of limitations in an LLM's training data. For instance, an LLM's output may be biased, because the data it was trained on contains bias. This data may include news articles and websites that reflect the unfair biases present in society. For example, because of the data it was trained on, an LLM may be more likely to produce output that associates a professional occupation with a specific gender role. The training data that informs an LLM can be limited in other ways as well. For instance, an LLM might not generate sufficient content about a specific domain, or topic, because the data it was trained on does not contain enough information about that topic.

Another factor that can affect output is the tendency of LLMs to hallucinate. Hallucinations are AI outputs that are not true. While LLMs are good at responding to many kinds of questions and instructions, they can sometimes generate text that is factually inaccurate. Let's say you're researching a company, and you use an LLM to help you summarize the company's history. The LLM might hallucinate, and provide incorrect information about certain details, such as the date the company was founded, or the current number of employees. A number of factors can contribute to hallucinations, such as the quality of an LLM's training data, the phrasing of the prompt, or the method an LLM uses to analyze text and predict the next word in a sequence.

Because of an LLM's limitations, it's important that you critically evaluate all LLM output to determine if it is factually accurate, is unbiased, is relevant to your specific request, and provides sufficient information. Whether you're using AI to summarize a lengthy report, generate ideas for marketing a product, or outline a project plan, be sure to carefully check the quality of the output. Finally, it's important not to make assumptions about an LLM's capabilities. For example, just because it produced high quality output for a persuasive letter to a customer, don't assume you will get the same quality output if you use the same prompt again in the future. Large Language Models are powerful tools that require human guidance for effective use. Being aware of an LLM's limitations can help you achieve the best possible results.

08:15

How can you write prompts that produce useful output? It's generally true that the quality of what you start with greatly affects the quality of what you produce. Consider cooking, for example. Let's say you're preparing dinner. If you have fresh, high quality ingredients, well, you're more likely to produce a great meal. Conversely, if you're missing an ingredient, or the ingredients aren't high quality, the resulting meal may not be as good. In a similar way, the quality of the prompt that you put into a conversational AI tool can affect the quality of the tool's output. This is where prompt engineering comes in. Prompt engineering involves designing the best prompt you can to get the output you want from an LLM. This includes writing clear, specific prompts that provide relevant context.

To gain a better understanding of the context LLMs need, let's compare how a person and an LLM might respond to the same question. Suppose a vegetarian asked their friend, "What restaurants should I go to in San Francisco?" The friend would likely suggest restaurants with good vegetarian options. However, if prompted with the same question, an LLM might recommend restaurants that are not suitable for a vegetarian. A person would instinctively consider the fact that their friend is a vegetarian when answering the question. But an LLM does not have this prior knowledge. So to get the needed information from an LLM, the prompt must be more specific. In this case, the prompt needs to mention that the restaurant should have good vegetarian options.

Let's explore an example that demonstrates how you can use prompt engineering to improve the quality of an LLM's output. Let's take on the task of planning a company event. You need to find a theme for an upcoming conference. Let's write a prompt to Gemini to generate a list of five potential themes for an event. You can use similar prompts in ChatGPT, Microsoft Copilot, or any other conversational AI tool. Now, let's review the response. Well, this isn't what we wanted. We've gotten a list that seems more related to party themes than themes for a professional conference. Our prompt didn't provide enough context to produce the output we needed. It wasn't clear, or specific enough.

Let's try this again. This time, we'll type the prompt, "Generate a list of five potential themes for a professional conference on customer experience in the hospitality industry." This prompt is much more specific, making it clear that it's a professional conference on customer experience in the hospitality industry. Let's examine the response. This is much better. We engineered our prompt to include specific, relevant context, so Gemini is able to generate useful output. When you provide clear, specific instructions that include necessary context, you enable LLMs to generate useful output.

Keep in mind that due to LLM limitations, there might be some instances in which you can't get quality output, regardless of the quality of your prompt. For example, if you're prompting the LLM to find information about a current event, but the LLM doesn't have access to that information, it won't be able to provide the output you need. And like in other areas of design, prompt engineering is often an iterative process. Sometimes, even when you do provide clear and specific instructions, you may not get the output you want on your first try. When our first prompt didn't produce the response we wanted, we revised the prompt to improve the output. The second iteration provided instructions that were clear and specific enough to produce a more useful output.

play12:30

There are multiple ways to leverage an LLM's capabilities

play12:34

at work to boost productivity and creativity.

play12:37

A common one is content creation.

play12:39

You can use an LLM to create emails, plans, ideas, and more.

play12:44

As an example, you can ask an LLM to help you write

play12:48

an article about a work-related topic.

play12:51

Let's prompt Gemini to create an outline for an article

play12:54

on data visualization best practices.

play12:57

The article is for entry-level business analysts.

play13:00

Notice that the prompt begins with the verb create.

play13:04

It's often helpful to include a verb in your prompt

play13:07

to guide the LLM to produce useful output

play13:10

for your intended task.

play13:13

The output provides a helpful outline

play13:15

for a first draft of the article.

play13:18

You can also use an LLM for summarization.

play13:22

An LLM can summarize a lengthy document's main points.

play13:26

For example, you might ask Gemini to summarize

play13:29

a detailed paragraph about project management strategies.

play13:32

We'll begin the prompt with the verb summarize,

play13:35

and specify that we want the output to be a single sentence.

play13:41

Then we'll include the paragraph

play13:42

we want Gemini to summarize.

play13:47

The output provides a convenient, one-sentence summary

play13:50

of the paragraph.

play13:52

While this example shows how you can summarize

play13:54

a single paragraph, you can ask an LLM to summarize

play13:57

longer text and documents, too.

play13:59

Classification is another possible use.

play14:02

For instance, you might prompt the LLM

play14:04

to classify the sentiment,

play14:06

or feeling in a group of customer reviews as positive,

play14:09

negative, or neutral.

play14:11

Let's prompt Gemini to classify customer reviews

play14:14

about a retail website's new design

play14:17

as positive, negative, or neutral.

play14:20

The prompt includes the verb classify to guide the output.

play14:24

The prompt also contains the reviews.

play14:27

In this example, there are four reviews.

play14:31

The output accurately classifies

play14:33

the first two reviews as negative, the third as positive,

play14:37

and the fourth as neutral.

play14:39

Consider how you could leverage an LLM

play14:41

to efficiently complete large classification tasks.

play14:46

Or you can use an LLM for extraction,

play14:50

which involves pulling data from text,

play14:52

and transforming it into a structured format

play14:55

that's easier to understand.

play14:57

Suppose you have a report that provides information

play15:00

about a global organization.

play15:02

You can prompt Gemini to extract all mentions of cities

play15:06

and revenue in the report and place them in a table.

play15:10

Then we'll include the report in our prompt.

play15:13

Please be aware that you should not input

play15:15

confidential information into LLMs,

play15:18

but in this example, the report is not confidential.

play15:21

The output displays a table with columns

play15:24

for city and revenue.

play15:25

This presents the information in a well-organized format

play15:28

that's easy to review.

play15:30

Another use is translation. You can leverage an LLM to translate text between different languages. For example, you might ask Gemini to translate the title of a training session from English to Spanish. The output includes a variety of Spanish translations to choose from and explains the reasoning behind each translation. This information can help you choose the most useful option for your audience.

Or you can use an LLM for editing, such as to change the tone of a section of text from formal to casual, and to check if the text is grammatically correct. For example, Gemini can help you edit a technical analysis about electric vehicles by making the language more accessible for a non-technical audience. We'll start the prompt with the verb edit, and specify that the language should be easy for a non-technical audience to understand. After this, we'll include the technical analysis. The output provides a version of the analysis that an audience less familiar with the technical details can understand. This is just one example of how an LLM can help you edit documents. LLMs can quickly customize the tone, length, and format of documents to fit your needs.

One more use for an LLM we'll discuss is problem-solving. You can utilize an LLM to generate solutions for a variety of workplace challenges. When planning a company event, for example, you could prompt the LLM to find menu solutions that accommodate the food restrictions of multiple guests while following a holiday-themed menu.

And here's another example. Let's say you are an entrepreneur who recently launched a new copy editing service. Let's ask Gemini to solve a problem related to the copy editing service. We'll ask for suggestions for increasing the client base. The output provides specific suggestions for reaching new clients, optimizing services, and growing the business. I love these ideas. Let's ask Gemini to draft an email, so we can easily share these ideas with others. LLMs can help you brainstorm solutions for many different types of problems.

I'm definitely excited by the variety of ways we can leverage LLMs when completing workplace tasks. It's a very important skill to practice if you want to use AI effectively in the workplace. Coming up, we'll focus more on evaluating output and iterating on your prompt.

Have you ever created a presentation for a client, or designed a website for your new business? If so, you may have used an iterative process to achieve your goal. In an iterative process, you create a first version, evaluate it, and improve upon it for the next version. Then you repeat these steps until you get the desired outcome. For example, if you're developing a proposal, report, or other document to share with your coworkers, you might produce multiple drafts and make improvements on each draft until you are satisfied with the result. Taking an iterative approach is often the most effective way to solve a problem or develop a product.

An iterative process is also effective in prompt engineering. Prompt engineering often requires multiple attempts before you get the optimal output. Most of the time, you won't get the best result on your first try. If you try something and it doesn't work, don't get discouraged. Instead, carefully evaluate the output to determine why you didn't get the response you wanted. Then revise your prompt to try for a better result.

Let's consider possible reasons you might not get useful output after creating a clear and specific prompt. First, differences in Large Language Models can affect the output. Each LLM is developed with unique training data and programming techniques, and has different background knowledge about specific domains. For this reason, different models might respond to similar prompts in different ways and might fail to provide an adequate response to some prompts. Taking an iterative approach with the LLM you're using will produce the best results.

Second, LLM limitations. Previously, you learned that LLM output may sometimes be inaccurate, biased, insufficient, irrelevant, or inconsistent. You should critically evaluate all LLM output by asking yourself the following questions. Is the output accurate? Is the output unbiased? Does the output include sufficient information? Is the output relevant to my project or task? And finally, is the output consistent if I use the same prompt multiple times? If you identify any issues when you evaluate output, iterating on your initial prompt can often help you resolve these issues and get better output.

To begin, if you notice there's any context missing in your prompt, add it. Your choice of words can also significantly impact an LLM's output. Using different words or phrasing in your prompts often yields different responses from the model. Experimenting with different phrasings can help you obtain the most useful output.

Now that you know more about iterative prompting, let's consider an example. Suppose you work as a human resources coordinator for a video production company. The company wants to develop an internship program for students who are exploring careers in animation and motion graphics design. The company is based in the United States in the state of Pennsylvania, my home state. Your team wants to partner with local colleges to provide internship opportunities for students in Pennsylvania. As a first step, you need to create a list of colleges in Pennsylvania that have animation programs. The list should include necessary details about the colleges and be in a well-organized format that your team can quickly review.

Let's review an example using Gemini. "Help me find colleges with animation programs in Pennsylvania." Next, we'll examine our output. The output lists colleges in Pennsylvania that have animation programs, along with further information related to these programs. This is helpful information, but it isn't structured in a way that your team can quickly reference when contacting the colleges. Organizing the information in a table would make it easier to read and understand, especially for stakeholders like your manager, who may have limited time. We can iterate on the prompt by adding context to specify the desired format of the output.

We'll type, "Show these options as a table." The output displays a table that provides useful information about the location of each college and the specific type of degree it offers. Now, the list is in a well-organized format that's easier for your team to follow.

Although the table contains most of the information your team needs, it doesn't include a key detail: whether the school is a public or private institution. Your company wants to offer internships to students from both public and private colleges. We'll add a new request for Gemini to include the relevant information in the table. "Can you add a column showing whether they are public, or private?" Now, the table includes a column that indicates whether a college is private or public.

To share this information with your team in a format that's easy to review and understand, you can use the Export to Sheets feature. This will allow your team to easily access and analyze the data, and make informed decisions based on the results.

You should apply the same iterative approach to further tasks. When you develop prompts for additional tasks, be aware that previous prompts made in the same conversation can influence the output of your most recent prompt. If you notice this is happening, you may want to start a new conversation. Iteration is a key part of prompt engineering. By taking an iterative approach to prompting, you can leverage an LLM to provide the most useful output for your needs.
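The reason earlier prompts influence later output is that chat-style LLM interfaces typically resend the whole conversation history with each new turn. This sketch models that behavior with a plain list; the message format is a generic assumption, not any specific API.

```python
conversation = []

def send(prompt):
    """Record the user turn; a real client would send the full history to the model."""
    conversation.append({"role": "user", "content": prompt})
    return list(conversation)  # a snapshot of everything the model would see

# Two iterative turns from the colleges example:
send("Help me find colleges with animation programs in Pennsylvania.")
history = send("Show these options as a table.")

# The second call carries the first prompt too, which is what lets the
# model know which "options" belong in the table.
print(len(history))

# Starting a new conversation simply drops the accumulated context:
conversation.clear()
print(len(conversation))
```

This is also why starting a fresh conversation helps when earlier prompts are steering the output in an unwanted direction: the context list starts empty again.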

Have you ever created something new by building upon previous examples? Perhaps you used a well-received report as a reference when writing a similar report, or maybe you used a relevant and engaging website as a model when designing your own website. Examples are also useful for LLMs. Including examples in your prompt can help an LLM better respond to your request, and can be an especially effective strategy to get your desired output.

We're going to explore how to use examples in prompting, but first, let's briefly discuss the technical term shot. In prompt engineering, the word shot is often used as a synonym for the word example. There are different names for prompting techniques based on the number of examples given to the LLM. Zero-shot prompting is a technique that provides no examples in a prompt, while one-shot prompting provides one example, and few-shot prompting is a technique that provides two or more examples in a prompt.
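The three techniques can be contrasted side by side as prompt strings. The sentiment task and example reviews below are illustrative assumptions chosen only to show the shape of each prompt.

```python
task = "Classify the sentiment of this review as positive or negative."
target = "Review: 'Shipping was fast and the fit is perfect.' Sentiment:"

# Zero-shot: the task description alone, no examples.
zero_shot = f"{task}\n{target}"

# One-shot: a single worked example before the target.
one_shot = (
    f"{task}\n"
    "Review: 'The zipper broke in a week.' Sentiment: negative\n"
    f"{target}"
)

# Few-shot: two or more worked examples before the target.
few_shot = (
    f"{task}\n"
    "Review: 'The zipper broke in a week.' Sentiment: negative\n"
    "Review: 'Great color, true to size.' Sentiment: positive\n"
    f"{target}"
)

# Count the completed examples in each prompt (the target's label is blank):
for name, p in [("zero", zero_shot), ("one", one_shot), ("few", few_shot)]:
    n = p.count("Sentiment: positive") + p.count("Sentiment: negative")
    print(name, n)
```

Note that all three prompts end with the unlabeled target review, cueing the model to supply the missing label.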

Because examples aren't included in zero-shot prompts, the model is expected to perform the task based only on its training data and the task description included in the prompt. Zero-shot prompting is most likely to be effective when you are seeking simple and direct responses. Zero-shot prompting may not be effective for tasks that require the LLM to respond in a more specific, nuanced way.

Few-shot prompting can improve an LLM's performance by providing additional context and examples in your prompt. These additional examples can help clarify the desired format, phrasing, or general pattern. Few-shot prompting can be useful for a range of tasks.

For example, you might use few-shot prompting to generate content in a particular style. Let's say you work for an online retailer. You need to write a product description for a new skateboard. You already have descriptions for existing products, such as a bicycle and rollerblades. You want the skateboard description to follow a similar style and format.

We'll start with a prompt that begins with some general instructions. "Write a one sentence description of a product. It should contain two adjectives that describe the product." We also specify that we want Gemini to review the examples we provide and write the description of the skateboard in the same style. Because this is a few-shot prompt, we need to provide examples that model the style we want. Each example contains a label indicating the product being described, a bicycle and rollerblades, and each description is one sentence long and contains two adjectives: sleek and durable for the bicycle, and smooth and stylish for the rollerblades. Next, we type the label skateboard. When we add this label and leave the product description blank, we indicate to Gemini that we want it to complete the description of the skateboard like it did with the other two product descriptions.

Let's review our output. The output offers a product description of the skateboard that meets the criteria we requested and is in the same writing style and format as the examples we included in our prompt. In this case, two examples were enough to obtain useful results, but there is no definitive rule for the optimal number of examples to include in a prompt. Some LLMs can accurately reproduce patterns using only a few examples, while other LLMs need more. At the same time, if you include too many examples, an LLM's responses may become less flexible and creative, and they may reproduce the examples too closely. Experiment with the number of examples to include to get the best results for your specific task.
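The few-shot prompt from the skateboard walkthrough can be assembled programmatically: labeled examples first, then the target label with its description left blank. The two example descriptions here are paraphrased placeholders, not the course's exact text.

```python
instructions = (
    "Write a one sentence description of a product. It should contain "
    "two adjectives that describe the product. Review the examples below "
    "and write the skateboard description in the same style."
)

# Each example pairs a product label with a one-sentence,
# two-adjective description (sleek/durable, smooth/stylish):
examples = {
    "Bicycle": "A sleek and durable bicycle built for daily commutes.",
    "Rollerblades": "Smooth and stylish rollerblades for effortless gliding.",
}

def build_few_shot_prompt(instructions, examples, target_label):
    """Labeled examples, then the target label left blank for the model to fill."""
    parts = [instructions, ""]
    for label, description in examples.items():
        parts.append(f"{label}: {description}")
    parts.append(f"{target_label}:")  # blank completion cues the model
    return "\n".join(parts)

prompt = build_few_shot_prompt(instructions, examples, "Skateboard")
print(prompt)
```

To experiment with the number of shots, you would simply add or remove entries in `examples` and compare the quality of the completions you get back.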

Now you know a prompting technique that will help you get better quality output. Few-shot prompting is an effective strategy that can help you guide an LLM to generate more useful responses.

You've learned a lot about writing prompts that you can apply to workplace tasks. In this section, we discussed Large Language Model, or LLM, output. We examined how LLMs produce their output and potential issues you might encounter in the output. After this, we focused on a key principle of prompt engineering: creating clear and specific prompts. You learned just how important it is to specify what you want the LLM to do and to include supporting context to help it provide better output. We then went on to discover how to improve the quality of AI output through iteration. It's essential that you evaluate your output, and then revise your prompt as needed. Lastly, we learned about few-shot prompting, which involves providing examples to guide the LLM.

I want to offer a final tip before I go. We focused on prompting Large Language Models. You can use the same general principles when you prompt other kinds of AI models, too. For instance, the next time you want to use AI to generate an image, try to be as clear and specific as possible, and then iterate to get closer to the output you want.

It's been great guiding you through the process of prompt engineering. I hope you continue to apply and develop these skills as you leverage conversational AI tools in the workplace. To continue learning, I encourage you to explore the topic of using AI responsibly as part of Google AI Essentials.
