Discover Prompt Engineering | Google AI Essentials
Summary
TL;DR: The video script delves into the art of prompt engineering for AI, emphasizing its importance in eliciting useful responses from Large Language Models (LLMs). It discusses the process of designing clear and specific prompts, the iterative nature of refining prompts for better AI output, and the technique of few-shot prompting using examples. The script also addresses potential LLM limitations, such as biases and inaccuracies, and stresses the need for critical evaluation of AI-generated content. Yufeng, a Google engineer, shares insights on making AI tools more efficient through effective prompting, ultimately aiming to enhance productivity and creativity in the workplace.
Takeaways
- 📝 Prompt engineering is about crafting text inputs that guide AI models to generate desired outputs.
- 🌐 Language serves multiple purposes, including prompting responses in specific ways, similar to how we use it in daily life.
- 🛠️ Clear and specific prompts are crucial for eliciting useful output from AI, as they provide necessary context and instructions.
- 🔄 Iteration is key in prompt engineering; evaluating output and revising prompts can lead to better results.
- 🧠 Large Language Models (LLMs) are trained on vast amounts of text to identify patterns and generate responses, but they have limitations.
- 🎯 LLMs can sometimes produce biased or inaccurate outputs due to the nature of their training data or inherent tendencies to 'hallucinate'.
- ⚖️ It's important to critically evaluate AI output for accuracy, bias, relevance, and sufficiency before using it.
- 📈 The quality of the initial prompt significantly affects the quality of AI-generated content, akin to the impact of quality ingredients in cooking.
- 📚 LLMs can be used for various tasks, including content creation, summarization, classification, extraction, translation, editing, and problem-solving.
- 🔄 Iterative processes in prompt engineering involve multiple attempts and refinements to achieve optimal AI output.
- 💡 Few-shot prompting, which includes providing two or more examples in a prompt, can improve an LLM's performance by offering additional context and clarity.
Q & A
What is prompt engineering and why is it important?
-Prompt engineering is the practice of developing effective prompts that elicit useful output from generative AI. It is important because it helps to guide AI models to provide more accurate, relevant, and useful responses to inquiries or tasks.
How does language play a role in prompting AI?
-Language is crucial in prompting AI as it is used to build connections, express opinions, explain ideas, and prompt others to respond in a particular way. The phrasing of the words in a prompt can significantly affect the AI's response.
What is a Large Language Model (LLM) and how does it learn to generate responses?
-A Large Language Model (LLM) is an AI model trained on vast amounts of text to identify patterns between words, concepts, and phrases, enabling it to generate responses to prompts. It learns by analyzing millions of text sources, which helps it understand the relationships and patterns in human language.
How can biases in an LLM's training data affect its output?
-Biases in an LLM's training data can lead to biased output, reflecting unfair biases present in society. For example, an LLM might associate certain professional occupations with specific gender roles due to the data it was trained on.
What is the concept of 'hallucination' in the context of LLMs?
-In the context of LLMs, 'hallucination' refers to AI outputs that are factually inaccurate. Despite their ability to respond to many types of questions and instructions, LLMs can sometimes generate text that contains incorrect information.
Why is it necessary to critically evaluate LLM output?
-It is necessary to critically evaluate LLM output to ensure it is factually accurate, unbiased, relevant to the specific request, and provides sufficient information. This is due to the potential limitations and inaccuracies that can arise from the LLM's training data or predictive processes.
What is the role of iteration in prompt engineering?
-Iteration plays a key role in prompt engineering as it involves evaluating the output and revising the prompts to improve results. It is an essential process to achieve the desired output from an LLM, especially when the initial prompts do not yield satisfactory results.
How can providing examples, or 'shots', in a prompt improve an LLM's performance?
-Providing examples, or 'shots', in a prompt can improve an LLM's performance by offering additional context and patterns for the model to follow. This can help clarify the desired format, phrasing, or general pattern, leading to more accurate and relevant responses.
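The idea of packaging "shots" into a prompt can be sketched in a few lines of Python. The helper function and the example pairs below are illustrative only, not part of the course material; any conversational AI tool would receive the assembled text as its prompt.

```python
def with_shots(task: str, shots: list[tuple[str, str]], new_input: str) -> str:
    """Prepend example input/output pairs (the 'shots') to a task prompt."""
    lines = [task, ""]
    for given, expected in shots:
        lines += [f"Input: {given}", f"Output: {expected}", ""]
    # The new input ends with a bare "Output:" for the model to complete.
    lines += [f"Input: {new_input}", "Output:"]
    return "\n".join(lines)

# Two example pairs make this a few-shot prompt; an empty list would
# make it zero-shot, and a single pair one-shot.
prompt = with_shots(
    "Rewrite each meeting title in title case.",
    [("weekly sync with design", "Weekly Sync With Design"),
     ("q3 planning kickoff", "Q3 Planning Kickoff")],
    "customer feedback review",
)
print(prompt)
```

The examples give the model a concrete pattern to follow, which is exactly the "additional context and clarity" the answer above describes.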
What are some common uses of LLMs in a professional setting?
-Common uses of LLMs in a professional setting include content creation, summarization of lengthy documents, classification of sentiments in customer reviews, extraction of data from text, translation between languages, editing of documents to fit a specific tone or audience, and problem-solving for various workplace challenges.
How can the iterative process in prompt engineering be compared to other creative processes?
-The iterative process in prompt engineering can be compared to other creative processes, such as developing a proposal or designing a website, where a first version is created, evaluated, and improved upon for subsequent versions until the desired outcome is achieved.
What is the significance of including a verb in prompts when using an LLM?
-Including a verb in prompts helps guide the LLM to understand the intended action or task, such as 'create', 'summarize', 'classify', or 'edit'. This clarity aids the model in producing output that is more aligned with the user's request.
Outlines
🧠 Introduction to Prompt Engineering
The first paragraph introduces the concept of prompt engineering, which is the design of effective prompts to elicit desired responses from AI models. It emphasizes the importance of clear and specific language in communication, both in daily life and when interacting with AI. The speaker, Yufeng, shares personal motivation for improving prompt efficiency and outlines the course's focus on understanding how Large Language Models (LLMs) generate output, the role of prompt engineering in improving output quality, and the iterative process of refining prompts for better results.
🤖 Understanding LLMs and Their Limitations
This paragraph delves into how Large Language Models work, including their training on vast amounts of text to identify patterns and generate responses. It discusses the potential issues with LLM output, such as bias, inaccuracies, and 'hallucinations,' which are factual inaccuracies in the generated text. The importance of critically evaluating LLM output is stressed, along with the acknowledgment that LLMs require high-quality prompts to produce useful results.
📝 The Art of Crafting Effective Prompts
The third paragraph focuses on the process of writing effective prompts for LLMs. It uses the analogy of cooking to illustrate the importance of starting with a high-quality prompt. The paragraph provides examples of how to improve prompts for better AI output, such as specifying the need for vegetarian options in restaurant recommendations. It also discusses the iterative nature of prompt engineering and the need for clear, specific instructions to guide the LLM.
🚀 Leveraging LLMs for Workplace Productivity
This paragraph explores various ways LLMs can be used to enhance productivity and creativity in the workplace. It provides examples of using LLMs for content creation, summarization, classification, extraction, translation, editing, and problem-solving. The paragraph demonstrates the versatility of LLMs in assisting with different tasks and the potential for customized solutions to workplace challenges.
🔄 The Iterative Process of Prompt Engineering
The fifth paragraph emphasizes the iterative process in prompt engineering, comparing it to creating presentations or designing websites where multiple drafts are produced and refined. It discusses the need for multiple attempts to achieve optimal AI output and the importance of evaluating and revising prompts based on the output's accuracy, bias, sufficiency, relevance, and consistency.
🌟 Harnessing the Power of Few-Shot Prompting
The final paragraph introduces the technique of few-shot prompting, which involves providing two or more examples in a prompt to guide the LLM. It explains the concept of 'shot' in prompting and contrasts zero-shot, one-shot, and few-shot prompting. The paragraph demonstrates how few-shot prompting can improve LLM performance by offering examples that clarify the desired format or pattern, using the task of writing a product description in a specific style as an illustration.
Keywords
💡Prompt Engineering
💡Large Language Model (LLM)
💡Output
💡Context
💡Iteration
💡Few-shot Prompting
💡Bias
💡Hallucination
💡Content Creation
💡Summarization
💡Classification
Highlights
Prompt engineering is the practice of creating effective prompts to elicit useful output from generative AI.
Clear and specific prompts are crucial for achieving useful results from conversational AI tools.
Iteration is key in prompt engineering, involving evaluating output and revising prompts to refine results.
Few-shot prompting is a technique that uses two or more examples in a prompt to guide AI output.
LLMs (Large Language Models) generate responses based on patterns learned from training on large text datasets.
LLMs can sometimes produce biased or factually inaccurate outputs due to limitations in their training data.
Examples of effective prompting include using LLMs for content creation, summarization, classification, extraction, translation, and editing.
Problem-solving with LLMs can involve generating solutions for workplace challenges and brainstorming ideas.
The importance of critically evaluating LLM output for accuracy, bias, relevance, and sufficiency is emphasized.
LLMs may hallucinate, producing outputs that are not factually true, which underscores the need for careful evaluation.
The iterative process in prompt engineering involves multiple attempts and refinements to achieve optimal output.
Using verbs in prompts can guide the LLM to produce useful output for specific tasks like creating, summarizing, or classifying.
Including examples in prompts can help LLMs understand the desired format, phrasing, or pattern for the task at hand.
Zero-shot prompting provides no examples, relying solely on the LLM's training and the task description in the prompt.
The number of examples in a prompt can affect the flexibility and creativity of LLM responses, requiring experimentation to find the optimal balance.
Prompt engineering skills are applicable to various AI models beyond LLMs, including those for image generation.
The course encourages further exploration of using AI responsibly as part of Google AI Essentials.
Transcripts
- Prompt engineering involves designing
the best prompt you can to get the output you want.
Think about how you use language in your daily life.
Language is used for so many purposes,
to build connections, express opinions, or explain ideas,
and sometimes, you might wanna use language
to prompt others to respond in a particular way.
Maybe you want someone to give you a recommendation,
or clarify something.
In those cases, the way you phrase your words
can affect how others respond.
The same is true when prompting a conversational AI tool
with a question, or request.
A prompt is text input that provides instructions
to the AI model on how to generate output.
For example, someone who owns a clothing store might want
an AI model to output new ideas
for how to market their clothing.
This business owner might write the prompt,
"I own a clothing store.
We sell high fashion womenswear.
Help me brainstorm marketing ideas."
In this section of the course,
you'll focus on how to design,
or engineer effective prompts
to achieve more useful results
from a conversational AI tool.
My name is Yufeng, and I'm an engineer at Google.
I first became interested in prompting
because getting useful responses from language models
was time-consuming.
Sometimes, it was even quicker for us to do the work
without the use of AI.
I was inspired to help our tools
be more efficient, not less.
I'm excited to help you learn more about
developing effective prompts.
First, you'll discover how LLMs generate output
in response to prompts,
and then you'll explore the role of prompt engineering
in improving the quality of the output.
Prompt engineering is the practice of developing
effective prompts that elicit useful output
from generative AI.
You'll learn to create clear and specific prompts,
one of the most important parts of prompt engineering.
The more clear and specific your prompt,
the more likely you are to get useful output.
Another important part of prompt engineering is iteration.
You'll learn about evaluating output
and revising your prompts.
This will also help you get the results you need
when leveraging conversational AI tools in the workplace.
We'll also explore a specific prompting technique
called few-shot prompting.
Writing effective prompts involves
critical thinking and creativity.
It can also be a fun process,
and it's a very important skill to practice
if you wanna use AI effectively in the workplace.
Are you excited to get started on prompt engineering?
Let's go.
It's helpful to understand how LLMs work
and to be aware of their limitations.
A Large Language Model, or LLM, is an AI model
that is trained on large amounts of text
to identify patterns between words, concepts, and phrases,
so that it can generate responses to prompts.
So how do LLMs learn to generate
useful responses to prompts?
An LLM is trained on millions of sources of text,
including books, articles, websites, and more.
This training helps the model learn the patterns
and relationships that exist in human language.
In general, the more high quality data the model receives,
the better its performance will be.
Because LLMs can identify so many patterns in language,
they can also predict what word is most likely to come next
in a sequence of words.
Consider a simple example to get a basic understanding
of how LLMs predict the next word in a sequence.
Take the incomplete sentence,
"After it rained, the street was."
An LLM can predict what word comes next
by computing the probabilities for different possible words.
Based on the available data,
the word wet might have a high probability
of being the next word,
the word clean, a lower probability,
and the word dry, an extremely low probability.
In this case, the LLM might complete the sentence
by inserting the word with the highest probability
of coming next in the sequence, wet,
or it might be another high probability word, like damp.
An LLM may vary in its response
to the same prompt each time you use it.
LLMs use statistics to analyze the relationships
between all the words in a given sequence
and compute the probabilities
for thousands of possible words to come next
in that sequence.
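The "wet / clean / dry" example above can be illustrated with a toy Python sketch. The probabilities here are invented for illustration; a real LLM scores thousands of candidate tokens using learned statistics, not a hand-written table.

```python
import random

# Invented probabilities for words that might follow
# "After it rained, the street was".
candidates = {"wet": 0.70, "damp": 0.15, "clean": 0.10, "dry": 0.05}

# Greedy choice: always take the highest-probability word.
best = max(candidates, key=candidates.get)
print(f"After it rained, the street was {best}.")

# Sampling in proportion to probability: this is one reason the same
# prompt can produce a different completion each time you use it.
sampled = random.choices(list(candidates), weights=candidates.values())[0]
print(f"Sampled completion: {sampled}")
```

The greedy line always picks "wet", while the sampled line will occasionally return "damp" or "clean", mirroring the transcript's point that an LLM may vary in its response to the same prompt.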
This predictive power enables LLMs to respond to questions
and requests, whether the prompt is to complete
a simple sentence, or to develop a compelling story
for a new product launch, or ad campaign.
Although LLMs are powerful,
you may not always get the output you want.
Sometimes, this is because of limitations
in an LLM's training data.
For instance, an LLM's output may be biased,
because the data it was trained on contains bias.
This data may include news articles
and websites that reflect the unfair biases
present in society.
For example, because of the data it was trained on,
an LLM may be more likely to produce output
that associates a professional occupation
with a specific gender role.
The training data that informs an LLM
can be limited in other ways as well.
For instance, an LLM might not generate sufficient content
about a specific domain, or topic,
because the data it was trained on
does not contain enough information about that topic.
Another factor that can affect output is
the tendency of LLMs to hallucinate.
Hallucinations are AI outputs that are not true.
While LLMs are good at responding to many kinds of questions
and instructions, they can sometimes generate text
that is factually inaccurate.
Let's say you're researching a company,
and you use an LLM to help you summarize
the company's history.
The LLM might hallucinate,
and provide incorrect information about certain details,
such as the date the company was founded,
or the current number of employees.
A number of factors can contribute to hallucinations,
such as the quality of an LLM's training data,
the phrasing of the prompt,
or the method an LLM uses to analyze text
and predict the next word in a sequence.
Because of an LLM's limitations,
it's important that you critically evaluate all LLM output
to determine if it is factually accurate, is unbiased,
is relevant to your specific request,
and provides sufficient information.
Whether you're using AI to summarize a lengthy report,
generate ideas for marketing a product,
or outline a project plan,
be sure to carefully check the quality of the output.
Finally, it's important not to make assumptions
about an LLM's capabilities.
For example, just because it produced high quality output
for a persuasive letter to a customer,
don't assume you will get the same quality output
if you use the same prompt again in the future.
Large Language Models are powerful tools
that require human guidance for effective use.
Being aware of an LLM's limitations can help you achieve
the best possible results.
How can you write prompts that produce useful output?
It's generally true that the quality
of what you start with greatly affects
the quality of what you produce.
Consider cooking, for example.
Let's say you're preparing dinner.
If you have fresh, high quality ingredients,
well, you're more likely to produce a great meal.
Conversely, if you're missing an ingredient,
or the ingredients aren't high quality,
the resulting meal may not be as good.
In a similar way, the quality of the prompt
that you put into a conversational AI tool
can affect the quality of the tool's output.
This is where prompt engineering comes in.
Prompt engineering involves designing
the best prompt you can
to get the output you want from an LLM.
This includes writing clear, specific prompts
that provide relevant context.
To gain a better understanding of the context LLMs need,
let's compare how a person
and an LLM might respond to the same question.
Suppose a vegetarian asked their friend,
"What restaurants should I go to in San Francisco?"
The friend would likely suggest restaurants
with good vegetarian options.
However, if prompted with the same question,
an LLM might recommend restaurants
that are not suitable for a vegetarian.
A person would instinctively consider the fact
that their friend is a vegetarian
when answering the question.
But an LLM does not have this prior knowledge.
So to get the needed information from an LLM,
the prompt must be more specific.
In this case, the prompt needs to mention
that the restaurant should have good vegetarian options.
Let's explore an example that demonstrates
how you can use prompt engineering
to improve the quality of an LLM's output.
Let's take on the task of planning a company event.
You need to find a theme for an upcoming conference.
Let's write a prompt to Gemini
to generate a list of five potential themes for an event.
You can use similar prompts in ChatGPT,
Microsoft Copilot, or any other conversational AI tool.
Now, let's review the response.
Well, this isn't what we wanted.
We've gotten a list that seems more related
to party themes than themes for a professional conference.
Our prompt didn't provide enough context
to produce the output we needed.
It wasn't clear, or specific enough.
Let's try this again.
This time, we'll type the prompt,
"Generate a list of five potential themes
for a professional conference on customer experience
in the hospitality industry."
This prompt is much more specific,
making it clear that it's a professional conference
on customer experience in the hospitality industry.
Let's examine the response.
This is much better.
We engineered our prompt to include
specific, relevant context,
so Gemini is able to generate useful output.
When you provide clear, specific instructions
that include necessary context,
you enable LLMs to generate useful output.
Keep in mind that due to LLM limitations,
there might be some instances
in which you can't get quality output,
regardless of the quality of your prompt.
For example, if you're prompting the LLM
to find information about a current event,
but the LLM doesn't have access to that information,
it won't be able to provide the output you need.
And like in other areas of design,
prompt engineering is often an iterative process.
Sometimes, even when you do provide
clear and specific instructions,
you may not get the output you want on your first try.
When our first prompt didn't produce the response we wanted,
we revised the prompt to improve the output.
The second iteration provided instructions that were clear
and specific enough to produce a more useful output.
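The prompt-evaluate-revise cycle described above can be sketched as a small loop. Everything here is a stand-in: `ask_llm` is a placeholder for a real model call, `looks_useful` for your own judgment of the output, and `revise` for however you choose to add context or reword the prompt.

```python
def iterate_prompt(prompt, ask_llm, looks_useful, revise, max_tries=3):
    """Repeatedly prompt, evaluate the output, and revise until it's useful."""
    for _ in range(max_tries):
        output = ask_llm(prompt)
        if looks_useful(output):
            return output
        # Revision might add missing context, change phrasing,
        # or specify the desired output format.
        prompt = revise(prompt, output)
    return output

# Stub demo: the "model" just echoes the prompt, and we decide the
# output is useful once a table format has been requested.
result = iterate_prompt(
    "List colleges with animation programs in Pennsylvania.",
    ask_llm=lambda p: f"[model output for: {p}]",
    looks_useful=lambda o: "table" in o,
    revise=lambda p, o: p + " Show these options as a table.",
)
print(result)
```

The stubs keep the sketch self-contained; in practice you, not a lambda, evaluate each response before revising.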
There are multiple ways to leverage an LLM's capabilities
at work to boost productivity and creativity.
A common one is content creation.
You can use an LLM to create emails, plans, ideas, and more.
As an example, you can ask an LLM to help you write
an article about a work-related topic.
Let's prompt Gemini to create an outline for an article
on data visualization best practices.
The article is for entry-level business analysts.
Notice that the prompt begins with the verb create.
It's often helpful to include a verb in your prompt
to guide the LLM to produce useful output
for your intended task.
The output provides a helpful outline
for a first draft of the article.
You can also use an LLM for summarization.
An LLM can summarize a lengthy document's main points.
For example, you might ask Gemini to summarize
a detailed paragraph about project management strategies.
We'll begin the prompt with the verb summarize,
and specify that we want the output to be a single sentence.
Then we'll include the paragraph
we want Gemini to summarize.
The output provides a convenient, one-sentence summary
of the paragraph.
While this example shows how you can summarize
a single paragraph, you can ask an LLM to summarize
longer text and documents, too.
Classification is another possible use.
For instance, you might prompt the LLM
to classify the sentiment,
or feeling in a group of customer reviews as positive,
negative, or neutral.
Let's prompt Gemini to classify customer reviews
about a retail website's new design
as positive, negative, or neutral.
The prompt includes the verb classify to guide the output.
The prompt also contains the reviews.
In this example, there are four reviews.
The output accurately classifies
the first two reviews as negative, the third as positive,
and the fourth as neutral.
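A classification prompt like the one described can be assembled as plain text, leading with the verb "classify" as the transcript recommends. The reviews below are invented stand-ins for the four shown on screen.

```python
# Invented customer reviews standing in for the ones in the video.
reviews = [
    "The new layout is confusing and slow to load.",
    "I couldn't find the search bar anywhere.",
    "Love the cleaner look, great update!",
    "It's fine, about the same as before.",
]

# Lead with the verb "Classify" to guide the model's task.
prompt = (
    "Classify the sentiment of each customer review about our website's "
    "new design as positive, negative, or neutral.\n\n"
    + "\n".join(f"{i}. {r}" for i, r in enumerate(reviews, 1))
)
print(prompt)
```

The same structure scales to large batches of reviews, which is what makes LLMs attractive for big classification tasks.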
Consider how you could leverage an LLM
to efficiently complete large classification tasks.
Or you can use an LLM for extraction,
which involves pulling data from text,
and transforming it into a structured format
that's easier to understand.
Suppose you have a report that provides information
about a global organization.
You can prompt Gemini to extract all mentions of cities
and revenue in the report and place them in a table.
Then we'll include the report in our prompt.
Please be aware that you should not input
confidential information into LLMs,
but in this example, the report is not confidential.
The output displays a table with columns
for city and revenue.
This presents the information in a well-organized format
that's easy to review.
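An extraction prompt follows the same pattern: a verb-led instruction specifying the structured format, followed by the source text. The report sentence below is an invented, non-confidential stand-in for the one used in the video.

```python
# Invented, non-confidential report text for illustration.
report = (
    "Q3 results: the London office reported revenue of $2.1M, "
    "while Tokyo reached $3.4M and Toronto $1.8M."
)

# Lead with the verb "Extract" and name the desired table columns.
prompt = (
    "Extract all mentions of cities and revenue from the report below "
    "and place them in a table with columns City and Revenue.\n\n"
    f"Report:\n{report}"
)
print(prompt)
```

Specifying the columns up front is what turns free text into the well-organized format the transcript describes.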
Another use is translation.
You can leverage an LLM to translate text
between different languages.
For example, you might ask Gemini to translate the title
of a training session from English to Spanish.
The output includes a variety
of Spanish translations to choose from
and explains the reasoning behind each translation.
This information can help you choose
the most useful option for your audience.
Or you can use an LLM for editing,
such as to change the tone of a section of text
from formal to casual,
and to check if the text is grammatically correct.
For example, Gemini can help you edit
a technical analysis about electric vehicles
by making the language more accessible
for a non-technical audience.
We'll start the prompt with the verb edit,
and specify that the language should be easy
for a non-technical audience to understand.
After this, we'll include the technical analysis.
The output provides a version of the analysis
that an audience less familiar
with the technical details can understand.
This is just one example
of how an LLM can help you edit documents.
LLMs can quickly customize the tone, length,
and format of documents to fit your needs.
One more use for an LLM we'll discuss is problem-solving.
You can utilize an LLM to generate solutions
for a variety of workplace challenges.
When planning a company event, for example,
you could prompt the LLM to find menu solutions
that accommodate the food restrictions of multiple guests
while following a holiday themed menu.
And here's another example.
Let's say you are an entrepreneur
who recently launched a new copy editing service.
Let's ask Gemini to solve a problem
related to the copy editing service.
We'll ask for suggestions for increasing the client base.
The output provides specific suggestions
for reaching new clients, optimizing services,
and growing the business.
I love these ideas.
Let's ask Gemini to draft an email,
so we can easily share these ideas with others.
LLMs can help you brainstorm solutions
for many different types of problems.
I'm definitely excited
by the variety of ways we can leverage LLMs
when completing workplace tasks.
It's a very important skill to practice
if you wanna use AI effectively in the workplace.
Coming up, we'll focus more on evaluating output
and iterating on your prompt.
Have you ever created a presentation for a client,
or designed a website for your new business?
If so, you may have used an iterative process
to achieve your goal.
In an iterative process, you create a first version,
evaluate it, and improve upon it for the next version.
Then you repeat these steps
until you get the desired outcome.
For example, if you're developing a proposal, report,
or other document to share with your coworkers,
you might produce multiple drafts,
and make improvements on each draft
until you are satisfied with the result.
Taking an iterative approach is often the most effective way
to solve a problem, or develop a product.
An iterative process is also effective
in prompt engineering.
Prompt engineering often requires multiple attempts
before you get the optimal output.
Most of the time, you won't get the best result
on your first try.
If you try something and it doesn't work,
don't get discouraged.
Instead, carefully evaluate the output to determine
why you didn't get the response you wanted.
Then revise your prompt to try for a better result.
Let's consider possible reasons
you might not get useful output
after creating a clear and specific prompt.
First, differences in Large Language Models
can affect the output.
Each LLM is developed with unique training data
and programming techniques,
and has different background knowledge
about specific domains.
For this reason, different models might respond
to similar prompts in different ways
and might fail to provide
an adequate response to some prompts.
Taking an iterative approach with the LLM you're using
will produce the best results.
Second, LLM limitations.
Previously, you learned that LLM output
may sometimes be inaccurate, biased,
insufficient, irrelevant, or inconsistent.
You should critically evaluate all LLM output
by asking yourself the following questions.
Is the output accurate?
Is the output unbiased?
Does the output include sufficient information?
Is the output relevant to my project, or task?
And finally, is the output consistent
if I use the same prompt multiple times?
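The five evaluation questions above amount to a checklist. As a minimal sketch, they could be written as a function whose boolean arguments represent your own judgments of the output, not anything the model reports about itself.

```python
def evaluate_output(accurate, unbiased, sufficient, relevant, consistent):
    """Pass only if the LLM output clears every one of the five checks."""
    return all([accurate, unbiased, sufficient, relevant, consistent])

# Example: output was accurate and relevant but lacked detail,
# so it fails the checklist and the prompt should be revised.
print(evaluate_output(accurate=True, unbiased=True, sufficient=False,
                      relevant=True, consistent=True))
```

A failed check is the signal to iterate: add missing context or rephrase the prompt, then evaluate again.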
If you identify any issues when you evaluate output,
iterating on your initial prompt can often help you resolve
these issues and get better output.
To begin, if you notice there's any context missing
in your prompt, add it.
Your choice of words can also significantly impact
an LLM's output.
Using different words, or phrasing in your prompts
often yields different responses from the model.
Experimenting with different phrasings
can help you obtain the most useful output.
Now that you know more about iterative prompting,
let's consider an example.
Suppose you work as a human resources coordinator
for a video production company.
The company wants to develop an internship program
for students who are exploring careers in animation
and motion graphics design.
The company is based in the United States
in the state of Pennsylvania, my home state.
Your team wants to partner with local colleges
to provide internship opportunities
for students in Pennsylvania.
As a first step, you need to create a list of colleges
in Pennsylvania that have animation programs.
The list should include necessary details about the colleges
and be in a well-organized format
that your team can quickly review.
Let's review an example using Gemini.
"Help me find colleges
with animation programs in Pennsylvania."
Next, we'll examine our output.
The output lists colleges in Pennsylvania
that have animation programs,
along with further information related to these programs.
This is helpful information,
but it isn't structured in a way
that your team can quickly reference
when contacting the colleges.
Organizing the information in a table
would make it easier to read and understand,
especially for stakeholders like your manager,
who may have limited time.
We can iterate on the prompt by adding context
to specify the desired format of the output.
We'll type, "Show these options as a table."
The output displays a table
that provides useful information about the location
of each college and the specific type of degree it offers.
Now, the list is in a well-organized format
that's easier for your team to follow.
Although the table contains most of the information
your team needs, it doesn't include a key detail,
whether the school is a public, or private institution.
Your company wants to offer internships to students
from both public and private colleges.
We'll add a new request for Gemini
to include the relevant information in the table.
"Can you add a column showing
whether they are public, or private?"
Now, the table includes a column that indicates
whether a college is private, or public.
To share this information with your team
in a format that's easy to review and understand,
you can use the Export to Sheets feature.
This will allow your team to easily access
and analyze the data,
and make informed decisions based on the results.
You should apply the same iterative approach
to further tasks.
When you develop prompts for additional tasks,
be aware that previous prompts made in the same conversation
can influence the output of your most recent prompt.
If you notice this is happening,
you may want to start a new conversation.
Iteration is a key part of prompt engineering.
By taking an iterative approach to prompting,
you can leverage an LLM to provide
the most useful output for your needs.
Have you ever created something new
by building upon previous examples?
Perhaps you used a well-received report
as a reference when writing a similar report,
or maybe you used a relevant and engaging website
as a model when designing your own website.
Examples are also useful for LLMs.
Including examples in your prompt can help an LLM
better respond to your request,
and can be an especially effective strategy
to get your desired output.
We're going to explore how to use examples in prompting,
but first, let's briefly discuss the technical term "shot."
In prompt engineering, the word "shot" is often used
as a synonym for the word "example."
There are different names for prompting techniques
based on the number of examples given to the LLM.
Zero-shot prompting is a technique that provides
no examples in a prompt,
while one-shot prompting provides one example,
and few-shot prompting is a technique that provides
two or more examples in a prompt.
Because examples aren't included in zero-shot prompts,
the model is expected to perform the task
based only on its training data,
and the task description included in the prompt.
Zero-shot prompting is most likely to be effective
when you are seeking simple and direct responses.
Zero-shot prompting may not be effective for tasks
that require the LLM to respond
in a more specific, nuanced way.
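Since prompts are just text, the three techniques can be contrasted directly. The sketch below uses a made-up review-classification task (not from this course) to show how "shot" simply counts the labeled examples included in the prompt.

```python
# Zero-shot: a task description only, no examples.
zero_shot = "Classify this review as positive or negative: 'Great product!'"

# One-shot: the same task, plus one labeled example.
one_shot = (
    "Classify each review as positive or negative.\n"
    "Review: 'Terrible quality.' -> negative\n"
    "Review: 'Great product!' ->"
)

# Few-shot: the same task, plus two or more labeled examples.
few_shot = (
    "Classify each review as positive or negative.\n"
    "Review: 'Terrible quality.' -> negative\n"
    "Review: 'Works perfectly.' -> positive\n"
    "Review: 'Great product!' ->"
)

def count_examples(prompt):
    """Count labeled examples: lines that already end with an answer."""
    return sum(1 for line in prompt.splitlines()
               if line.strip().endswith(("-> negative", "-> positive")))

print(count_examples(zero_shot), count_examples(one_shot), count_examples(few_shot))
```

Note that the final line of the one-shot and few-shot prompts is left unfinished, which signals the model to complete the pattern.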
Few-shot prompting can improve an LLM's performance
by providing additional context and examples in your prompt.
These additional examples can help clarify
the desired format, phrasing, or general pattern.
Few-shot prompting can be useful for a range of tasks.
For example, you might use few-shot prompting
to generate content in a particular style.
Let's say you work for an online retailer.
You need to write a product description
for a new skateboard.
You already have descriptions for existing products,
such as a bicycle and rollerblades.
You want the skateboard description
to follow a similar style and format.
We'll start with a prompt that begins
with some general instructions.
"Write a one sentence description of a product.
It should contain two adjectives
that describe the product."
We also specify that we want Gemini to review
the examples we provide,
and write the description of the skateboard
in the same style.
Because this is a few-shot prompt,
we need to provide examples that model the style we want.
Each example contains a label indicating
the product being described (a bicycle and rollerblades),
and each description is one sentence long
and contains two adjectives:
sleek and durable for the bicycle,
and smooth and stylish for the rollerblades.
Next, we type the label skateboard.
When we add this label
and leave the product description blank,
we indicate to Gemini that we want it to complete
the description of the skateboard
like it did with the other two product descriptions.
Let's review our output.
The output offers a product description of the skateboard
that meets the criteria we requested
and is in the same writing style
and format as the examples we included in our prompt.
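Assembled as text, the few-shot prompt described above looks roughly like this. The instruction and the two labels come from the transcript; the exact wording of the example descriptions is illustrative, since only their adjectives are quoted in the lesson.

```python
# A sketch of the few-shot prompt: instruction, two labeled examples,
# and a blank "Skateboard:" label for the model to complete.
prompt = """Write a one-sentence description of a product. \
It should contain two adjectives that describe the product. \
Review the examples below and write the skateboard description in the same style.

Bicycle: A sleek and durable bicycle built for everyday commuting.
Rollerblades: Smooth and stylish rollerblades for gliding around town.
Skateboard:"""

# Ending the prompt at the bare label is what tells the model to fill in
# the description the same way the first two were written.
print(prompt.endswith("Skateboard:"))
```

Because the prompt ends mid-pattern, the model's most natural continuation is a matching one-sentence, two-adjective description.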
In this case, two examples were enough
to obtain useful results,
but there is no definitive rule
for the optimal number of examples to include in a prompt.
Some LLMs can accurately reproduce patterns using
only a few examples, while other LLMs need more.
At the same time, if you include too many examples,
an LLM's responses may become less flexible and creative,
and they may reproduce the examples too closely.
Experiment with the number of examples to include
to get the best results for your specific task.
Now you know a prompting technique
that will help you get better quality output.
Few-shot prompting is an effective strategy
that can help you guide an LLM
to generate more useful responses.
You've learned a lot about writing prompts
that you can apply to workplace tasks.
In this section, we discussed Large Language Model,
or LLM, output.
We examined how LLMs produce their output
and potential issues you might encounter in the output.
After this, we focused on a key principle
of prompt engineering, creating clear and specific prompts.
You learned just how important it is
to specify what you want the LLM to do
and to include supporting context
to help it provide better output.
We then went on to discover how to improve
the quality of AI output through iteration.
It's essential that you evaluate your output,
and then revise your prompt as needed.
Lastly, we learned about few-shot prompting,
which involves providing examples to guide the LLM.
I want to offer a final tip before I go.
We focused on prompting Large Language Models.
You can use the same general principles
when you prompt other kinds of AI models, too.
For instance, the next time you want to use AI
to generate an image, try to be as clear
and specific as possible,
and then iterate to get closer to the output you want.
It's been great guiding you through the process
of prompt engineering.
I hope you continue to apply and develop these skills
as you leverage conversational AI tools in the workplace.
To continue learning, I encourage you to explore the topic
of using AI responsibly as part of Google AI Essentials.