Few-Shot Prompting Explained
Summary
TLDR In this video, we go over the concept of few-shot prompting and how it can be used to improve the performance and output quality of large language models. The concept of zero-shot prompting, which relies on giving the model instructions without any examples, was covered previously. Few-shot prompting, by contrast, provides the model with examples that demonstrate the required task, which increases the accuracy and reliability of the results. The video shows how to use this technique to get better results from these models, with illustrative examples drawn from an earlier research paper. Finally, it touches on how this method can be applied in different use cases.
Takeaways
- 🤖 Few-shot prompting refers to improving the performance of large language models by giving them examples of how to carry out a task.
- 📜 Zero-shot prompting was covered earlier: the model is given a direct instruction without any prior examples.
- 📝 With zero-shot prompting, the model can classify text based on a simple instruction, such as labeling text as positive or negative.
- ⚙️ Few-shot prompting is useful for complex tasks, or tasks unfamiliar to the model, where examples help the model understand the task better.
- 🎓 The demonstrations used in few-shot prompting establish the inputs and the expected outputs for the model.
- 🧠 Examples from a research paper are used to illustrate how effective few-shot prompting is at improving model accuracy.
- 💡 Few-shot prompting can be useful for setting the tone of emails or generating headlines for particular articles.
- ⚠️ The video stresses the importance of designing instructions carefully and choosing suitable examples to get the best results from language models.
- 🔄 The structure of the instructions and examples can be adjusted to suit the specific task and steer the model more precisely.
- 📚 Chain-of-thought prompting will be covered in the next video as another technique for improving language model responses.
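The zero-shot vs. few-shot distinction in the takeaways can be made concrete with two prompt strings. This is a minimal sketch: the instruction and the "vacation" input come from the video's zero-shot example, while the two demonstration texts are hypothetical.

```python
# Zero-shot: instruction only, no demonstrations.
zero_shot = (
    "Classify the text into neutral, negative or positive.\n"
    "Text: I think the vacation is okay.\n"
    "Sentiment:"
)

# Few-shot: the same instruction plus demonstrations of the task,
# so the model sees how inputs map to the expected output labels.
# The two demonstration texts below are made up for illustration.
few_shot = (
    "Classify the text into neutral, negative or positive.\n"
    "Text: The movie was a masterpiece.\n"
    "Sentiment: positive\n"
    "Text: I want my money back.\n"
    "Sentiment: negative\n"
    "Text: I think the vacation is okay.\n"
    "Sentiment:"
)
```

Both prompts end with the same output indicator ("Sentiment:"); the few-shot version simply shows the model how that slot should be filled before asking it to fill it.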
Q & A
What is the concept of few-shot prompting discussed in the video?
-Few-shot prompting refers to providing examples or demonstrations of the task you want the language model to perform, which helps the model understand the task better and give more accurate, higher-quality answers.
What is the difference between zero-shot prompting and few-shot prompting?
-Zero-shot prompting means giving the model an instruction without any examples, while in few-shot prompting the model is given demonstrations of the task, which helps it improve performance and understand the task more deeply.
Why can few-shot prompting be useful in some cases?
-Few-shot prompting is useful when the model lacks sufficient data or understanding for a complex task; the provided examples help the model produce more accurate answers.
How does providing examples in few-shot prompting help the language model?
-The examples help the model understand what is expected of the task and establish the right context for answering correctly, which improves output quality.
What are the steps for setting up a language model with few-shot prompting?
-The steps include providing a system message that explains the task, followed by examples containing the inputs and expected outputs, and then running the model to produce answers based on the provided examples.
Do you always have to provide both input and output examples in few-shot prompting?
-Not always; for some tasks it is enough to provide only the outputs, without inputs, depending on the nature of the task and the desired goal.
What is a potential challenge when using few-shot prompting with language models?
-One challenge is that the model may be biased toward a certain type of output (for example, favoring positive labels) if it is not given enough balanced examples.
How can a language model be improved if it does not perform well with few-shot prompting?
-Performance can be improved by supplying more suitable examples that cover the cases the model struggles with, helping it learn and give more accurate answers.
What is the benefit of using a language model to generate new words and compose sentences with them?
-The benefit lies in the model's ability to generate sensible sentences using words that did not previously exist, demonstrating its flexibility and contextual understanding.
What is the next video after this one, according to the content?
-The next video will cover chain-of-thought prompting, another powerful way of prompting language models.
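The steps in that answer can be sketched in the OpenAI chat format: a system message stating the task, demonstrations supplied as user/assistant message pairs, and the new input as the final user turn. The demonstration texts and the model name in the comment are illustrative assumptions, not taken from the video.

```python
# Sketch of a few-shot chat prompt, following the steps above:
# 1) a system message describing the task,
# 2) demonstrations given as user/assistant message pairs,
# 3) the new input as the final user message.
demonstrations = [
    ("I think the vacation is okay.", "neutral"),
    ("This was a waste of money.", "negative"),  # hypothetical example
]

messages = [{"role": "system",
             "content": "Classify the text into neutral, negative or positive."}]
for text, label in demonstrations:
    messages.append({"role": "user", "content": text})
    messages.append({"role": "assistant", "content": label})
messages.append({"role": "user", "content": "What a fantastic day!"})

# The messages list would then be sent to a chat completion endpoint, e.g.:
# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
```

Putting each demonstration in its own user/assistant pair is one common layout; as the video notes, the demonstrations can also be folded into the system message itself.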
Outlines
🤖 Introduction to Few-Shot Prompting in LLMs
This paragraph introduces the concept of Few-Shot Prompting, a technique used to improve the performance and reliability of large language models (LLMs). The speaker compares it to Zero-Shot Prompting, where the model is asked to perform a task with just an instruction. However, Zero-Shot Prompting may not always yield accurate results, especially for complex tasks or tasks the model isn't familiar with. Few-Shot Prompting addresses this by providing examples or demonstrations, enabling the model to better understand the task and deliver higher quality and more reliable outputs.
📝 Example of Few-Shot Prompting with OpenAI Models
This paragraph explains a practical example of Few-Shot Prompting using OpenAI's GPT-3.5 model. The speaker demonstrates how to structure prompts in the OpenAI Playground, including providing a system message and user input. The example given involves making up a word and asking the model to use it in a sentence, which showcases the model's ability to generalize and generate appropriate responses without requiring fine-tuning. The speaker emphasizes that Few-Shot Prompting can be customized depending on the task, such as setting expectations for the model's output in terms of tone, style, or specific content.
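The structure described above, with the demonstration as the system message and the new word as the user message, might look like the following sketch. The "farduddle" input is the one read out in the video; the "whatpu" demonstration is written here in the same style as the made-up-word examples in the GPT-3 paper and should be treated as illustrative rather than the exact Playground content.

```python
# Sketch of the made-up-word task from the demo: the demonstration goes
# in the system message and the new word's definition goes in the user
# message, mirroring how the speaker splits the prompt in the Playground.
system_message = (
    'A "whatpu" is a small, furry animal native to Tanzania. '
    "An example of a sentence that uses the word whatpu is: "
    "We were traveling in Africa and we saw these very cute whatpus."
)
user_message = (
    'To do a "farduddle" means to jump up and down really fast. '
    "An example of a sentence that uses the word farduddle is:"
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]
# Sent to a model such as GPT-3.5, the expected behavior is that the
# model completes the user message with a sentence using "farduddle".
```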
🔍 Detailed Demonstration and Structuring of Prompts
In this paragraph, the speaker continues the demonstration of Few-Shot Prompting by structuring a sentiment classification task. The task involves classifying text as either negative or positive, and the speaker discusses different ways to format and input examples for the model. They highlight that while input-output examples are often used, in some cases, providing just the output might be sufficient. The speaker also points out the importance of giving models the right examples, especially when the model may favor certain outputs based on its training data. The paragraph concludes with a discussion on the flexibility of Few-Shot Prompting and how it can be adapted for various tasks.
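The two formatting options the speaker discusses can be sketched side by side. The key point is that the final input must follow the same format as the demonstrations; the example texts below are hypothetical.

```python
# Two equivalent ways to format few-shot demonstrations for the
# sentiment task. Whichever format the examples use, the final input
# must follow it so the model knows where its answer should go.
pairs = [("The food was terrible.", "negative"),
         ("I loved every minute of it.", "positive")]
query = "The service was slow and rude."

# Variant 1: delimiter style ("text // label").
delimiter_prompt = "\n".join(f"{t} // {l}" for t, l in pairs)
delimiter_prompt += f"\n{query} //"

# Variant 2: labeled fields ("Input:" / "Output:").
labeled_prompt = "\n".join(f"Input: {t}\nOutput: {l}" for t, l in pairs)
labeled_prompt += f"\nInput: {query}\nOutput:"
```

In both variants the prompt ends right where the label belongs, so the model's continuation is the classification.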
📚 Conclusion and Upcoming Topics
The final paragraph wraps up the video, summarizing the key points about Few-Shot Prompting. The speaker encourages viewers to further explore the topic by reading additional materials and understanding the limitations of this prompting method. The paragraph ends with a teaser for the next video, which will cover Chain of Thought Prompting, another powerful technique for improving the performance of large language models.
Keywords
💡Zero-shot prompting
💡Few-shot prompting
💡Large language models
💡System messages
💡Inputs and outputs
💡Made-up words
💡Classification
💡Demonstrations
💡Style and tone
💡Limitations
Highlights
Explanation of few-shot prompting as a popular way to improve the performance and consistency of results from large language models.
Zero-shot prompting gives the model a direct instruction without examples.
Few-shot prompting matters for tasks the model has not seen enough data on, or complex tasks that are hard for it to understand.
In few-shot prompting, the model is given examples or demonstrations to improve its understanding of the required task.
The model can produce sentences using made-up words without any additional training.
Few-shot prompting can be used to shape the model's style when writing emails or suggesting headlines.
Experimenting with few-shot prompting shows that models can deliver accurate, consistent results when given demonstrations.
The model used in the demo is GPT-3.5 from OpenAI.
Setting the system message in the OpenAI Playground is a basic part of prompting.
Models can classify text as positive or negative using demonstrations.
Few-shot prompting helps models understand tasks and correct potential biases in the results.
Few-shot prompts can be designed in different ways using the system, user, and assistant roles.
Adding instructions to the system message helps the model understand the task.
Additional indicators or subheadings (delimiters) can be used to improve few-shot prompts.
Few-shot prompting can be used for harder tasks such as sentiment classification.
Transcripts
In this video we are going to go over the concept of few-shot prompting. Few-shot prompting is one of the more popular ways of prompting large language models to improve the performance and reliability of the results and output quality we would like from these LLMs.

In the previous guide we covered the idea of zero-shot prompting, and we have a video for it if you're interested in that concept. Basically, with zero-shot prompting we call a model, or perform a task, by just giving the model an instruction. An instruction can be "Classify the text into neutral, negative or positive." Then you give the model the input, which is the text "I think the vacation is okay," and the output indicator tells the model that we're expecting an output that will be one of those labels. This classification task is considered zero-shot prompting because we're not adding examples; we're not showing the model how to perform the task. We're assuming the model has some internal understanding of what the task is, and obviously this is a pattern-recognition system that can understand the task and the intent and provide the right answers, in this case the labels.

Now, this is a good first way of experimenting with large language models, but these models lack capabilities in a lot of areas: areas where maybe they haven't seen enough data and don't really understand the task, or very complex tasks the model has very little understanding or knowledge of. In those cases you may want to experiment with something called few-shot prompting, which is what we're going to cover now. The idea of few-shot prompting is to add examples, or give the model demonstrations: you show it how to perform the task, and by showing it, the model can better understand what the task is about and give you more reliable, higher-quality answers. There are various reasons why you may want to use few-shot prompting, and we will get into those and also cover an example.

So let's go back to few-shot prompting, which is the next guide after zero-shot prompting. We'll start with a simple example and then move on to a more interesting one. This one is from the Brown et al. paper, the GPT-3 paper if I'm not mistaken, and it's copied directly from that paper. It shows the idea of this few-shot prompting technique. We're going to copy it and move it over to the Playground. The Playground I'm using here is from OpenAI, because we're using the OpenAI models; in particular I'm using GPT-3.5, but you could use any of the other models available in the Playground.

What I'm going to do now is use the system message: I need to provide a system message here, which is now absolutely required in the Playground. I can copy-paste the prompt for now, but because a system message is required before I input a user message, I can actually divide it into two parts. There are many ways you can design the prompt itself, with the different roles like the system role, the user role, and also the assistant role. But in this case we want to perform this particular task and we don't really want to add too many instructions; we just want to give the model the examples. In the examples, you can see it's making up a word, defining the word, and then instructing the model to come up with an example sentence that uses the word. But before it does that, it gives the model a demonstration of how to perform the task, so you basically set the expectation for the model. This particular demonstration shows what the input is, which is the word, and what the expected output is, which is a sentence where that made-up word is used. Again, these are made-up words, so it's remarkable that the model can come up with a sentence on the fly about this particular word without us having to fine-tune the model, or tune its weights, to tell it what this is about. That's really the power of few-shot prompting.

To make this work, I'll keep it really simple and add this as part of the user role. I can just add it, and now I have the system message, which is the demonstration, and then this, which will be the input. It makes sense for me to compose it this way. There are many other ways you could go about it: you could also have left the whole thing where I had it and then added some other additional instruction in the user role. There are different ways, but I find this is the best way to do it with these models and these roles.

Once I have that, I can run it. Let's read the input message first: "To do a 'farduddle' means to jump up and down really fast. An example of a sentence that uses the word farduddle is:" And from the model we get the response: "I couldn't contain my excitement when I found out I won the race, so I started to farduddle right there on the spot." I think it's properly using the word in a sentence, and that's great to see, because that's what we wanted with this particular task.

Now, this is a very simple task. We actually got it from the paper, and it was exciting at the time when it was shown that large language models could do this, because it generalizes. This is a toy example, but you can apply the same idea elsewhere: if you want a specific tone in an email, you can provide demonstrations of such emails; if you want the model to come up with headline or heading suggestions for your essays, you can give it examples of previous essays that you think have the style you want, and the model should be able to follow that to some extent. That's the idea of few-shot prompting: you're basically demonstrating to the model what you expect in terms of the type of output, the quality of the output, perhaps the tone and style as well. It could also be, as in this case, defining words and giving the model knowledge about certain concepts. There are different ways you can use few-shot prompting; this is a basic toy example.

Now I'm going to delete this part, and then I'm going to copy over the second example we have here in the guide. I recommend you read the guide; there are also some additional readings if you're interested in going into more detail, but I'm trying to give you more or less a recap of the idea. I'm going to copy this over and paste it right here. The way I structure this will be a little different for every task. I know this is a classification task, so I can tell it: "Your task is to perform sentiment classification on the provided inputs. You will classify the text input into either negative or positive." That's my task; I could have improved it a bit, but for now we'll keep it as is. The instructions usually go in the system message, so I have them in the system message already, and then I can provide the examples.

I could use something like this as well; that's totally fine. I see a lot of people use these types of indicators, or what we call delimiters; you could also call them subheadings or whatever you want. That's something the model should be able to leverage to better understand the task: okay, this part is going to be about examples. Notice that I have the examples here: the input, which is the text, and the output of that input. You could also design this a little differently. I wanted to take a shortcut here, because the way I'm inputting this information is with these additional indicator characters, but I could also use something like "Input:" with the text and then "Output:". I could change the way I'm formatting these examples, and then I would have to carry that format over to the final input that I'm providing for the model to classify.

So here I'm just going to classify this, and we got "negative" from the model, which is exactly the classification we expected. Even though this is a very simple task, as soon as you start to scale up on tasks like this, the model does tend to favor a certain label or category: maybe it favors positive over negative because it has potentially seen more positive content in its training data. So our task is to figure out the best strategy for providing these models with examples. Maybe the model is lacking the ability to perform a specific classification; in that case we can give it more examples of those cases so that it gets them right.

In this case you saw that I gave it input-output pairs, but that's not always required. You don't always have to give it both input and output; you can also give it just the output. For some tasks, for instance where you want a specific type of email in a specific tone, you can just give it the email; you don't really need to give it an input. So some tasks require input-output pairs, but for others you can get away with just giving the output.

That will be it for this demo, and hopefully it's clearer what this is about. Feel free to read the guide again and also understand the limitations of this method. In the next video we are going to cover chain-of-thought prompting, which is another really powerful way of prompting these models. That will be it for this particular video. Thanks for watching, and catch you in the next one.