why you suck at prompt engineering (and how to fix it)

Liam Ottley
18 Apr 2024 · 56:39

Summary

TLDR: The video offers an in-depth guide to prompt engineering with AI language models, emphasizing the importance of understanding the underlying science to get the most out of them. It humorously places prompt engineering skill on a spectrum, highlighting the 'midwit' trap where people overcomplicate tasks. The speaker shares his journey and the strategies he uses to build efficient AI systems, including specific techniques such as role prompting, chain-of-thought prompting, emotion prompting, and few-shot prompting. The script also discusses the economics of AI solutions, advising on the choice of model based on cost and performance. It concludes with a formula for creating effective prompts and its application across AI use cases such as AI agents, voice assistants, and automations, underlining the significance of prompt engineering in the burgeoning AI industry.

Takeaways

  • 🧠 **Understanding Prompt Engineering**: The ability to effectively prompt AI models is crucial for building AI systems and getting value from language models like GPT.
  • 🚀 **Role Prompting**: Assigning an advantageous role to the AI model and enriching the role with key qualities can significantly increase the accuracy of prompts.
  • 🤖 **Task Specificity**: Clearly defining the task with a verb and being as descriptive as possible allows the AI to understand exactly what is required.
  • 💡 **Chain of Thought Prompting**: Providing step-by-step instructions for the AI to follow can dramatically improve the accuracy of complex tasks.
  • 📈 **Performance Increase**: Techniques like role prompting, chain of thought, and emotion prompting can lead to significant performance improvements in AI systems.
  • 💌 **Emotion Prompt**: Adding emotional stimuli to prompts can enhance the performance, truthfulness, and informativeness of the AI's output.
  • 📚 **Contextual Information**: Giving the AI context about the environment it's operating in can help improve performance by making the task more relatable.
  • 📉 **Few Shot Prompting**: Providing a few examples (3-5) can greatly increase the accuracy and performance of the AI without needing fine-tuning.
  • 📝 **Notes and Tweaks**: Using a notes section to remind the AI of key aspects and to add final details can fine-tune the output without restructuring the entire prompt.
  • 🔑 **Markdown Formatting**: Structuring prompts with markdown can improve readability and potentially the AI's understanding and performance (a sketch of the full formula laid out this way follows this list).
  • 💡 **Positive Reinforcement**: Encouraging the AI model and using positive feedback can lead to better responses and higher quality outputs.
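
The formula these takeaways point at (role, task, specifics, context, examples, notes) can be laid out as a single markdown-structured template. The sketch below is an illustration assembled from the techniques above, not the exact prompt shown in the video; the label set and the `{email_content}` placeholder are assumptions borrowed from the email-classification example discussed later.

```python
# A rough skeleton of the prompt formula, structured with markdown headings.
# Everything in it is illustrative; swap in your own role, steps, and examples.
PROMPT_TEMPLATE = """# Role
You are an experienced email classification system that accurately categorizes
emails based on their content and potential business value.

# Task
Classify the email below into exactly one label: Opportunity, Needs Attention, or Ignore.
Think through the email step by step, then reply with the label only.

# Specifics
- This task is critical to the success of our business; this is very important to my career.
- If the email describes a potential sale or partnership, choose Opportunity.
- Reply with a single label and nothing else.

# Context
Our company provides AI solutions to businesses and receives a high volume of emails
through our website contact form; your classification tells the sales team what to act on.

# Examples
Email: "We'd love a quote for an AI chatbot on our website." -> Opportunity
Email: "Your last invoice appears to be duplicated, please advise." -> Needs Attention
Email: "Congratulations, you have won a free cruise!" -> Ignore

# Notes
- Remember: output only the label.

# Email
{email_content}"""

# Usage sketch: PROMPT_TEMPLATE.format(email_content=incoming_email_text)
```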

Q & A

  • What is the main issue the video aims to address?

    -The video addresses the issue of ineffective prompt engineering in AI systems, explaining why many people struggle with it, and how they can improve their skills to build better AI systems.

  • What is the 'midwit' concept mentioned in the video?

    -The 'midwit' concept refers to individuals who overcomplicate tasks, making them more difficult and inefficient. In the context of the video, a midwit is someone stuck in the middle of the IQ spectrum, not leveraging simple solutions like low IQ people, nor understanding the advanced techniques like high IQ people.

  • What is the significance of prompt engineering in building AI systems?

    -Prompt engineering is crucial as it directly impacts one's ability to extract value from AI models. It involves crafting instructions that AI systems can follow to perform specific tasks, and mastering this skill is essential for creating efficient and cost-effective AI solutions.

  • Why is the video's presenter moving away from fancy video production?

    -The presenter is focusing more on his business and team, building software and educating his community. He prefers to spend less time on video production and more on these activities that are directly related to his business goals.

  • What is the role of English in programming AI models?

    -English serves as a new programming language for AI models. By writing effective prompts in English, one can instruct AI models to perform tasks, which can replace the need for large blocks of code, making AI more accessible and efficient.

  • What is the difference between conversational and single shot prompting?

    -Conversational prompting is interactive, allowing for follow-up prompts and adjustments by a human operator, which is good for personal use. Single shot prompting, on the other hand, is a one-time instruction integrated into a system for automation; it requires no human intervention and is ideal for scalable AI systems.

  • How can a well-written prompt replace hundreds of lines of code?

    -A well-written prompt can encapsulate the logic and instructions that would otherwise require extensive coding. By providing clear and specific directions, an AI model can execute complex tasks based on the prompt, eliminating the need to write code for that task.

  • What are the components of the 'perfect prompt formula'?

    -The components of the perfect prompt formula are role, task, specifics, context, examples, and notes. Each component is backed by scientific research or a prompting technique that enhances the accuracy and performance of the AI model.

  • Why is markdown formatting used in prompt engineering?

    -Markdown formatting is used to structure prompts for better readability and to help the AI model understand the structure of the prompt better. It uses headings, bold text, lists, and other formatting tools to organize the information logically.

  • What is the 'Lost in the Middle' effect and how does it apply to prompt engineering?

    -The 'Lost in the Middle' effect refers to the phenomenon where language models perform best when relevant information is placed at the beginning or the end of a context, and performance significantly worsens when critical information is in the middle. In prompt engineering, this understanding helps in structuring prompts effectively.

  • How can one optimize the cost and performance of an AI system?

    -One can optimize the cost and performance of an AI system by mastering prompt engineering to utilize cheaper and faster models effectively. By crafting precise and effective prompts, one can reduce the reliance on more expensive models and achieve similar results at a lower cost and higher speed.

Outlines

00:00

😀 Introduction to Prompt Engineering

The speaker begins by addressing the audience's potential lack of skill in prompt engineering and promises to explain why they may be stuck in a 'midwit' range. The video aims to elevate the audience's understanding of prompt engineering, moving them from a plateau to a level of expertise. The speaker uses a meme to illustrate the convergence of low and high IQ individuals on similar solutions, contrasting this with the overcomplication by 'midwits.' The goal is to transition from relying on templates to understanding the science behind prompt engineering, which is crucial for building AI systems.

05:00

🚀 Building Reliable AI Systems with Single Shot Prompting

The speaker differentiates between conversational and single shot prompting, emphasizing the importance of the latter for creating scalable and reliable AI systems. They argue that while conversational prompting might improve one's job performance, single shot prompting allows for the development of valuable AI systems. The speaker also highlights the significance of English as a programming language for AI, stating that effective prompts can replace extensive code.

10:01

🤔 Prompt Engineering Misconceptions and the Importance of Understanding

The speaker discusses common misconceptions about prompt engineering, noting that many believe they are proficient when they are actually only good at conversational prompting. The video aims to correct this by teaching the audience the fundamentals of prompt engineering, which is essential for creating AI voice systems, AI agents, and custom AI tools. The speaker also shares personal insights about their business and the motivation behind teaching these skills.

15:02

📈 The Perfect Prompt Formula for AI Systems

The speaker outlines the components of an effective prompt, which include role, task, specifics, context, examples, and notes. Each component is backed by research or a discovered technique, and the speaker provides a detailed explanation of how to apply these components to build better AI systems. The goal is to help the audience understand the science behind prompt engineering to create more accurate and efficient AI models.

20:02

📉 The Diminishing Returns of Adding Examples to Prompts

The speaker discusses the research results related to the effectiveness of adding examples to prompts, noting that accuracy increases significantly with each additional example up to a certain point. They suggest that providing 3 to 5 examples is sufficient for most tasks, as more examples increase the cost and complexity of the prompt without substantially improving performance.

25:02

🔑 Final Touches: Notes Section and Markdown Formatting

The speaker introduces the final part of the prompt, the notes section, which serves as a reminder for key aspects of the task and a place to add final details. They also discuss the importance of markdown formatting for structuring prompts, making them more readable and understandable for the AI model. The speaker emphasizes the effectiveness of positive reinforcement when interacting with AI models and the potential benefits of using persona-based prompts.

30:03

🎓 Conclusion: Mastering Prompt Engineering for AI Success

The speaker concludes by emphasizing the importance of mastering prompt engineering to succeed in the AI space. They provide a comprehensive guide that can be applied to various AI systems, including AI agents, voice agents, and AI automations. The speaker also discusses the business implications of prompt engineering, suggesting that those who do not master these skills will be outcompeted by those who do.

Keywords

💡Prompt Engineering

Prompt engineering refers to the skill of effectively instructing AI models, such as language models, to perform specific tasks. In the video, it is emphasized as a critical ability for leveraging AI systems to their full potential. The script discusses the importance of understanding the science behind prompt engineering to move beyond basic usage and into building sophisticated AI applications.

💡Midwit

The term 'midwit' is used in the script to describe someone who overcomplicates tasks, often due to a lack of understanding of the underlying principles. It is part of a meme referenced in the video, contrasting with 'low IQ' and 'high IQ' individuals who coincidentally reach similar solutions. In the context of the video, avoiding 'midwit' behavior is crucial for efficient prompt engineering.

💡Conversational Prompting

Conversational prompting is a method of interacting with AI where the user has a back-and-forth dialogue with the AI model, allowing for follow-up prompts and modifications. The video explains that while this method is good for personal use and tweaking responses, it is not ideal for building automated systems that require consistent and error-free outputs.

💡Single Shot Prompting

Single shot prompting is a technique where the AI model is given a prompt and expected to produce a correct response in one go, without the opportunity for follow-up prompts. This method is highlighted in the video as essential for creating automated, scalable AI systems that can be reliably integrated into business processes.

💡Role Prompting

Role prompting involves assigning an AI model a specific role or identity, which can enhance its performance by up to 25% as mentioned in the script. By defining a role that is advantageous to the task at hand, the AI is better guided to perform the task with higher accuracy. This technique is part of the broader strategy to improve the effectiveness of prompts.

💡Chain of Thought Prompting

Chain of thought prompting is a technique where the AI is instructed to think through a problem step by step, either by providing step-by-step instructions or by encouraging it to articulate its reasoning process. The video notes that this method can significantly boost accuracy, especially for complex problems, by making the AI's problem-solving process more transparent.

💡Emotion Prompt

An emotion prompt is a short phrase containing emotional stimuli, used to enhance the performance of a prompt. The script mentions that adding emotional stimuli like 'this is very important to my career' can increase the accuracy of simple tasks by 8% and complex tasks by 115%. This technique leverages the AI's ability to respond to emotional cues to improve its output.

💡Few Shot Prompting

Few shot prompting involves providing the AI with a few examples of inputs and desired outputs to guide its learning. The video references research showing that providing just a few examples can massively increase the performance of a prompt, with most gains achieved between 10 to 32 well-crafted examples. This technique is used to teach the AI the expected response format without extensive fine-tuning.

💡Lost in the Middle Effect

The 'lost in the middle' effect is a phenomenon where information presented in the middle of a long context is less likely to be remembered or acted upon by the AI model. The video suggests structuring prompts to place critical information at the beginning or end to improve performance. This understanding helps in crafting prompts that are more likely to be effective.
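
A minimal sketch of how this finding might be applied when assembling a long prompt, assuming the common workaround of stating the critical instruction first and restating it at the end; the function and variable names are illustrative, not from the video.

```python
def build_long_prompt(critical_instruction: str, reference_chunks: list[str]) -> str:
    """Place the must-follow instruction at the start and end; bulk material goes in
    the middle, where the 'lost in the middle' effect makes it least influential."""
    middle = "\n\n".join(reference_chunks)
    return (
        f"{critical_instruction}\n\n"
        f"Reference material:\n{middle}\n\n"
        f"Reminder: {critical_instruction}"
    )
```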

💡Markdown Formatting

Markdown formatting is a way to structure text using plain text conventions. In the context of the video, it is used to organize and structure prompts for better readability and to potentially enhance the AI's understanding of the prompt's structure. The video suggests that using markdown headings can help in clearly delineating the different components of a prompt, such as role, task, and examples.

💡AI Systems

AI systems in the video refer to the various applications and tools powered by artificial intelligence that can be built using prompt engineering skills. These systems can range from email classification tools to AI agents and voice assistants. The script emphasizes that mastering prompt engineering enables the creation of valuable AI systems that can significantly impact businesses and the AI space.

Highlights

The video discusses the concept of prompt engineering and its importance in AI systems, emphasizing that improper prompt engineering can lead to suboptimal results.

The speaker introduces a meme comparing low IQ, midwit, and high IQ individuals to illustrate the complexity of problem-solving, relating it to prompt engineering.

The video emphasizes the importance of understanding the science behind prompt engineering to avoid being stuck in the 'midwit' range.

The speaker explains how prompt engineering skills can significantly impact the value one can extract from AI language models.

The concept of 'conversational' versus 'single shot' prompting is introduced, with the latter being more suitable for AI systems and automation.

The speaker shares personal experiences and challenges in prompt engineering, admitting to being part of the problem and offering solutions.

The video highlights the importance of crafting well-written prompts that can replace hundreds of lines of code, emphasizing the potential of AI in programming.

The speaker discusses the role of English as a 'programming language' in the context of AI, where effective prompts can replace traditional coding.

The video presents a 'perfect prompt formula' for building AI systems, which includes role, task, specifics, context, examples, and notes.

The speaker explains the significance of role prompting, where assigning an advantageous role to the AI can increase the accuracy of prompts.

The concept of 'chain of thought' prompting is introduced, which involves providing step-by-step instructions to the AI for complex tasks.

The video discusses the impact of emotional stimuli in prompts, which can enhance the performance of AI systems.

The speaker emphasizes the importance of providing examples in prompts, which can significantly improve the accuracy and performance of AI systems.

The video introduces the 'lost in the middle' effect, which suggests that information placed at the beginning or end of a prompt is more likely to be followed by the AI.

The speaker discusses the use of markdown formatting in prompts, which can improve readability and potentially the performance of AI systems.

The video concludes with a practical example of how to apply the learned techniques in prompt engineering to improve AI system outcomes.

Transcripts

00:00

You probably suck at prompt engineering, and in this video I'm going to tell you why, how you can fix it, and how you can avoid being the guy in the middle of this midwit meme. That might seem a little off topic, but give me a second and I'll explain how it applies to the majority of people trying to do prompt engineering and build AI systems, and why it's probably holding you back: you're stuck in the midwit range. If you haven't seen the meme before, the low-IQ and the high-IQ people basically converge on the same solution, as you see here: the simple guy using Apple Notes on one side, the genius using Apple Notes on the other, and in the middle the midwit who's overcomplicating it, making it very difficult and painful for himself. Same thing with Nescafé Classic on both sides, and in the middle the midwit struggling with all these different types of coffee and fancy brewing methods. So how does this apply to prompt engineering? It's actually a bell curve when it comes to prompt engineering too. On the far left we have the person who is just using ChatGPT and prompting it however they wish, throwing things in there. On the far right we have where we're trying to get you to after this video: a genius who has a toolkit of prompts and understands the science behind them. And in the middle we have, probably, you right now, which is, no disrespect to these other YouTubers, because I've made videos on prompting myself and I'm part of the problem here, the world of ChatGPT prompt templates, which take the thinking away from you and put it in the hands of a template someone else created. I'm not going to trash my own videos too much, because they were talking more conceptually, so I'd say I'm on that line, but this presentation is intended to take you off the plateau of someone trying to do prompt engineering without actually understanding the science behind it, which is what we're going to go into. The point of this video is to take you from someone on that plateau up to the genius, the very capable prompt engineer who's able to do great things with these language models. And it's so important because your ability to prompt these models, to provide them with instructions, directly impacts your ability to get value out of them: if there's this amazing new technology called LLMs and you're better at using it, you're going to go further in the AI space, and further in life, if you can better send instructions to these models.

02:13

Continuing on, you may be wondering why the new style, why the camera is on a different side, why everything is so casual. That's because I've been spending, not wasting, a lot of time on my videos lately, and as you may have noticed, some of you are starting to think of me as a YouTuber. I've never really thought of myself as a YouTuber; I'm a businessman, and YouTube is how I get clients for my business. As much as I love making videos and teaching you, what I really like doing is working on my business and my team, building the software we're building through Agentive, working on Morningside, and the stuff we do in my education community, teaching people how to start their own businesses. So expect fewer fancy videos that require a lot of time and editing; when I have something interesting to share and want to talk about it, like in this video, I will. This one is coming out of me seeing so many people I talk to in my community not understanding this fundamental skill. It is so fundamental, yet people have this misconception that they already know how to do it, which I'm going to break down, absolutely destroy, and then rebuild your skills as a prompt engineer. You may also wonder why I do this at all: I have a SaaS that helps agency owners build AI solutions for businesses, so if I don't teach you how to do prompt engineering, you're never going to use my SaaS. You get the byproduct of me trying to build my SaaS, which is helping you learn these things.

03:46

So, anyway, why you're probably bad at prompt engineering: conversational prompt engineering versus single-shot. Conversational is what everyone thinks prompt engineering is. They go onto ChatGPT with a cool prompt template, chuck it in there, get some responses, think "man, I'm so good at this," and switch off believing they're a prompt engineer. This is, of course, human-operated: there are follow-up prompts, you can say "could you please modify this a little bit," and because of those follow-ups it's very forgiving in what you can say and how you can tweak it to get the right responses. It's really just good for personal use. If you're working at a job and want to streamline some of the work you do there, great; ChatGPT is incredible software and I use it all the time, so I'm not knocking it, but it is conversational prompting. On the other side is single-shot prompting, which is something we can actually bake into a system that can be automated, part of an ongoing flow with an AI task embedded in it. There are no follow-up prompts because there's no human involved in most cases, and there's no room for error: you can't have ChatGPT outputting "hey, here is the answer" before the answer; it just needs to give you the answer, every single time, or the system breaks. If we can prompt it into something reliable, we can have a very scalable system with AI built into it, which is ideal for these AI-assisted systems and is really how you create value. The benefit of conversational prompting skills, which many of you will have, is that they might make you better at your job and make your boss a bit more money because you can do more work, and maybe make you a bit more money in the process. The benefit of single-shot systems, where we build an AI task that performs a specific function reliably every single time, is that they allow you to build AI systems worth potentially thousands of dollars apiece, as I've done and as many people in my community have done. If you don't believe me, I don't care.
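
To make the contrast concrete, here is a hedged sketch of what "baking" a single-shot prompt into an automated system can look like, using the OpenAI Python SDK purely as an example; the model name, temperature, and label set are placeholder choices, and the same shape applies to any LLM client or automation platform.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You classify inbound emails for a business. "
    "Reply with exactly one label: Opportunity, Needs Attention, or Ignore."
)

def classify_email(email_text: str) -> str:
    """Single-shot: one prompt in, one machine-readable answer out, with no human
    follow-ups available to rescue a chatty or malformed response."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # placeholder; pick the cheapest model that hits your accuracy bar
        temperature=0,           # keep the output as repeatable as possible inside an automation
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": email_text},
        ],
    )
    return response.choices[0].message.content.strip()
```

In conversational use a human would simply rephrase and ask again when the answer comes back wrong; here the prompt itself has to make a clean, exact label the only plausible output.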

05:39

Furthermore, on the point of why you should take prompt engineering seriously: Andrej Karpathy says the hottest new programming language is English, and this is no dummy; he's a founding member of OpenAI and a leading AI researcher. What he means is that being able to write instructions in English allows you to, one, generate code if you want to, translating from English to code, which is technically one way of programming in English; but another way is that if you can write effective prompts, you can replace the programming a massive script would have required. You can write a prompt that effectively does everything that script would have done, so you can replace large blocks of code with a well-written prompt, which is really what I want you to focus on: you can have some of the abilities of a developer if you can write these prompts well and use LLMs properly. And furthermore, this guy, Liam Ottley: I've founded a couple of AI companies, I have my own AI agency, Morningside AI, my own AI education community, my AAA accelerator, and a software product, my AI SaaS called Agentive, which is really where my focus is right now. I've got some pretty smart people working for me; I'm not the brains of the operation anymore, I hope I was at one point, but my CTO Spencer has five or six years of NLP experience and does some really cool stuff for us, and a lot of what I'm going to share here about how you should be doing your prompt engineering, what I've learned and what I now use, comes from him. So you might think I'm just some goofball who's been doing YouTube for twelve months, but I do have a team, and I've paid people a lot smarter than me to give me this knowledge, and now I'm giving it to you. So remember this: a well-written prompt can replace hundreds of lines of code. I think that's my quote, but I'll just say someone said it, because someone must have; that's essentially what you can do if you write a well-written prompt.

07:29

Now here's an example. There's a video that just went out on my channel where I manage my personal finances with AI. I set up a system where my assistant and I can send screenshots of transactions through the system, and out the other side comes a tracker for all my expenses inside Notion: it automatically extracts the transactions from the screenshots, categorizes them, and stores them in my expense database in Notion. This is roughly the system here; you can pause and take a look. It took me two hours to write a very good prompt that can successfully categorize, format, and then pass the data over to Notion, and that's ended up saving about eight hours per month. Not the best example, but you get the idea: a good prompt can replace what would have taken far longer and been extremely messy to build in code for this expense system, whereas the AI can just take all the information, be told "this is what I want you to do with it," and out come the transactions ready to go into Notion.

08:25

Now, we're still not ready to move forward, because you need to understand what happens if you can just get this one skill right that many people don't have. They think conversational prompt engineering will be enough for them to go and build these systems, but with AI voice systems, which are all the rage right now (I've done a ton of videos on them, go watch them on my channel), if you can't prompt correctly, if you don't have good prompt engineering skills, you can't do AI voice systems. If you don't have good prompt engineering skills you can't create AI agents like GPTs; you can't build AI tasks into AI automations on Zapier, Make, and so on; and you can't build custom AI tools on Relevance AI, Stack AI, and these other platforms. So just get this one thing right and watch the rest of this video. It's not going to be all retention hooks for your TikTok brain, and I don't care if you watch the rest of it, but I'm telling you: if you don't take the time to actually soak in the information I'm about to give you and get good at this prompt engineering skill, you are not going to make any money in AI, because everything depends on it.

09:17

Finally, a little comparison of the two different types of people you can be. You can either watch this video and come out on the right side here, or you can keep doing whatever you think you're doing when you prompt engineer and be like the guy on the left, the midwit. He has a handy bag of prompt templates; he gets stuck when something doesn't work because he doesn't understand what the template is even doing; so he uses a more expensive and smarter model, moving from GPT-3.5 Turbo to GPT-4 Turbo, and goes "oh, now it works," because he gets the model to do the work instead of himself. By doing this he creates slower and more expensive systems, and therefore struggles to create systems that are actually valuable for clients, because if they cost a lot and are really slow, there's less value for the client. Then, number six, he gives up on trying to start an AI business and get into the AI solutions space, and, like some of you in the comments, becomes an "AI is a scam" goofball who blames the model and not his own inability to learn how to write English. On the right we have the guy you want to be. He has a toolkit of prompt components and methods based on research, which I'm going to take you through in this video; he approaches problems like an engineer; he skillfully applies these techniques; and he achieves the desired performance with the fastest and cheapest model available, using his skills to make the cheapest model he can get do what he needs it to do. Therefore he's able to create lightning-quick, affordable AI systems for clients that create actual value, because they're cheap and they're fast, and therefore he actually makes money, because those clients go "wow, this thing is awesome." And number seven, this guy then finds other AI Chads like him who know how to do prompt engineering and are making money with AI, and he and his friends all get AI rich. Yes, I'm selling the dream there, but that is what's possible if you can get this one thing right, and it's what I and a bunch of the other guys I was just naming are all doing; it's happening whether you like it or not. So be like this guy; don't be like that guy.

11:13

So now we get into the perfect prompt formula for building AI systems, which is the meat and potatoes of this video. Beware of "The Prompt Formula": as I mentioned, you don't want to be the guy who relies on a formula, and while I am giving you one in this video, I've put it in asterisks and capital letters so you understand I'm partly taking the piss out of formulas, because what I'm teaching you here is the science behind them, so that if you run into an issue you'll understand which technique you can apply to try and fix it. You'll actually be able to write good prompts forever if you understand this material and absorb it. The components of this prompt are role, task, specifics, context, examples, and notes, and behind each component is a related scientific paper, or some research, or a prompting technique that has been discovered and backed up with a research paper, as you can see on screen: role prompting, chain-of-thought prompting, emotion prompt, few-shot prompting, and lost in the middle. All of these are covered in the next section of this video, so let's jump into it. Oh, before we do: each of these techniques comes with an increase in accuracy or performance for your prompts, and I'm going to retention-hook you here with all these question marks, because over time we're going to reveal just how much performance improvement you get if you stack all of these together. A lot of them are very easy to implement, but you're going to get a massive increase, I won't say how much yet, just by applying these simple techniques.

play12:38

example for this video which is an email

play12:39

classification system uh and the the AI

play12:43

task here in the middle uh is where

play12:44

we're going to have be sending our

play12:46

prompt and in this case it's going to be

play12:48

someone comes onto uh someone's website

play12:50

they fill out a form that form then gets

play12:52

sent the form submission gets sent by

play12:54

email to the company the CEO or the Ops

play12:57

guy uh to his email and he gets it and

play12:59

then normally has to read through it and

play13:00

then classify it and and take action

play13:01

from there but what we're going to be

play13:03

doing is imagining a system where there

play13:05

is this AI task or this AI node and

play13:07

make.com or whatever you want to use

play13:09

where the email comes in and then it's

play13:10

going to be classified using our prompt

play13:13

into opportunity needs attention or

play13:15

ignore label so super basic system I

play13:16

wanted to use as an example here let's
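
As a rough sketch of the automation wrapped around that AI node (the routing rules and function are hypothetical, not from the video), the downstream steps branch on the exact label string, which is why the classification has to come back clean every time:

```python
def route(label: str, email_subject: str) -> str:
    """Illustrative post-classification routing; any stray wording in `label` falls into the wrong branch."""
    if label == "Opportunity":
        return f"notify sales about: {email_subject}"
    if label == "Needs Attention":
        return f"create a follow-up task for: {email_subject}"
    return f"archive: {email_subject}"  # treat anything else, including "Ignore", as no action needed

print(route("Opportunity", "Interested in an AI chatbot for our clinic"))
```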

13:19

We're going to build the prompt up over time, applying these techniques to make it better and perform better. Starting off, we have the typical ChatGPT prompt. If you asked any midwit, well, not even a midwit, this is the stupid guy, any regular bottom-feeder ChatGPT user, they'd probably give you a prompt like: "Classify the following email into Ignore, Opportunity, or Needs Attention labels," and then paste in the email. So this is our starting point, the typical ChatGPT prompt, and this is as far to the left on the IQ scale as you can go.

13:51

So we're breaking it down by component, starting with the role. I know for you TikTok brains there's a lot of writing here, so pause the video if you need to; I'm not going to go over all of it, and some of you already know some of these components. Role prompting is something you've definitely done before, but I want to draw attention to the research results, marked with the little rocket ship to show the accuracy increase. When you assign an advantageous role in your role prompting, by saying something like "you are an email classification expert trained to assist with this," it can increase the accuracy and performance of your prompts by 10.3%; and if you also give complimentary descriptions of its abilities, you can get up to a 15 to 25% increase in total. It's as simple as the example here: "You are a highly skilled and creative short-form content script writer," that's the role, "with a knack for crafting engaging, informative and concise videos." So you add a role and then give it key qualities like engaging, informative, and concise, and you basically hype it up and tell it how amazing it is at this. You want a role that is strong and advantageous to what it's doing: if you're solving a math problem, "you are an expert math teacher," and then you can give it some more of those key qualities after that. Takeaways: select the role that is advantageous for the specific task (e.g. a math teacher for math problems), then enrich the role, I like that word, enrich the role with additional words that highlight how good it is at that task. Super simple; that's role prompting. To tie everything together in this video we'll do a before and after. This was the low-IQ starting prompt, remember, and here's what happens after we add in the role. You'll need to pause as this thing gets bigger, since it's hard to fit the whole prompt on screen, but in the before and after you'll see the role prompt highlighted, well, low-lighted in black, so you can see what's changed: we've still got the task from before, but it's now part of a longer prompt, and we have the role included as well: "You are an experienced email classification system that accurately categorizes emails based on their content and potential business value."
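
As a small sketch of the two takeaways (an advantageous role, then enrichment with key qualities), paraphrasing the prompts quoted above rather than reproducing the on-screen text exactly:

```python
# Role prompting: pick a role that is advantageous for the specific task...
EMAIL_CLASSIFIER_ROLE = (
    "You are an experienced email classification system that accurately categorizes "
    "emails based on their content and potential business value."
)

# ...then enrich it with key qualities that hype up how good it is at that task.
SCRIPT_WRITER_ROLE = (
    "You are a highly skilled and creative short-form content script writer "
    "with a knack for crafting engaging, informative and concise videos."
)
```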

15:45

Great, so: the task. This is actually what most people put into ChatGPT, or into the prompt: the task itself. It's basically just telling it what it's going to do, usually starting with a verb: "generate a...", "analyze this...", "write this...". Be as descriptive as possible while also keeping it brief. An example: "Generate engaging and casual outreach messages for users looking to promote their services in the dental industry, especially focusing on the integration of AI tools to scale businesses. Your messages should be direct." It's telling it what it should do, using a verb; nothing too crazy. What I will mention is that because we're building single-shot systems, this is where we need to insert values: the prompt is written once, and then different inputs get fed in; in this case the email content is the variable that needs to go in this place. In the outreach example you can see I have the dental industry as the niche and, in pink, the integration of AI tools as the offer (this is from an earlier video I've done). Within the task is where you insert the variables that will be used throughout the system, and if you go back a little you'll see the email content variable has already become part of the task: "classify the {email content}", there's the variable-based input that we want.
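
A minimal sketch of how that variable slot might look once the prompt is baked into a system; the placeholder name and templating mechanism are assumptions, since any approach (Make.com variables, f-strings, etc.) works the same way:

```python
# Task: start with a verb, stay descriptive but brief, and leave a slot for the runtime input.
TASK_TEMPLATE = (
    "Classify the following email into one of these labels: "
    "Opportunity, Needs Attention, or Ignore.\n\n"
    "Email content:\n{email_content}"
)

def render_task(email_content: str) -> str:
    """Fill the single-shot template with whatever the automation passes in for this run."""
    return TASK_TEMPLATE.format(email_content=email_content)
```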

16:56

Then we have the technique associated with the task component, and that is chain-of-thought prompting. This is fairly common now and pretty widely known. It involves either telling the model to think step by step without further instructions, or, option B, providing it with step-by-step instructions to work through each time, which is my preferred way of doing it. Here's the example, taking the script-writer prompt again: you give it a list of six points, hook the viewer in, briefly explain, provide one or two outstanding facts, and so on, so we're giving it step-by-step instructions on how it should perform the task. The research results for incorporating chain-of-thought prompting into your prompts: a 10% accuracy boost on simple problems, and those are very simple problems, like "solve this" or "4 + 2 equals whatever", but a 90% accuracy boost on complex multi-step problems, which is likely what many of you are going to be dealing with in the systems you're trying to build. A 90% accuracy boost is pretty insane considering you only have to write up a little list of what it should do, so chain-of-thought prompting is something you should really incorporate. Key takeaway: the more complex the problem, the more dramatic the improvement from chain-of-thought prompting. So that's the task; if we go across now, you'll see we've included a chain-of-thought component with it: the old prompt, the low-IQ ChatGPT one, is here, and we've added the role prompt plus a section for how it should approach the task, the step-by-step chain-of-thought prompting method we've incorporated.
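
A short sketch of what an explicit step list can look like for the email classifier; the steps themselves are an illustration in the spirit of the script-writer example, not the video's on-screen prompt:

```python
# Chain-of-thought: either the generic "think step by step", or an explicit list of steps to follow.
CHAIN_OF_THOUGHT_STEPS = """To classify the email, follow these steps:
1. Read the email and identify the sender's intent.
2. Decide whether it describes a potential sale or partnership (Opportunity).
3. Decide whether it raises an issue that requires a reply (Needs Attention).
4. If neither applies, choose Ignore.
5. Reply with the single label only."""
```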

18:20

Next we have the specifics section, which sits below the task and is really an addition to it. To avoid bloating the task component, you can add important bullet points that reiterate instructions or add important notes about how the task should be executed. Using the outreach message generator example, specifics might be: "each message should have an intro, body and outro, with a tone that's informal," "use placeholders like this," and so on; a list of additional points outside the core of the task. This is pretty handy when you're modifying the prompt: if you think it's not doing something correctly, you can just add another bullet point, and this is where I do most of my modifications when I'm writing prompts. The technique associated with specifics is called emotion prompt, and it refers to adding short phrases containing emotional stimuli to enhance the prompt's performance. Emotional stimuli can be things like "this is very important to my career," "this task is vital to my career," or "I really value your thoughtful analysis." It continues on from role prompting a bit, because you keep hyping the model up: "I really appreciate how good you are at this, you being part of this business is so important, this has massive implications for me, my business, and society as a whole." The more you can hype it up and tell it the world is going to fall apart if it doesn't do this thing right, the better the performance you can get out of it. The research results: adding emotional stimuli, which can be as short as those two little phrases, "this is very important to my career" and "this is vital to my career," increased performance by 8% on simple tasks and 115% on complex tasks compared to the zero-shot prompt. That's a huge increase on complex tasks, which is likely what you'll be building prompts for anyway, and it also enhanced the truthfulness and informativeness of LLM outputs by an average of 19% and 12% respectively. So not only does it increase accuracy, getting the right output in the right response, it's also more truthful and informative, which are fluffier qualities, but being more truthful and informative is probably a good thing. The ROI of adding a few of these words to your prompt is ridiculous; there's no reason you shouldn't throw in a couple of these emotional lines, like "this is very important" or "this is such a key thing in the business you're part of." Key takeaway: adding simple phrases like these can encourage the model to engage in more thorough and deliberate processing, which is especially beneficial for complex tasks that require more careful thought and analysis. How does this actually fit into our prompt? It goes below the task section: in the specifics we have "this task is critical to the success of our business," "if the email contains...," and so on, just a list of additional instructions with the emotion prompt thrown in as well. That's specifics; you can see it's coming together.
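
A small sketch of a specifics block for the classifier, mixing the transcript's wording with one of the emotion-prompt phrases from the cited research; the middle bullet is an illustrative addition:

```python
# Specifics: short bullet points that refine the task, plus emotion-prompt phrases.
SPECIFICS = """Specifics:
- This task is critical to the success of our business; this is very important to my career.
- If the email mentions a budget, timeline, or potential project, lean towards Opportunity.
- Reply with exactly one label and nothing else."""
```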

21:03

Then we jump into context. This is fairly self-explanatory: giving the model a better idea of the environment it's operating in, and why, can help increase performance, and it also gives us an opportunity to further reinforce the role prompting we did at the start and the emotion prompting we did in the specifics. An example from our email classification system could be: "Our company provides AI solutions to businesses across various industries" (a bit about who the business is); "we receive a high volume of emails from potential clients through our website contact form. Your role" (role prompting again, reminding it of the role it has) "in classifying these emails is essential" (emotion prompt) "for our sales team to prioritize their efforts and respond to inquiries in a timely manner. By accurately identifying..." (emotion prompt again), and so on. You can read the rest, but we're hitting it with the role prompt again and giving it context on the system it belongs to. My general notes for context: provide context on the business, including the types of customers, services, products, values, etc.; then provide context on the system it is part of, as we do here by saying this is part of our sales process and we get a lot of emails; and then provide a little context on the importance of the task and its impact on the business, for example "you directly contribute to the growth and success of our company, therefore we greatly value your careful consideration and attention to each classification." It's reiterating a lot of what we've done in the role and in the specifics section. Here's the before and after: we've added this context section down the bottom; not rocket science.
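
Restated as a reusable block (lightly smoothed from the example quoted above), the context section might look like this:

```python
# Context: who the business is, which system the prompt sits inside, and why the task matters.
CONTEXT = (
    "Our company provides AI solutions to businesses across various industries. "
    "We receive a high volume of emails from potential clients through our website contact form. "
    "Your role in accurately classifying these emails is essential so our sales team can prioritize "
    "their efforts and respond to inquiries in a timely manner; you directly contribute to the growth "
    "and success of our company, so we greatly value your careful consideration of each classification."
)
```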

22:31

The examples section is also fairly self-explanatory: we want to give the model examples of how it should perform and how it should reply, given as input/output pairs, as they're usually called. This relates to the technique of few-shot prompting (versus single-shot or one-shot prompting); in this case we're talking about few-shot prompting because we're giving more than one example. Here's a look at the research results. All of these research results are attached to scientific papers that I've gone through, found, and put in here for you, so if you want access to them I'll put them in a Figma or in the description and you can look at the papers yourself; I'm not pulling these out of my ass, they come from papers where people have actually studied these things. This graph shows the effect of adding input/output examples on the performance and accuracy of the prompt. Zero-shot prompting is on the far left at about 10% accuracy for the 175-billion-parameter version of GPT-3; as soon as you add one example it jumps from 10% to nearly 45%, and then we get diminishing returns as we continue up to 10 examples, that is, 10 input/output pairs. So with one example of an input and an output, a one-shot prompt, we got roughly 45% accuracy, and by 10 examples about 60%, flattening off after that. The reported result is that GPT-3 175B achieved an average 14.4% improvement over its zero-shot accuracy of 57.4% when using 32 examples per task, which is way up there, using a lot of examples as it creeps its way up. For us, the key takeaways are that providing just a few examples, literally going from zero to one, massively increases performance compared to zero-shot prompting; accuracy scales with the number of examples but with diminishing returns, and most of the gains can be achieved with 10 to 32 well-crafted examples. Personally I go for 3 to 5: I don't want to sit there all day writing examples, and the more examples you give, the more tokens you're putting into the input of your prompt and the more expensive it is every time you call it. If this email classification system carried 32 examples, that's 32 examples' worth of context and token usage in our automation, so every single time an email came in we'd be sending off, and being charged for, a huge number of input tokens. So 10 to 32 is the sweet spot according to this paper, but just do 3 to 5; it does a good enough job, at least in my experience and in the work we do at Morningside.

play25:09

well so a little bit more on examples I

play25:10

won't bore you too much here but this is

play25:12

kind of the key part here that these

play25:13

guys doing these these uh these papers

play25:15

and doing the research they documented

play25:17

roughly predictable Trends and scaling

play25:18

and performance without using fine

play25:20

tuning so by giving examples you are

play25:22

kind of impr prompt fine-tuning these

play25:24

models uh and people talk about fine

play25:26

tuning and everyone thinks that you need

play25:27

to do it I personally for me and my

play25:30

development company we build these AI

play25:32

solutions for businesses and we've never

play25:33

had to use fine tuning because we're

play25:35

actually good at prpt engineering and

play25:37

there's only a very limited number of

play25:38

use cases where fine shunting actually

play25:40

gives you an advantage um and that's

play25:42

just from our experience so if you want

play25:43

to avoid doing the messy stuff of data

play25:45

collection and fine tuning and all that

play25:47

crap uh just get good at prompting get

play25:49

get good at writing these examples and

play25:51

you can achieve the roughly similar uh

play25:54

performance increases um as fine tuning

play25:56

without fine tuning so this graph here

play25:58

shows an interesting uh bit of data that

play26:00

I do want to share is getting a little

play26:02

bit Ticky but uh this graph on the right

play26:03

here shows a significant increase in

play26:05

performance from zero shot which is the

play26:06

blue to few short completions so if you

play26:08

add in some examples you're going to

play26:10

jump up from I think it was 42 up to

play26:13

nearly 55 60 a big jump immediately just

play26:17

by adding a few examples but

play26:18

interestingly the gold labels here so

play26:20

these orange pillars these orange bars

play26:23

uh that refers to the tests done where

play26:25

the labels were correct so maybe if the

play26:27

email classification was um here's the

play26:29

email here's classification and we gave

play26:30

it correct examples the performance

play26:33

increase within the study was shown

play26:35

regardless of whether those labels were

play26:36

correct so this tells us something

play26:37

interesting that the llm is not strictly

play26:39

learning new information so by giving us

play26:42

giving it few short examples that have

play26:44

the correct labels it's not necessarily

play26:45

learning that information it's actually

play26:47

just learning from the format and

play26:49

structure uh and that helps to increase

play26:51

the accuracy of the outputs overall the

play26:53

accuracy of the label itself does not

play26:55

actually appear to matter too much uh on

play26:57

the on the overall performance so you

play26:58

can have incorrect labels and it's still

play27:00

going to perform just as well um because

play27:02

you've given it some examples on how it

play27:03

should respond so long story short

play27:04

throwing in three to five examples is

play27:06

going to greatly increase the accuracy

play27:08

and the performance of your prompt um

play27:10

and it's also should be thought of more

play27:11

as teaching it how to structure the

play27:13

output so this is very important if

play27:14

you're not getting the structure you

play27:15

want and throwing in a whole bunch of

play27:17

other rubbish like oh well this is the

play27:18

answer to the question if you just give

play27:20

it a few examples of how it should

play27:22

respond it's going to look very closely

play27:23

at that and it's going to perform much

play27:25

better for you so think of it as fine

play27:27

tuning of the St the tone and the length

play27:29

and the structure of the output um and I

play27:31

think this is something that a lot of

play27:32

people miss out on when they don't add

play27:33

these things in because it's it's so

play27:35

important if you just wanted to give you

play27:36

one word and you kind of try to tell it

play27:38

in the task to just give one word

play27:39

responses sure it might listen to it but

play27:41

if you give five examples of input and

play27:44

then just a one word output like in our

play27:45

case opportunity or or needs attention

play27:48

or ignore these labels for our email

play27:49

classification system uh it's going to

play27:51

perform so much better so here's a

play27:53

before and after again we're getting a

play27:55

little bit small here so I'll allow you

play27:56

to pause this on screen as you wish but

play27:59

we've given it a couple examples you can

play28:00

see how I've done it here in this case

play28:02

it's email label um I usually tend to go

play28:05

for a q and

play28:08

a uh that's usually my go-to strategy or

play28:11

input output um but that's that's

play28:13

basically how we do it we go example one

play28:15

uh we give the QA and then we give a

play28:17

space example two some you don't even

play28:19

need to put these on um you can just

play28:20

leave it as that and it sort of figures

play28:22

it out uh but that's that's F shot

play28:25

property and examples and how we've

play28:27

compared them

play28:29
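Here's a rough sketch of what that few-shot block can look like when you build the prompt in code — the example emails and labels are invented for illustration, not the exact prompt from the slide.

```python
# A rough sketch of the few-shot examples block for the email classifier.
# The example emails and labels are invented placeholders, not the real prompt.
FEW_SHOT_EXAMPLES = """
# Examples

## Example 1
Q: "Hi, I saw your AI automation services online. Could you send me pricing?"
A: Opportunity

## Example 2
Q: "Your last invoice charged us twice. Please fix this ASAP."
A: Needs attention

## Example 3
Q: "CONGRATULATIONS! You've been selected for a free cruise..."
A: Ignore
"""

def classification_prompt(base_prompt: str, email: str) -> str:
    """Append the few-shot block, then the email that needs a label."""
    return f"{base_prompt}\n\n{FEW_SHOT_EXAMPLES.strip()}\n\nEmail to classify:\n{email}"
```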

Now, on to the final bit — stick with me, because you're learning some very good stuff here. The notes section is the last part, and it's our last chance to remind the LLM of key aspects of the task and to add any final details or tweaks. You'll end up using this a lot as you work through the prompt engineering workflow. In that list I usually end up with things like output formatting notes — "you should put your output in X format" — or "do not do X". This is where I iterate on the prompt: if I run a test and the output is doing something badly wrong, I'll just add a line at the bottom in the notes section — "do not do X", "you are not supposed to do this", "never include it in your output". Those kinds of things are very easy to slap onto the notes section at the bottom. Small tone tweaks and reminders of key points from the task or the specifics are really what I use the notes section for. As I say here, it usually starts out quite skinny — if you've done the rest of the prompt correctly, you'll feel like you have nothing else to say in this bottom section. Then you give it a spin, throw some inputs at it, it starts doing some wacky stuff, and you come back and go "right, let me remind it of a few things I said earlier on", and the list of notes grows. Don't let it get too long, though, because it starts to water everything down — you'll notice it begins forgetting earlier notes if you stuff too many in. Less is more here. It's really just for tweaking the outputs to get the right kind of responses without refactoring the whole thing and restructuring how you defined the task and the specifics — a lazy way of tacking things on to nudge the model toward where you want it to go.

Now, the notes section sits at the end, and that placement is based on the "lost in the middle" effect, which comes from another research paper. The lost-in-the-middle effect is most famous for this graph, which shows that language models perform best when the relevant information is at the very beginning (primacy — I'm learning new things here as well) or at the end (recency) of the input context. Performance significantly worsens when the critical information is in the middle of a long context, and the effect occurs even in models designed for long input sequences — yes, GPT-4 32K back in the day was designed for 32,000 tokens, but it didn't really listen to anything in the middle. Luckily the models we work with now are much better at retrieving information over large contexts, but you should still keep this in mind because it still seems to apply, and it's why the notes section goes at the end. The graph basically shows that when you place the information at the start, accuracy is higher; in the middle, lower; and at the end, higher again, though not as high as at the start. The model really listens to the stuff at the start — it takes the role prompt very seriously, which is why the task is up the top as well, and why the context sits in the middle, because it's not as important. You can see how all of these different techniques start to knit together: the way I've structured this prompt, and the way my team structures it — I'm really just retelling you what we do at Morningside — fits together into a proper strategy, not just some prompt formula thrown over the wall. It's actually based on the science, and I love talking about science these days.

The research result you've been anxiously waiting for: when a relevant document is at the beginning or the end of the context, GPT-3.5 Turbo achieves around 75% accuracy on a QA task — an increase of 20 to 25% compared to when the document is placed in the middle. The key takeaways: instructions given at the start and the end of the prompt are listened to by the LLM far more than anything in the middle, which is why the notes section is a handy place to append reminders about anything from the task or the specifics that you notice it isn't listening to and that you need to reiterate. But be aware that increasing the context length alone does not ensure better performance — having less context and less fluff means the remaining instructions are more likely to be followed. So while lost-in-the-middle is mostly about where to put the most important information in the prompt, it also tells us to keep the prompt as short as possible, because it's over longer contexts that things start to degrade. Keep the prompt short and the model can attend to the whole thing very well; let it get bloated and it will start losing things in the middle. Less is more, and less fluff will always make your prompts perform better.

Here in the notes section you can see: "Please provide the email classification label, and only the label, as your response" — reiterating the output format we want; "Do not include any personal information in your response"; "If you're unsure, err on the side of caution and assign the Needs attention label". Little reminders that, as you go through and tweak this email classification prompt, you'll add over time.

Getting back to this little diagram: we've covered role prompting — tell it a role and tell it how good it is at that role. Chain of thought — give it a list of steps it should follow and how to break down the task. Emotion prompting — tell it how good it is and how important everything it's doing is. Few-shot prompting — give it examples so it knows the kind of output format you want. And lost in the middle tells you how to structure everything, where to put the right information, and that you can tack a couple of small reminders onto the bottom so it really listens to them at the end.

Finally, we have markdown formatting — man, I'm talking a mile a minute here and getting really hot — anyway, markdown formatting is the final piece of the puzzle and ties it all together. I learned this from my CTO Spencer; he put me onto the technique and I use it all the time now. Markdown formatting is a way of structuring our prompts, both for our own sake — because when you write these large prompts there's a lot going on and they get hard to read — and because it lets the LLM understand the structure a little better as well. I don't have research to point to for that second part; my main evidence for why it may perform better is that someone managed to extract a system prompt from within ChatGPT, and OpenAI themselves are using this formatting — you can see a pound symbol and then "Tools", which is a markdown heading, as we'll get into in a second. If OpenAI is using it to prompt their own systems, we should probably be using it as well, which is why we're doing it here.

Basically, markdown gives us a few new tools for structure. If you write a prompt as plain text, you have no way to signal what a heading or bold text looks like, but markdown gives us those: heading 1 is the largest, heading 2 the second largest, heading 3 the third, so you get different layers of headings. You can put Role, Task, and so on as heading 1s — just a single pound symbol, a space, and then whatever you want after it, which you'll see in a second. Then, if you have subsections — say under an Examples heading you want Example 1 — you can make that a heading 2 or heading 3, so you have different layers of heading and importance. You also have bold, italics, underlines, lists, horizontal rules and more if you want to get fancy; I'm not sure how much difference bold and italics make, so I tend to just use the headings as a structuring tool. Key takeaways on markdown formatting: use H1 tags (a single pound symbol) to mark each of the components of your prompt, and use H2 or H3 tags, or bold, to add additional structure to other parts.

Here's an example of how to add it in: heading 1 Role, heading 1 Task, Specifics, Context — and within Context I've shown that you might want to break it into subsections, say a heading 2 "About the business" and another "About our system". You don't need to do that every time, but that's how you can start using H2 or H3 headings to split up the subsections under each of your main headings. Then, under Examples, "Example 1" can be a heading 3, and then you give the examples and the notes. That's roughly it — then you come in and fill in "You are a blah blah", "Generate blah blah blah" — you get what I'm saying.
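Pulling those headings together, here's a rough sketch of the overall skeleton — the wording inside each section is my own placeholder text, not the exact prompt from the slide, so swap in your own role, task, examples, and so on.

```python
# A rough skeleton of the markdown-structured prompt described above.
# All section contents are placeholders for the email-classifier example.
PROMPT_TEMPLATE = """
# Role
You are a world-class email triage specialist for Example Corp...

# Task
Classify the email below into exactly one label: Opportunity, Needs attention, or Ignore.
1. Read the email carefully.
2. Decide which label fits best.
3. Reply with the label and nothing else.

# Specifics
- This task is vital to our sales process; we greatly value your careful work.
- If you are unsure, err on the side of caution and choose "Needs attention".

# Context
## About the business
Example Corp sells AI consulting services to e-commerce businesses...

## About our system
This prompt is one step in an automation that routes inbound sales emails...

# Examples
## Example 1
Q: "Hi, could you send me pricing for your chatbot service?"
A: Opportunity

# Notes
- Provide the label and only the label as your response.
- Do not include any personal information in your response.
"""
```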

So what does this all look like when we tie it together? We now have our completed prompt. Remember, this is the before — the guy who doesn't know how to prompt, this is what we started with — and this is the after, once we apply all of these techniques. It's a bit of overkill for an email classification system, but what I want to show you is how you'd apply it even to a simple task like this: we have the Role wrapped in an H1 tag, another H1 tag here, and so on, and we have all of the components — role, task, specifics, context, examples, and notes — integrating the techniques we've covered in this video. Now, stacking up all of the accuracy increases we get from these different techniques — we don't know how much markdown formatting adds — the total is potentially above a 300% increase in accuracy. The final step is simply to add up all of the different performance increases from these techniques, and it sums to a 300% or greater increase in performance. You can listen to me, you can ignore it, or you can use these pieces one by one wherever you think you need them — but considering that emotion prompting is literally a few words saying "you're the best and this is really important to me", role prompting is one or two lines, and lost in the middle is really just an understanding of where to put the right information in your prompt, you've now got a toolkit.

Going back to this guy over here — look at him, he's got a toolkit. He understands the science, he understands from the research papers why these things work the way they do, and because he has that deeper understanding of what makes LLMs do what he wants them to do, he's better able to perform — and as you can see, he's on the upper end of the spectrum. This is the guy you should be. Now all you need to do is take these techniques and apply them, and you'll start to connect them: "okay, lost in the middle — it's not doing what I want, maybe I need to change the stuff at the start and the end"; "it's giving me the wrong structure and style — maybe I add some more few-shot examples of how it should respond, take my time writing them carefully, and show it the style and structure of response I want". It's really not rocket science, and people have already done the hard work by doing the research to get these kinds of results.

To wrap up, we have a considerations page here. First, context length and cost. As I mentioned earlier, for high-volume tasks — this email classification example isn't especially high-volume, but if a system is doing 50 or 100 runs a day it's really being put through the wringer — when there's a lot of volume going through the task you're building, you need to focus on making the prompt as short and succinct as possible, because every time you run it you're charged for the input and the output tokens. You may only be outputting a label — "New opportunity", "Needs attention", or "Ignore" — but you're charged for the input tokens as well: the whole prompt you put in, plus the inserted variables. You've got the prompt, then you're inserting the email content, and you get charged on all of that, every call. So keep in mind that if you're doing a lot of volume, try to use a cheaper model, as we'll get into next, but also keep the prompt shorter.
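If you want to sanity-check that cost before deploying, a quick token count goes a long way. Here's a small sketch using the tiktoken library; the per-token prices are placeholders you'd replace with whatever your provider currently charges.

```python
# Rough per-call cost estimate for a classification prompt.
# Prices change often, so these numbers are placeholders, not current pricing.
import tiktoken

INPUT_PRICE_PER_1K = 0.0005   # placeholder $/1K input tokens
OUTPUT_PRICE_PER_1K = 0.0015  # placeholder $/1K output tokens

def estimate_call_cost(prompt: str, expected_output_tokens: int = 5,
                       model: str = "gpt-3.5-turbo") -> float:
    enc = tiktoken.encoding_for_model(model)
    input_tokens = len(enc.encode(prompt))
    return ((input_tokens / 1000) * INPUT_PRICE_PER_1K
            + (expected_output_tokens / 1000) * OUTPUT_PRICE_PER_1K)

# e.g. 100 emails a day for a month:
# monthly_cost = estimate_call_cost(full_prompt) * 100 * 30
```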

The choice of model matters as well. Better prompt engineering — the skills I've just taught you — means, going back to this guy, that he can get better performance out of cheaper models. The other guy doesn't have the skills, so he relies on more expensive and slower models, which are not good for the client, just to get the performance he needs, because he can't make a cheaper model do what he wants. That brings me to the choice-of-model point: where possible, use your skills to your advantage and bend the cheapest, fastest model into executing the task successfully. GPT-3.5 Turbo is basically free — OpenAI has made it that cheap — and whenever you're watching this the exact models may be different, but the cheapest, fastest model should be your go-to; if you can't get it working there, then you can go up — and now you have the skills. If a task is high volume and needs fast responses, this is where your skills shine, because you can create prompts that perform fast and cheap.

Then we have temperature and the other model settings. If you're doing creative writing, ideation, and so on, test higher values, say 0.5 to 1. For anything else — systems like this, where it's classification or the AI is doing a fixed piece of the puzzle — you want it at zero. We're trying to fight the inconsistency and natural randomness of these models, and to do that we set the temperature to zero, which makes the system a lot more consistent. Zero is what I typically use for basically everything apart from creative-writing and script-writing prompts. The other model settings, like frequency penalty and top-p, aren't needed in my experience; just play with the temperature — that's all you need to worry about.
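As a concrete reference, here's roughly what that looks like with the OpenAI Python SDK — a sketch only, reusing the placeholder PROMPT_TEMPLATE string from the earlier skeleton.

```python
# A minimal sketch of calling a cheap model at temperature 0 for classification.
# PROMPT_TEMPLATE is the placeholder skeleton from the earlier sketch.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_email(email_body: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # cheapest/fastest model that does the job
        temperature=0,           # fight the natural randomness for fixed tasks
        messages=[
            {"role": "system", "content": PROMPT_TEMPLATE},
            {"role": "user", "content": email_body},
        ],
    )
    return response.choices[0].message.content.strip()
```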

What I'm going to jump to now is a chat with my CTO, Spencer. He's going to share what we did at Morningside on one of our projects, where GPT-4 was doing the job great, then the client wanted to switch to GPT-3.5 Turbo to save money, and we had to essentially rebuild everything to get it working. Spencer is a lot smarter than me, and a lot of what I'm sharing actually comes from what he's learned on the job and what he does at Morningside.

Everyone, if you haven't met Spencer already, this is Spencer, my CTO — a lot smarter than I am, so I'm bringing him on to chip into this prompt engineering video briefly, because a lot of what I've just told you has come from his big brain; he's been sharing a lot of these research papers in our Slack across the company, so we're on the same page. Spencer, I wanted to bring you on in particular because we've been working with one of our biggest clients ever, and I want to focus on what I said in this video about prompt engineering skills letting you get more out of lesser, cheaper models — and on how we've had to switch a GPT-4-based SaaS that we built over to GPT-3.5 Turbo, and the difficulties in making that transition. Any notes on the presentation so far, but also specifically on getting more out of these lesser models, which is what I'm trying to teach people in this video?

Yeah, definitely — it's an interesting one. I usually like to break things down. When going down these paths, the key is that you obviously want to use the cheaper models first, so 3.5 comes to mind. In this case, specifically for this client, there's a lot of complex information being synthesized, so we made the decision to start with GPT-4 to make sure we were getting the responses we wanted. Once we got closer to release, we realized the cost associated with running those models was going to be prohibitive, so we had to make the transition down to 3.5. Whenever I'm doing that specific task, the key things I look at are, one, prompt engineering, and two, scope reduction. GPT-4 is really good at a bunch of different things and at understanding the hidden context in the words you give it; 3.5 is much less so. So you almost want to break the task down into smaller, component-sized chunks, and then use those together to get the same results you would with 4. Those were the steps we took on this particular project. Another good tactic, and one I'd highly recommend, is using GPT-4 first and then taking the input/output pairings as training data to fine-tune a 3.5 model, because we've found that's really helpful for getting your cost down while keeping that GPT-4-level quality.
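Spencer's "use the GPT-4 outputs as training data" tactic maps onto OpenAI's chat fine-tuning format, which expects a JSONL file with one messages array per example. Here's a hedged sketch of preparing that file — the pair-collection step is whatever logging you already have, and the names here are placeholders.

```python
# Sketch: turn logged GPT-4 input/output pairs into a fine-tuning file for a
# cheaper model. Assumes you've been logging (email, gpt4_label) pairs somewhere;
# the file name and system prompt are placeholders.
import json

SYSTEM_PROMPT = "Classify the email as Opportunity, Needs attention, or Ignore."

def write_finetune_file(pairs, path="email_classifier.jsonl"):
    """pairs: iterable of (user_input, gpt4_output) tuples."""
    with open(path, "w") as f:
        for user_input, gpt4_output in pairs:
            record = {
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": user_input},
                    {"role": "assistant", "content": gpt4_output},
                ]
            }
            f.write(json.dumps(record) + "\n")
```

The resulting file is what you'd then upload through the provider's fine-tuning workflow to produce the cheaper, specialised model Spencer describes.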

Yeah — I kind of bashed fine-tuning earlier in this video, because I say it's unnecessary in almost every case. Using few-shot examples is essentially a way of fine-tuning via prompting, so if you just give a few examples of GPT-4 outputs, or human-written outputs, wouldn't that do a lot in terms of getting closer to the outputs you're looking for?

Yeah, a hundred percent, and you're completely right on that one — fine-tuning, for a vast number of use cases, isn't really necessary; you can get 90, even 95% of the way with just good old-fashioned prompt engineering and few-shot prompting. On few-shot prompting, there's an interesting paper that came out last year — I can't remember the specific name — that talks about the decision boundary. The important lesson from it is that the few-shot examples you give should be the ones that are confusing to the model itself: the ones you notice it getting wrong consistently. If you categorize those, take the one to five hardest examples you find, and use those as the examples in the prompt, you'll actually get much better results out of your model.
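One way to act on that — my own sketch, not something from the slides — is to keep a small hand-labelled evaluation set, see which items the cheap model currently gets wrong, and promote the hardest of those into your few-shot block.

```python
# Sketch: pick few-shot examples from the cases the model currently gets wrong.
# classify_email is the earlier placeholder function; labelled_set is your own
# small hand-labelled evaluation set of (email, correct_label) pairs.

def pick_hard_examples(labelled_set, max_examples=5):
    misclassified = []
    for email, correct_label in labelled_set:
        predicted = classify_email(email)
        if predicted.strip().lower() != correct_label.lower():
            misclassified.append((email, correct_label))
    # Use up to max_examples of the confusing cases as few-shot examples.
    return misclassified[:max_examples]

def examples_block(examples):
    lines = ["# Examples"]
    for i, (email, label) in enumerate(examples, start=1):
        lines.append(f'## Example {i}\nQ: "{email}"\nA: {label}')
    return "\n\n".join(lines)
```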

Well, I'm learning something on this call too — I'd always put the most common cases in my few-shot examples, maybe chuck a curveball in there as well, but knowing that we should figure out where it's stuffing up and put those in as examples is great. Any other notes on the content — you've had a look at the presentation — the markdown formatting aspect, or any other techniques? I know emotion prompting is one; anything you've got there?

Yeah, markdown is one we use extensively. I'm a huge nerd, so I like writing in markdown anyway — most of the Jupyter notebooks, if there are any other data nerds out there like me — so it's familiar and consistent for me.

Is there any data, or papers you've seen, on the markdown side? In the presentation just before, I said I couldn't find any research papers, though there's probably something out there — my take was more that if OpenAI is using it you'd be pretty stupid not to, and even just functionally, for us writing these prompts, it's so much more useful to have some kind of structure. So purely on our side you'd use it regardless, just to make it easier on your end.

Yeah, absolutely. I definitely remember reading at least a couple of papers about structured inputs in markdown format, and there are other formats you can use as well. But even intuitively: when they're doing the fine-tuning — fine-tuning in the sense of reinforcement learning from human feedback, RLHF — they're providing markdown-based formatting, and that's how the prompts they train it on are structured. So of course, if it has seen more of that, it's going to do better when it sees more of the same thing it was trained on. The other cool part about using markdown is that you get to use semantic information: like in a Word document, you can put things in bold or italics, titles, subtitles — it makes the prompt a much more structured format, and that nuance comes through on the other side, so you can write better prompts and get better outputs.

The other ones I'd suggest are small little things. Being very encouraging towards an LLM can help, so I usually start off with "you're a world-class X", "you are an absolute star at doing this" — it feels a little ridiculous giving positive feedback to a machine, but it's very helpful. Another is telling the model to take a deep breath and think it through step by step before responding — I'm 100% serious, it has been shown to actually increase the quality of your responses. It also doubles as a great one when your significant other is angry — actually, no, I would not suggest that; don't follow it up by telling them to calm down.

Anyway, it's good you mention hyping the model up — I talked about this earlier in the video: the emotion prompting thing, where you can get, I think, up to a 115% increase in accuracy just by saying "wow, you're the best at this" — first on the role prompting, telling it it's the best at the role, and then enriching that with additional words to reinforce how good it is at the task. Anyway, back to what you were saying.

Yeah, and it's actually funny — persona-based prompting as well. If you not only tell it it's a world-class X but actually use the names of specific people, especially people who have written a lot on the internet — if you say "you are Albert Einstein" — it will come out with higher-quality outputs that are very much in the writing style of the person you're referencing. I use it for programming personalities: Theo, who does the T3 stack — I'll constantly say "you're Theo, show me how to refactor my code like he would", and that goes really, really well. The last one is on the positivity route: not using negative feedback. A lot of the time your first impulse is "stop doing this", "don't do this", "don't do that"; if you instead focus on "do this" or "do that", you're better off — negatively framed wording is actually associated with worse outcomes than positively framed wording.

That's interesting, because in the research for this I was trying to work out whether negative prompting is a real thing, and the consensus seems to be that it doesn't do much — but anecdotally I've found the contrary: if the model is doing something incorrectly, I'll usually just put "never do this in your output" at the very bottom in the notes section, and it usually tends to work. So there are both sides there — it works for me sometimes, though that's probably partly a lack of skill on my part, and I should be handling it further up the prompt. But there are some really good gems in there. As Spence said, another one I'll be incorporating into my prompting is giving the role a name: obviously you can just say "you're an expert at this and this", but if you have an example of a real person — someone the internet has information about — you can throw that in there as well.

Yeah, absolutely — I think those are the big top-line ones for me, at least.

No, that's really helpful — again, this is why I brought Spencer on; even I've learned something here. We can jump back to the video. Thank you, Spencer — thanks so much.

So I hope that's drilled in the importance of prompt engineering and of being able to use these cheaper, faster models to achieve the outcomes your clients want — otherwise you're not going to make any money. Going back to this: everything I've just taught you can be applied to all of these different types of systems, and what I want to leave you with are examples of how. AI agents — GPTs are a good example, or building AI agents on my own platform, my own software, Agentive (we're waitlist-only at the moment, so check it out in the description). Agentive lets you build AI agents, as does the GPT Builder on the ChatGPT site. To adapt the prompt formula for the AI-agent use case, modify it to include how to use the knowledge, how to use the tools, and how to form your answer; then you can provide examples of response style and tone. You can pause and take a look, but the most important thing to point out is what I've added: you can see role, task, specifics, and then Tools. If you're adding custom tools into your GPTs or your AI agents, you can add a little section in the same kind of format: a heading, then "you have two tools available". One I like to include is the knowledge base — if I've added any knowledge to my AI agent, I'll tell it to use the knowledge base, because that's actually how it works: it's treated as a knowledge-base tool, they just don't really tell you it's a tool. So you can instruct it: "the knowledge base is one of the tools you have; use it when you're answering AI-business-related questions", and number two might be a cosine-similarity tool, or another tool calling Relevance or something — but tell it how to use each of the tools involved. Then examples: "here's a question someone asked the agent, here's how you should respond", and so on. Not rocket science — that's how I adapt this formula for AI agent prompts, and it works really well.
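Here's a rough sketch of what that Tools section can look like when bolted onto the same skeleton — the tool names and wording are placeholders, not the exact agent prompt from the slide.

```python
# Sketch of an agent-style prompt with a Tools section added to the same skeleton.
# Tool names and descriptions are placeholders for whatever your agent actually has.
AGENT_PROMPT = """
# Role
You are a world-class support agent for Example Corp...

# Task
Answer the customer's question using the tools available to you.

# Tools
You have two tools available:
1. knowledge_base - search Example Corp's documentation. Use it whenever the
   question is about our AI services, pricing, or processes.
2. similarity_search - retrieve past answered tickets that resemble the current
   question. Use it when the question looks like something we have answered before.
Always check a tool before guessing; if neither tool helps, say you will follow up.

# Examples
## Example 1
Q: "What does your chatbot package include?"
A: (search knowledge_base, then summarise the relevant page in two sentences)

# Notes
- Keep answers under 120 words and in a friendly tone.
"""
```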

Next is voice agents: here you modify the prompt formula to include a script outline if necessary. For Bland AI, Air, and all these voice tools that are popping off right now, you can adapt the same prompt template to build really good voice agents. So: role, task — but in the task here we're giving it an outline of how it should talk and the steps involved — then the specifics, then context about the business (this example is for a restaurant, so I'm giving a bit of context on the restaurant), then examples of how it should respond to the most common questions. As I said before, you can also add a dedicated script section with a rough outline of how the call should go, but I've included that at a high level in the task section here. Voice agents: same sort of thing, modify the formula to do the job.

Then we have AI automations, which can use Zapier, Make, or Airtable — Airtable has AI now, which is cool — and you can create powerful AI tasks in businesses that can be relied on to handle thousands of operations a month. What we just built with the email classifier is an example of an automation, so I don't need to go over it again, but here's another example at the end. Sometimes I like to throw this in: after I've given the examples at the bottom, I'll write "Q:" followed by the constraint — or in this case the inserted variable — and then leave the "A:" open with a space after it, so the model just autocompletes it. It's another technique you can use to get it to output only the exact style of output you want, so feel free to use that as you need.
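That trailing "Q: ... A:" trick looks roughly like this in practice — again a sketch with placeholder names, reusing the classifier example.

```python
# Sketch of the trailing "Q: ... A:" completion trick: the inserted variable goes
# into the final Q, and the open A nudges the model to answer in the same format
# as the few-shot examples above it. Names reuse the placeholder classifier example.
def automation_prompt(base_prompt: str, examples_block: str, email_body: str) -> str:
    return (
        f"{base_prompt}\n\n"
        f"{examples_block}\n\n"
        f'Q: "{email_body}"\n'
        "A: "   # left open so the model fills in just the label
    )
```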

Next, AI tools. You may not know what I mean by tools, but basically you can set up a bunch of inputs — say "niche" and "offer" — insert them into a pre-written prompt, and then connect that either to GPTs or build it on a landing page, and it can be used to speed up workflows. There are so many different ways to use it. Here's an example — you can pause it — where you can see I'm inserting the variables, we have lots of input/output pairs, and I'm basically screaming at it at the end because it wasn't doing what I wanted.
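The "insert a few inputs into a pre-written prompt" pattern is just string templating; here's a tiny sketch with made-up field names.

```python
# Sketch of an "AI tool": a couple of user-supplied inputs dropped into a
# pre-written prompt. The field names (niche, offer) are made-up placeholders.
TOOL_TEMPLATE = """
# Role
You are a world-class direct-response copywriter...

# Task
Write three cold-email subject lines for the offer below.

# Inputs
Niche: {niche}
Offer: {offer}
"""

def build_tool_prompt(niche: str, offer: str) -> str:
    return TOOL_TEMPLATE.format(niche=niche, offer=offer)

# e.g. build_tool_prompt("dental clinics", "AI missed-call text-back service")
```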

Take all of those — I'll leave a link to this presentation; I think it'll be on my Skool community, so find this video there, and there'll be a resource for it in the YouTube tab as well, so you can pull it up and use it as you wish.

Now I want to bring you back to this: here's a lollipop, because you get a lollipop for completing this course. You're now at this genius level — I'm not even sure what this guy's name is supposed to be, but he looks like a genius to me, like a Jedi or something cool — you're now this guy, and you didn't end up stuck in the midwit territory. So here's your little lollipop, and I'm proud of you for getting through this, because the skills I've just taught you affect every single thing you're trying to sell in this AI space. If you don't have this nailed, you're not going to be able to build things and you're not going to create value for your clients — even if you're kind of okay, if you can't get the cheaper model to do what you need it to do, you're not going to succeed long term. Put yourself in the buyer's shoes: if someone is offering the same AI service and you say "it's going to cost you this much a month and take 10 seconds to respond", and some other guy says "it's going to cost you a tenth of that and take a quarter of the time", who's going to win? There's not much PvP going on in the space right now because there are still very few people selling AI solutions as agencies — we're still very early — but over time, if you don't have these skills, you're going to get wiped out by the people who do. Keep in mind there's so much potential to be squeezed out of these prompts and these models if you just apply these techniques — go and get every bit of that 300% increase.

I'm going to make a couple more videos in this style. If you liked this one — if you like me being a lot more no-nonsense and just telling you how it is — let me know in the comments, because I much prefer making these kinds of videos, even though I'm now getting super hot and my cat's here; I've loved making this one, it's a lot more fun than my normal videos. You get the idea: if you've enjoyed it, let me know down below, and subscribe to the channel if you haven't already. I'll probably do a couple more videos like this on core things I think you need to understand — because if you don't learn this, you can't use my SaaS, and I can't make money. I'm very selfishly teaching you this stuff so that one day you can use my SaaS and I can sell it for hundreds of millions of dollars. Forgive me for being selfish, but you get to win along the way. See you in the next one.

Rate This
β˜…
β˜…
β˜…
β˜…
β˜…

5.0 / 5 (0 votes)

Related Tags
Prompt EngineeringAI SystemsEfficiency TechniquesEmail ClassificationRole TaskChain of ThoughtEmotion PromptFew Shot PromptingMarkdown FormattingAI AutomationModel SelectionBusiness Solutions