OpenAI-o1 x Cursor | Use Cases - XML Prompting - AI Coding ++

All About AI
16 Sept 2024 · 23:32

Summary

TL;DR: In this video, the creator explores OpenAI's o1-mini model alongside Claude 3.5 Sonnet inside the Cursor AI code editor. They test different Cursor rules to enhance productivity and experiment with structured prompts that use XML tags for clearer instructions. The video compares the two models for coding tasks and concludes that Claude 3.5 Sonnet remains the better day-to-day model because of its speed and reliability, while o1-mini's 64k output token limit makes it well suited to large refactoring jobs and one-shot project scaffolding. The creator demonstrates using o1-mini to generate entire folder structures via Cursor's composer and to build a small pipeline that automates video updates on their website, showcasing practical applications and workflow improvements.

Takeaways

  • 😀 The video explores OpenAI's o1 models, focusing on o1-mini, for coding inside the Cursor editor, with Claude 3.5 Sonnet as the baseline for comparison.
  • 🔧 The creator has been testing Cursor's rules for AI to boost productivity and is experimenting with XML tags to structure prompts more effectively (a sketch of such a rules file follows this list).
  • 📊 A Reddit post comparing o1-mini with Claude 3.5 Sonnet is discussed, concluding that Sonnet is currently the better day-to-day model because of its speed and reliability.
  • 📝 o1-mini's 64k output token limit is highlighted as a significant advantage for large refactoring jobs, allowing more extensive code generation in fewer prompts.
  • 💬 Precise prompts matter more with o1-mini because of its long 'thinking time'; getting the prompt right the first time avoids wasted waits on its chain of thought.
  • 🛠️ Cursor's o1 integration still has some bugs, such as occasional responses that contain only code and no explanatory text.
  • 📁 The video demonstrates using structured prompts with XML tags to generate complex folder structures and placeholder files in one pass, leveraging o1-mini's large output window.
  • 🔄 The creator's workflow is to use o1-mini for the initial one-shot setup and switch to Claude 3.5 Sonnet for debugging and fine-tuning.
  • 🎥 A practical example shows o1-mini building a script that automates updates to the video section of the creator's website, with GPT-4o mini generating the descriptions.
  • 🔗 The video concludes with a call to action for viewers to experiment with the models, share their findings, and consider integrating these techniques into their own workflows.
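
The Cursor rules the creator describes (set in Cursor's settings under "Rules for AI", or in a project-level .cursorrules file) boil down to two instructions. This is a minimal sketch reconstructed from what is read aloud in the video, so the exact wording is approximate:

```
Always assist the user and suggest XML wrapper tags to improve the prompt the user writes.
The suggestions must help the user make it as clear as possible for an LLM to understand the scope of the prompt.
```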

Q & A

  • What is the main focus of the video?

    -The main focus is exploring OpenAI's o1 models, particularly o1-mini, for coding tasks in the Cursor editor, and comparing them with Claude 3.5 Sonnet in terms of productivity and coding efficiency.

  • What is Cursor and how does it relate to the video's content?

    -Cursor is an AI-assisted code editor used throughout the video to test different rules for AI and to run the models on coding tasks. The creator also uses Cursor itself (via Ctrl+K edits) to rewrite prompts with XML tags so the instructions sent to the model are more precise.

  • Why does the video mention XML tags in the context of coding?

    -XML tags are used as a way to structure and delimit the parts of a prompt (for example a description, requirements, and an action). Clearly scoped instructions help the model produce better and more efficient code, which matters especially for o1-mini, where a wasted attempt costs a long wait.

  • What are the advantages of o1-mini mentioned in the video?

    -The video highlights o1-mini's 64k output token limit, roughly eight times the ~8k of Claude 3.5 Sonnet, which lets large refactoring or scaffolding jobs be completed in far fewer prompts.

  • What is the 'Chain of Thought' mentioned in the video in relation to AI models?

    -'Chain of thought' refers to the step-by-step reasoning the o1 models work through before answering, which takes noticeable time. Because of that wait, the video suggests giving o1-mini prompts specific enough to get the task right in one pass.

  • What is the verdict on Claude 3.5 Sonnet vs. o1-mini in the video?

    -For day-to-day coding tasks, Claude 3.5 Sonnet is still considered better due to its speed and reliability, while o1-mini is better suited to large refactoring or scaffolding jobs that benefit from its 64k output token limit.

  • How does the video suggest improving prompts for AI models?

    -The video suggests structuring prompts with XML tags that spell out the description, requirements, and action, giving the model clear and detailed instructions that lead to more accurate and efficient code generation (see the example prompt after this Q&A section).

  • What is the significance of the 64k output tokens in o1-mini as discussed in the video?

    -The 64k output token limit allows o1-mini to emit an entire project structure or a large refactor in a single response, without splitting the task into multiple prompts.

  • How does the video demonstrate the use of o1-mini for creating folder structures?

    -The creator wraps a detailed folder-structure specification in XML tags, feeds it to o1-mini in Cursor's composer, and lets the large output window generate all the necessary files and folders, each with placeholder content, in one go.

  • What is the practical application shown in the video for automating website updates?

    -The video shows a practical application where o1-mini is used to build a script that updates the website's video section: given a video URL and title, it calls the OpenAI API (GPT-4o mini) to generate a short description and inserts the new entry, streamlining the process of adding content to the site.
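
To make the technique concrete, here is a before/after sketch of the wrapping step shown in the video: the raw prompt for the Hacker News terminal app is selected, Ctrl+K is invoked with an instruction like "wrap the prompt in XML tags to make the instructions as clear as possible for the LLM", and Cursor (running Claude 3.5 Sonnet) rewrites it roughly as follows. The tag names and wording are illustrative, not a verbatim copy of the video's output:

```xml
<project>
  <description>
    A terminal app that fetches the top 10 posts on Hacker News and displays them
    in a retro terminal style.
  </description>
  <requirements>
    <requirement>Scrape the Hacker News website using bs4</requirement>
    <requirement>Use the rich library to style the terminal output</requirement>
    <requirement>Include the title of each post and the link to it</requirement>
    <requirement>Write the code in Python; keep it modular, easy to understand, effective and secure</requirement>
  </requirements>
  <action>
    <step>Print the folder structure of the project, wrapped in XML tags</step>
    <step>Execute the project described above</step>
  </action>
</project>
```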

Outlines

00:00

🤖 Exploring OpenAI Models for Productivity

The speaker begins by discussing their continued exploration of OpenAI's o1 models, focusing on o1-mini. They mention testing different Cursor rules to enhance productivity and experimenting with structured prompts that use XML tags. Over the weekend they also looked for other people's opinions on using o1-mini for coding, and they walk through a Reddit post comparing it with Claude 3.5 Sonnet. Like the post's author, they conclude that Claude 3.5 Sonnet is still the better day-to-day model because of its speed and reliability, but they remain open to exploring o1-mini's strengths.

05:00

💻 Leveraging Cursor and XML Tags for Coding Efficiency

In this segment, the speaker digs into using Cursor's rules for AI to improve coding efficiency. They set up a rule that makes Cursor suggest XML wrapper tags for clearer prompts and demonstrate using Ctrl+K edits to rewrite a raw prompt into an XML-tagged version. They then build a practical example: a terminal app that fetches and displays the top 10 Hacker News posts, generated from the structured prompt with o1-mini and then debugged by switching to Claude 3.5 Sonnet (a sketch of such an app follows below).
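
The generated code isn't shown in full in the video, but a minimal sketch of such an app, assuming the prompt's requirements (requests plus bs4 for scraping, rich for display) and the current Hacker News front-page markup, might look like this:

```python
# hacker_news_top.py — minimal sketch, not the exact code generated in the video
import requests
from bs4 import BeautifulSoup
from rich.console import Console
from rich.table import Table

HN_URL = "https://news.ycombinator.com/"


def fetch_top_posts(limit: int = 10) -> list[dict]:
    """Scrape the Hacker News front page and return the first `limit` posts."""
    response = requests.get(HN_URL, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    posts = []
    # Each story row has class "athing"; the title link sits inside span.titleline.
    for row in soup.select("tr.athing")[:limit]:
        link = row.select_one("span.titleline > a")
        if link is not None:
            posts.append({"title": link.get_text(strip=True), "url": link.get("href", "")})
    return posts


def display_posts(posts: list[dict]) -> None:
    """Render the posts as a green-on-black table for a retro terminal feel."""
    table = Table(title="Top 10 Hacker News Posts", style="green on black")
    table.add_column("#", justify="right")
    table.add_column("Title")
    table.add_column("Link", overflow="fold")
    for i, post in enumerate(posts, start=1):
        table.add_row(str(i), post["title"], post["url"])
    Console().print(table)


if __name__ == "__main__":
    display_posts(fetch_top_posts())
```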

10:01

📁 Automating Folder Structure Creation with OpenAI

The speaker illustrates how they've been using o1-mini to automate the creation of complex folder structures. Starting from a large project layout, they show how the model's 64k output token limit lets Cursor's composer generate the entire structure, files and placeholder content included, in one pass (an illustrative composer prompt is sketched below). The speaker emphasizes the time saved and how this streamlines the initial setup of projects.
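
The composer instruction behind this is, roughly, the desired tree wrapped in XML tags plus an action telling the model what to do with it. The tag names and example paths below are illustrative, reconstructed from the backend/frontend layout and the instruction read aloud in the video:

```xml
<folder_structure>
  backend/
    middleware/
    controllers/chatController.js
    routes/
    services/openaiService.js
  frontend/
    app/page.js
    app/layout.js
    styles/globals.css
    components/
</folder_structure>
<action>
  You are an LLM with access to create folders and files in the current working
  directory. Create the specified directories and files, and for each file include
  basic placeholder content that represents its purpose or structure.
</action>
```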

15:03

🛠️ Building a Video Update Pipeline for a Personal Website

In this part, the speaker builds a pipeline for updating their website with new videos. The goal is to run a single script with a video URL and title, have the OpenAI API (GPT-4o mini) generate a short description, and insert the resulting entry into the website's video section. They use o1-mini for the initial setup and Claude 3.5 Sonnet for fine-tuning, and the whole pipeline comes together in about ten minutes, saving them significant time going forward (an illustrative invocation is sketched below).
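
Based on what is shown on screen, the finished pipeline is a Node script invoked from the terminal with a URL and a title; it calls the OpenAI API (GPT-4o mini) to write the description and appends the entry to a JSON file the React site reads from. The script name, flags, and field names below are approximated from the video:

```
node add-video.js --url "https://www.youtube.com/watch?v=<VIDEO_ID>" \
                  --title "OpenAI-o1 x Cursor | Use Cases - XML Prompting - AI Coding ++"

# Entry appended to videos.json (fields as described in the video):
{
  "title": "OpenAI-o1 x Cursor | Use Cases - XML Prompting - AI Coding ++",
  "description": "<short description generated by GPT-4o mini from the title>",
  "iframe": "<YouTube embed markup for the video>"
}
```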

20:04

🔧 Final Thoughts on Integrating OpenAI Models into Workflow

The speaker concludes by reflecting on integrating the o1 models into their workflow: enthusiasm for o1-mini on large one-shot tasks, and a continued preference for Claude 3.5 Sonnet as the daily driver. They invite viewers to share their experiences and discoveries, encourage experimentation, and mention that their Cursor rules and prompts will be published in a public repo linked in the description.

Keywords

💡OpenAI

OpenAI is the company behind the o1 family of models the presenter is experimenting with, particularly o1-mini, to explore their capabilities in coding and productivity. The video discusses using these models inside the Cursor editor and highlights their potential for improving coding efficiency.

💡Cursor

Cursor is a code editor that the presenter is using in the video. It is designed to work with AI models to assist in coding tasks. The video discusses how the presenter has been testing different 'cursor rules' to enhance productivity and has been leveraging Cursor to rewrite prompts into XML tags for more precise instructions from the AI.

💡XML tags

XML tags are used in the video to describe a structured format for writing prompts. The presenter is experimenting with using XML tags to provide clearer instructions to the AI model, which is intended to improve the precision of the AI's responses. This method is showcased as a way to 'up the prompting game' and to leverage the AI's capabilities more effectively.

💡Productivity

Productivity is a central theme in the video, as the presenter is exploring ways to enhance their coding efficiency through the use of AI models. The video discusses how the presenter has been testing different AI models and techniques, such as using XML tags and Cursor rules, to streamline coding processes and improve overall productivity.

💡Prompting game

The 'prompting game' refers to the strategy of crafting effective prompts for AI models to elicit the most accurate and useful responses. The video focuses on improving this skill by using structured prompts with XML tags, aiming to get more precise instructions from the AI and thus improve the AI's performance in coding tasks.

💡Output tokens

Output tokens refer to the maximum number of tokens an AI model can generate in a single response. The video cites o1-mini's 64k output token limit, roughly eight times the ~8k output tokens of Claude 3.5 Sonnet. This larger capacity allows more extensive output, such as large refactoring jobs or whole project scaffolds, in a single go.

💡Refactoring

Refactoring is a software development technique that involves restructuring existing code without changing its external behavior. In the video, the presenter discusses using the AI model to perform large refactoring jobs, taking advantage of the 64k output tokens to restructure codebases in fewer interactions.

💡Composer feature

Composer is Cursor's feature for creating and editing multiple files and folders from a single prompt. The video demonstrates how the presenter combines it with o1-mini's large output capacity to generate entire folder structures and placeholder files, streamlining the initial setup of coding projects.

💡Folder structure

Folder structure refers to the organization of files and directories in a project. The video shows how the presenter uses the AI model to create a specific folder structure with placeholder content, leveraging the 64k output tokens to automate the setup of complex project environments.

💡Debugging

Debugging is the process of identifying and fixing errors in code. The video discusses switching between models for different tasks, using o1-mini for the initial setup and then Claude 3.5 Sonnet for debugging, to take advantage of each model's strengths.

Highlights

Exploring OpenAI's o1-mini with a focus on productivity and process improvement.

Utilizing XML tags and structure in prompts to enhance precision with AI models.

Testing different Cursor rules to optimize productivity.

Comparing Claude 3.5 Sonnet with the new o1 models for coding in Cursor.

Claude 3.5 Sonnet is still considered superior for day-to-day tasks due to speed and reliability.

o1-mini's 64k output tokens allow for large refactoring jobs in fewer prompts.

The necessity for specific prompts with o1-mini to avoid wasting its long thinking time.

Limitations of o1 include the need to get tasks done in one shot and bugs in Cursor's current integration.

Strategies for using o1-mini for massive refactoring jobs in one go.

Using Claude 3.5 Sonnet for collections of smaller tasks and o1-mini for larger, one-shot tasks.

Implementing Cursor rules to suggest XML tags for clearer prompts.

Demonstration of using Cursor to rewrite prompts with XML tags for clarity.

Creating a terminal app to fetch and display Hacker News posts using o1-mini.

Efficiently generating folder structures for projects using o1-mini's 64k output tokens.

Developing a pipeline to automate website video updates using o1-mini and Claude 3.5 Sonnet, with GPT-4o mini generating descriptions.

Integrating the new pipeline into a personal website to streamline video updates.

Transcripts

play00:00

today's video is going to be a bit of a

play00:01

mixed bag we're going to continue trying

play00:03

out open

play00:04

ai o1 uh focusing mostly on the mini

play00:06

version uh today because we are playing

play00:09

here over on cursor I've been testing

play00:11

out different cursor rules to see if

play00:13

that has any effect on my productivity

play00:16

my processes and I've been kind of

play00:18

trying to up my prompting game by using

play00:20

more XML tags more structure and to do

play00:23

that I've been trying to leverage cursor

play00:25

here to actually help me rewrite my

play00:28

prompts into this yeah you can see here

play00:30

XML tags to hopefully get a more yeah

play00:33

precise instruction so yeah I think

play00:36

we're just going to dive in we have some

play00:37

examples we're going to do so yeah let's

play00:39

just get started so over the weekend

play00:41

I've been also trying to look out for

play00:43

what other people think of using let's

play00:45

say o1 mini for coding uh maybe on

play00:47

cursor so I found this Reddit posted

play00:49

that I thought was pretty good and kind

play00:51

of fits my view of o1 mini too after

play00:54

trying it out over this weekend so I

play00:57

thought we can just go through a few of

play00:59

the points this uh who's afraid of 138

play01:04

has made here I thought it had some good

play01:05

points here so you can see the fast few

play01:08

days I've been testing o1 mini uh in

play01:10

cursor compared to claude 3.5 yeah which has

play01:13

been a Workhorse model that has been

play01:14

insanely consistent and useful that is

play01:16

kind of my IDE too so the verdict is

play01:19

claude 3.5 sonnet is still a better

play01:21

day-to-day model and that is kind of my

play01:23

takeaway too just because of speed and

play01:25

reliability and yeah that is my idea

play01:28

right but I'm open to change right and

play01:30

I've been trying out o1 mini over this

play01:33

weekend so I just wanted to go through

play01:35

some of the pros he found and some of

play01:38

the cons and see if I agree here so the

play01:42

64k output tokens uh that is big right

play01:46

so if you compare I think claude 3.5

play01:49

maybe has 8K output tokens so that's

play01:52

eight times more output you can print

play01:55

right in cursor so that is nice when you

play01:58

do these composer features and you won't

play02:00

like create bunch of files that we're

play02:02

going to look at later uh so you can see

play02:05

uh if your prompt is good it generally

play02:07

can do a large refactor re architecture

play02:09

job in two three shots yeah I think

play02:11

that's good uh I'm not going to read

play02:13

example so in general this was quite a

play02:15

large of reflecting job and it do well

play02:17

large output context is a big part of

play02:20

facilitating this I think that's pretty

play02:23

cool uh so here he has listed some cons

play02:26

that I kind of agree with so you have to

play02:28

be very specific with with your prompt

play02:30

and that is kind of what I dived into

play02:32

these XML tags this weekend to try to be

play02:35

even more uh

play02:37

instructive uh than I've been with claude

play02:39

3.5 sonnet to see if that can improve

play02:41

something on when using o1 mini in

play02:44

cursor right and you can see due to Long

play02:47

thinking time you pretty much need a

play02:49

perfect prompt because it's annoying

play02:51

right if you don't have good enough

play02:53

instructions and you waste time by

play02:55

waiting for these 01 models to do their

play02:58

Chain of Thought and and H there was

play03:00

something wrong

play03:01

right uh so he is working with 01 you

play03:04

have to do everything one shot uh I have

play03:08

not been doing that but let's see uh

play03:11

limited chats yeah that's a huge limit

play03:13

there of course but this is early 64

play03:15

outcut that has to be that is not a con

play03:18

right uh I wish sonnet 3.5 has uh much more

play03:22

output tokens yeah if Sonnet had 64k that

play03:25

would be pretty cool uh verbos yeah

play03:29

cursor implementation is buggy this is

play03:31

important so sometimes there's no text

play03:34

output only code I've seen this too and

play03:36

I wonder this if this is something to do

play03:38

with um cursor

play03:41

backend uh prompting I'm not quite sure

play03:44

uh so I've been trying to do kind of my

play03:47

own let me show you here I did this uh

play03:51

let me see here yeah so I try this o1

play03:54

rule so ignore all Chain of Thought step

play03:56

by step prompting when using cursor

play03:58

ignore chain of thought step by step

play04:00

prompting me using cursor chat but I

play04:02

don't know we might try it but I don't

play04:04

think it's going to work to be honest so

play04:07

we can see he has o1 mini claude 3.5 sonnet

play04:09

conclusions so if you're doing a massive

play04:11

refactoring job or greenfielding a

play04:12

massive project use o1 mini combination

play04:15

of deeper thinking and massive output

play04:17

token limits means you can do things one

play04:19

shot I haven't tried that too much so I

play04:21

can't really reflect on that so if you

play04:23

have a collection of smaller tasks Claude

play04:25

Sonnet is still the king so be very

play04:28

specific and overly verbose in your prompt

play04:31

to o1 mini describe as much of your task

play04:35

in detail as possible it will save you

play04:37

time because this this is not a model to

play04:39

have conversations or fix small bugs

play04:43

it's a Ferrari to the Honda that is Sonnet

play04:45

so yeah I've been switching models right

play04:47

so I maybe I do like one shot with o1

play04:50

mini and then I kind of switch to 3.5

play04:53

Sonnet so that's the way I've been

play04:54

solving this but I just thought this

play04:56

post was pretty interesting and it kind

play04:57

of confirms

play05:00

my uh comparisons with 3.5 Sonnet too so I

play05:04

just wanted to go through this because

play05:06

it's nice to get like a confirmation of

play05:08

what you've been feeling too right so I

play05:11

think we're just going to dive over to

play05:13

cursor now test some rules do some

play05:15

prompting with XML text so let's just

play05:18

head over there so if you see here on

play05:20

cursor in the settings you see we have

play05:22

something called rules for AI right and

play05:24

here you can kind of set these general

play05:26

rules that will apply for all your

play05:28

sessions right but we can also include a

play05:31

DOT cursor rules files if you didn't

play05:33

know that and in this uh project we kind

play05:36

of have these uh different rules so

play05:40

always assist the user and suggest XML

play05:43

rapper tags to improve the

play05:45

prompt uh the user rights right and

play05:48

we're going to the suggestions must help

play05:51

must help the user make it as clear as

play05:53

possible for an llm to understand the

play05:55

scope of the prompt so these are the

play05:57

rules I have been testing out and I just

play06:00

wanted to show you kind of how I've been

play06:02

playing around with this and how I've

play06:03

been using cursor to kind of write my

play06:06

prompts instead of doing it like yeah

play06:08

just in the chat box right so if you're

play06:10

planning to try this for yourself uh you

play06:12

got to kind of look at down in the right

play06:14

corner here there's something called uh

play06:16

cursor tab so here we have a hacker

play06:19

news. mark MD that's kind of a markdown

play06:22

file so in this cursor tab here you can

play06:24

see sometimes it says uh disable for

play06:28

markdown uh so if this is kind of

play06:30

checked off you can see we have this

play06:32

cursor tab that is kind of not available

play06:35

here so uncheck this so you can see we

play06:37

can use this tab so you can see when I

play06:39

start writing now right we get some Auto

play06:43

suggestions right if you wanted to use

play06:45

that if not just go back and just start

play06:48

typing so let me just come up with a

play06:50

prompt here and we can try to show you

play06:54

kind of the way I've been playing around

play06:55

with this okay so here you can see my

play06:57

prompt now I want to create a terminal

play06:59

app that fetch the top 10 posts on

play07:00

Hacker News and display them in a retro

play07:02

terminal style uh I want to scrape The

play07:04

Hacker News website for this to post

play07:07

using bs4 I want to use the rich library

play07:09

to style it the terminal app must

play07:11

include the links to the post and the

play07:13

title of the post the code should be in

play07:15

Python make the code modular easy to

play07:17

understand and of course be effective

play07:19

and secure as possible this I kind of

play07:21

wrap in Project the action for llm is to

play07:23

execute the project above okay so that

play07:26

is a prompt I used to use right so what

play07:29

I've been testing out here I just uh

play07:32

select all contrl K so here I just go

play07:35

wrap the prompt in XML tags to make the

play07:37

instructions as clear as possible for

play07:40

the llm I just do submit uh edit right

play07:44

and we are on Claude 3.5 Sonnet here that's

play07:46

fine and now this is going to rewrite

play07:49

this it's going to add all the XML tag

play07:51

it thinks is the best possible

play07:54

instructions here to kind of get the

play07:57

most out of that so you can see here

play07:59

right so it right so let's just accept

play08:02

this and take a look now so you can see

play08:05

we have description we have description

play08:07

requirements and we have some

play08:08

requirements here kind of wrapped in

play08:10

this XML tags we have action step by

play08:13

step here right okay and yeah I think

play08:18

this is fine so let's go ahead and try

play08:22

this out now so let me just open our new

play08:24

cursor here and let's test this prompt

play08:26

using openai o1 mini I see I forgot to

play08:29

add something I've been also focusing on

play08:31

lately let me uh select all this and

play08:34

let's do include our instruction to the

play08:37

llm to include the folder structure of

play08:38

the project with old folders and files

play08:41

so this is also something I've been

play08:42

playing around with lately because uh

play08:44

using o1 we have this big uh 64k output

play08:48

token so it's super easy to kind of

play08:51

recreate folder structure using the

play08:52

composer feature so you can see here now

play08:55

uh we added a folder

play08:57

structure I don't think I want it like

play09:00

this so let me rephrase this prompt here

play09:02

so let's do include a step to print a folder

play09:04

structure with XML tags just the

play09:06

instruction so I just wanted to add this

play09:08

into the steps here right for Action

play09:10

let's see if we can do that okay good

play09:12

now we have it here so just accept that

play09:13

let me save that and now let's try to

play09:15

run this so I just open a new directory

play09:17

here let's paste in our prompt right uh

play09:20

now I want to select o1 mini uh because

play09:23

these are very precise instructions

play09:25

right so let's just try to run uh in the

play09:27

chat here first if we get the folder

play09:30

structure we're going to take that and

play09:32

we're going to go to the composer and

play09:33

try to create all the files before we

play09:35

add the code right okay that was pretty

play09:37

quick so now you can see we have our

play09:39

project structure so I'm just going to

play09:40

copy this control I right paste in this

play09:44

and I've kind of played around with uh

play09:46

so this is kind of my instructions for

play09:48

generating the folder structure so so

play09:50

you're an llm with access to generating uh

play09:52

folders and files in the current working

play09:54

directory generate all files and folders

play09:57

so yeah we have some action here so I'm

play09:58

just going to call copy this to head

play10:01

back to our composer and just paste in

play10:03

this so let's try this now so we are on

play10:05

the o1 mini

play10:09

right okay so you can see we have some

play10:11

placeholder here we have the

play10:12

requirements so let's just accept all

play10:15

this and if you go back here now you can

play10:17

see we have all our files right and then

play10:20

we can just start adding here so we can

play10:22

do requirements let's just add this

play10:25

right we have our scraper.py good we

play10:28

can apply the code here accept save we

play10:33

have uh the display so now we have all

play10:36

files ready right should be pretty easy

play10:40

we have

play10:41

main let add

play10:43

this install the dependencies okay

play10:47

install

play10:49

requirements good and then we can run

play10:53

it okay uh that was a bit strange you

play10:57

can see it's running but we missing the

play11:00

posts here uh so let's try to do like a

play11:02

quick followup here and see if we can

play11:05

fix this so I'm just going to go the app

play11:07

is running but we don't see any hacking

play11:09

News Post in our terminal but now I'm

play11:10

going to select Claude 3.5 control enter to

play11:13

kind of use the code base so this is

play11:15

what I've been doing I've been switching

play11:17

between o1 mini o1 preview and Claude 3.5

play11:21

depending on the use case right so we

play11:24

want to go to our main we want to add

play11:26

some new code here right okay uh scraper

play11:30

let's add some new code

play11:32

here uh let's see now so let's clear

play11:35

this let's run

play11:37

it and boom we got it so here is our app

play11:41

right top 10 Hacker News Post uh we have

play11:44

this link we can follow right okay boom

play11:47

we got it so that was pretty cool so

play11:51

yeah seems to be working pretty nice uh

play11:54

our folder structure was good our initial

play11:56

prompt from o1 mini was pretty good good

play12:00

and then we switch to claude 3.5 for some

play12:02

debugging and boom we got it so yeah so

play12:05

yeah that was pretty good I'm happy with

play12:07

this I just wanted to do one more

play12:08

example to show you kind of how I've

play12:10

been using o1 mini to create these

play12:12

folder structures because that's been

play12:13

pretty interesting right so here you can

play12:16

see we have a huge folder structure

play12:19

right and you can see here I kind of

play12:22

took my prompt in uh in the end here so

play12:26

you can see the action is create a

play12:28

specified directories and files in the

play12:30

current work directory for each file

play12:32

include a basic placeholder content that

play12:35

represents it purpose or structure so if

play12:37

we grab this full prompt here so this is

play12:40

a pretty big folder structure right so

play12:42

if we grab all this uh let's head over

play12:45

to yeah we can do just the same folder

play12:48

here so let's head control shift I into

play12:50

the composer let's remove this old one

play12:53

and now you can see we paste in like the

play12:55

full structure here and here is where

play12:58

the kind of the big output window comes

play13:00

uh 64k output token window comes into

play13:03

play so you can see in the bottom left

play13:05

here I guess you can't see because of my

play13:07

face but we are selecting o1 mini and

play13:09

when we run this now hopefully we can

play13:12

create this big folder structure uh with

play13:15

files and folders so let's see

play13:20

now so you can see now it's writing all

play13:23

the files right so this is pretty big

play13:25

it's that's a lot of files to create

play13:28

right and it was pretty fast when it

play13:30

first got going here so if we accept all

play13:34

this now right accept all we go back

play13:36

here boom we got it so we have our back

play13:38

end right uh with all the files

play13:41

middleware chat controller Roots

play13:44

Services open I service our front end

play13:47

our app page layout Styles Global CSS

play13:50

components right uh so what we had to do

play13:53

now is just fill in this because we

play13:55

already have created all the files we

play13:57

could just since you can see they have

play13:58

this placeholder code so if we had the

play14:01

code ready now we can just apply this to

play14:03

all the files so that means that we

play14:05

didn't manually have to create all the

play14:07

file Series this is a big Advantage I

play14:09

have found with using o1 mini uh using

play14:13

this composer just because of the big uh

play14:17

output token window right or output

play14:19

tokens so this is a bunch of files I'm

play14:22

sure we can do even more if we try to so

play14:24

I think this is a very good use case so

play14:26

now if we want to chat we can just

play14:28

switch to Claude 3.5 and start adding code

play14:31

here right I just think this is very

play14:33

interesting and this is definitely

play14:34

something I will be implementing into my

play14:36

workflow so the final thing I thought we

play14:38

can do today is actually do something

play14:39

that is impacting me because this is a

play14:42

feature I've been wanting for my own

play14:44

website so let me just show you now so

play14:47

this is the setup I have for my website

play14:49

if we head over here now I have a

play14:51

section called videos so here I kind of

play14:53

upload all my latest videos right uh but

play14:57

the way it's set up now if we we go here

play15:00

I have to go into the code here right I

play15:02

have to go to website I have to find my

play15:05

Json structure down here

play15:08

somewhere where is it yeah you can see

play15:11

here uh I have to fill in this small Jon

play15:14

here that has the title description and

play15:16

iframe that is kind of the embedded

play15:18

video you can see here and it's very

play15:20

annoying to do this manually so I kind

play15:23

of want to create a script that can just

play15:25

maybe just link the URL and the title

play15:28

and justun the script in the terminal

play15:30

and this will generate this kind of

play15:34

structure uh and I kind of want open a i

play15:36

API to write the description based on

play15:39

the the title right so let's see if we

play15:42

can actually use cursor openai uh o1

play15:45

mini I think and maybe some 3.5 to

play15:48

actually create this uh pipeline for me

play15:51

and this is kind of my raw prompt before

play15:52

I add the XML tag so I want to create a

play15:55

pipeline to update my react website it's

play15:57

easier to use than the current method

play15:58

now I have to go into the code in the

play16:00

website to find a code where latest

play16:02

videos are update the section uh I want

play16:05

to add a nice pipeline I can use to

play16:07

update the latest video section on my

play16:09

website I want to be able to add a

play16:11

YouTube video by copying the URL pasting

play16:13

into a script that adds the video Title

play16:15

by the

play16:16

video into a script that adds the video

play16:18

to the website the script should be able

play16:20

to run with the following command so I

play16:23

just want to add video URL and title to

play16:26

write the description for the YouTube

play16:27

video I want to use llm to generate a

play16:29

short description based on the title uh

play16:31

used open a i API the generated Json

play16:34

structure must be fed into website JS to

play16:37

update the website I pasted in some

play16:39

documentation so that's the gpt-4o mini

play16:42

implemented features above into my re

play16:44

website project I will add website JS as

play16:47

context so now I'm just going to select

play16:49

all contrl K and let's wrap this into

play16:52

XML tags so let's just wrap my prompt in

play16:54

XML tags to make the task as clear as

play16:57

possible so let's just try this and see

play17:00

what structure we get now okay so let's

play17:02

accept this and take a look here so now

play17:05

we kind of have an objective we have a

play17:06

current method okay we have the script

play17:09

we want the process and the description

play17:12

generation okay that's pretty good and

play17:15

we have the implementation request so an

play17:18

action and some context I think we just

play17:20

going to try this and see if this

play17:23

actually works so let me just copy this

play17:26

head

play17:27

over yeah to our uh directory here that

play17:31

we have all the files so what I want to

play17:33

do now is open up our composer right uh

play17:36

I want to add the website as context I

play17:38

want to add maybe like the folder so

play17:42

that's going to

play17:43

be Source I think and let's paste in our

play17:47

prompt here now so I think we just going

play17:50

to try this again I want to select o1

play17:51

mini here for the first iteration so

play17:54

let's run this and see where this takes

play17:57

us okay okay so now you can see we have

play18:00

our EnV file we have our videos Json so

play18:02

this is kind of our where we're going to

play18:04

store our examples if we look at the new

play18:06

video section now we're going to load

play18:09

from our Json file I think and the add

play18:12

video script uh if we open up this I

play18:16

think we're going to run this

play18:18

command uh yeah we're going to include

play18:20

our video and our title okay that's

play18:23

interesting we need to install some

play18:25

dependencies yeah I think this looks

play18:27

pretty interesting and it looks pretty

play18:29

good so what I'm going to do now is I'm

play18:31

going to accept all this I'm going to

play18:33

implement all the code I'm going to set

play18:35

it up and then I'm just going to try it

play18:37

and see how this actually work uh yeah

play18:41

let me do that and I'll come back to you

play18:43

when we are ready okay so now I kind of

play18:46

want to show you how this works so if we

play18:48

head to my website now uh you can see we

play18:50

have this video here in latest videos

play18:53

right so I just did a quick test looks

play18:56

superb right so if we go here now you

play18:59

can see uh we are using openai gpt-4o

play19:03

mini to generate a concise and engaging

play19:06

description from based on its title

play19:09

right so we have some examples here on

play19:12

kind of the ey framing right and yeah

play19:14

I'm not going to go too much into the

play19:16

code but this is going to be you can see

play19:18

new videos will be inserted here by the

play19:21

add-video.js script perfect so how this

play19:24

works now is we can do so what we can do

play19:27

now node uh at video at video.js right

play19:31

we can do D- URL and then I can head

play19:34

over to let's say my YouTube channel uh

play19:36

let's just copy this URL here I can go

play19:39

back to cursor paste in this URL I can

play19:42

do d-h title uh let's do a string and I

play19:47

can go back here let's just what was it

play19:50

copy this

play19:52

title and paste it in here okay that

play19:55

kind of missed so let me go back and fix

play19:57

that and we need a string here uh okay

play20:01

so I'm going to add this so generating

play20:04

description using open AI boom okay so

play20:06

you can see this got slotted in here it

play20:08

got saved if we go to our website now

play20:10

boom we got a second one so that was

play20:13

super nice pipeline for me let me just

play20:15

go grab another video here so AI app of

play20:19

the week 2 I can just do node add-video.js

play20:21

URL uh okay so I have the title boom

play20:25

generate description add it here go back

play20:28

to my we site boom we have another video

play20:31

that saves me a lot of time so this is a

play20:32

superb use case so I'm super happy with

play20:35

this this is going to save me a lot of

play20:37

time so yeah combining uh o1 mini with

play20:42

kind of the The Prompt we created with

play20:45

all the XML tags and we use some Claude 3.5

play20:48

for some simple

play20:49

debugging how much time did I spend on

play20:52

this that's maximum 10 minutes so that's

play20:55

a superb integration into my website by

play20:58

adding like a a new feature that saves

play20:59

me a lot of time when updating new

play21:02

videos so this is something I'm going to

play21:04

just Implement and deploy right away so

play21:07

that's pretty cool right uh yeah super

play21:10

happy with this and the way it turned

play21:11

out and combining all of these things

play21:13

I've been playing around with this

play21:15

weekend I think this yeah kind of helped

play21:17

me with my productivity so I hope you

play21:19

found some of these things interesting

play21:21

uh if you want to access all of these

play21:23

prompts I have here uh just follow the

play21:25

link in the description I have kind of

play21:26

open public repo where I will be putting

play21:30

out all of these yeah the small prompts

play21:32

here I've been using so just go check it

play21:35

out if you want to copy maybe my folder

play21:37

structure instructions or something like

play21:39

that also if you want to shout out

play21:42

something in my YouTube video some

play21:43

GitHub project you have you can just go

play21:45

to my website just go to Services I have

play21:49

this YouTube video shout out I'm not

play21:51

sure about the price yet so you can kind

play21:52

of buy if you want to do some shout out

play21:55

maybe on your GitHub repo other things

play21:57

you want to shout out some small startup

play21:59

you're working on just send me a request

play22:02

here and maybe we can do something so or

play22:06

I I can put out the cursor rules too if

play22:08

you want to copy this but uh just go

play22:11

play around try out different things

play22:13

with 01 if you have access if you have

play22:16

cursor you should have access to at

play22:18

least a few uh requests right uh but I

play22:21

think still there's a lot of things we

play22:23

need to explore here when it comes to

play22:25

using the new models uh but uh daily

play22:28

driving is going to be Claude 3.5 maybe

play22:31

some GPT-4o but I'm going to keep

play22:33

experimenting using uh using the o1 pre

play22:38

uh not not so much the o1 preview mostly

play22:40

I'm going to be using o1 mini I think

play22:42

but I'm going to try something with the

play22:44

preview too and it was pretty cool to

play22:46

see uh what other people have been using

play22:49

on o1 mini I've been watching some videos

play22:51

this weekend there's a lot of cool stuff

play22:53

out there uh but yeah I kind of agree

play22:56

with this Claude 3.5 Sonnet it is still uh the

play23:00

better day to-day model for me for

play23:03

now uh but the 64k output tokens in o1

play23:06

mini makes it super easy when we want to

play23:10

do this uh big folder structure so that

play23:13

is something I'm going to using it for

play23:14

so I'm going to keep experimenting let

play23:16

me know in the comments if you have any

play23:18

cool things you have found out about

play23:20

using o1 yeah I hope you took something

play23:22

away from this video maybe you learned

play23:23

something new maybe something you want

play23:25

to try let me know in the comments give

play23:26

it a like if you found it interesting

play23:29

and yeah have a good day and we speak

play23:31

soon


Related Tags

AI Coding, Productivity Hacks, OpenAI o1, Claude 3.5 Sonnet, Cursor Testing, XML Prompting, Workflow Integration, Reddit Insights, Folder Structure, Web Development