A basic introduction to LLM | Ideas behind ChatGPT

ycopie
30 Nov 2023 · 19:49

Summary

TL;DR: The video discusses language models (LMs) and large language models (LLMs) like GPT and ChatGPT. It explains how LMs work by predicting the next word in a sequence to model patterns in human language. As more training data and parameters are added, LMs become LLMs like GPT and can be used to build solutions for tasks like question answering. The video also introduces prompt engineering, model security, giving LMs access to tools through APIs, reasoning in LMs, retrieval-augmented generation, and model fine-tuning.

Takeaways

  • πŸ˜€ Language models (LMs) predict the next word in a sequence based on patterns in training data
  • πŸ“š LMs can be used to build solutions like question answering systems
  • πŸ”¬ Researchers use more data and parameters to create large LMs (LLMs)
  • πŸ’° LLMs require lots of compute and are expensive to train
  • πŸ€— Some LLMs are open source and can run locally without APIs
  • ✏️ Prompt engineering involves carefully crafting inputs to get desired LLM outputs
  • πŸ”’ There are security concerns around malicious use of powerful LLMs
  • βš™οΈ LLMs can be given access to tools through APIs to take actions
  • 🧠 Making LLMs exhibit reasoning is an area of research
  • πŸ“ Fine-tuning trains parts of a model for specialized tasks

Q & A

  • What is a language model and how does it work?

    -A language model (LM) takes a sequence of words as input and predicts the next word. It tries to model the patterns in human language based on the data it has been trained on.

  • How can language models be useful?

    -Instead of giving an LM random sentences, we can give it questions and instructions to get useful outputs like answers. With enough data and model capacity, LMs can be used to build solutions.

  • Why do large language models require so much data and compute?

    -To model the complexity of human language, LMs need to be trained on internet-scale data (tens of terabytes). Bigger models, with hundreds of billions of parameters, also require specialized GPUs and training runs lasting months.

  • What are some popular large language models?

    -GPT by OpenAI, LLaMA by Meta, Falcon by TII (the Technology Innovation Institute), BLOOM by the BigScience project, and more. Many are now open source, so you can run them locally.

  • What is prompt engineering for large language models?

    -The way inputs are formatted and fed to LMs can greatly impact outputs. Prompt engineering studies how to frame prompts to get desired and accurate responses from LMs.

  • How can LMs access tools through APIs?

    -LMs can be instructed to output API calls instead of just text. These payloads can then be used to actually invoke those APIs and take actions.

  • What security concerns exist around large language models?

    -Potential issues include generating harmful text, prompt hacking to force unsafe outputs, and more. Work is being done to make LMs secure.

  • What does retrieval augmented generation mean?

    -When an LM needs extra context documents to answer questions, relevant chunks can be retrieved and added to the prompt for better responses.

  • How does fine-tuning a large language model work?

    -Task-specific layers can be added and trained on top of a pre-trained LM for customized performance on specialized datasets.

  • What other focus areas exist for improving large language models?

    -Giving LMs reasoning abilities, tool access through APIs, prompt engineering for better responses, and security.

Outlines

00:00

πŸ€“ What is a language model

A language model (LM) takes a sequence of words as input and predicts the next word based on patterns it has learned from training data. LMs are useful for building solutions like question answering by formatting prompts in certain ways. The more data used to train LMs, the better they get.
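
As a rough illustration of the prediction idea (a toy sketch, not the video's code), here is a minimal bigram language model in Python: it counts which word follows which in a tiny corpus and predicts the most frequent successor.

    from collections import Counter, defaultdict

    # Toy training corpus; real LMs train on vastly more text.
    corpus = "she is watching tv . she is watching a movie . she is running ."
    tokens = corpus.split()

    # Count how often each word follows each preceding word.
    successors = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        successors[prev][nxt] += 1

    def predict_next(word):
        # Return the most frequent word seen after `word` in the corpus.
        counts = successors[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("is"))  # -> 'watching'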

05:02

🌟 Scaling up language models into LLMs

To improve language models, researchers use internet-scale data and increase model sizes into the billions of parameters. This requires a massive amount of compute and funding, resulting in large language models (LLMs) that only big organizations can train over months. Some LLMs are now open source.

10:02

🎯 Using and customizing LLMs

Pre-trained LLMs like LLaMA can be downloaded and run locally. Prompt engineering refers to formatting prompts to LLMs in ways that produce better, more accurate responses. Fine-tuning allows customizing LLMs for specific tasks by re-training only certain model layers.
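
As a taste of what running a model locally can look like, here is a minimal sketch using the Hugging Face transformers library (an assumption on my part; the video does not prescribe a tool, and the small ungated GPT-2 model stands in for gated models like LLaMA, which require requesting access):

    # Minimal local-inference sketch (assumes: pip install transformers torch).
    from transformers import pipeline

    # "gpt2" is a small, ungated example model; swap in any local model you have.
    generator = pipeline("text-generation", model="gpt2")

    out = generator("Q: capital of India\nA:", max_new_tokens=10)
    print(out[0]["generated_text"])

Once the weights are downloaded, this runs entirely offline, which is the point the video makes about open-source LLMs.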

15:04

πŸ”Ž Other areas around LLMs

Some other active areas around LLMs include retrieval-augmented generation (RAG) for providing documents as context; acting, which gives LLMs access to tools so they can take actions; security, to prevent illicit content generation and to resist jailbreaking; and efforts to add reasoning and thinking capabilities.

Keywords

πŸ’‘Language Model (LM)

A Language Model (LM) is a computational algorithm designed to understand, interpret, and generate human language based on the pattern of words. In the video, it's described as a system that takes a string of text as input, like 'she is,' and predicts the next word, such as 'watching,' based on the data it has been trained on. The LM's ability to predict subsequent words showcases its fundamental role in processing and generating human-like text, making it central to the development of products like ChatGPT and other language-based applications.

πŸ’‘Large Language Models (LLMs)

Large Language Models (LLMs) are advanced versions of language models that have been trained on extensive datasets, often spanning billions of parameters and requiring significant computational resources. The video emphasizes that LLMs, like GPT and its variations, are powered by huge, diverse datasets and state-of-the-art architectures like Transformers. They represent the cutting edge in the field, capable of understanding and generating text with a high degree of complexity and nuance.

πŸ’‘Transformers

Transformers are a type of neural network architecture that has significantly advanced the capabilities of language models. They are highlighted in the video as the driving force behind the current state-of-the-art language models, including GPT. Transformers excel at processing sequences of data, such as text, making them particularly effective for tasks involving natural language understanding and generation.
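
At the core of the Transformer is scaled dot-product attention. In the standard formulation from the "Attention Is All You Need" paper:

    Attention(Q, K, V) = softmax(Q Kᵀ / √d_k) V

where Q, K, and V are the query, key, and value matrices derived from the input sequence and d_k is the key dimension; the softmax over the Q Kᵀ scores lets every position in the sequence attend to every other position.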

πŸ’‘Prompt Engineering

Prompt Engineering is a technique discussed in the video that involves carefully crafting the input given to a language model to elicit a desired output. It underscores the strategic manipulation of prompts to improve the accuracy and relevance of responses from LMs, highlighting how users can guide the model's behavior to achieve specific objectives, such as sentiment analysis or answering questions accurately.
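
As a small, hypothetical illustration of the sentiment example from the video (the exact template wording is mine, not a prescribed format), placing the sentence first and the instruction after it is one common framing:

    # Hypothetical prompt template for sentiment classification.
    def sentiment_prompt(sentence: str) -> str:
        return (
            f"Sentence: {sentence}\n"
            "What is the sentiment of the above sentence? "
            "Answer with one word: positive, negative, or neutral.\n"
            "Sentiment:"
        )

    print(sentiment_prompt("The movie was surprisingly good."))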

πŸ’‘Fine-tuning

Fine-tuning is a process of adjusting a pre-trained model on a smaller, specialized dataset to tailor it to specific tasks or improve its performance on particular types of data. The video discusses how fine-tuning can adapt a general-purpose LM, like GPT, to perform specialized tasks by training it further on a dataset with specific input-output pairs, thereby enhancing its utility for customized applications.
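
A minimal sketch of the idea described in the video (train only a small added layer on top of a frozen pre-trained base); the PyTorch code below is illustrative, and the base model's output shape is an assumption:

    import torch.nn as nn

    class FineTunedClassifier(nn.Module):
        """Freeze a pre-trained base LM and train only a small task head."""

        def __init__(self, base_model, hidden_size: int, num_labels: int):
            super().__init__()
            self.base = base_model
            for p in self.base.parameters():
                p.requires_grad = False  # keep pre-trained weights fixed
            self.head = nn.Linear(hidden_size, num_labels)  # only this trains

        def forward(self, inputs):
            hidden = self.base(inputs)  # assumed shape: [batch, seq_len, hidden_size]
            return self.head(hidden[:, -1, :])  # classify from last token state

Because only the head's parameters receive gradients, this is far cheaper than retraining the full model, which is the efficiency point the video makes.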

πŸ’‘API

API, or Application Programming Interface, is mentioned in the context of enabling language models to perform actions, such as booking a flight, by making API calls to external services. This concept illustrates the potential of LMs to not just generate text, but to interact with other software systems, thereby executing tasks or retrieving information from the web or specialized databases.
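
A sketch of the pattern, with a hypothetical payload format and endpoint (the video describes the idea but not a concrete protocol): the model is instructed to emit a structured API call as text, and application code detects and executes it.

    import json
    import urllib.request

    # Hypothetical: the LLM was instructed to answer action requests with a line
    # like: API CALL: {"action": "book_flight", "source": "BOM", "destination": "DEL"}
    model_output = 'API CALL: {"action": "book_flight", "source": "BOM", "destination": "DEL"}'

    if model_output.startswith("API CALL:"):
        payload = json.loads(model_output.removeprefix("API CALL:").strip())
        req = urllib.request.Request(
            "https://example.com/api/flights",  # hypothetical endpoint
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        # urllib.request.urlopen(req)  # uncomment to actually invoke the API
        print("Would call API with:", payload)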

πŸ’‘Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation (RAG) is a technique where a language model enhances its responses by incorporating external information. In the video, RAG is described as augmenting the generation process by retrieving relevant chunks of information from a larger document or dataset to provide context or specific answers that wouldn't be possible based solely on the model's pre-existing knowledge.
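
A toy sketch of the retrieve-then-prompt flow (word-overlap scoring stands in for the embedding-based retrieval a real RAG system would use):

    def chunk(text: str, size: int = 8) -> list[str]:
        # Split a long document into fixed-size word chunks.
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def retrieve(chunks: list[str], question: str) -> str:
        # Toy relevance score: count overlapping words (real systems use embeddings).
        q_words = set(question.lower().split())
        return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

    document = (
        "Refunds are issued within 30 days of purchase. "
        "Shipping is free on orders over 50 dollars. "
        "Support is available by email around the clock."
    )
    question = "Are refunds issued within 30 days?"
    context = retrieve(chunk(document), question)
    prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer using only the context."
    print(prompt)  # this prompt would then be sent to the LLM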

πŸ’‘Security in LLMs

The video touches on the importance of security in language models, specifically preventing them from generating harmful, biased, or inappropriate content. It mentions mechanisms that restrict LMs from responding to certain queries, as well as prompt hacking, which attempts to bypass these restrictions. This concept underscores the ethical considerations and technical challenges in developing and deploying LLMs.

πŸ’‘Open Source LLMs

Open Source LLMs are freely available models that can be used, modified, and distributed by anyone. The video highlights how organizations like Meta have released models like LLaMA to the public, democratizing access to state-of-the-art language technologies. This practice allows a wider community to build upon these advancements, fostering innovation and accessibility in the field.

πŸ’‘Compute Resources

Compute resources, particularly GPUs (Graphics Processing Units), are essential for training large language models due to their intensive computational demands. The video discusses how the scale of LLMs requires substantial investment in high-performance computing infrastructure, making the development of these models feasible primarily for well-funded organizations. This point highlights the technological and financial barriers to entry in the field of AI research and development.

Highlights

Language models predict the next word in a sequence based on patterns in training data

LMs can be used to build solutions by prompting them with questions and getting back answers

Researchers increase training data and model parameters to improve LMs

Large LMs require massive compute and are expensive to train

Some large LMs are open sourced for anyone to use

Prompt engineering tunes inputs to get better LM outputs

Giving LMs access to tools enables them to take actions beyond just answering

Security is needed to prevent harmful LM responses

LMs currently lack reasoning and a thought process

Retrieval-augmented generation (RAG) retrieves relevant context chunks to answer questions

Fine-tuning trains parts of a model for specialized tasks

Fine-tuning can control the style and format of the text LMs generate

Pre-trained LMs can be fine-tuned for specific projects

Fine-tuning is efficient because only parts of the model are retrained

Transcripts

00:00

Hello everyone, this is Yash, and I'll be talking about language models and large language models: some of the ideas behind ChatGPT and other similar products. These products are very hot these days, and I just want to discuss the ideas and the storyline that come along with them. We won't be implementing or coding anything in this video, but maybe in the next or subsequent videos we'll actually get our hands dirty and build something as well.

00:33

The central idea behind all of this is a language model, or LM. So what exactly is an LM? An LM takes a string as input, say "she is", and predicts the next word, for example "watching". Many words could come instead of "watching", but the model predicts the most probable one, and that depends heavily on the kind of data the LM has been trained on.

01:15

Human language is not very random; the order of words follows some pattern, and that is exactly what the LM tries to model. After "she is" you might get verbs like "running" or "sleeping", maybe nouns like "president", or adjectives like "beautiful". But a prediction like "is" again ("she is is") is very unlikely, almost impossible. A word like "January" is possible ("she is January born", where January acts as an adverb of time), but it is still less likely than "watching" or some other verb.

02:37

So this is how LMs work: it all depends heavily on the data they have been trained on. The data needed to train an LM is just plain text, for example a book, anything in proper, readable, understandable human language. If you give that to an LM, it will try to model and learn the patterns in it; the more data, the better for the model. If you want to see an example of how an LM actually works, with a little bit of the math behind it, I've made a video on the n-gram language model, a very simple model. There are more sophisticated ones after that which use neural networks and deep learning, and now there is a whole family of models that use Transformers. Transformers have been shown to perform the best, and they are what drive the current state of the art, ChatGPT, and all of those things.

03:57

Now if you really think about it, you'll ask: how are these LMs even useful? We give some input and get the completion, basically more words. How is that useful? The way to think about it is that we can start using these LMs to actually build solutions. Rather than giving the model a random sentence, we can give it a question. Let's say I write "Q: capital of India" and end the string with "A:". Hopefully the model will complete it with the answer. That's the idea: we can start getting answers as well. But it really depends on the data the LM has been trained on; if the LM has never seen "Delhi" in its text, this might still be difficult for it.

05:18

That is how the story continues, and that is what researchers do: they increase the data used to train these language models, and as the data goes up, they increase the model size as well. Increasing the model size means increasing the learnable parameters, so there are more weights to be learned. Increasing the data means internet-scale data: a web crawl, or you could say a chunk of the internet, whatever is publicly available and can be crawled or scraped. For instance, the transcript of this video could be taken, and you'd get that text to train your language model. Posts you might have made on Facebook or Reddit can also be included in the data. It's all on the internet; people like you and me have contributed all this data over the past years, and this data can be used to train language models.

06:48

When the data increases to internet scale, we're looking at something like 10 terabytes of data, and the model parameters are in the billions, tens or even hundreds of billions of parameters. If the model gets that big and we're training on that much data, we also need much more compute, and by compute I mean GPUs. That's where the whole story comes to an end for most of us: a lot of GPUs requires a lot of money, and that's why only the big organizations are able to train these big models. When you train a language model on data that huge, it's called an LLM, a large language model. All these big organizations can do it because it costs millions and takes time to train, maybe months, with a lot of GPUs. They can afford it, so they train it, and the rest of us mostly just use the results.

08:27

But some organizations have been very kind and open-sourced their models. These pre-trained LLMs are available, so you don't have to do any training; you can just pick the model and run it, and it runs on your local machine as well. So there are these LLMs that come to market. Everyone knows GPT: GPT is the LM, and ChatGPT is the product, or you could say a fine-tuned version of it. That model is proprietary; you can access it only through the API or the web interface you use for ChatGPT, but you can't run GPT on your local machine unless you work for OpenAI. Some other organizations have released models, though: Meta has launched the LLaMA series of models (LLaMA and Llama 2), and there are others like Falcon and BLOOM. Lots of models are out there, and depending on when you're watching this, there will hopefully be many more new and better ones. For now, this is the state, and you can actually download these models and run them locally. Once downloaded, you don't need any internet, API, or web interface; you can just use them out of the box. We'll see exactly how to use these models in the next video.

10:19

That's the whole idea of how the LLM story comes into the picture. There are lots of things around LLMs, and I want to talk about prompt engineering as a field that came up. When we give input sentences or instructions to these LLMs, there are specific ways to phrase the input so that we get the desired outputs or answers. Take sentiment classification: if I'm asking the LLM to tell me what sentiment a sentence carries and I'm just passing in the sentence, it may make more sense to put the sentence first and then ask something like "what is the sentiment of the above sentence?". The way you structure this input is called prompt engineering, and researchers have found that for certain questions, prompting in certain ways gives more accurate answers. We'll see all that; it's very interesting.

11:36

The other part is acting, which comes up when we talk about LLMs: we give LLMs access to tools via APIs. What do I mean by that? When we prompt something like "book a flight", we don't just want the LLM to tell us something; we actually want it to do something. So rather than just printing an answer, it can print an API call, saying the source and destination, directed at some endpoint. As soon as we see that "API call" text appear, we can take that payload and actually make the API call. That is what giving LLMs access to tools means, and that is the whole field of acting.

12:55

Other than acting, there is also the field of security around LLMs: how can we make LLMs not generate profane words or wrong things? If someone asks "how do I destroy this planet?", we probably don't want the LLM to answer that. How do we stop it? And even if we stop it, there is a whole field of jailbreaking and prompt hacking: even when the model refuses to answer, people try to get the answer out of it anyway. Those are some more security-related topics.

13:43

And there is one more part, the thinking part, which again comes up around LLMs. Right now an LLM is just generating the completion, or the answer you could say, but it isn't really thinking; it doesn't have reasoning as such. So how can we make LMs build something like a thinking tree, for example "it is because of this word in the sentence that the sentiment should be this"? The model should be able to reason about these things, so that a whole thought process goes on within the LM's "mind". This is another field around LLMs. So these are some of the things around LLMs: prompt engineering, security, acting, thinking. There can be many more; these are the ones I know, so if you know others, you can put them in the comments as well.

14:46

In the next video we'll actually see, maybe starting with LLaMA or some existing open-source LLMs, how we can use these models to get answers. Thanks for watching; if you have any queries you can put them in the comments, or we can also connect on LinkedIn (I'll put the link in the description). Actually, sorry, there are two more things I wanted to mention. I thought I'd say them in the next videos, but this seems like the right place, so I'll say them here.

15:26

There are also the concepts of RAG and fine-tuning, which come along when we talk about LLMs, in addition to prompt engineering, security, thinking, and acting. RAG is retrieval-augmented generation. Let's say you have a document, like an internal document the LLM doesn't know about, and you want to "talk to" this doc, that is, do question answering over it. Even if the answer is contained within the doc, the LLM might not be able to answer purely on its own, without any context. So what we do is augment our question with this document and say "hey, can you find the answer in this document?", with all the context attached. Now, LLMs have a fixed input window; they can't take infinite input, so very long documents can't be processed whole. What we do is break these documents into chunks, retrieve the chunk where the answer is present, and add that chunk as context while prompting the LLM for the answer. We retrieve the most relevant chunk from the document and then ask the question, so that the answer is present in that context itself and the whole document need not be provided to the LLM. That is why we call it retrieval-augmented generation: we augment the generation process, the answer-generation process, with this retrieval step. We'll see how it works, and I'm planning to make more detailed videos on this, but I thought I'd at least introduce the concept here.

17:32

Then we have fine-tuning. Let's say you have a particular dataset with X and Y, proper input-output pairs for a very specific task in your use case. Maybe you're working on some project, or as an organization you have a project where you want to do very specific classification, say with two labels or intents particular to your project. Out of the box, LLMs might not be able to classify them, but if we have the dataset for those labels, we can fine-tune an LLM and then use it by giving X as input and hopefully getting Y as output. We'll see how everything works in fine-tuning; there is a lot to it. It's not just about X and Y pairs; it's also about the way the LLM speaks. We can guide the LLM to generate in a certain way, in a certain style or certain formats, and that is where fine-tuning becomes very useful. The models we spoke about, LLaMA and Falcon, are not fine-tuned models; they are pre-trained models. We use them as base models, and on top of the base LLM we add a small layer which we train. We need not train the whole model; we just train that layer, and that is called fine-tuning. Only specific parts of the model are trained, so it's not very computationally expensive. We'll see how this is done in subsequent videos as well; I'm planning that for further in the future, maybe not the very next video, but it's also very interesting, and we'll see how an LLM generates with fine-tuning too.

19:47

Yeah, that's all. Thanks!