"okay, but I want GPT to perform 10x for my specific use case" - Here is how

AI Jason
9 Jul 2023 · 09:53

Summary

TLDR: This video covers the two main techniques for adapting a large language model like GPT: fine-tuning and a knowledge base. It explains how fine-tuning can be used, with your own private data, to train the model to behave in a particular way, such as turning a simple instruction into a creative image prompt. It also discusses how to build an embedded knowledge base to handle domain-specific queries, such as legal cases or financial statistics. The video walks through how to choose a model, prepare the data, and train it on Google Colab, and notes how results can be improved by adding more data.

Takeaways

  • 🔧 Fine-tuning is a way to adjust a large language model toward a specific behavior by feeding it your own data.
  • 📚 A knowledge base is another approach: embedding your domain knowledge in a database so the right information can be supplied to the large language model.
  • 🤖 Fine-tuning suits digitizing a specific persona, such as Trump, but not supplying precise data such as legal cases or financial statistics.
  • 📈 A knowledge base suits specialized knowledge such as legal cases or financial statistics, since it can provide the actual data.
  • 🛠️ GPT can be used to create a large training set by analyzing and transforming data you already have.
  • 🌐 Public datasets are available online, for example on Kaggle and Hugging Face.
  • 📝 A private dataset that isn't available anywhere else is usually the most valuable for fine-tuning performance.
  • 💻 Platforms like Randomness AI can be used to generate a training dataset at scale.
  • 🎯 When fine-tuning, choose a suitable model such as Falcon, which comes in several versions and supports multiple languages.
  • 🔧 Google Colab can be used for fine-tuning, with a choice of different GPUs.
  • 📝 After fine-tuning, the model can be saved locally or uploaded to Hugging Face to share and reuse.
  • 🎉 Fine-tuning can be useful in areas such as customer support, legal documents, medical diagnosis, and financial advice.

Q & A

  • What are the main methods for adapting large language models?

    - There are two main methods: fine-tuning and building a knowledge base. Fine-tuning means retraining the large model on your own private data, while building a knowledge base means creating an embedding (vector) database of all your knowledge and retrieving the relevant data to feed to the large language model.

  • Why can fine-tuning be useful for specialized use cases such as medicine or law?

    - Fine-tuning is useful for specialized use cases because it can make the large language model behave in a particular way, for example by training it on someone's conversations or media interviews, which helps it generate output tailored to that specialized use.

  • What is meant by a knowledge base in this context?

    - A knowledge base is a collection of data gathered and organized so that the relevant information can be found and passed to the large language model as part of the prompt.

  • Why might fine-tuning not be suitable for use cases built on domain knowledge, such as legal cases or financial statistics?

    - Fine-tuning is not well suited to domain-knowledge use cases because it is not good at supplying precise data; instead, embeddings should be used to build a knowledge base that can provide the actual data for those specialized uses.

  • What are the basic steps for fine-tuning a large language model?

    - The basic steps are choosing the model, preparing the dataset, importing the dataset, processing (tokenizing) the training data, running the training, and then saving the fine-tuned model.

  • Why should you choose a model that fits your specialized use case?

    - Choosing a model suited to the use case ensures it can handle that domain's data and content more effectively.

  • Where can you find public datasets for training?

    - Public datasets are available from sources such as Kaggle and Hugging Face, which offer a wide range of data across many topics.

  • How can GPT be used to create a large training dataset?

    - GPT can be given a handful of high-quality examples and asked to generate new user inputs that pair with them, producing many rows of training data.

  • What steps are needed to prepare the training dataset?

    - Preparing the training dataset involves loading it, inspecting the data, converting it into a format suitable for training, and preparing the inputs and types the model expects.

  • What are the main benefits of fine-tuning a large language model?

    - Fine-tuning can reduce cost, improve performance, and let the model generate output tailored to specialized use cases, which leads to better results overall.

Outlines

00:00

🤖 FINE-TUNING LARGE LANGUAGE MODELS

The first part covers two methods for adapting large language models (LLMs) to specific domains such as medicine or law: fine-tuning and knowledge-base embedding. Fine-tuning retrains the model on private data to achieve a desired behavior, while knowledge-base embedding builds a vector database from which relevant data is retrieved for the model. The choice depends on the use case: fine-tuning suits mimicking specific behaviors, knowledge-base embedding suits providing accurate domain-specific data. It then introduces a step-by-step guide to fine-tuning a model named Falcon to generate Midjourney prompts, covering model selection, dataset preparation, and the use of platforms like Randomness AI to generate training data at scale.

05:00

🛠 FINE-TUNING PROCESS AND RESULTS

The second part walks through the technical process of fine-tuning the Falcon model on Google Colab: selecting the right hardware, installing the necessary libraries, and preparing the dataset. It explains the use of QLoRA (low-rank adapters) for efficient fine-tuning and compares initial results from the base model with the fine-tuned model, whose much better Midjourney prompts demonstrate the value of fine-tuning for this task. It closes with saving and uploading the fine-tuned model to Hugging Face for sharing and further use, and mentions an ongoing contest offering significant compute for training, inviting viewers to try fine-tuning for various applications.


Keywords

💡GPT

GPT is a general label for AI systems that can generate text automatically. In the video it refers to the large AI model that can be adapted for specific goals such as medical or legal use. Example from the transcript: 'I want GPT for a specific use case like medical or legal'.

💡Fine-tuning

Fine-tuning here means retraining a large AI model on a private dataset so that it behaves in a particular way. It is used in the video to explain how a GPT-style model's behavior can be shaped for specific goals. Example from the transcript: 'one method is fine-tuning, which means you retrain the large language model with a lot of private data'.

💡Knowledge base

A knowledge base is a collection of gathered, organized information that can be used to improve the AI's answers. In the video it is presented as the alternative to retraining the model. Example from the transcript: 'another is knowledge base, which means you are not actually retraining the model'.

💡Embedding

In AI, embedding means converting information or text into a set of numbers that can be used in computation. In the video it is part of building the knowledge base. Example from the transcript: 'you are creating an embedding or vector database of all your knowledge'.

💡Domain knowledge

Domain knowledge is specialized information belonging to a particular field, such as law or finance. In the video it illustrates the kind of information that calls for a knowledge base rather than fine-tuning. Example from the transcript: 'if your use case is that I have a bunch of domain knowledge like a legal case or financial market stats'.

💡Falcon

Falcon is the AI model named in the video as one of the most powerful open large language models, and it is the example model chosen for fine-tuning. Example from the transcript: 'the one I'm going to use is Falcon; it is one of the most powerful large language models'.

💡Data sets

A dataset is a collection of information gathered for training an AI model. The video stresses that dataset quality determines the quality of the fine-tuned model. Example from the transcript: 'the quality of your data set decides the quality of your fine-tuned model'.

💡Tokenizer

A tokenizer is a tool that converts text into tokens the model can understand. In the video it is used while preparing the dataset for fine-tuning. Example from the transcript: 'and tokenize it first'.

💡API key

An API key is a private key used to access application programming interfaces (APIs) that connect pieces of software. In the video it links the fine-tuning notebook to external services such as Hugging Face. Example from the transcript: 'we will need to use Hugging Face as a way to upload and share our model'.

💡Mid-journey prompt

A prompt is text used to steer a model to generate or analyze other text. 'Midjourney prompt' describes the kind of text the fine-tuned model is meant to produce. Example from the transcript: 'I want ChatGPT to reverse engineer and generate a simple user instruction that might generate this Midjourney prompt'.

Highlights

Two methods for utilizing GPT for specific use cases: fine-tuning and knowledge base.

Fine-tuning involves retraining the model with private data for specific behaviors.

Knowledge base creation involves embedding domain knowledge without retraining the model.

Fine-tuning is suitable for replicating specific behaviors, such as emulating a personality like Trump.

For domain-specific knowledge like legal cases, embedding is more effective than fine-tuning.

Embedding can provide accurate data for queries, such as stock price movements.

Fine-tuning a large language model can reduce cost by teaching the model a behavior instead of repeating long prompts.

A step-by-step case study on fine-tuning a large language model for creating Midjourney prompts.

Choosing the right model for fine-tuning, such as the powerful Falcon model.

The importance of dataset quality for the success of fine-tuning.

Utilizing public datasets and private datasets for fine-tuning.

Using GPT to generate training data by reverse engineering prompts.

Platforms like Randomness AI can automate the generation of training data at scale.

Google Colab as a platform for fine-tuning the Falcon model.

The process of preparing and tokenizing the training data for fine-tuning.

Creating training arguments and starting the training process with the trainer.

Saving the fine-tuned model locally and uploading it to Hugging Face.

Comparing the results of the base model with the fine-tuned model for generating prompts.

The potential of fine-tuning for various use cases such as customer support and financial advisory.

An upcoming video on creating an embedded knowledge base.

Transcripts

00:00

So a lot of people are saying, "I want GPT for a specific use case, like medical or legal," but there are two methods you should consider to achieve that outcome. One method is fine-tuning, which means you retrain the large language model with a lot of private data you're holding. The other is a knowledge base, which means you are not actually retraining the model; instead, you create an embedding or vector database of all your knowledge and try to find the relevant data to feed into the large language model as part of the prompt. These two methods fit different purposes. What fine-tuning is good at is making sure the large language model behaves in a certain way. For example, if you want to digitize someone, like the AI that talks like Trump, that's where you would use fine-tuning, because you can feed all those chat histories or broadcast interview transcripts into the large language model so it takes on a certain type of behavior. But if your use case is "I have a bunch of domain knowledge, like legal cases or financial market stats," fine-tuning is actually not going to work, because it's not good at providing very accurate data. Instead you should use embeddings to create a knowledge base, so that when someone asks which stock has the highest price movement, it will get real data and feed it in as part of the prompt. So those two methods fit different use cases. A lot of the time you can just create an embedding, but fine-tuning is still super valuable for creating a large language model with a certain behavior. It's also a way to decrease cost: instead of adding a big chunk of prompt to make sure the large language model behaves in a certain way, you just teach the large language model, so you cut the cost. There are still a lot of legitimate use cases where you should fine-tune a large language model.
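The knowledge-base approach is only described verbally here (a dedicated video is promised at the end), but as a rough illustration of what "embed your knowledge and feed the relevant data as part of the prompt" means in practice, here is a minimal Python sketch. It assumes the sentence-transformers library; the embedding model name, example documents, and question are placeholders, not anything shown in the video.

```python
# Minimal sketch of the "knowledge base" approach: embed documents, then
# retrieve the most relevant one and paste it into the prompt.
# Assumes the sentence-transformers package; model name and texts are examples.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

documents = [
    "ACME stock moved 12% on Tuesday after earnings.",
    "The 2021 lease agreement terminates after 24 months.",
]
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)

question = "Which stock had the highest price movement?"
query_embedding = embedder.encode(question, convert_to_tensor=True)

# Cosine-similarity search over the embedded knowledge base
best = util.semantic_search(query_embedding, doc_embeddings, top_k=1)[0][0]
context = documents[best["corpus_id"]]

# The retrieved fact is fed to the LLM as part of the prompt
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```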

01:32

Next, I want to show you a step-by-step case study of how you can fine-tune a large language model for creating Midjourney prompts. This is a great use case because it is not a task that base models like GPT are good at. What I want is a large language model that can take a simple instruction and turn it into a Midjourney prompt. So let's get started. First we need to choose which model to use for fine-tuning. Hugging Face has a leaderboard for all the open large language models, and you can take a look to choose the one that suits you most. The one I'm going to use is Falcon. It is one of the most powerful open large language models and reached the number one place on the leaderboard within a very short time. It's also one of the few that are available for commercial use, so you can actually use it in production-level products for your own company. And it's not just English; it supports a set of different languages like German, Spanish, and French. It comes in a couple of versions: the 40B version, which is the most powerful but also a bit slower (think of it more like GPT-4), and a 7B version, which is much faster and cheaper to train as well.

02:33

The next, and most important, step is getting your dataset ready: the quality of your dataset decides the quality of your fine-tuned model. There are two types of datasets you might use. One is public datasets you can get from the internet, and there are many places to get them, like Kaggle, a dataset library with a wide range of data across different topics like sports, health, and software; you can click on any of them, preview the details of the data, and if it's good, download it to use. On the other side, Hugging Face also has a very big dataset library; to find the ones you would use for training a large language model, click on Datasets, scroll down, filter for text generation, and look for the relevant datasets. For example, here is one public dataset of medical Q&A, and you can preview what data is actually inside. On the other hand, I think most fine-tuning use cases actually rely on your own private dataset that is not available anywhere else. It doesn't require too big a dataset; you can even start with as little as 100 rows of data, so it should still be manageable.

03:34

Here is one tip I want to share: you can actually use GPT to create a huge amount of training data. For example, I have collected a list of really high-quality Midjourney prompts, and I want ChatGPT to reverse engineer them and generate the simple user instruction that might have produced each Midjourney prompt. What I do is give ChatGPT a prompt like this: "You will help me create training data sets for generating text-to-image prompts," then give it a few examples of "this is the prompt" and "this is the user input," and it starts generating a user input to pair with each prompt, which I can use as training data for fine-tuning the Falcon model. All we need to do is repeat this process for hundreds or thousands of rows. Luckily, there are platforms like Randomness AI where you can run the GPT prompt at scale, in bulk. For example, I can create an AI chain with an input variable for the Midjourney prompt, copy in the prompt I was using in ChatGPT, point the last part of the prompt at the variable we created, and run it; you can see it works properly as it generates a user input. All we need to do next is go to the Use tab, where the bulk-run option lets me upload a whole CSV file of Midjourney prompts; it then imports the CSV file and runs the GPT prompt for every single row, hundreds of times, automatically. In the end I have training data like this: pairs of a user input and the corresponding Midjourney prompt.
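If you would rather script this bulk step yourself instead of using a platform, the same reverse-engineering idea is a short loop against the OpenAI API. This is only a sketch under assumptions: the openai Python client, the file names midjourney_prompts.csv and training_pairs.csv, the column name, and the model choice are all placeholders, not details from the video.

```python
# Sketch: reverse-engineer a simple user instruction for each collected
# Midjourney prompt, producing (user input, prompt) training pairs.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# file names, the "prompt" column, and the model name are hypothetical.
import csv
from openai import OpenAI

client = OpenAI()

SYSTEM = ("You will help me create training data sets for generating "
          "text-to-image prompts. Given a Midjourney prompt, write the short, "
          "simple user instruction that might have produced it.")

with open("midjourney_prompts.csv", newline="") as f_in, \
     open("training_pairs.csv", "w", newline="") as f_out:
    writer = csv.writer(f_out)
    writer.writerow(["user", "prompt"])
    for row in csv.DictReader(f_in):
        mj_prompt = row["prompt"]
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "system", "content": SYSTEM},
                      {"role": "user", "content": mj_prompt}],
        )
        user_input = resp.choices[0].message.content.strip()
        writer.writerow([user_input, mj_prompt])
```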

04:54

So now let's fine-tune the Falcon model. I'm using Google Colab as the platform, and I decided to use the 7B version, which is much faster; if you want to use the 40B version, it's basically the same code, you just need a more powerful machine. Before you run anything, make sure you check the runtime type and choose a GPU. By default I think you will be on the T4, which still works, but I have upgraded so I can choose the A100, which will be faster. So first, let's install a few libraries; once it's finished you will see a little check mark here.
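The exact install cell isn't readable in the video, but a typical set of packages for QLoRA fine-tuning of Falcon on Colab looks something like this (the package list is an assumption):

```python
# Typical Colab install cell for QLoRA fine-tuning of Falcon
# (package list is an assumption, not taken from the video)
!pip install -q transformers accelerate datasets peft bitsandbytes einops
```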

05:24

Then the next step is to import all those libraries. After that, you run the notebook login, which will ask for your Hugging Face API key; if you don't have a Hugging Face account, just create one, then copy the access token and paste it here. We will need Hugging Face as a way to upload and share our model.
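A minimal version of the import-and-login cells might look like the following; the precise import list in the notebook isn't shown, so this set is an assumption that simply covers the later sketches:

```python
# Imports used by the sketches below (assumed set)
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments, Trainer,
                          DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Log in so the fine-tuned model can later be pushed to the Hugging Face Hub;
# this prompts for the access token from your Hugging Face account settings.
from huggingface_hub import notebook_login
notebook_login()
```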

05:42

The next thing we do is load the Falcon model and its tokenizer. The model I chose is the 7B instruct, sharded: "instruct" is a 7B model fine-tuned specifically for conversation (think ChatGPT versus GPT-3), and "sharded" is a version of the same model split into smaller pieces so it's faster and easier to load. It will take a while as it downloads the whole model.
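Loading the instruct checkpoint in 4-bit might look roughly like this. The repo id isn't spelled out in the video, so the official tiiuae/falcon-7b-instruct stands in for the sharded variant the author uses:

```python
# Load Falcon 7B instruct in 4-bit so it fits on a single Colab GPU.
# The repo id is an assumption; the video uses a sharded 7B-instruct variant.
model_id = "tiiuae/falcon-7b-instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Falcon has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon originally shipped custom modeling code
)
```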

06:10

Okay, the model is downloaded, so let's set it up with QLoRA. QLoRA is a specific method built on low-rank adapters, which is one way to fine-tune a large language model much more efficiently and quickly. Before we fine-tune the 7B model, let's try the prompt with the base model to see what kind of results we get: I create a prompt, load a bunch of configuration for the model, and run it. This is the result we get; it's not even close to generating a good Midjourney prompt, as the model didn't really understand the context, and as I mentioned before, even ChatGPT doesn't do a good job at this task, so I'm pretty curious to see the results.
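The low-rank-adapter step then wraps the quantized model in a small set of trainable adapter weights. A sketch with commonly used settings follows; the rank, alpha, and dropout values are generic defaults, not the video's exact configuration:

```python
# QLoRA: freeze the 4-bit base model and train small low-rank adapter matrices.
# Hyperparameters are common defaults, not the exact values from the video.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                      # rank of the adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights will train
```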

06:46

Let's first prepare the dataset. I drag and drop the training data CSV here, and once it's finished I should see the file showing up on the left side (you can click the file button to open the side panel, by the way). The first step is to load this dataset that we stored locally and preview the data: it has two columns, user and prompt, and 289 rows. This is actually another point I want to make: you don't need a huge dataset; even 100 or 200 rows can already give really good results for fine-tuning. If we pick the first row, I can see the data is properly loaded. Then what we want to do is map the whole dataset into a Human/Assistant format and tokenize the prompts in our dataset. Once it's finished, you can see the dataset is fully prepared with input IDs, token type IDs, and attention masks.
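Put together, the dataset preparation described above (load the CSV, wrap each row in a Human/Assistant template, tokenize) could be sketched like this; the file name and the exact template wording are assumptions:

```python
# Load the uploaded CSV (columns: "user", "prompt"), turn each row into a
# single Human/Assistant training text, then tokenize it.
# File name and template wording are assumptions.
data = load_dataset("csv", data_files="midjourney_training_data.csv")

def format_and_tokenize(row):
    text = f"Human: {row['user']}\nAssistant: {row['prompt']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized_data = data["train"].map(format_and_tokenize)
print(tokenized_data)  # now carries input_ids and attention_mask columns
```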

07:37

Next we need to create the list of training arguments; you can use the ones I have here as defaults. Then we just run trainer.train to start the training process. This will take a while: with the higher-end GPU I chose it took me about two minutes, and if you're using the T4 it will probably take around 10 minutes. Okay, great, so we just finished fine-tuning the model.
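The training-arguments cell isn't shown in full, so the hyperparameters below are placeholders; the overall shape of a minimal Trainer setup would be:

```python
# Minimal training setup; hyperparameters are placeholders, not the video's values.
training_args = TrainingArguments(
    output_dir="falcon-7b-midjourney",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_data,
    # Causal-LM collator pads batches and builds labels from the input ids
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

model.config.use_cache = False  # required while gradient checkpointing is on
trainer.train()
```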

07:59

Next we need to save the model we've just trained. You can save it locally by calling model.save_pretrained; once it's finished, you will see a folder for the trained model on the left side, and inside is the model we just created. But you can also upload this model to Hugging Face: go to Hugging Face, click New Model under your profile, give it a name, choose a license, then click Create Model. Once that's done, copy the repository name, come back, and paste it here; this will upload the model to your Hugging Face repo.
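Saving locally and pushing to the Hub is only a couple of calls; the folder name and repo id below are placeholders for whatever you created on Hugging Face:

```python
# Save the trained adapter locally; the folder name is a placeholder.
model.save_pretrained("trained_model")

# Or push it to the Hugging Face repo you created under your profile.
# Replace "your-username/falcon-7b-midjourney" with your own repo id.
model.push_to_hub("your-username/falcon-7b-midjourney")
tokenizer.push_to_hub("your-username/falcon-7b-midjourney")
```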

08:31

Okay, we successfully uploaded the model, so let's run this again. I create a list of configuration settings for the model, then create the prompt, a Midjourney prompt for "a boy running in the snow," and run it. Great, we got this result. As you can see, it produced a really good prompt: I only told it "a boy running in the snow," and it was able to generate a prompt of a boy running in the snow with a backpack and a red scarf, by a famous artist, in The Simpsons style. Parts of it are a bit messed up, and I think if I gave it more data it would probably produce better results, but it's already a much better result than the base model and ChatGPT.
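To reproduce this comparison, the saved adapter can be loaded back on top of the base model and used for generation; the adapter path, prompt format, and generation settings below are assumptions, reusing model_id, bnb_config, and tokenizer from the earlier cells:

```python
# Load the trained LoRA adapter back onto the 4-bit base model and generate.
# Adapter path and generation settings are assumptions.
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)
tuned = PeftModel.from_pretrained(base, "trained_model")

prompt = "Human: midjourney prompt for a boy running in the snow\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(tuned.device)

output = tuned.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```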

09:09

So this is how you can fine-tune a large language model, and I'm really keen to see the results you are getting. Here I'm training the 7B model because the 40B takes a lot more compute power, but luckily TII, the maker of the Falcon model, is running a contest where the winner will be awarded a huge amount of training compute. I think this is a brilliant opportunity if you really want to get into the fine-tuning space, and there are a few use cases you can try: customer support, legal documents, medical diagnosis, or financial advisory. I'm very keen to see what kind of models you guys train. I hope you enjoyed this video; if you're interested, I will also produce another video about how you can create an embedded knowledge base. If you liked this video, please like and subscribe, and I'll see you next time.


Related Tags
Persona training, language models, programming, private data, knowledge base, automation, health, financial market analysis, legal, military