Build an AI code generator with RAG to write working LangChain code

Deploying AI
3 Jan 2024 · 14:13

Summary

TLDR: In this video, the speaker shares how they used OpenAI and LangChain to speed up writing code boilerplate. They first tried GitHub Copilot, but found it targeted an outdated API version and couldn't meet the need. The speaker then combined retrieval augmented generation with few-shot learning — supplying complete working code examples with detailed descriptions — to get a model to generate the required code. They built a vector database to store and retrieve relevant examples, and generated expert profiles using a Pydantic schema. Finally, they demonstrate a workflow that generates accurate code boilerplate from a user's request, greatly simplifying development. The speaker also discusses how to make the system more user-friendly and reduce manual intervention, and closes by sharing a GitHub link so viewers can inspect and use the code.

Takeaways

  • 📝 The author spends a lot of time writing LangChain boilerplate to deploy and test different prototypes and features.
  • 🔄 LangChain expression language was only introduced in August 2023, so the knowledge cutoff of many models predates it and they have no built-in knowledge of how it works.
  • 💡 The common fix for this is to combine retrieval augmented generation with few-shot learning.
  • 🚀 Supplying relevant working code examples alongside the request steers the model toward output that is actually useful.
  • ❌ Asking GitHub Copilot directly for LangChain expression language produced poor results: it targeted an outdated API and didn't use LangChain at all.
  • 🤖 Even the latest model, given explicit instructions to use LangChain expression language, invented APIs and fell short without additional context.
  • 🔍 The author adopted retrieval augmented generation: a vector database retrieves the most relevant entries, which are added to the context.
  • 📚 To provide enough context, the author generated several different full examples and gave each a detailed description.
  • 🔑 A short "shorthand" description of each example drives the similarity search used to retrieve the most relevant ones from the vector database.
  • 📈 By defining a schema and storing the code, descriptions, and tags in Weaviate, the author built a database for retrieval.
  • 🔗 The final workflow retrieves the right context, appends it to the request, and generates code through the model — solving the problem.
  • 🚨 The author notes the examples must be continually updated to keep pace with LangChain's changes so the generated code stays current.
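
The steps above can be sketched end to end in plain Python. Everything here is a stand-in — `fake_embed`, the in-memory example store, and the prompt text replace the real embedding model, Weaviate, and OpenAI calls from the video:

```python
def fake_embed(text):
    # Stand-in embedding: bag-of-words counts over a tiny vocabulary.
    vocab = ["parse", "chain", "schema", "retrieval", "function"]
    return [text.lower().count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": shorthand description -> full working code example.
EXAMPLES = {
    "parse a response into a schema": "<full parsing example>",
    "chain multiple chains together": "<full multi-chain example>",
}

def retrieve(request):
    # Find the stored example whose shorthand description is most
    # similar to the request (the role the vector DB plays in the video).
    q = fake_embed(request)
    return max(EXAMPLES, key=lambda d: cosine(fake_embed(d), q))

def build_prompt(request):
    best = retrieve(request)
    system = f"Use LangChain expression language.\nExample:\n{EXAMPLES[best]}"
    return [{"role": "system", "content": system},
            {"role": "user", "content": request}]

messages = build_prompt("parse the response with a pydantic schema")
```

In the real workflow, `messages` would be sent to the model; the point is that the retrieved example, not the model's stale training data, carries the LangChain knowledge.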

Q & A

  • What is LangChain, and what role does it play in the video?

    -LangChain is a framework for building applications on top of large language models; in the video it is used to generate code prototypes and features. LangChain keeps changing, which makes understanding it and generating working code for it a challenge for AI models.

  • What does "boilerplate" refer to?

    -Repetitive, standardized code sections that stay largely the same across projects but still have to be written for each one.

  • What are "retrieval augmented generation" and "few-shot learning"?

    -Two AI techniques. Retrieval augmented generation combines retrieval with generation: relevant material is fetched and added to the model's context to improve the relevance and accuracy of its output. Few-shot learning steers a model to perform a specific task by showing it a small number of worked examples.
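
Few-shot learning, in this setting, is just prepending retrieved worked examples to the request. A minimal sketch (the example texts are placeholders):

```python
def few_shot_prompt(examples, request):
    # Few-shot learning in its simplest form: prepend worked examples
    # to the request so the model can imitate their structure.
    parts = ["You write LangChain code. Follow these examples:"]
    for i, (desc, code) in enumerate(examples, 1):
        parts.append(f"# Example {i}: {desc}\n{code}")
    parts.append(f"# Task:\n{request}")
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    [("call a function", "<working example 1>"),
     ("parse with a schema", "<working example 2>")],
    "generate a bio for an expert in a given field",
)
```

Retrieval decides *which* examples appear in `examples`; few-shot prompting is the act of including them.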

  • The video mentions using Pydantic to create an expert schema — what does that mean?

    -It means defining a template for an "expert", including the expert's name, field, and description. The schema is used to generate the expert's profile in a structured form the AI can produce and the program can parse.
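
The video builds this schema with Pydantic; a stdlib-only stand-in using `dataclasses` shows the same idea — a typed template that the model's JSON output is parsed into:

```python
import json
from dataclasses import dataclass

@dataclass
class Expert:
    # The video defines this schema with Pydantic; a dataclass is a
    # stdlib stand-in for the same "name + description" template.
    name: str
    description: str

def parse_expert(raw_json):
    # Parse the model's JSON response into the schema. (Pydantic would
    # also validate types and reject malformed fields automatically.)
    data = json.loads(raw_json)
    return Expert(name=data["name"], description=data["description"])

expert = parse_expert(
    '{"name": "Dr. Ada Q.", '
    '"description": "Quantum algorithms and hardware design expert"}'
)
```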

  • What role does GitHub Copilot play in this process?

    -Copilot is an AI programming assistant that generates code from a given context. The author tried using it to generate LangChain expression language, but ran into problems: the code it produced used an outdated API and didn't use LangChain correctly at all.

  • What is the "playground"?

    -Most likely the OpenAI Playground: an experimental environment where you can test models and prompts directly and see how they respond to particular inputs or instructions.

  • What is the "vector database" used for?

    -It stores and retrieves data encoded as vectors, which makes similarity search efficient — especially when handling large amounts of data and needing to quickly find similar content.
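
A vector database's core operation can be sketched in a few lines: rank stored records by similarity between the query vector and each embedded description (the toy 2-D vectors here stand in for real embeddings):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def top_k(query_vec, records, k=2):
    # A vector database reduces to this: rank stored vectors by
    # similarity to the query vector and return the closest records.
    # (Real systems use approximate indexes to avoid the full scan.)
    scored = sorted(records, key=lambda r: -dot(r["vector"], query_vec))
    return scored[:k]

records = [
    {"description": "parsing results as a string", "vector": [1.0, 0.0]},
    {"description": "creating multiple chains",    "vector": [0.0, 1.0]},
    {"description": "calling a function",          "vector": [0.7, 0.7]},
]
hits = top_k([1.0, 0.1], records)
```

Note that only the description is embedded; the other fields of each record ride along as payload, which mirrors the video's design.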

  • What is "we8", and what role does it play in building the system?

    -"we8" is the transcript's rendering of Weaviate, an open-source vector database. In the video it is used to define the database schema and store the code examples, tags, and descriptions so they can later be retrieved.
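
A sketch of the class definition such a setup might use — the class name, property names, and overall shape here are assumptions modeled on the Weaviate v3 Python client (`client.schema.create_class(...)`), not the video's actual code:

```python
# Shape of a Weaviate class holding the video's four fields. Only
# `description` drives similarity search; the rest is payload to
# retrieve. With the v3 Python client this dict would be passed to
# client.schema.create_class(...).
example_class = {
    "class": "CodeExample",          # hypothetical class name
    "properties": [
        {"name": "filename",    "dataType": ["text"]},
        {"name": "tags",        "dataType": ["text[]"]},
        {"name": "code",        "dataType": ["text"]},
        {"name": "description", "dataType": ["text"]},
    ],
}

property_names = [p["name"] for p in example_class["properties"]]
```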

  • What role does the "prompt" play in AI generation?

    -The prompt is the input given to the model that tells it what kind of output to produce. Here, the prompt combines the system message and the user request into one complete input, from which the model generates code or a response.

  • What's the difference between the "system message" and the "human message"?

    -The system message carries all the necessary context, such as the retrieved code examples and descriptions. The human message is the user's request, stating directly what the AI should do. Together they form the complete prompt the model processes.
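
That division of labor can be made concrete with plain message dicts in the OpenAI chat format; the template text here is illustrative, not the video's:

```python
SYSTEM_TEMPLATE = (
    "Use LangChain expression language. Base your answer on these "
    "working examples:\n{context}"
)

def build_messages(context, request):
    # The system message carries everything the model needs to know
    # (retrieved code examples); the human message is only the ask.
    return [
        {"role": "system", "content": SYSTEM_TEMPLATE.format(context=context)},
        {"role": "user", "content": request},
    ]

msgs = build_messages("<retrieved code examples>", "generate an expert bio")
```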

  • What does "chain" refer to, specifically?

    -A sequence of automated steps connected to accomplish a larger task. Here the chain retrieves information from the database, builds the prompt, queries the AI model, and parses the output.
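
In LangChain expression language these steps are composed with the `|` operator; the same idea can be mimicked in a few lines of dependency-free Python to make the "chain" concrete (`fake_model` stands in for the real LLM call):

```python
class Step:
    # Tiny stand-in for LCEL-style composition: `a | b` runs a, then b.
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        return Step(lambda x: other.fn(self.fn(x)))
    def __call__(self, x):
        return self.fn(x)

retrieve   = Step(lambda req: {"request": req, "context": "<examples>"})
prompt     = Step(lambda d: f"{d['context']}\n\nTask: {d['request']}")
fake_model = Step(lambda p: "MODEL OUTPUT for: " + p.splitlines()[-1])
parse      = Step(lambda out: out.removeprefix("MODEL OUTPUT for: "))

# Retrieval -> prompt -> model -> parser, exactly the four steps above.
chain = retrieve | prompt | fake_model | parse
result = chain("generate an expert bio")
```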

  • The author says they will share the code on GitHub — why does that matter?

    -It means the author is opening up their work for others to inspect, use, and give feedback on. That helps the community learn and improve the technique, and promotes transparency and collaboration.

Outlines

00:00

🤖 Exploring the challenge of generating LangChain code with AI

The first segment covers the challenges the author hit writing LangChain code: the language changes quickly, and AI models lack up-to-date information about it. The author proposes combining retrieval augmented generation with few-shot learning — supplying relevant examples alongside the request — to improve the accuracy of the generated code. He also tried GitHub Copilot, but found it used an outdated API and didn't use LangChain at all.

05:02

🔍 Using retrieval augmented generation and a vector database to improve AI code generation

The second segment describes the author's retrieval augmented generation approach in detail: feeding a collection of data into a vector database, finding the most relevant entries, and adding them to the context to make the generated code more relevant. The author wrote several different full working examples and gave each one a detailed description to help the model understand and generate code. He also shows how he used Weaviate to define the vector database schema and populate it with the code, descriptions, and tags.
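
Populating the database means splitting each example file into its code and its metadata. A sketch of that extraction step, assuming (hypothetically — the video doesn't show its convention) that each file carries its description and tags in header comments:

```python
def split_example(source):
    # Assumed convention: example files start with '# description: ...'
    # and '# tags: a, b' comment lines, followed by the code itself.
    meta, code_lines = {}, []
    for line in source.splitlines():
        if line.startswith("# description:"):
            meta["description"] = line.split(":", 1)[1].strip()
        elif line.startswith("# tags:"):
            meta["tags"] = [t.strip() for t in line.split(":", 1)[1].split(",")]
        else:
            code_lines.append(line)
    meta["code"] = "\n".join(code_lines).strip()
    return meta

example = split_example(
    "# description: parsing results as a string\n"
    "# tags: parser, output\n"
    "print('hello')\n"
)
```

Each resulting record can then be written to the database with the description embedded and the code and tags stored as payload.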

10:04

📈 Automating the LangChain code-generation workflow

The third segment walks through the automated flow. The author built a chain that retrieves the right context from the query collection, combines it with the request into a prompt containing the code examples and context, queries the model, and parses the output. He demonstrates generating formatted expert information through this chain — with one small misinterpretation along the way, but ultimately producing the needed code. He stresses the importance of continually updating the examples to keep pace with LangChain's changes, and sketches possible future improvements, including auto-generating the descriptions and streamlining the request process.

Keywords

💡LangChain

LangChain is a framework for building applications powered by large language models. In the video it is described as a fast-changing technology whose latest changes and usage must be taught to the AI model through context and examples.

💡Boilerplate

Boilerplate refers to the repetitive code templates used in programming. The video notes the author spends a lot of time writing LangChain boilerplate, showing how much of development is standardized code sections.

💡Prototypes

Prototypes are preliminary implementations of a product or technique. In the video, they are the early versions of different features the author is testing and deploying, which help him explore different applications of LangChain.

💡Retrieval Augmented Generation

Retrieval Augmented Generation is an AI method that combines retrieval with generation: relevant information is fetched and merged into the generated content's context to improve relevance and accuracy. In the video, it is used to work around the model's outdated knowledge of LangChain.

💡Few Shot Learning

Few Shot Learning is a machine learning paradigm in which a model learns a task from a small number of examples. In the video, the author uses it to teach the AI LangChain's new features by supplying relevant worked examples.

💡Pydantic

Pydantic (rendered "Pantic" in the transcript) is a Python library for defining and validating data schemas. In the video it is used to define and parse the expert's name and description, showing how AI-generated output can be forced into a specific format.

💡Vector Database

A Vector Database stores and retrieves data represented as vectors. In the video, the author uses one to store and retrieve code examples so the most relevant information can be found quickly when needed.

💡Schema

Schema here refers to the data structure or organization within the database. The author uses a schema to organize the information stored in the vector database — filename, tags, code, and description — for effective retrieval and use.

💡GitHub Co-pilot

GitHub Copilot is an AI programming assistant that helps developers write code. In the video, the author tried using it to generate LangChain-related code automatically, but ran into limitations.

💡API

An API (application programming interface) is a predefined set of functions, protocols, and tools for building software. The video uses the OpenAI API together with Pydantic, illustrating the role APIs play in automating and integrating different services.

💡Shorthand Description

A Shorthand Description is a concise description that conveys the core of something more complex. In the video, the author writes a shorthand description for each example and uses it to retrieve the most relevant code examples from the vector database.

Highlights

The author spends a lot of time writing LangChain boilerplate for deploying and testing different prototypes and features.

LangChain expression language was only introduced in August 2023, well after the knowledge cutoff of many models, so they have no built-in knowledge of how it works.

Combining retrieval augmented generation with few-shot learning solves the stale-knowledge problem through context.

Creating an expert schema with Pydantic and passing it to OpenAI for parsing yields the expert's name and description.

GitHub Copilot attempted to generate code against an outdated API and didn't use LangChain, so the generated code didn't work.

The latest model in the Playground can produce working code, but only if given enough context.

The author strengthens the retrieval augmented generation approach by generating several different working examples, each with a detailed description.

Retrieving the most relevant examples from a vector database and adding them to the context gives finer control over example selection.

The author wrote a condensed description to drive similarity search, and defined a schema in the vector database to store the examples.

Querying the database and reviewing the distances confirms the retrieval approach is working.

The author shows how setup and retrieval create a chain that produces correctly contextualized code examples from a request.

Given working code examples and context, the model generates an accurate expert name and description even though OpenAI's knowledge of LangChain is limited.

Few-shot learning plus retrieval augmentation lets the model produce useful code even though LangChain isn't in its knowledge base.

To keep the system current, the author must keep adding new examples as LangChain changes.

The author is exploring auto-generating the descriptions, and condensing long-form requests into the form the model handles best.

The author considers making the whole experience more user-friendly, for example a custom UI that generates the system prompt for more control and customization.

The author successfully overcame OpenAI's lack of knowledge about LangChain, and shares the code on GitHub for others to use and give feedback on.

Transcripts

00:01

Hey, so I spent a lot of time writing boilerplate for LangChain. I'm kind of deploying and testing a bunch of different prototypes and features, and there's a lot of the same setup for all of them. So the question that I posed was: is there a way to have OpenAI, or any of the new LLMs, accomplish this for me in a way that's much faster and easier? The problem is that LangChain basically keeps changing. LangChain expression language was only introduced in August, and the knowledge cutoff for a lot of these best models is much earlier than that, so there's no knowledge of how these things work actually built into their knowledge base. The common way of solving that problem is through basically adding it in context — using an approach of retrieval augmented generation and few-shot learning. If I combine both of those and basically pass in relevant examples of how it works along with my request, can I get to a point where the code it outputs actually works for me?

01:09

So let's see. If I request: "Please take a string, format it to send it to OpenAI, use Pydantic to create a schema — an expert with a name and a description — pass it to OpenAI, parse the response" — here you can see I've given it that description, and it's able to spit out an expert after it takes in a field, where this person is an expert in quantum algorithms and hardware design. So that's perfect — that's the end goal. Here I'll walk through why it doesn't really work, the method without having any of this augmentation, and then why this one works, and how I got it to actually be more of a product — so that I don't have to just generate and pass in my own examples, because that wouldn't save me any time at all.

02:00

So really quickly: if I go to GitHub Copilot and ask it to do this exact same task — "Give me LangChain expression language, use GPT-4 1106 preview, generate a bio for an expert in a given field, output a name and description ten words or less, that is possible via Pydantic, write this for me in Python" — this is what it gave me. I don't think we want to spend too much time on this, but if I swap in my own key here, we can see it kind of set up an expert bio here using Pydantic. It uses what looks to be an outdated API, it does some really bad parsing, it's got some validation built in, it doesn't use LangChain at all — but it does kind of attempt the process. It does not work. So yeah, right from the get-go I can see it's not only not using LangChain, it's using the OpenAI API directly, and it looks like an outdated version. So this approach is not going to work.

03:14

Let's take approach number two. If I just go into the playground and experiment with this myself and use the latest model — you know, I don't necessarily have any control over what GitHub is doing or what my prompt looks like, but here let's just assume I give it the instructions: "Use LangChain, specifically LangChain expression language, just give me the working code." And then I give it the exact same instructions — here's a detailed breakdown: accept a string, format it to send it to OpenAI, here are the output instructions using a Pydantic schema, pass it to OpenAI, parse the response. If I do that, now what it gives me — and I think I actually didn't give it a long enough maximum length here, so it got cut off, but that's not going to be our biggest concern anyway — let's look here. It set up a little bit of an issue: it doesn't really describe what it actually wants from Pydantic, it's creating a prompt in some wholly new way, it's parsing the raw response and just creating some sort of chain concept. None of this actually uses LangChain — I don't think any of these are real; this one might be. So it just gave me gibberish. This is not going to be helpful; it's not going to speed anything up at all.

04:42

So I showed you the final solution I had, which is: let's pass in some working code examples, give it that context, run this, and it will actually work. So now the question is: how do I actually select and generate good context for it?

05:02

The method I took is a fairly standard approach to retrieval augmented generation, where I'm feeding a bunch of data into a vector database, finding the most relevant entries, and adding them to the context. But specifically here, I don't want to just pass it some window around something that's interesting — I really want to pass through these entire examples. So what I did is set out and generate a bunch of different examples: this one is calling a function, this one is chaining together a bunch of different chains, here's an example of parsing a response using a Pydantic schema, here's an example of using retrieval augmented generation. For each of those, I give it a full working example, and I describe in some detail what the process is actually doing, so that I can interpret and actually understand what's going on under the hood — I figured that might be a little bit extra helpful. But I don't think, at this point, it's going to do a very good job of giving me relevant results. The problem is, if I'm talking about a joke or something, it might think that the most relevant example is the setup for the joke — but now it's going to be calling a function to do that, when maybe I was just saying, "Hey, I'm going to pass in something, and you generate and return a joke." So what's interesting here is actually not the content of what I'm requesting or the content of what it's accomplishing — it's really just a couple of really basic elements. So I created this little shorthand up here: a description of what each of these examples is actually doing. "Creating multiple chains that work together" — that's the valuable information it's pulling out of this. "Parsing results as a string, takes an object." Each of these is a very shorthand description, and it's what I'm using to power the similarity search.

07:08

So once I've created a bunch of these examples and provided my little meta description up here, all I need to do is populate the vector database with that information. I set up a schema here using Weaviate, which is very easy to work with, where I basically save the file name, the tags I've added, the code I've added, and then the description — the meta description I provided. That description is actually what I'm embedding; everything else is just there to be retrieved from the database. The description is the thing I'm looking at for similarity. So here is basically how I populate the database: I'm going through and adding all of this information. I need to extract the information from each file so I can pass through just the code, and separately pass through the description and the tags, and then run through everything I have in my example directory and populate the database.

08:05

I wanted to confirm that it works, so I can just quickly query it here and return all of those big chunks of information, getting a similarity, like I said, on the request that I'm making — and I can even review the distance just to make sure that it's working well. So that's basically how the querying works.

08:28

But when I actually demonstrate how my chain works: what I'm really trying to generate at the end is exactly this — a system message that has the exact right context (coding examples), the user message of what they want to produce, and ultimately: give me some code at the end. So if I look at the chain that I created, that's exactly what it is. I do the setup and retrieval — this fetches from my query collection based on the request that I'm going to pass through — and it passes through the request so that I can create a prompt. The prompt is a system message, which is the system template I just showed, with a little bit of context provided from all those code examples, and then it combines the human message. The human message is much easier: it's just the request I'm trying to make. It then queries the model I'm interested in and parses the output. That's really all the chain is up to.

09:34

So now when I pass through my request, it'll stream something back. And if we look at the output, it may have slightly misinterpreted this — it used a name of the expert as opposed to the field that the expert's in — but let's just assume that's correct. It looks like it creates a successful chain here, where it takes the expert name and passes it through to a prompt. The prompt takes this prompt template, passes in the expert name, and the format instructions that it got from the parser, which was correctly set up using Pydantic. So up here it's got the name, which is the name of the expert, and the ten-words-or-less description of the expert. And now, if I copy all of this and run it, once again we should have formatted results.

10:39

So all it actually ends up being is this simple chain that retrieves the right context, appends it to my message, and passes through the request. Now, instead of writing all the boilerplate, I just need to think through what my request is actually trying to accomplish — what am I trying to structure here — and it can output all of this for me. I can then go in and just modify and update the templates or something, but the boilerplate is done. And that was really my end goal: to get past the problem where OpenAI — or whatever model I'm using — doesn't have knowledge of LangChain. Can I, with few-shot learning and retrieval, coax it into having enough knowledge that it creates working code? And the answer so far seems to be yes. It's just going to be up to me to provide enough examples that it actually can generate things.

11:34

So to keep this up to date, I'll have to keep adding more examples. As I said, LangChain keeps changing, so the examples themselves may even become out of date, and I'll need to update those. But this seems to work better than just feeding in all of the documentation and having it try to understand those bits — because unless you have control over what the context window looks like, of what it retrieves from each source, it's either going to pick out too little or too much, and it might compute similarity based on the wrong things. By structuring it this way — by controlling the balance of examples that I'm providing it — it does a much better job of handling my tailored use case.

12:12

So this works today. Things I'll probably be exploring soon: at what point does it break down as the number of examples I provide grows? What does the cost look like? It's pretty small just to provide a little bit more context, but not negligible. And then I think the part that requires manual translation today is giving each example a good long-form description and then condensing that into the short-form description. So once I have enough examples, it would make sense to see if I can fine-tune something that would take an example and automatically generate these descriptions for me to embed. And at the same time: how do I pass in the long-form request of what I actually want, have it condense it and pull out only the really required information, make the request, and return the best results — while still maintaining the long-form request for what I actually want it to generate? I think that would be a way to improve it. Obviously I could make this entire experience way more user-friendly — or maybe, as opposed to outputting code here, I could just output the messages and use my own sort of UI, or even this UI, to make the request and customize things. So maybe it would just be a nice shorthand to generate the system prompt on my own, which would allow me to cut out examples I don't want and have some extra control over it.

13:45

But for today, this seems to work really well. I can now generate really quick prototypes and finally overcome the problem of OpenAI not having enough knowledge to help with writing working LangChain. If you're interested, I'll post a link to the GitHub and you can go check out the code — use it for yourself, share some opinions. Thanks for watching; hope you found that helpful.


Related Tags
OpenAI, LangChain, expert generation, code prototyping, retrieval augmented generation, API, programming automation, AI technology