LangGraph Crash Course with code examples

Sam Witteveen
25 Jan 2024 · 39:00

Summary

TLDR This video introduces the concepts behind LangGraph and shows, through several coding examples, how to build LLM Agents. LangGraph can be thought of as the new way to run agents with LangChain, building complex agents out of a state graph, nodes, and edges, with support for conditional edges and custom tools. The video also explores how to use OpenAI models and custom tools to extend an agent's capabilities, and invites viewers to share which agents they would like to see built.

Takeaways

  • 📚 **LangGraph intro**: LangGraph is a new way to run agents. It is fully compatible with the LangChain ecosystem and is particularly well suited to running agents.
  • 🌐 **The graph idea**: A LangGraph graph is made of nodes and edges, and nodes can make decisions about which node to go to next.
  • 🔄 **State graph**: The state graph persists state in some way over the agent's lifecycle, like a dictionary passed and updated between chains and tools.
  • 🔩 **Nodes and edges**: Nodes are the agent's components and can be chains or tools; edges wire everything together and can be fixed or conditional.
  • 💡 **Conditional edges**: Conditional edges let a function (usually a large language model) decide which node to visit next.
  • ⚙️ **Compiling the graph**: Once the nodes and conditional edges are set up, the graph is compiled, after which it works like a standard LangChain runnable.
  • 🔍 **LangSmith visualization**: With LangSmith you can observe each step of execution, including the requests sent to the large language model.
  • 🛠️ **Custom tools**: Simple custom tools can be created, such as generating a random number or converting input to lowercase.
  • 📈 **Agent executor**: The agent executor is one way to build an agent, using function calling to get a series of responses.
  • 🔁 **Messages and intermediate steps**: While the agent runs, intermediate steps can be updated, for example by appending to a list of intermediate steps.
  • 📝 **State management**: While the agent runs, state such as the input, chat history, and intermediate steps is persisted and can be overwritten or appended to.
  • 🤖 **Agents and tools together**: An agent can perform specific tasks by calling tools, such as generating a random number or changing text case.

Q & A

  • What is LangGraph, and how does it relate to LangChain?

    -LangGraph is a new way to build agents. It is fully compatible with the LangChain ecosystem and works especially well with custom chains written in the LangChain Expression Language. LangGraph is built for running agents and can be thought of as a large state machine, where the graph structure determines the agent's state and transitions.

  • What are nodes and edges in LangGraph?

    -In LangGraph, a node can represent a chain or a tool and is one of the agent's components. Edges connect these nodes and can be either fixed or conditional. A fixed edge always goes from one node to another, while a conditional edge lets a function (usually an LLM) decide which node comes next.

  • How do you build a state graph with LangGraph?

    -The state graph is where state is persisted in LangGraph. You can think of it as a dictionary that is passed and updated from one chain to another, or from a chain to a tool. Values in the state can be overwritten or appended to, as with a list of intermediate steps.

  • How are custom tools created in LangGraph?

    -Custom tools are created with the tool decorator, which quickly turns a function into a tool. Each tool gets a name and a description, and can be executed by calling its run method.

  • How is a LangGraph compiled and executed?

    -Once the nodes and conditional edges are set up, the graph is compiled, after which it behaves like a standard LangChain runnable. You can call methods such as invoke or stream to run the agent's state, with a defined entry point and end point.

  • Can LangGraph be used with OpenAI models, and are there other options?

    -LangGraph works with OpenAI models, but it is not limited to them. Any model that supports function calling can be used. If you rely on function calling to make decisions, however, you will probably want a model like OpenAI's or Gemini.

  • How do you create a simple agent executor in LangGraph?

    -Creating a simple agent executor involves setting up the state, defining custom tools, creating an agent, and adding these components to the graph. You then define nodes and edges to build the execution flow, and finally compile and run the graph.

  • How is conversation management handled in LangGraph?

    -Conversation management is usually handled by persisting the input, chat history, and intermediate steps. In some cases it can be done with a list of messages instead of a traditional chat history.

  • How does an agent decide when to finish in LangGraph?

    -Termination is usually decided by a function such as `should_continue`. This function inspects the agent's last result: if it is an `agent_finish`, the agent ends; if other tools still need to be called or other work remains, the agent continues.

  • Does LangGraph support building complex systems with multiple agents?

    -Yes. By setting up multiple nodes and edges, and using conditional edges to decide how control flows between agents, you can build complex multi-agent interactions and decision flows.

  • How can LangSmith be used to debug and understand a LangGraph run?

    -LangSmith is a tool for visualizing LangGraph execution. With it, you can see what happens at each step, including the inputs and outputs passed to the large language model and the transitions between nodes.

Outlines

00:00

📚 Introducing LangGraph and its applications

This section introduces LangGraph, a new way to run agents that is fully compatible with the LangChain ecosystem and particularly well suited to agents. At its core are a state graph and nodes: the state graph persists state over the agent's lifecycle, while nodes, which can be chains or tools, are the agent's basic components. Nodes are connected by edges, which can be fixed or conditional; conditional edges let a function (usually an LLM) decide the next node. Finally, compiling the graph makes it behave like a standard LangChain runnable.

05:02

🔍 LangGraph in code and tool usage

This section walks through a code example of LangGraph. It starts by setting up the agent state, including the input, chat history, and intermediate steps. It then creates a couple of custom tools, such as generating a random number and converting input to lowercase. An agent that uses OpenAI functions is defined and added to the graph as a node. Using the LangSmith visualization, you can observe each step of execution and the inputs the language model receives.

10:04

🔗 Nodes and conditional edges in LangGraph

This section goes deeper into the implementation, discussing how to add nodes and conditional edges to the graph. Nodes represent different parts of the agent and can be tools or chains. Edges connect the nodes and can be fixed or conditional, allowing the flow to depend on certain conditions. By combining conditional and fixed edges, you control the agent's flow. Once compiled, the graph runs like a standard LangChain runnable.

15:06

📝 LangGraph workflow and message handling

This section works through a concrete example of the LangGraph workflow. It defines the workflow and state graph, then adds nodes for the agent and the tools, sets the entry node, and defines conditional and fixed edges to control execution. After compiling the graph, the whole agent can be run, including message passing and tool calls, and streaming shows the agent's step-by-step execution.

20:10

🤖 Building and running a LangGraph with multiple agents

This section shows how to build a LangGraph containing multiple agents. Several tools and agents are created, each with its own role and toolset. A supervisor agent is then built that decides, based on the user's input, which agent to delegate to. Different nodes and edges, both conditional and fixed, define how the agents interact. Finally, LangSmith shows the full run, including generating random numbers, plotting a histogram, and presenting the result.

25:12

🚀 Practical applications and outlook

The final section summarizes how to build with LangGraph and suggests possible next steps. It emphasizes LangGraph's power as a state machine and how different nodes and edges control an agent's behavior. The presenter invites viewers to share which agents they would like to build, and to fill in a Google form so future videos can cover the most relevant agents.

Keywords

💡LangGraph

LangGraph is a new tool for building agents that is fully compatible with the LangChain ecosystem and especially suited to running agents. In the video, LangGraph is described as a graph of nodes and edges, where nodes can make decisions about which node to go to next. It is like building a large state machine that decides the agent's current state and which particular chain or tool to run next.

💡LangChain

LangChain is the ecosystem that LangGraph is fully compatible with. It provides a set of tools and frameworks for building and running language model (LLM) agents. In the video, LangChain is used to build custom chains and to work with the LangChain Expression Language.

💡State Graph

The state graph is the LangGraph concept describing how an agent's state is persisted over its lifecycle. In the video, the state graph is used to pass and update information such as the input, chat history, and intermediate steps, which can be passed between and modified by the different parts of the agent.

💡Nodes

In LangGraph, nodes are the components added to the graph; they can be chains or tools and are the building blocks of an agent. Nodes are connected by edges to form the agent's workflow. In the video, nodes are used to build the different parts of the graph and determine how the agent works.

💡Edges

Edges connect the nodes in LangGraph and can be either conditional or fixed. They determine how data flows between nodes. In the video, edges are used to build the agent's workflow, allowing the agent to choose its next step at run time based on conditions.

💡Conditional Edges

Conditional edges are a special kind of edge that decide the agent's next node based on some condition or a function's output. In the video, conditional edges implement the agent's decision-making, such as choosing which tool to use or which agent persona to switch to.

💡Compiling the Graph

Once the nodes and edges are set up, the graph must be compiled. The compiled graph can be executed like a standard LangChain runnable. In the video, compiling the graph turns it into an executable app that runs the agent from a defined entry point to an end point.

💡Tools

In LangGraph, tools are pre-built functions an agent can use, such as generating a random number or converting input to lowercase. In the video, tools provide simple capabilities that the agent can call while running.

💡Custom Tools

Custom tools are user-defined tools in LangChain that perform specific tasks. In the video, custom tools are used to create simple capabilities such as generating a random number or lowercasing a string, which can then be integrated into the agent.

💡Agents

Agents are the core concept in LangGraph: entities that perform specific tasks. In the video, agents are built to respond to input, run tools, and make decisions based on conditions.

💡State Machine

A state machine is a computational model that determines its next state from its input and current state. In the video, LangGraph is compared to a large state machine that controls the agent's state and decision flow.

Highlights

Introduces LangGraph, a new way to run agents that is fully compatible with the LangChain ecosystem.

LangGraph can make use of custom chains written in the LangChain Expression Language, which makes building agents easier.

At the core of LangGraph is the state graph, which persists state in some way over the agent's lifecycle.

The concepts of nodes and edges: nodes can be chains or tools, and edges connect the different nodes.

Edges can be conditional, letting a function (usually an LLM) decide which node comes next.

Shows how to compile the graph so it behaves like a standard LangChain runnable.

LangGraph can be used to build reusable agents and wire them together in a graph.

Code examples show how LangGraph is implemented, including state setup and custom tool creation.

Explains how to use OpenAI models in LangGraph and why function calling matters.

Shows how to use LangSmith to observe what LangGraph actually does at each step.

Discusses how to build complex multi-agent systems with LangGraph, such as a supervisor agent.

Provides a concrete example of using LangGraph to generate random lottery numbers and plot a histogram.

Highlights LangGraph's potential for building complex agents and state machines, and its practical usefulness.

Provides a Google form to collect viewer interest and feedback on building agents with LangGraph.

Encourages viewers to ask questions in the comments, with replies promised within 24 to 48 hours of the video's release.

Transcripts

play00:00

Okay.

play00:00

So in this video, I want to have a look at LangGraph.

play00:04

so I'm going to talk a little bit about what it is, and then I'll go

play00:06

through some coding examples of it.

play00:08

So if you are interested in building LLM Agents you will want to learn this

play00:11

and then maybe over the next few videos, we can look at going more in depth

play00:14

with building some different agents and some different, use cases here.

play00:19

so first off, what actually is, LangGraph?

play00:21

you can think of this as sort of the new way to run agents with, LangChain.

play00:26

So it's fully compatible with the LangChain ecosystem and especially,

play00:31

can really make good use of the new sort of custom chains with the

play00:35

LangChain expression language, But this is built for running agents.

play00:39

So they talk about the idea of being a graph and what they're

play00:42

talking about, you know, a graph here, is where you've basically got

play00:46

nodes joining to different edges.

play00:48

and they're not always going to be directed.

play00:51

So this is not a DAG or a fixed directed graph in any way.

play00:54

This is basically where nodes can make decisions about which

play00:58

node they can go to next.

play01:00

So another way of thinking about this is it's like a giant state machine

play01:04

that you're building, where the graph is basically the state machine that

play01:08

decides, okay, what state are you in now?

play01:11

What state will you go to run a particular chain or to run

play01:15

a particular tool, et cetera?

play01:17

and then, how do you get back?

play01:19

And then also things like, how do you know when to complete or end the

play01:23

graph or end the sequence, in here?

play01:26

So LangGraph is built on these ideas of trying to make it easier for

play01:31

you to build, custom agents and to build, things that are more than

play01:36

just simple chains, with LangChain

play01:38

So there are a number of key parts to this.

play01:40

You've got this idea of a state graph.

play01:43

So this is where your state is being persisted in some way

play01:47

throughout the agent's life cycle.

play01:50

And you can think about this as a sort of way of passing the dictionary around

play01:56

from chain to chain or from chain to tool and stuff like that, and then

play02:01

being able to update certain things.

play02:02

And you can update things, where you can just overwrite

play02:05

them, or you can add to them.

play02:06

So if you've got a list of things like intermediate steps, you can basically

play02:10

add, to that as the agent is actually going through running the various

play02:14

parts of the graph, et cetera.

play02:16

The next part, which is key to this, is the whole idea of nodes.

play02:20

As you build the graph, you want to add nodes to the graph.

play02:24

And you can think of these nodes as being like, chains, or actually they're

play02:28

also runnables, so it could be a tool, it could be a chain, and you can have

play02:34

a variety of these different nodes.

play02:36

And you think of those as being like the components of your agent that

play02:40

you need to wire together somehow.

play02:42

So while the nodes are the actual components, The edges are

play02:46

what wires everything together.

play02:48

And the edges can come in different forms as well.

play02:51

So you can set an edge where, it's just going to always go from

play02:55

this node to this other node.

play02:57

So if you have a return from a tool going back to the main node, you're

play03:03

just going to let that, you're going to want to probably hardwire that in there.

play03:07

But you can also then set edges which are conditional edges.

play03:10

And these conditional edges allow a function, often going to

play03:14

be, the LLM, to actually decide which node that you go to next.

play03:19

So you can imagine that, this can be useful for deciding if

play03:22

you're going to go to a tool, what tool you're going to go to.

play03:25

If you're going to go to a different sort of persona in the agent.

play03:30

Let's say your agent has got multiple personas and you want to go from

play03:34

one to the other, or you want to have a supervisor that's basically

play03:37

delegating to different, personas.

play03:40

all those things are going to be on sort of conditional edges, that we go through.

play03:44

Now, once we've set up these nodes and we've set up these conditional edges,

play03:48

you then basically want to compile the graph, and now the graph acts, just like

play03:53

a sort of standard LangChain runnable.

play03:55

So you can, run invoke on it, you can run stream, etc.

play04:00

it will now basically run the state of the agent for you, and, give you the

play04:05

entry point, you will define an entry point as an entry node, and give you

play04:09

the sort of end point, and it will be able to get through the whole sort of

play04:13

thing of wiring these things together.

play04:15

Now I can see that there's going to be a lot of use for sort of making,

play04:19

reusable, sort of agents that you would then wire together on a graph.

play04:24

So you might have lots of, little pre made things for

play04:27

using tools, that kind of thing.

play04:29

And you could imagine also that you've got agents that use certain

play04:32

kinds of prompts based on, the inputs that come before them here.

play04:37

what I want to do now is go through some of the code.

play04:39

we'll look at some of the examples that they've given.

play04:42

I've gone through and changed them a bit just to highlight the,

play04:45

what's going on, and then we'll also look at it in LangSmith's.

play04:48

So we can actually sort of see what actually happens at each

play04:52

step and what gets, sent out to the large language model, etc.

play04:56

You'll find that I'm using, the OpenAI models in here.

play04:59

There's no reason why we can't use other models.

play05:02

The only, I guess, challenge is that a lot of those models need

play05:05

to support function calling.

play05:07

If you're going to be using function calling on these.

play05:09

Now, if you're just running sort of standard chains or something where

play05:13

you're not using the function calling, you could use any sort of model.

play05:17

but if you want to have the parts where you're using function calling

play05:20

to make the decisions and stuff.

play05:22

then you're probably looking at models like the OpenAI

play05:25

models, like the Gemini models.

play05:27

And now we're starting to see some open source models that can do this

play05:30

function calling stuff as well.

play05:32

All right.

play05:32

Let's jump in and have a look at the code.

play05:34

All right, let's start off with the simplest, sort of example they give, which

play05:39

is probably not that simple in some ways.

play05:42

the agent executor.

play05:43

So this has been around in LangChain for quite a while.

play05:46

you can think of it as a way of, building an agent where you can

play05:50

then use function calling to get, a bunch of responses in here.

play05:54

So, what I've done is I've taken the notebook, I've sort of changed it a bit.

play05:58

I'm going to make some things a little bit, simpler and I'm going

play06:01

to add some more things to it so we can sort of get a sense of, okay,

play06:04

what actually is going on in here.

play06:06

so first off is basically setting the state.

play06:09

So I've left a lot of their comments in here.

play06:11

There are a number of key things that you want to persist across the actual

play06:14

agent while it's running so in this case, they're persisting the input,

play06:18

they're persisting a chat history.

play06:20

so this is more a sort of traditional way of, adding the

play06:23

memory and doing that kind of thing.

play06:25

you'll see in the second notebook, that we move to more just a

play06:29

list of messages going back.

play06:31

But this is using more of sort of a traditional way of having a chat

play06:34

history, and then having things like intermediate steps here.

play06:38

And you'll see that some of these, things, can be basically overwritten.

play06:42

So this agent outcome, gives us the outcome, from something that the agent

play06:47

did, or gives us this agent finish of when the actual, agent should finish,

play06:53

So in this case, this can be overwritten as a value, in here.

play06:56

Whereas things like the, intermediate steps here, this is basically a

play07:00

list of, steps of agent outcomes, or agent actions, rather, and then

play07:06

show the results of those actions.

play07:08

And you can see in this case, this is being, operator.add.

play07:11

So this is just adding to the list as we go through it.

play07:14

So this state that you start out with, we're going to pass that

play07:17

in to make the graph later on.
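
As a reference, here is a minimal sketch of what that kind of state definition looks like in code (field names follow the description above; exact import paths vary across LangChain/LangGraph versions):

```python
import operator
from typing import Annotated, List, Tuple, TypedDict, Union

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    input: str                                            # the user's request
    chat_history: List[BaseMessage]                       # traditional chat memory
    agent_outcome: Union[AgentAction, AgentFinish, None]  # overwritten on each agent turn
    # operator.add means new (action, observation) pairs get appended, not overwritten
    intermediate_steps: Annotated[List[Tuple[AgentAction, str]], operator.add]
```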

play07:20

Alright, now what I wanted to do is set up some custom tools in here.

play07:23

many of you have seen, custom tools before.

play07:25

I did some videos about it a long time back.

play07:28

I probably should have done some more videos updating, as things

play07:31

in LangChain changed, for it.

play07:32

But, if you think, custom tools, You can basically, pick a bunch of

play07:36

pre made tools from LangChain, and there are a lot of those already.

play07:39

but you can also do, custom tools.

play07:41

So here I've made two sort of, silly little custom tools.

play07:45

and one is basically just going to give us a random number.

play07:48

between zero and a hundred.

play07:50

And the other one's just going to take the input of whatever we've got

play07:53

and turn it into lowercase, right?

play07:56

So these are very simple functions.

play07:58

You can see here we're using the tool decorator to basically

play08:02

convert these into tools.

play08:04

And then when we do that, we're getting the name of the tool.

play08:07

We're getting the description of the tool, in this way.

play08:10

so it's a nice way of just quickly making tools, in here.

play08:14

and you can see that when I want to run these tools, I basically just say,

play08:17

whatever the tool is or the function and then basically just dot run.

play08:21

in the case of random, I'm having to pass something in, so

play08:24

I'm just passing in a string.

play08:25

Really, the string can be anything in here.

play08:27

It doesn't really matter.

play08:28

you'll see that the agent likes to pass in random.

play08:31

So I've given this as an example here, but really it could be an empty string,

play08:36

it could be, a string with whatever in it.

play08:38

so in this case, the input is not that important.

play08:40

In this case, the input is important.

play08:43

in this case, we're basically, if I pass something in uppercase, it will

play08:47

be converted to lowercase, right?

play08:49

So whatever the string gets passed in, that will be converted to

play08:52

lowercase and then passed back out.
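
A rough sketch of what those two custom tools could look like (tool names and docstrings are assumptions; in older LangChain releases the decorator is imported from langchain.agents rather than langchain_core.tools):

```python
import random

from langchain_core.tools import tool

@tool("random_number")
def random_number(input: str) -> str:
    """Returns a random number between 0 and 100. The input string is ignored."""
    return str(random.randint(0, 100))

@tool("lower_case")
def to_lower_case(input: str) -> str:
    """Returns the input string converted to lowercase."""
    return input.lower()

tools = [random_number, to_lower_case]

# Quick manual checks, as shown in the video:
print(random_number.run("random"))   # e.g. "42" -- the input doesn't matter here
print(to_lower_case.run("MERLION"))  # "merlion"
```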

play08:54

Now, they're simple tools you could change with this with, a bunch of different

play08:59

things like a DuckDuckGo search; Tavily is what they originally used in here.

play09:03

but I kind of feel these are nice simple tools where you can go in and then see

play09:07

very clearly what is it That's going on is, you know, what I think going on

play09:12

rather than getting this long JSON back, of a search or something like that.

play09:16

All right, next up, we've basically got, the way of making, an agent.

play09:21

Now, remember a graph can have, multiple agents.

play09:24

It can have multiple parts of agents, can have multiple chains in there.

play09:27

in this case, this is the the sort of agent, the standard sort of agent, which,

play09:33

Basically uses, OpenAI functions, right?

play09:36

You can think of it as an OpenAI functions, agent here.

play09:39

So here we're basically pulling in a prompt, this is they had

play09:43

originally where they're pulling in the prompt from the hub.

play09:45

if we go and have a look at that prompt, we can see that there's really

play09:48

nothing special in there, right?

play09:50

It's basically just got a system message saying you are a helpful assistant.

play09:55

It's going to have a placeholder for chat history.

play09:58

It's going to have a human message, which is going to be the input.

play10:01

It's going to have a placeholder for the agent scratchpad in there.

play10:04

So that's what we're getting back, from that, when we just

play10:07

pull that down from the hub.

play10:09

we set up our LLM.

play10:10

And then we've got this create_openai_functions_agent, which is going

play10:14

to be, an agent runnable in here.

play10:16

So I've passed in, the LLM, the tools, the prompt that we're getting back here.

play10:22

And then, you can see that, that if we actually look at the prompt,

play10:24

it's, it looks quite complicated because it's got a bunch of

play10:27

different, parts going on in there.

play10:29

And we can look at the prompt two ways.

play10:30

We can just look at it like this.

play10:31

We can actually, get prompts and stuff like that.

play10:34

And you'll see

play10:34

that now, if I've got that agent, I can just pass an input into it, and I

play10:39

need to pass in a dictionary, right?

play10:42

So I've got a dictionary with my input text, I've got a chat history,

play10:45

I've got intermediate steps.

play10:46

Neither of those have got anything in here.
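
Putting those pieces together, a sketch of creating and invoking that agent runnable might look like this (the model choice and import paths are assumptions; older versions import ChatOpenAI from langchain.chat_models):

```python
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain_openai import ChatOpenAI

# The standard prompt: system message, chat_history placeholder,
# human input, and an agent_scratchpad placeholder.
prompt = hub.pull("hwchase17/openai-functions-agent")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent_runnable = create_openai_functions_agent(llm, tools, prompt)

inputs = {
    "input": "give me a random number and then write in words and make it lower case",
    "chat_history": [],
    "intermediate_steps": [],
}
agent_outcome = agent_runnable.invoke(inputs)
print(agent_outcome)  # an AgentActionMessageLog picking the random_number tool
```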

play10:48

All right, so we've got these inputs, we pass this in, and you can see that

play10:53

the outcome that we're getting from this is that we're getting an agent

play10:57

action message log response back.

play11:00

So this is basically telling us, give me a random number and then write

play11:03

in words to make it lowercase So you can see that, all right.

play11:07

What's it doing?

play11:08

It's basically deciding what tool to select via a function call.

play11:12

So, if we come in here and look at Langsmith, we can see that when we

play11:16

actually passed this into the LLM, we were passing in these functions

play11:20

and tools in here as well, right?

play11:22

So we can see that this has got, the, details for, that tool, we've got the

play11:27

details for the random number tool, and they've been converted to the OpenAI

play11:32

functions format for us in there.

play11:34

we then basically have got our input, our system input, and then

play11:38

we've got the human input there.

play11:40

And we can see the output that came back was this function call saying

play11:43

that we need to call random number.

play11:45

And if we look back here, we can see that, okay, it's actually passing back

play11:49

that we're going to call random number with the input being random, and we've

play11:53

got a message log back there as well.

play11:56

Alright, so this was a basically one step in the agent.

play12:00

so this hasn't called the tool for us, it's just told us what

play12:03

tool to actually call in here.

play12:05

So that's showing you what that initial part does.

play12:08

Now we're going to use that as a node on our graph.

play12:12

And we're going to be able to go back and forth to that between that particular node

play12:18

and the tools node as we go through this.

play12:20

first off we want to set up, the ability to execute the tools.

play12:23

So we've got this tools executor here.

play12:26

We pass in the list of tools that we had.

play12:28

So remember we've got two tools, one being a random, number, generator and

play12:33

one being, convert things to lowercase.

play12:36

If we come up here and we look at, okay, the first thing we're going to do,

play12:40

what are we going to put on this graph?

play12:41

We're going to put in the agent, but we're actually going to run the agent.

play12:44

So we've got that agent runnable invoke, and then the data

play12:47

that we're going to pass in.

play12:49

and you can see that we return back The agent outcome from that.

play12:53

So in this first case, that, agent outcome is going to be

play12:56

telling it what tool to use.

play12:58

If we put the same inputs that we had before there.

play13:01

we've then got a second function for actually running the tools.

play13:04

so you can see here that this is going to basically, get this, agent outcome.

play13:09

that's going to be our agent action, which is going to be

play13:11

what tool to run, et cetera.

play13:13

And then we can run this tool, executor function and just invoke

play13:17

this with the, telling it what tool and what input to pass in there.

play13:21

Now I've added some print functions in here just so that we can look

play13:24

at, okay, the agent action is what actually it is, and also then the

play13:29

output that we get back from that.

play13:31

Finally, when we get that output back, we add that to the intermediate steps there.
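
In code, those two node functions and the tool executor look roughly like this (ToolExecutor and ToolInvocation are prebuilt helpers from the LangGraph version used at the time of the video; later releases replace them):

```python
from langgraph.prebuilt import ToolExecutor, ToolInvocation

tool_executor = ToolExecutor(tools)

def run_agent(data):
    """Agent node: run the agent runnable on the current state."""
    agent_outcome = agent_runnable.invoke(data)
    return {"agent_outcome": agent_outcome}

def execute_tools(data):
    """Action node: run the tool the agent chose and record the result."""
    agent_action = data["agent_outcome"]
    print("Agent action:", agent_action)
    output = tool_executor.invoke(
        ToolInvocation(tool=agent_action.tool, tool_input=agent_action.tool_input)
    )
    print("Tool result:", output)
    # operator.add on intermediate_steps appends this pair to the list
    return {"intermediate_steps": [(agent_action, str(output))]}
```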

play13:36

The next function we've got is for dealing with, okay, do we now, so remember,

play13:42

each of these can be called at different stages, even though I'm going through,

play13:46

the agent tools and stuff like that.

play13:48

these are separate things, at the moment.

play13:50

the next thing that we've got, this function, is basically determining,

play13:53

okay, based on the last, agent outcome, do we end or do we continue?

play13:59

if it's going to be like an agent finish, then we're going to end, right?

play14:02

We're not going to be doing something.

play14:04

So if it's coming back where it's saying, giving us the final answer out, we

play14:08

don't need to go and call tools again.

play14:10

We don't need to call another language model call again, we just finish, there.
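
That decision function is short; a sketch under the same assumptions as above:

```python
from langchain_core.agents import AgentFinish

def should_continue(data):
    """Conditional edge: end the run on AgentFinish, otherwise go to the tools node."""
    if isinstance(data["agent_outcome"], AgentFinish):
        return "end"
    return "continue"
```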

play14:15

So these functions you're going to see, are what we're going to add in here.

play14:18

So first off, we've got our, workflow, which is going to be the state graph.

play14:22

And we're passing in that agent state that we defined earlier on.

play14:26

We're then going to add a node.

play14:28

for agent and that's going to be running the agent there.

play14:31

We're going to add a node for action and we could have called

play14:34

this actually tools, right?

play14:36

tool action or something like that.

play14:38

This is going to be that the function that we've got here for actually,

play14:42

running the tool and getting the response back and sticking it

play14:46

back on the intermediate steps like that.

play14:48

Alright, so they're the two main nodes that we've got there.

play14:51

We set the entry, node in here.

play14:54

So we've got this, entry node.

play14:55

We're going to start with agent, because we're going to take the inputs,

play14:58

we're going to run that straight in, just like we did above there.

play15:01

and then you can see, okay, now we need to basically put in the conditional edges.

play15:06

the conditional edges is where we're using this function, should continue.

play15:10

and we're basically saying, that, after, agent, The conditional edges are going

play15:16

to be, okay, should we continue or not?

play15:18

you'll see down here, I'll come back to this in a second, but you'll see down here

play15:21

we've got a sort of a fixed, edge where we always go from action back to agent.

play15:27

So meaning that we take the output of the tool and we use that as the

play15:31

input for calling the agent again.

play15:34

But then the agent can then decide, okay, do I need to use another tool?

play15:38

Or can I just finish here?

play15:40

And that's what this conditional edge is.

play15:42

So after the agent, it will decide, if I ask it something, that's totally not

play15:47

using any of those tools, it's just going to give me a normal, answer back

play15:51

from a large language model or from the OpenAI language model in here.

play15:55

But if I give it something where If it's going to be an action, then it's

play15:59

going to, continue, and it's going to go on to, to use the tools and stuff,

play16:03

in there, So here, we want to sort of decide, this is this sort of conditional

play16:08

edge part, and you'll see this in, one of the other notebooks that this

play16:12

can, get a lot more complicated if you've got multiple agents, going on

play16:16

the same graph, as we go through this.

play16:19

All right, we then compile, the graph, I've tried to go with the terminology

play16:22

as much as, as they've got here of like workflow and stuff like that.

play16:26

But really this is the graph.

play16:28

We're compiling it to be like an app in here.
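
Wiring it all together, the graph assembly sketched from the steps just described:

```python
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)

workflow.add_node("agent", run_agent)       # the agent that picks a tool or finishes
workflow.add_node("action", execute_tools)  # the node that actually runs the tool

workflow.set_entry_point("agent")           # inputs always hit the agent first

# Conditional edge: after the agent, either go run a tool or stop.
workflow.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "action", "end": END},
)

# Fixed edge: a tool result always goes back to the agent.
workflow.add_edge("action", "agent")

app = workflow.compile()
```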

play16:30

If we look at it, we can actually see the branches, that are going on.

play16:35

If we look at it, we can also see, the nodes.

play16:37

that are on this and the edges that are on this so we can see, okay, what goes, from

play16:42

what, And we can also see the intermediate steps, of how they're being persisted

play16:47

on that graph, okay, so now we're going to basically stream the response

play16:50

out so we can see this going through.

play16:52

I'm basically just going to take this app, remember I can do dot

play16:56

invoke, I can do dot stream, And I'm going to pass in the inputs.

play16:59

The inputs here are going to have an empty chat history.

play17:03

but I'm going to pass in, the input basically saying give me a random number

play17:07

and then write in words, should be write it in words, but anyway, write

play17:11

it in words and make it lowercase.
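
Streaming and invoking the compiled graph then looks something like this (the final-output access path follows what the video shows and may differ slightly between versions):

```python
inputs = {
    "input": "give me a random number and then write in words and make it lower case",
    "chat_history": [],
}

# Stream so each node's update prints as it happens.
for step in app.stream(inputs):
    print(step)
    print("----")

# Or run it in one shot and pull out the final answer.
result = app.invoke(inputs)
print(result["agent_outcome"].return_values["output"])
```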

play17:14

So you'll see that, all right, what happens here?

play17:17

So we start off and it decides, ah, okay, I need a tool, right?

play17:22

So its tool is going to be random number.

play17:25

and in this case, it's putting the input being random number.

play17:28

it then gets that response back.

play17:30

Now I've printed this.

play17:31

So that it then basically sends that to the tool, right?

play17:34

so you see each of these is where we're going from one node to the next

play17:38

node that we're printing out here.

play17:40

so we go from this agent run node, the outcome being that,

play17:44

okay, I need to run a tool.

play17:46

coming back and then in this one we're going to, basically now

play17:51

have it where, we've run the tool.

play17:53

It's given us back a random number.

play17:55

The random number is 4 in this case.

play17:58

And so now, it's going to stick that on the intermediate steps in there.

play18:03

So now, the, that's going to be passed back to our original agent node.

play18:08

And now it basically says, okay, this was my initial sort of thing.

play18:12

I've got this number back.

play18:13

Oh, I need to write it in words.

play18:15

And then I need to make it lowercase.

play18:16

So to make it lowercase, I need to use the tool.

play18:19

And the tool is lowercase in this case, right?

play18:22

So the input is gonna be four with all capitals, and you'll see that

play18:27

the lowercase that we're getting out here, is gonna return back.

play18:31

Somewhere here we'll see this.

play18:32

It's going to return back, yes, here, we're going to see the tool

play18:34

result is 4, in lowercase there.

play18:37

So again, this is a tool, this is the straight up agent, this is the

play18:40

tool, this is the initial agent again, this is the tool, and then

play18:45

finally we go back to the agent again.

play18:47

And now it says, okay, now I can do agent finish.

play18:51

Because I've done everything that, I was asked to do in there, I've got

play18:54

the random number, I've got it in words, I've got it in lowercase, here.

play18:59

So we can see that the output here is, the random number is 4, and when written in

play19:04

words and converted to lowercase, it is 4.

play19:08

All right, so it's, a bit of a silly sort of task to do it, but it

play19:11

shows you how it's breaking it down.

play19:13

And we can see, if we look at the intermediate steps that we're getting

play19:16

out there, we've got the steps, for each of the different, things going along.

play19:20

We've got the message log and stuff as we're going through this.

play19:23

All right, if we wanted to do it without streaming it, we could do it like this.

play19:26

if I just say invoke.

play19:28

I'm not going to see, each agent broken out, I'm just going to see, okay, the

play19:31

first off, it's going to pick the random number tool, get 60, Takes that as a word

play19:37

in uppercase, puts it into the lowercase tool, and we get the result out in here.

play19:43

Now, I've saved the output to output in this case.

play19:46

So remember, these are print statements that I put in there.

play19:49

That's why we're seeing this, come along.

play19:51

and then we've got this agent, get agent outcome, if we return values

play19:56

and get the output, we can see the random number is 60, and in

play19:59

words it is 60 all in lowercase.

play20:01

If we look at the intermediate steps, we can see the intermediate steps there.

play20:05

Just to show you, sort of finish off, if we didn't put something in that needed a

play20:10

tool, if we just put in the, okay, does it get cold in San Francisco in January?

play20:15

it comes back.

play20:16

Yes, San Francisco can experience cold weather in January.

play20:19

So now, notice it didn't use any tools.

play20:21

It just came straight back with a finish.

play20:24

and there's no intermediate steps, right?

play20:26

We've just got that one, call going on in here.

play20:29

So if we come in here and have a look in, LangSmith, we can see, this going on.

play20:34

So we can see that, okay, we started out with that call.

play20:38

It basically gave us a return to a call random number.

play20:41

We got random number.

play20:42

We got four out of that, from that, we then basically went back, so that,

play20:47

remember the action is like our tools, we went back to the agent, and we can

play20:51

see that, if we look at the OpenAI thing here, we can actually see what

play20:55

was getting passed in here, and we can see that, okay, it's going to come

play20:58

back that the input is four in capital letters, We'll go into the tool lowercase

play21:04

here, this is going to transform it to just deliver back 4 in lowercase.

play21:08

And then finally, we're going to pass that in, if we look at, here,

play21:12

we're passing all of that in.

play21:14

With this string of okay, what we've actually done in here as well.

play21:18

And now it can say, okay, the output is going to be the random number is four.

play21:22

And when written in words, it's converted to lowercase four, right?

play21:25

And you can see the type that we got back was agent finish.

play21:29

so that's what tells it not to continue, as we go through this.

play21:33

All right, let's jump in and have a look at the second example.

play21:37

So the second example in here is very similar.

play21:40

The big difference here is it's using a chat model and it's using a list

play21:46

of messages rather than this sort of chat history that we had before.

play21:50

So we've got the tools here.

play21:52

Now, one of the things with doing it this way is that we're not using, the,

play21:58

create_openai_functions_agent here.

play22:01

So we are using an OpenAI model, but we need to basically bind

play22:06

the functions to the model.

play22:08

So we've got the model up here, and we can basically just bind these.

play22:13

so we just go through for each tool that we've got in here.

play22:16

We run it through this format tool to OpenAI functions format, and then

play22:20

we basically bind that to the model.

play22:22

So meaning that the, model can then, use that and call back a function

play22:27

just like it did before in there.
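
A sketch of that binding step (format_tool_to_openai_function was the helper available at the time; newer LangChain versions use convert_to_openai_function or model.bind_tools instead):

```python
from langchain.tools.render import format_tool_to_openai_function
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)

# Convert each tool to the OpenAI functions schema and bind them,
# so the chat model can answer with a function_call for any of them.
functions = [format_tool_to_openai_function(t) for t in tools]
model = model.bind(functions=functions)
```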

play22:29

Alright, we've got an AgentState again, like we had before, this is the sort

play22:33

of state graph, that we had here.

play22:36

In this case, the only thing that we're going to have though, is just messages.

play22:39

We don't need to have the intermediate steps we're not doing, any of that

play22:44

stuff, and because the input is already in the messages, we can

play22:47

actually get at that here, so we don't need to, persist that either.
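
So the state for this version shrinks to a single appending list of messages, roughly:

```python
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    # Every node returns new messages, and operator.add appends them.
    messages: Annotated[Sequence[BaseMessage], operator.add]
```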

play22:51

All right, our nodes, so we've got the should continue node again, so this,

play22:56

again is going to, decide whether we, go back to the sort of original, agent

play23:02

node or whether we go to the tools node, and in this case, you can see that what

play23:06

it's actually doing is it's getting off that, the last message that we got back.

play23:12

And it's using that to basically see is there a function call in

play23:16

that or not, If it doesn't have a function call, we know then that's

play23:20

not using a tool, so we can just end.

play23:22

If it does, we can then continue.

play23:25

We've then got basically calling the model, so this is, taking our messages,

play23:29

passing this in and, invoking the model.

play23:32

We're going to get back a response for that.

play23:35

so we've got, this response, and we're just putting that response back in there.

play23:39

we've got a function for calling the tools.

play23:42

okay, here again we're going to get the last message.

play23:45

because this is what's going to have the actual, function, that we need to

play23:49

call, or the tool that we need to call.

play23:51

So you can see that we get that by just getting last message,

play23:54

looking at function call, getting the name, and then that basically

play23:57

pass that back of what the tool is.

play23:59

And then same kind of thing for getting the tool input in here.

play24:03

and then here I'm just basically printing out that agent action,

play24:06

is that action again, so we can actually see, what's going on.

play24:09

And the response back, The same as what we did in the previous one.

play24:13

and then, we're gonna basically, use that response to create a function message,

play24:18

which we can then basically, assign to the list of messages, so it can be

play24:24

the last message that gets taken off and passed back in to the agent again.
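
Sketched out, those three functions (decide, call the model, call the tool) look roughly like this, where `model` is the function-bound chat model from the binding step above:

```python
import json

from langchain_core.messages import FunctionMessage
from langgraph.prebuilt import ToolExecutor, ToolInvocation

tool_executor = ToolExecutor(tools)

def should_continue(state):
    """End if the last AI message has no function_call, otherwise continue."""
    last_message = state["messages"][-1]
    if "function_call" not in last_message.additional_kwargs:
        return "end"
    return "continue"

def call_model(state):
    """Agent node: invoke the function-bound chat model on the message history."""
    response = model.invoke(state["messages"])
    return {"messages": [response]}

def call_tool(state):
    """Action node: run the tool named in the last message's function_call."""
    last_message = state["messages"][-1]
    action = ToolInvocation(
        tool=last_message.additional_kwargs["function_call"]["name"],
        tool_input=json.loads(
            last_message.additional_kwargs["function_call"]["arguments"]
        ),
    )
    print("Agent action:", action)
    response = tool_executor.invoke(action)
    print("Tool result:", response)
    # Append the result as a FunctionMessage so the model sees it next turn.
    return {"messages": [FunctionMessage(content=str(response), name=action.tool)]}
```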

play24:28

alright, we've got the graph.

play24:30

here, same kind of thing, we add two nodes, we've got one node

play24:33

being the initial sort of agent and one node being the tool or

play24:37

the action that gets called, here.

play24:39

We set the entry point to agent again.

play24:42

We've got our conditional, the same as we had before as we go through

play24:46

this, so we've got a conditional edge and we've also got a hardwired edge,

play24:50

being that always from action, we always go back to agent, in this case.

play24:54

Compile it, and then now we can just run it.

play24:56

So you can see here that we can, I'm just going to invoke these as we go through it.

play25:01

But you can see that, I've got, give me a random number and then write

play25:03

the words and make it lowercase.
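
The graph assembly and a sample invocation for this chat-style version, sketched under the same assumptions (the system message wording here is an assumption):

```python
from langchain_core.messages import HumanMessage, SystemMessage
from langgraph.graph import StateGraph, END

workflow = StateGraph(AgentState)
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
workflow.set_entry_point("agent")
workflow.add_conditional_edges(
    "agent", should_continue, {"continue": "action", "end": END}
)
workflow.add_edge("action", "agent")  # tool results always go back to the agent
app = workflow.compile()

inputs = {
    "messages": [
        SystemMessage(content="You are a helpful assistant."),
        HumanMessage(content="give me a random number and then write in words "
                             "and make it lower case."),
    ]
}
result = app.invoke(inputs)
for message in result["messages"]:
    print(type(message).__name__, "->", message.content)
```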

play25:05

We can see we're getting the same thing as what we had before.

play25:08

So this is replicating the same kind of functionality, but, you're going

play25:12

to find that in some ways this can be, this allows you to do a lot more things

play25:16

in here, in that if we were to come in here, you see how we're popping off

play25:20

the last message when we come in here.

play25:22

We're also, able to basically summarize messages, we're able to play with, the

play25:27

messages, we're able to limit it so that we've only got the last 10 messages in,

play25:32

memory so that we're not making our calls, 35 messages long or something like that.

play25:37

Even with, the GPT 4 Turbo, we can go, really long, but we don't probably want to

play25:41

waste so much money by doing, really long, calls and using up really large amounts

play25:46

of tokens in the context window there.

play25:49

so we can run that through.

play25:50

you can see here, we've asked for the random number.

play25:53

Sure enough, it's done the same thing.

play25:54

It's got the tool.

play25:55

and remember these are coming from the print statements that I put in there.

play25:59

and, we're then basically getting this output in here.

play26:03

so that's the output of the whole thing that's coming back with the various

play26:06

messages that we've got going through and those messages, everything from,

play26:11

the system message to human message to the AI message to a function

play26:15

message to an AI message again, to a function message back to an AI

play26:19

message for the final one out there.

play26:21

if we just want to try it where it's just using one tool.

play26:24

So here I have put in, please write Merlion in lowercase, you can see

play26:28

now it just uses one tool, just uses the lowercase one, goes through and does that.

play26:32

And then again, if we want to just try it with, no tools if I

play26:36

ask it, okay, what is a Merlion?

play26:37

A Merlion is a mythical creature with a head of a lion and a body of a fish.

play26:41

Alright, so this sort of shows you.

play26:43

that it can handle, both using the tools and not using the tools.

play26:48

And it also shows you that each time though, we're getting these

play26:52

list of messages back, which is, the way of us being able to see

play26:57

what's going on and persist the conversation as we go through this.

play27:00

Okay.

play27:01

In this third notebook, we're going to look at the idea of building

play27:05

a sort of agent supervisor.

play27:07

So where you've got it so that the user is going to pass something in the

play27:11

supervisors, then going to decide, okay, which agent do I actually route this to?

play27:16

and then it's going to get the responses back.

play27:19

and some of these agents can be, tools.

play27:21

some of them can be, just other, agents that actually, are not using a tool, but

play27:26

a using a large language model, et cetera.

play27:28

So let's jump in.

play27:29

so we've got the same inputs that we had before.

play27:32

I'm setting up LangSmith in here.

play27:35

I'm bringing in the model.

play27:36

So the model I'm going to use for this one is GPT-4.

play27:39

And then we've got a number of tools in here.

play27:40

I've got the, my custom tools that we used in the first two notebooks.

play27:44

So the lower case and the random number there.

play27:46

but we've also got the PythonREPL tool, right?

play27:49

Remember, this is a read, evaluate print loop, a tool.

play27:52

So basically you can run Python code.

play27:54

So you always want to be a bit careful, of, what prompts you're letting go into

play27:59

that because, it can be used maliciously.

play28:01

Obviously, if it can run anything that Python can run, it can do a lot of.

play28:05

damage in there.

play28:06

All right.

play28:07

So then in the example notebook, They've got these helper, utilities.

play28:10

this is basically for making a general agent and you can see that

play28:14

we're going to pass in the LLM.

play28:15

We're going to pass in the tools that the agent can use.

play28:17

We're going to pass in a system prompt For this.

play28:21

And it will then basically assemble it with the, messages

play28:25

with the scratch pad et cetera.

play28:28

and it's going to make that create_openai_tools_agent, just like

play28:32

we had in the first notebook, that we went through and it's going to then

play28:35

return that, executor back in here.

play28:38

So this is just a way to sort of instantiate multiple agents, based

play28:42

on, their prompts and stuff like that.

play28:44

So the second helper function here is this basically this agent node which

play28:48

is for converting, what we got here that, creating the agent into an actual

play28:53

agent node, so that it can be run in here it's also got a thing where

play28:57

it's going to take the message and convert it to being a human message.

play29:01

because we've got multiple, agents which are going to be LLM responses and stuff,

play29:06

we're often going to want to convert those to be human responses to get the sequence

play29:11

of responses as we go through this.
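
The two helpers described here look roughly like this (prompt wiring is paraphrased from the description above; `llm` and `tools` are the model and tool list set up for this notebook):

```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.messages import HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

def create_agent(llm, tools, system_prompt: str):
    """Build one worker agent executor from an LLM, its tools, and a system prompt."""
    prompt = ChatPromptTemplate.from_messages([
        ("system", system_prompt),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ])
    agent = create_openai_tools_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)

def agent_node(state, agent, name):
    """Wrap an agent executor as a graph node; report its answer as a HumanMessage."""
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["output"], name=name)]}
```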

play29:14

Alright, next up is creating the agent supervisors.

play29:18

So, this case, is where you're going to determine your.

play29:22

multiple agent personalities and stuff like that.

play29:25

So the personas I've got here.

play29:27

I've changed their example.

play29:28

So I've got the lotto manager, which is obviously going to use

play29:31

the tools that we had before of the random number, et cetera.

play29:35

And we've got a coder.

play29:36

So I've stuck to the original example that they had of having a coder

play29:39

that will make a plot out of this.

play29:43

but what we're gonna do is plot out the lotto numbers for this.

play29:47

So we can see here that this supervisor has got a very

play29:49

sort of unique prompt, right?

play29:50

It's basically that, you're a supervisor tasked with managing a conversation

play29:54

between the following workers.

play29:56

And then we're passing in the members.

play29:58

So the members is this lotto manager and coder.

play30:02

given the following user request respond with the worker to, act next.

play30:07

So each worker will perform a task and respond with their results and status.

play30:12

When finished respond with finished.

play30:13

So this is what's guiding the supervisor to decide the delegation and to

play30:20

basically decide when it should finish.

play30:22

for these.

play30:24

So for doing that, delegation, it's going to use an OpenAI function, and

play30:28

this is basically setting this up.

play30:29

So this is setting up like the router for deciding the next

play30:33

roll of who should, do it.

play30:35

and then, passing these things through.

play30:38

and passing in like this enum of, the members and finish,

play30:42

so it can decide, do I finish?

play30:44

Do I go to this member?

play30:45

Do I go to this other member as we go through this?

play30:48

And you can see that because that is its own call in here, we've got a

play30:52

system prompt there that says, given the conversation above who should act next?

play30:57

And then we basically give it the options of finish lotto manager or coder in this.

play31:03

And then basically just putting these things together,

play31:05

making this supervisor chain,

play31:07

where we're going to have this prompt, we're going to bind these

play31:09

functions, that this function above that we've just had to the LLM and

play31:14

then we're going to pass that out.

play31:16

getting that back
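
A sketch of that supervisor chain and its routing function (worker names like "Lotto_Manager" and "Coder" are assumed spellings, and `llm` is the GPT-4 model set up earlier):

```python
from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

members = ["Lotto_Manager", "Coder"]
options = ["FINISH"] + members

system_prompt = (
    "You are a supervisor tasked with managing a conversation between the "
    "following workers: {members}. Given the following user request, respond "
    "with the worker to act next. Each worker will perform a task and respond "
    "with their results and status. When finished, respond with FINISH."
)

# The routing OpenAI function: the model must fill `next` with one allowed option.
function_def = {
    "name": "route",
    "description": "Select the next role.",
    "parameters": {
        "type": "object",
        "properties": {"next": {"enum": options}},
        "required": ["next"],
    },
}

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    MessagesPlaceholder(variable_name="messages"),
    ("system",
     "Given the conversation above, who should act next? "
     "Or should we FINISH? Select one of: {options}"),
]).partial(options=str(options), members=", ".join(members))

supervisor_chain = (
    prompt
    | llm.bind_functions(functions=[function_def], function_call="route")
    | JsonOutputFunctionsParser()
)
```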

play31:17

So hopefully it's obvious that will then become like a node

play31:20

on the actual graph as well.

play31:23

So now we're going to look at, actually creating the graph.

play31:25

So we've got this agent state, going on here.

play31:28

and so this is our graph state.

play31:30

again, we're going to have the messages that we're going to be passing it in.

play31:33

So we're sticking to that sort of a chat executor like we did

play31:37

in the second notebook there.

play31:39

And you can see here that we're going to basically have the lotto agent.

play31:43

So I'm just going to instantiate these with that helper

play31:45

function for create agent.

play31:47

And so here, I've got the lotto agent, which is going to take in our GPT-4 Turbo

play31:52

model, it's going to take in the tools.

play31:54

And then the prompt for this is you are a senior lotto manager, you run the

play31:59

lotto and get random numbers, right?

play32:01

it's telling it that, Hey, this is the agent to do that.

play32:04

It's telling that it's going to have to basically use the tools to do that.

play32:07

so that's the lotto agent.

play32:09

And then the second agent is this coder agent.

play32:12

So this coder agent is just using the tool.

play32:15

So I passed in all the tools in here for tools, by the way.

play32:18

And this particular agent is, just going to use the PythonREPL, tool.

play32:23

And this is basically saying you may generate safe, Python code to analyze

play32:26

data and generate charts using matplotlib.

play32:29

So it's just setting it up to do the charting in there.

play32:32

So if you look carefully, you'll actually see that, I think I accidentally passed

play32:36

in the PythonREPL into these tools as well, So it's not ideal in that

play32:41

we would want to limit, the number of tools that we pass into something to

play32:45

as few as possible, one, it saves on tokens and two, it just makes it easier

play32:49

for the model to make the decision.

play32:51

But anyway, we've got those.

play32:53

and then we've got, this basically setting up the node here.

play32:56

And so we've got our lotto node.

play32:58

We've got our code node.
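
In code, the state and the two worker nodes sketched from the description above (`python_repl_tool` stands in for the PythonREPL tool instance; as noted, the notebook also accidentally passes the full tool list to the agents):

```python
import functools
import operator
from typing import Annotated, Sequence, TypedDict

from langchain_core.messages import BaseMessage

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], operator.add]
    next: str  # the worker the supervisor routed to

lotto_agent = create_agent(
    llm, tools,
    "You are a senior lotto manager. You run the lotto and get random numbers.",
)
lotto_node = functools.partial(agent_node, agent=lotto_agent, name="Lotto_Manager")

code_agent = create_agent(
    llm, [python_repl_tool],
    "You may generate safe Python code to analyze data and generate charts "
    "using matplotlib.",
)
code_node = functools.partial(agent_node, agent=code_agent, name="Coder")
```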

play33:00

we can then basically pass these in as we go through this.

play33:04

We need some edges.

play33:05

So the edges we've actually got a lot more edges cause

play33:09

we've got a lot more nodes now.

play33:11

and you can see that they're just using a for loop to make these edges.

play33:14

So, from every agent or persona, whether it's the lotto manager,

play33:21

whether it's the coder, it always goes back to the supervisor.

play33:24

So even if we had 10 different agents, as you can see we've got

play33:28

two, being lotto manager and coder.

play33:31

it will go back to the supervisor at the end of that.

play33:34

and then we've got conditional ones.

play33:35

where it will determine, this is sort of setting up a conditional

play33:39

map for, The conditional edge of being the supervisor going to what?

play33:45

So, this conditional map, in fact, maybe in the future example, I would

play33:49

just hard-code this out so people can sort of see what's going on here.

play33:53

But basically it's just making a dictionary in here.

play33:56

it's adding in the, finish node in there as well that it can

play34:00

basically use as a condition.

play34:01

and we can see that we can go from supervisor to any of those

play34:05

on the conditional map, which is going to be our members and is

play34:08

going to be finished in there.

play34:10

finally we set up the entry point.

play34:12

So the entry point is going to be the supervisor.

play34:14

Compile a graph and then we can use the graph.

play34:18

So you can see now when I've asked it to do is human message in, get

play34:22

10 random lotto numbers and plot them on a histogram in 10 bins.

play34:27

And tell me what the 10 numbers are at the end.

play34:30

So this runs through.

play34:32

it does the plot for us.

play34:34

So we don't really see that much here.

play34:37

But let's jump over to LangSmith and see what's going on here.

play34:41

if we look at the LangSmith for this, we can see that it starts out.

play34:45

and we've got the router as the actual, function calling

play34:50

thing at the start, right?

play34:51

Not the tools.

play34:51

This is the router that is basically deciding, do I go to lotto manager?

play34:56

Do I go to coder?

play34:57

Do I go to, finish of this.

play35:00

We pass in now prompt there and you can see now it's got the workers

play35:03

being lotto manager, and coder.

play35:05

which got, you know, put it in there.

play35:08

and then we've got, when finished respond with finish.

play35:10

and then we passed in the actual sort of human prompt.

play35:13

And you can see that it's decided that okay, from this select one of these.

play35:18

It said, okay, we need to go to the lotto manager.

play35:20

So that's where we get to lotto manager.

play35:23

Now, lotto manager.

play35:24

basically it looks at this and now it's getting tools in there.

play35:28

So remember I said, I accidentally passed it in the PythonREPL in here.

play35:31

I probably shouldn't have done that.

play35:32

But anyway, we've got, you're a senior lotto manager, get

play35:35

10 random lottery numbers.

play35:37

Were passing in that, in there.

play35:39

And you can see it's going to, it's worked out that, okay, it needs to do this random

play35:43

thing and it needs to do it 10 times.

play35:45

So it goes through and runs.

play35:47

the random number tool 10 times.

play35:49

So we get 10, separate, random numbers back.

play35:52

from that.

play35:53

it, then, can take those.

play35:55

and decide, okay there's our 10 numbers back that we got.

play35:59

and it can decide, okay, now it needs to go to the coder.

play36:04

now, in this case, actually, because it had the PythonREPL in here,

play36:08

it just did it itself in here.

play36:10

But you'll see on some of them, we'll actually go back to the coder in there.

play36:14

and then finally, we've got the supervisor out, which is giving a lot of numbers out.

play36:19

telling us that we can't see the, the plot, we can't pass the plot back

play36:23

cause it's already plotted it out.

play36:25

Here is our plot out.

play36:26

and if we went along, we can see that.

play36:29

here are the numbers that correspond to the plot out that we've got there.

play36:33

Anyway, this is just running it.

play36:35

two times.

play36:36

If we look at the final response out, we can see that this is what we've got.

play36:40

if we want to actually just sort of give the human response

play36:42

back out, we can get this out.

play36:44

So we've got this, the histogram has been plotted for the following numbers,

play36:48

passing in the numbers with new line characters, et cetera as we go through it.

play36:52

Okay.

play36:53

So this shows you the sort of basics of building a supervisor agent that

play36:57

can direct multiple agents in here.

play37:00

So in some future videos, I think we'll look at, how to actually, go through

play37:04

this, more in depth and actually do some more real world agent things with this.

play37:09

and then from this, you could basically take it, you could

play37:12

deploy it with a LangServe.

play37:14

You could do a variety of different things with it to make

play37:17

a nice UI or something, for this.

play37:20

But hopefully this gives you a sort of crash course in

play37:23

what LangGraph actually does.

play37:26

And what some of the key components are for it.

play37:28

if you just think of it as being a state machine, this is

play37:32

fundamentally how I think about it.

play37:33

if you've ever done any sort of programming for games and stuff,

play37:37

you often use state machines there.

play37:39

a lot of sort of coding will often have some kind of state machine.

play37:42

And the state machine is basically just directing things around this.

play37:46

so don't be intimidated by it.

play37:48

It's pretty powerful that, you can do a lot of different stuff.

play37:51

I would say it can get confusing at times when you're first

play37:54

getting your head around it.

play37:55

But once you sort of work out like how, you're setting up the different

play37:59

nodes, what the actual nodes are, how you're going to have conditional edges

play38:03

between the nodes and then what it, you know, what should be hardwired

play38:07

edges to basically bring things back is another way of thinking through this.

play38:12

So for me, I'm really curious to see what kind of agents people

play38:15

want to, learn to actually build.

play38:18

Agents is something that I've been interested in with a

play38:21

LangChain for over a year or so.

play38:23

And I'm really curious to see, okay, what kind of agents do you want?

play38:26

And, we can make some different examples of these.

play38:28

in the description, I'm going to put a, Google form of just basically asking you

play38:32

a little bit about what agents you're interested to see and stuff like that.

play38:36

If you are interested to find out more about this.

play38:38

fill out the form and then, That will help work out what

play38:41

things to go with going forward.

play38:44

anyway, as always, if you've got comments, put them in the comments below.

play38:48

I always tried to read the comments for the first 24 hours or 48 hours after the

play38:52

video is published and reply to people.

play38:54

so if you do have any questions, put them in there.

play38:57

and as always, I will see you in the next video.
