Building a Generative UI App With LangChain Python

LangChain
13 Jun 2024 · 43:00

Summary

TL;DR: This video shows how to build a generative UI chatbot with a Python backend and a Next.js frontend. It is the third installment in the series: it recaps what generative UI is and why it is useful, then walks step by step through building the application. You will learn how to use the LangChain and LangGraph libraries to handle user input, images, and chat history, and the video also covers the server/client architecture in detail, including tool calling and updating UI components.

Takeaways

  • 🌟 This video is the third installment in a series on building generative UI applications with LangChain.
  • 🔗 It is worth reviewing the first video, which covers the concept of generative UI, its use cases, and why it improves on previous approaches.
  • 💬 This video shows how to build a generative UI chatbot with a Python backend and a Next.js frontend.
  • 🔧 The LangGraph library is used to build the graph and bind the tools that correspond to UI components.
  • 📈 LangGraph builds the application flow from nodes and edges, with a conditional edge deciding whether to invoke tools.
  • 🛠️ The chatbot architecture is split into two sections: the server, which holds the Python code, and the client, which holds the Next.js code.
  • 🔄 LangServe runs the Python backend, and its stream events endpoint updates UI components in real time.
  • 📝 The video also walks through implementing a GitHub repository info tool, an invoice parsing tool, and a weather tool.
  • 🌐 On the frontend, React context is used to process the events streamed from the server and update the UI dynamically.
  • 🔑 Environment variables manage credentials such as the GitHub token and API keys (see the sketch after this list).
  • 🎉 Finally, the finished generative UI application is demoed, rendering UI components dynamically in response to user input.
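The credentials takeaway can be made concrete with a minimal sketch. The exact variable names are not spelled out in the video, so `GITHUB_TOKEN` and `GEOCODE_API_KEY` below are assumptions; adjust them to whatever the repo's README specifies.

```python
# Minimal sketch (assumed variable names): load credentials from a .env file
# with python-dotenv and fail fast if any of them are missing.
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory

for name in ("OPENAI_API_KEY", "GITHUB_TOKEN", "GEOCODE_API_KEY"):
    if not os.environ.get(name):
        raise RuntimeError(f"Missing environment variable: {name}")
```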

Q & A

  • What does this video cover?

    -This video explains how to build a generative UI chatbot using a Python backend and a Next.js frontend.

  • What is generative UI, and why is it better than previous approaches?

    -Generative UI is an interface that dynamically generates UI components in response to user input. It is better because it is more flexible than previous approaches and easier to tailor to the user's needs.

  • What kind of technology is LangChain?

    -LangChain is the library used in the Python backend; together with its LangGraph library, it is used to build a graph that manages the application's flow.

  • What are the basic building blocks of LangGraph?

    -The basic building blocks of LangGraph are nodes: each node is a function that gets invoked and receives the current state (a minimal sketch follows this Q&A section).

  • What does the chatbot's architecture diagram show?

    -The architecture diagram shows two distinct sections, the server and the client: the server holds the Python code and the client holds the Next.js code.

  • What is a conditional edge in LangGraph?

    -A conditional edge routes the application flow between different nodes based on a condition, such as whether the model called a tool (see the sketch after this Q&A section).

  • What is the main file used on the chatbot's server side?

    -The main server-side file is `chain.py`, where the LangGraph chain is implemented.

  • How are tool calls parsed and invoked?

    -Tool calls are parsed with a JSON parser for OpenAI tool output. The model decides whether to call a tool, and if it does, the selected tool is then invoked.

  • How do the client-side React components work together with the server-side logic?

    -The client-side React components respond to the server-side logic, updating the UI as tool calls are made and results are streamed back.

  • Can you give an example of how the finished generative UI application behaves?

    -In the demo, when the user asks "What's the weather in SF?", the chatbot selects the weather tool, shows a loading component, then fetches data from the API and updates the UI.
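To make the two LangGraph answers above concrete, here is a minimal, generic sketch (not the video's exact code): nodes are plain functions that receive the shared state and return partial updates, and a conditional edge returns the name of the next node, or END to stop.

```python
# Minimal LangGraph sketch: two nodes plus a conditional edge routing between them.
from typing import Optional, TypedDict

from langgraph.graph import END, StateGraph


class State(TypedDict, total=False):
    question: str
    relevant: Optional[bool]
    answer: Optional[str]


def retrieve(state: State) -> State:
    # A node receives the current state and returns only the fields it updates.
    return {"relevant": "weather" in state["question"].lower()}


def generate(state: State) -> State:
    return {"answer": f"Answering: {state['question']}"}


def route(state: State) -> str:
    # Conditional edge: return the next node's name, or END to finish the graph.
    return "generate" if state.get("relevant") else END


workflow = StateGraph(State)
workflow.add_node("retrieve", retrieve)
workflow.add_node("generate", generate)
workflow.set_entry_point("retrieve")
workflow.add_conditional_edges("retrieve", route)
workflow.set_finish_point("generate")
graph = workflow.compile()

print(graph.invoke({"question": "What's the weather in SF?"}))
```

The chatbot in the video uses the same pattern, with `invoke_model` and `invoke_tools` as its two nodes.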

Outlines

00:00

😀 Recap and series introduction

This video is the third installment in the series on building generative UI applications with LangChain. It recaps the generative UI concepts and benefits covered in the first video and previews the app that will be built here. A link to the JavaScript version of the video is also provided.

05:01

🛠️ Application architecture overview

The video walks through the chatbot's architecture diagram with its two sections, server and client, explaining what each does and how the Python backend and Next.js frontend are used. It also touches on how LangChain and LangGraph are used and their APIs.

10:02

🔄 LangGraph basics and binding tools

Explains LangGraph's basic structure and how the flow moves through the graph's nodes. It details how tools are bound to the model and how the language model selects and, when needed, calls the tools that correspond to UI components.

15:22

🌐 Tool execution and streaming events

Explains how a tool call is handled and how streaming events are used to send data back to the client in real time, including how the tool's result ends up reflected in the UI.
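As a hedged sketch of what that looks like in code, the loop below consumes the graph's events with `astream_events` (v1); the `gen_ui_backend.chain` import path is assumed from the video's folder names, and the input shape follows the ChatInputType described later (a list of messages).

```python
# Sketch: stream every event out of the compiled graph, so the tool name can be
# surfaced (for a loading component) before the tool has finished running.
import asyncio

from langchain_core.messages import HumanMessage

from gen_ui_backend.chain import create_graph  # module path assumed from the video


async def main() -> None:
    graph = create_graph()
    inputs = {"input": [HumanMessage(content="What's the weather in SF?")]}
    async for event in graph.astream_events(inputs, version="v1"):
        kind, name = event["event"], event["name"]
        if kind == "on_chain_end" and name == "invoke_model":
            print("invoke_model output:", event["data"]["output"])
        elif kind == "on_chain_end" and name == "invoke_tools":
            print("invoke_tools output:", event["data"]["output"])
        elif kind == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)


if __name__ == "__main__":
    asyncio.run(main())
```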

20:26

📝 Building the Python backend

Walks through building the Python backend in detail: defining the state, creating the graph, implementing the nodes, invoking the language model, and binding the tools.
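Below is a hedged reconstruction of the skeleton described here: the `GenerativeUIState` TypedDict and the `create_graph` function with its two nodes and a conditional edge. The node bodies are stubbed; fuller sketches of `invoke_model` and `invoke_tools` follow the Highlights section below.

```python
# Hedged sketch of chain.py's structure: state definition plus graph wiring.
from typing import List, Optional, TypedDict

from langchain_core.messages import HumanMessage
from langchain_core.runnables import RunnableConfig
from langgraph.graph import END, StateGraph


class GenerativeUIState(TypedDict, total=False):
    input: List[HumanMessage]          # user input and chat history messages
    result: Optional[str]              # plain-text answer when no tool is called
    tool_calls: Optional[List[dict]]   # parsed tool calls, if the model chose a tool
    tool_result: Optional[dict]        # output of the invoked tool


def invoke_model(state: GenerativeUIState, config: RunnableConfig) -> GenerativeUIState:
    ...  # prompt | model.bind_tools(...), then parse -- sketched after the Highlights


def invoke_tools(state: GenerativeUIState) -> GenerativeUIState:
    ...  # look up and call the selected tool -- sketched after the Highlights


def invoke_tools_or_return(state: GenerativeUIState) -> str:
    # Conditional edge: plain text ends the graph, tool calls go to invoke_tools.
    if isinstance(state.get("result"), str):
        return END
    if isinstance(state.get("tool_calls"), list):
        return "invoke_tools"
    raise ValueError("Expected either 'result' or 'tool_calls' in the state.")


def create_graph():
    workflow = StateGraph(GenerativeUIState)
    workflow.add_node("invoke_model", invoke_model)
    workflow.add_node("invoke_tools", invoke_tools)
    workflow.add_conditional_edges("invoke_model", invoke_tools_or_return)
    workflow.set_entry_point("invoke_model")
    workflow.set_finish_point("invoke_tools")
    return workflow.compile()
```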

25:28

🛠️ Example tool implementations

As a concrete example, shows how to build the tool that fetches GitHub repository information and how it compares with the other tools. Also covers obtaining API keys and importing the libraries the tools need.
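A hedged sketch of the GitHub tool along the lines described here: a pydantic input schema, the `@tool` decorator, and a `requests` call to the GitHub REST API. The tool name and the `GITHUB_TOKEN` variable name are assumptions.

```python
# Hedged sketch of the GitHub repo tool: fetch basic repo info, return a
# string describing the problem if anything goes wrong.
import os
from typing import Union

import requests
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool


class GithubRepoInput(BaseModel):
    owner: str = Field(..., description="The repository owner, e.g. 'langchain-ai'")
    repo: str = Field(..., description="The repository name, e.g. 'langgraph'")


@tool("github-repo", args_schema=GithubRepoInput, return_direct=True)
def github_repo(owner: str, repo: str) -> Union[dict, str]:
    """Get information about a GitHub repository."""
    token = os.environ.get("GITHUB_TOKEN")  # assumed variable name
    if not token:
        return "GITHUB_TOKEN environment variable is not set."
    headers = {
        "Authorization": f"Bearer {token}",
        "X-GitHub-Api-Version": "2022-11-28",
    }
    url = f"https://api.github.com/repos/{owner}/{repo}"
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        data = response.json()
        return {
            "owner": owner,
            "repo": repo,
            "description": data.get("description", ""),
            "stars": data.get("stargazers_count", 0),
            "language": data.get("language", ""),
        }
    except requests.exceptions.RequestException as err:
        print(err)
        return "There was an error fetching the repository information."
```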

30:31

🌐 Setting up the LangServe server

Shows how to set up the LangServe server and implement the API endpoint with FastAPI, including loading environment variables, adding the route for the runnable, and starting the server.
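A hedged sketch of what `server.py` looks like per the description above: load the environment, create the FastAPI app, allow CORS from the Next.js dev server, and expose the graph at `/chat` with LangServe. `ChatInputType` is inlined here; in the repo it lives in a separate types module, and the `gen_ui_backend.chain` import path is assumed from the video's folder names.

```python
# Hedged sketch of server.py: FastAPI + CORS + a LangServe route for the graph.
from typing import List, Union

import uvicorn
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.pydantic_v1 import BaseModel
from langserve import add_routes

from gen_ui_backend.chain import create_graph  # module path assumed from the video


class ChatInputType(BaseModel):
    input: List[Union[HumanMessage, AIMessage, SystemMessage]]


def start() -> None:
    load_dotenv()  # OPENAI_API_KEY, GITHUB_TOKEN, etc.

    app = FastAPI(title="gen-ui-backend", version="1.0.0")
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["http://localhost:3000"],  # the Next.js dev server
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    graph = create_graph()
    runnable = graph.with_types(input_type=ChatInputType, output_type=dict)
    add_routes(app, runnable, path="/chat")

    uvicorn.run(app, host="0.0.0.0", port=8000)
```

With the server running (for example via `poetry run start`), the auto-generated docs at localhost:8000/docs list the stream events endpoint that the frontend consumes.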

35:31

📜 Connecting the client and updating the UI

Explains in detail how the client makes API requests to connect to the server and update the UI, including managing data with React context and processing the streamed UI components.
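The video's client is Next.js/TypeScript, so no Python appears at this step; purely as a language-consistent illustration, the sketch below does the same thing from Python with langserve's `RemoteRunnable`, assuming your langserve version supports `astream_events` on the client (the JS `RemoteRunnable` used in the video does).

```python
# Illustrative Python client for the /chat endpoint, mirroring what the
# Next.js streamRunnableUI helper does with the streamed events.
import asyncio

from langserve import RemoteRunnable


async def main() -> None:
    remote = RemoteRunnable("http://localhost:8000/chat")
    inputs = {"input": [{"type": "human", "content": "What's the weather in SF?"}]}
    async for event in remote.astream_events(inputs, version="v1"):
        # e.g. ("on_chain_end", "invoke_model") or ("on_chat_model_stream", ...)
        print(event["event"], event["name"])


if __name__ == "__main__":
    asyncio.run(main())
```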

40:32

🎉 Demo and wrap-up

Finally, the finished generative UI application is demoed, showing how tools are selected from user input and the UI updates in real time. The LangSmith trace is also used to visualize the processing flow on the server.


Keywords

💡Generative UI

Generative UI is an approach where the UI is generated dynamically based on user input and context. The video explains why generative UI improves on previous approaches and what its use cases are. For example, when a user talks to the chatbot, the appropriate UI component can be displayed depending on what they typed.

💡LangChain

LangChain is the library used in the Python backend and is central to building the generative UI application. The video shows how it is used, together with LangGraph's conditional edges and tools, to call the right tool in response to user input.

💡Python backend

The video uses a Python backend to implement the server-side logic of the generative UI application. The Python code processes data such as user input, images, and chat history, and works with LangChain to provide the right UI component.

💡Next.js frontend

The frontend is built with the Next.js framework, using JSX, the JavaScript syntax extension used by React. The video explains how Next.js renders a dynamic UI based on the data supplied by the Python backend.

💡Tools

In LangChain, tools are functional units that correspond to UI components. The video implements tools that perform specific tasks, such as fetching GitHub information or weather data, and shows how the language model calls them.
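As a hedged illustration of the weather tool mentioned here: the video's version calls a geocoding service that needs an API key, but the exact endpoints are not shown, so this sketch substitutes the free Open-Meteo geocoding and forecast APIs to keep it runnable without a key.

```python
# Illustrative weather tool (Open-Meteo substituted for the video's geocode API).
from typing import Union

import requests
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool


class WeatherInput(BaseModel):
    city: str = Field(..., description="The city name")
    state: str = Field("", description="The US state, if applicable")
    country: str = Field("usa", description="The country, defaults to USA")


@tool("weather-data", args_schema=WeatherInput, return_direct=True)
def weather_data(city: str, state: str = "", country: str = "usa") -> Union[dict, str]:
    """Get the current temperature for a city."""
    try:
        geo = requests.get(
            "https://geocoding-api.open-meteo.com/v1/search",
            params={"name": city, "count": 1},
        ).json()
        place = geo["results"][0]
        forecast = requests.get(
            "https://api.open-meteo.com/v1/forecast",
            params={
                "latitude": place["latitude"],
                "longitude": place["longitude"],
                "current_weather": True,
            },
        ).json()
        return {
            "city": city,
            "state": state,
            "country": country,
            "temperature": forecast["current_weather"]["temperature"],
        }
    except (requests.exceptions.RequestException, KeyError, IndexError) as err:
        print(err)
        return "There was an error fetching the weather."
```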

💡LangGraph

LangGraph is one of LangChain's libraries; it builds the application flow as a graph. The video uses LangGraph to branch conditionally and call tools based on the language model's output.

💡Streaming events

Streaming events is a LangChain feature that streams every event yielded inside a function back to the client in real time. The video uses it to reflect tool calls and language model responses in the UI immediately.

💡Chatbot

A chatbot is an application that converses with the user; in this video it is built as a generative UI application. In response to user input, the chatbot calls the appropriate tool and returns a response.

💡API

API stands for application programming interface, a contract for exchanging data between software systems. The video calls external APIs, such as the GitHub API and a weather API, to fetch data and present it to the user.

💡React components

React components are the building blocks used to construct UIs in React. The video uses them to render a dynamic UI based on the data streamed back from the server.

Highlights

This is the third part of the video series on building generative UI applications with LangChain.

Shows how to build a generative UI chatbot with a Python backend and a Next.js frontend.

If you have not watched the first part, it is worth going back to it, since it covers the high-level concepts and use cases of generative UI.

Walks through the chatbot's architecture diagram, with the Python code on the server side and the Next.js code on the client side.

The server takes the user's input and passes it to the language model (LM), which interacts with the UI components through its bound tools.

Introduces the LangGraph library, which is used to build graphs for the kinds of flows an agent would have been used for previously.

LangGraph lets the flow make some decisions on its own while keeping it fenced within predictable bounds.

Demonstrates streaming events in real time with LangChain's stream events endpoint.

Explains in detail how the LangGraph chain behind the LangServe server is implemented.

Shows how to define the state, the tool parser, and the prompt, and how to bind the tools to the model.
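A hedged sketch of that step, following the names used in the video (the tool import paths are assumptions): build the prompt, bind the tools, invoke the chain, and return either the parsed tool calls or the plain-text result.

```python
# Hedged sketch of the invoke_model node described above.
from langchain_core.messages import AIMessage
from langchain_core.output_parsers.openai_tools import JsonOutputToolsParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI

# The three tools implemented in the video; module paths are assumptions.
from gen_ui_backend.tools.github import github_repo
from gen_ui_backend.tools.invoice import invoice_parser
from gen_ui_backend.tools.weather import weather_data


def invoke_model(state: dict, config: RunnableConfig) -> dict:
    tools_parser = JsonOutputToolsParser()
    prompt = ChatPromptTemplate.from_messages([
        ("system",
         "You are a helpful assistant. You have some tools; determine whether a "
         "tool can handle the user's input, or respond with plain text."),
        MessagesPlaceholder("input"),
    ])
    model = ChatOpenAI(model="gpt-4o", temperature=0, streaming=True)
    model_with_tools = model.bind_tools([github_repo, invoice_parser, weather_data])
    chain = prompt | model_with_tools

    result = chain.invoke({"input": state["input"]}, config)
    if not isinstance(result, AIMessage):
        raise ValueError("Expected the model to return an AIMessage.")

    if isinstance(result.tool_calls, list) and len(result.tool_calls) > 0:
        # The model chose a tool: parse the calls and hand them to invoke_tools.
        return {"tool_calls": tools_parser.invoke(result, config)}
    # No tool was chosen: return the plain-text response.
    return {"result": str(result.content)}
```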

Shows how to implement the conditional edge that calls different nodes depending on whether the model used a tool.

Explains the invoke tools function, which handles the tool call and sends the data back to the client.
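And a hedged sketch of that invoke tools node: a map from tool name to tool, then invoke whichever one the model selected. The map keys must match the names given in the `@tool` decorators; the names and import paths here are assumptions.

```python
# Hedged sketch of the invoke_tools node described above.
from gen_ui_backend.tools.github import github_repo      # module paths assumed
from gen_ui_backend.tools.invoice import invoice_parser
from gen_ui_backend.tools.weather import weather_data


def invoke_tools(state: dict) -> dict:
    tools_map = {
        "github-repo": github_repo,
        "invoice-parser": invoice_parser,
        "weather-data": weather_data,
    }
    if state.get("tool_calls") is None:
        raise ValueError("No tool calls found in the state.")

    # The demo only handles the first tool the model selected.
    tool_call = state["tool_calls"][0]
    selected_tool = tools_map[tool_call["type"]]  # "type" holds the tool's name
    return {"tool_result": selected_tool.invoke(tool_call["args"])}
```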

Provides an example implementation of the GitHub tool, showing how to fetch data from the GitHub API.

Discusses how to implement the LangServe endpoint and how to load environment variables.

Demonstrates how the frontend connects to the backend LangServe API using a remote runnable.

Shows how the client interacts with the backend through the UI chat box.

Introduces the stream runnable UI function, which handles the UI components streamed back from the server.

Finally, demonstrates the whole application running, including interacting with the chatbot and using the tools.

Transcripts

play00:00

what's up everyone it's brace and this

play00:01

is the third video in our generative UI

play00:03

series on building generative UI applications

play00:06

with Lang chain in this video we are

play00:08

going to walk through how to build a

play00:10

generative UI chatbot with a python back

play00:12

end and then a Next.js front end um if

play00:14

you've not seen the first video you

play00:16

should go back and watch that because

play00:18

that's where we cover some high level

play00:19

Concepts like what is generative UI uh

play00:22

some different use cases why it's better

play00:24

than previous methods and then we go

play00:26

into a little bit of detail into the

play00:27

apps we're going to build today um if

play00:29

you're looking for the JavaScript

play00:30

version that's going to be linked in the

play00:31

description that in that video we build

play00:34

the same chatbot that we built here um

play00:37

but we built it with a full JavaScript

play00:39

typescript stack uh this video is going

play00:41

to have a python backend uh but we're

play00:43

still going to be using some JavaScript

play00:45

for the nextjs front end so for a quick

play00:48

refresher if you watch the first video

play00:51

this is the architecture diagram of the

play00:53

chat bot we're going to be building

play00:55

today and we can see we have two

play00:56

distinct sections the server which is

play00:58

where our python code will live and then

play01:00

the client which is where our nextjs

play01:01

code will live uh so the server takes in

play01:04

some inputs some user input any images

play01:06

chat history those then get passed to an

play01:08

LM and the LM has a few tools bound to

play01:11

it these tools all correspond to UI

play01:14

components which we have on the

play01:16

client this LM is then invoked with

play01:19

these tools um it can either select a

play01:21

tool to call if the user's input

play01:23

requires it and if not then the LM will

play01:27

just return plain

play01:28

text um we're going using using Lang

play01:30

graph for our python back end and that's

play01:33

where this conditional Edge goes to um

play01:35

if you have not if you're not familiar

play01:36

with L graph I'm going to add a link

play01:38

somewhere on the screen to our Lang

play01:40

graph playlist where we go into detail

play01:42

on Lang graph and all of its apis um but

play01:45

as a quick refresher we can take a look

play01:47

at this simple diagram L graph is

play01:49

essentially um one of our libraries

play01:53

which you can use to construct graphs um

play01:56

or we like to use them for anything we

play01:58

would have used an agent for in the past so

play02:01

this simple diagram shows you um what

play02:04

kind of what a l graph application

play02:06

consists of so you take an input each of

play02:08

these circles are a node in L graph a

play02:11

node is just a function that gets

play02:12

invoked and some state is passed to it

play02:14

so the question gets passed to the

play02:15

retrieve node um and then at the end of

play02:18

each node so in the beginning all the

play02:20

state or your current state gets passed

play02:22

into the node that could be a list of

play02:24

messages it could be a dictionary with

play02:27

you know five Keys um or whatever you

play02:29

want your State can be really whatever

play02:31

you want so that your state always gets

play02:33

passed into the node and then uh when

play02:35

you return that node you can return in

play02:37

an individual item or the entire State

play02:40

and L graph will just combine um what

play02:43

you returned with the state so if you

play02:44

just returned one item in your

play02:46

dictionary it's just going to replace

play02:48

that field or there's some more

play02:50

complexities you can go into where you

play02:52

can make them like combine or add or you

play02:55

know have a custom function deal with

play02:56

you combining State um but for now we'll

play02:58

just think about it it gets all the

play03:00

state to the input and whatever you

play03:02

return just replaces that field in the

play03:04

state so we have a retrieve node the

play03:06

results of that get then get passed to

play03:07

our grading node uh the results of our

play03:10

grading node get passed to this

play03:11

conditional Edge we also have a

play03:14

conditional edge here um and this

play03:16

conditional Edge essentially says are

play03:18

the documents relevant if they're not

play03:20

relevant or sorry are any docs

play03:23

irrelevant if they're all relevant then

play03:25

it goes right to the generate node and

play03:27

then the generate node returns an answer

play03:29

if they're irrelevant then it gets

play03:31

routed to the rewrite query node the

play03:34

results of the rewrite query node go to

play03:35

the web search and then finally we go

play03:37

back to the generate node and then to

play03:39

the answer so L graph essentially as we

play03:41

can see here allows you to have a series

play03:42

of nodes and then route your application

play03:45

flow between these nodes um without

play03:48

having it be say an agent which could

play03:51

pick any node and it's not very

play03:53

predictable and it could you know go

play03:54

right from retrieve to generate or

play03:56

something um or an llm chain which will

play03:58

always do the same flow so with L graph

play04:01

you're able to construct your graph in a

play04:03

way which it can be somewhat smart and

play04:05

make decisions on its own but it's still

play04:08

somewhat fenced in um so it can't just

play04:11

do whatever it

play04:13

wants so if we go back here we see our

play04:16

llm is our first node that gets invoked

play04:18

and the results of that get passed to

play04:20

our conditional Edge um if no tool was

play04:22

called then we just stream that text

play04:24

right back to the to the UI and as these

play04:26

chunks are coming in then they get

play04:28

rendered on the UI if a tool is used it

play04:30

gets passed to our invoked tool node

play04:32

here you see we stream back the name of

play04:34

the tool that was used we then execute

play04:36

some tool function this is any arbitrary

play04:39

python function in our case it's

play04:41

typically be hitting an API um and then

play04:43

after that we uh invoke our or we return

play04:47

our function results which then get

play04:48

streamed back to the client we're going

play04:51

to be using the stream events endpoint

play04:52

from Lang chain which essentially allows

play04:54

you to stream back every event which is

play04:56

yielded inside of a function in your

play04:58

Lang chain in our case our lane graph

play05:01

graph so one of these events that'll

play05:03

yielded back is the name of the tool we

play05:05

then send that back to the client as

play05:07

soon as it get selected so we can map

play05:09

that to a loading component or some sort

play05:11

of component to let the user know that

play05:13

we're processing the request we've

play05:14

selected this tool um and that gets

play05:16

rendered on the UI right away so instead

play05:17

of having to wait until the entire Lane

play05:19

graph graph uh finishes and we have the

play05:22

results we can select the tool that

play05:24

usually happens pretty quickly and then

play05:26

instantly renders something on the page

play05:27

so the user knows we're working on their

play05:29

request um and has a much quicker

play05:31

time to First interaction then while

play05:33

their loading component is being shown

play05:35

to the user we're executing our tool

play05:37

function in the background and then once

play05:38

the results come in we then stream those

play05:40

back to the client and map our tool to

play05:43

our component and this will then be our

play05:45

final component on our loading component

play05:46

we'll then populate that component with

play05:48

whatever fields are returned from our

play05:50

function and then we update the UI um

play05:53

and this updating and appending the UI

play05:55

process can happen or sorry we can

play05:57

update it or append the UI as many times

play05:59

as we would would like in our case we're

play06:00

only going to update it once and then

play06:03

finish it with a final component uh but

play06:05

you could update and append your UI as

play06:08

many times as you would like let's you

play06:09

you could have some much more complex L

play06:12

graph graph like this where the retrieve

play06:14

node updates the UI and then you let

play06:16

them know you're grading it and then you

play06:17

let them know the result of the

play06:18

conditional Edge um so since we're using

play06:20

stream events we're able to get all

play06:21

those events and render them on the UI

play06:24

as they happen on our server so for our

play06:27

python backend you're going to want to

play06:28

go into the

play06:30

backend folder and then gen UI backend

play06:32

and find the chain.py file this is the

play06:35

file where we will be implementing our

play06:37

laying graph chain um and the first

play06:40

thing you want to do here is Define the

play06:42

state of the chain which can be passed

play06:43

through to each of the

play06:44

nodes so we're going to name our state

play06:47

generative UI State at our Imports uh we

play06:50

will use this AI message later but for

play06:51

now we just need the human message our

play06:53

state contains the input which will be a

play06:56

human message um and that's going to be

play06:58

the user's input

play07:00

it will also contain the result which is

play07:03

optional because this will only be set

play07:04

if the llm calls a string or calls does

play07:08

not call tool and only responds with a

play07:09

string so it's the plain text response

play07:12

of no tool was was used we also have an

play07:14

optional tool calls um list of objects

play07:18

so a list of parse tool calls if the LM

play07:21

does call a tool or tools we're going to

play07:23

parse it and set that value before we

play07:25

invoke the tool and then the result of a

play07:27

tool call if the LM does call A tool

play07:29

we'll call invoke tools and then this

play07:31

will return this tool res result value

play07:34

which will then use on the client to

play07:35

update the chat history so the lmc's are

play07:37

user input and then the result of a tool

play07:39

so it knows it properly processed that

play07:41

tool now we can Implement our create

play07:44

graph function we have not implemented

play07:46

our nodes yet but this will give us an

play07:47

idea about the different nodes and the

play07:50

flow our graph is going to take uh we're

play07:52

going to want to implement or import our

play07:54

state graph and compile graph um this is

play07:59

we're going to use as a type or type

play08:00

hint and this is going to be the state

play08:02

graph we're going to use for l l graph

play08:04

uh as you can see it's pretty simple

play08:05

there's two nodes invoke model which

play08:07

will be this model or this node and then

play08:10

invoke tools which will be here you see

play08:13

we don't have a node for plain text

play08:14

response because this conditional Edge

play08:16

which is this part will essentially say

play08:19

if the model use a tool then call the

play08:22

invoke tools node and if it didn't use a

play08:24

tool it's just going to end and end the

play08:26

graph and send the response back or

play08:27

sorry the result back to the client

play08:30

our entry point is going to be invoke

play08:32

model and our finish point is going to

play08:34

be invoke tools or the end variable

play08:37

which this conditional Edge will return

play08:39

if um no tools were called then we're

play08:42

going to compile the graph and return it

play08:44

and then inside of our Lang serve server

play08:46

file when we import this um this is

play08:48

going to be the runnable which Lang

play08:50

serve can call now that we've defined

play08:53

our graph structure we can Define our

play08:55

first model so that or sorry our first

play08:58

node which is going to be invoked model

play09:00

is going to take in two inputs one for

play09:02

state which is going to be the full

play09:04

generi state that we' defined since this

play09:06

will be the first node that's called it

play09:08

will only have the input um and then pre

play09:11

or nodes that are called after this will

play09:14

have these different state values

play09:16

populated if the model called a tool or

play09:19

return a string or you know whichever

play09:21

one the model uses then we have a config

play09:23

object which will pass to um the llm

play09:26

when we invoke it and then finally it's

play09:28

going to return an instance of generate

play09:31

State and as we see we have total false

play09:34

and that's so we don't have to return

play09:35

all of the different values in this in

play09:38

this uh class now that we defined the

play09:40

structure we can go ahead and Define the

play09:42

first part of our invoke model node

play09:45

we're going to have a tool parser which

play09:46

is a Json output tools parser from the

play09:48

open AI tools output parsers and then a

play09:50

prompt this prompt is going to be pretty

play09:52

simple your helpful assistant you got

play09:54

some tools you need to determine whether

play09:56

or not the tool can hander the US user's

play09:58

input or return plain text and then we

play10:00

have a messages placeholder for the

play10:02

input where the input in chat history

play10:04

will

play10:05

go after defining our tools parser in

play10:08

our prompt we can go and Define our

play10:11

model and all the other tools we will

play10:13

assign to it so we can paste that in as

play10:16

you can see we imported our GitHub repo

play10:19

tool our invoice tool and our weather

play10:21

data tool um we will imp Implement these

play10:23

in a second uh and we've also imported

play10:26

our chat open AI class so we Define our

play10:28

model chat open AI gbt 40 uh temperature

play10:31

zero and streaming is true we then

play10:34

Define our list of tools which is the

play10:36

get a Revo tool invoice parer tool and

play10:38

weather data tool next we're going to

play10:40

bind the tools to the model so we Define

play10:42

a new variable model with tools and then

play10:45

we're binding these tools to the model

play10:47

and finally we use our Lang train we use

play10:50

the Lang chain expression language to

play10:52

pipe the initial prompt all the way to

play10:55

the model with tools and then invoke it

play10:57

passing in our input and our config and

play10:59

we get this result which will either

play11:01

contain the tool calls or it will

play11:03

contain just a plain text

play11:05

response now we can Implement our

play11:07

parsing logic so first we make sure that

play11:10

the result is an instance of AI message

play11:12

it should always do that but we have

play11:15

this checked here just so we get this

play11:17

typed down here um this should in theory

play11:19

never throw then we check to see if

play11:22

result. tool calls is a list and if

play11:25

there are more than zero or if there is

play11:27

a tool call there if a tool call does

play11:29

exist then we're going to parse this

play11:31

tool call passing in our result from the

play11:34

chain. invoke and the config and then

play11:36

we're going to return tool calls with

play11:38

parse tools which will populate this

play11:41

field um if tool calls were not called

play11:44

then we're just going to return the

play11:46

content as a string in the result field

play11:48

which will populate this um and then now

play11:51

we can Implement our add conditional our

play11:53

our conditional Edge which will say if

play11:55

result is defined and and if tool calls

play11:58

are defined then and uh call our invoke

play12:02

tools node which we'll Implement after

play12:03

our conditional Edge so for our invoke

play12:06

tools or

play12:08

return method it takes in the state and

play12:12

Returns the string so if result is in

play12:14

the state and in it is an instance of

play12:16

string which means it would have been

play12:17

defined because we returned it then

play12:19

return end and this end variable is a

play12:21

special variable from from L graph which

play12:24

indicates to L graph to finish and not

play12:26

call any more um nodes it's essentially

play12:29

like setting like calling set finish

play12:31

point but you can dynamically call it

play12:33

because if Ling graph CES returned to n

play12:35

from conditional Edge it's just going to

play12:37

end uh if result is not defined but tool

play12:40

calls are defined and they are in

play12:42

instance of list then return tool calls

play12:44

Lan graph will read this and then it

play12:47

will call the tool tool calls tool

play12:49

invoke tools

play12:50

node in theory this will never happen

play12:53

because we should always either return a

play12:56

string via result or tool calls but we

play12:58

add the this just to make it happy in

play13:00

case there is somehow a weird Edge case

play13:02

where that happens now that we've

play13:04

implemented our conditional Edge we can

play13:06

implement the invoke tools function

play13:09

which will then process or handle

play13:11

invoking these tools and sending the

play13:13

data back to the client where we can

play13:15

process it and send the UI components

play13:17

over to the UI so for the invoked tools

play13:21

function this is somewhat similar to

play13:22

what we saw in the server. TSX file

play13:29

where we're mapping or adding the map

play13:33

tool map here

play13:37

um it basically has a tool map with the

play13:39

same names of the tools and then those

play13:41

tools and we're going to use the state

play13:43

to find the tool that was requested and

play13:45

then we can we can invoke

play13:47

it so what we do after this is we say if

play13:51

tool calls is not none which means that

play13:55

tool calls have been returned here and

play13:56

our conditional Edge called tool calls

play13:58

which which they should never be none um

play14:01

but once again linting issue got to make

play14:03

it happy uh because invoke tools should

play14:05

in theory never be called unless there

play14:06

already an instance of a list uh but

play14:08

yeah we need to make it happy by

play14:10

confirming that they are defined we will

play14:12

then extract the tool from State tool

play14:15

calls and then just the zero with item

play14:17

you could update this to process

play14:19

multiple tools that your language model

play14:20

returns for this demo we're only going

play14:22

to handle a single tool that the

play14:25

language model selects then via our

play14:27

tools map tool. type type is always

play14:29

going to be the name of the tool um we

play14:31

can use our tools map to find the proper

play14:33

tool so now we have our selected tool

play14:35

and then we return tool result with the

play14:37

select tool. invoke with the RX language

play14:40

model supplied and that's going to

play14:42

populate this field and then since tool

play14:46

invoke tools is our finish point the

play14:48

lane graph graph will end now we can

play14:51

Implement our g a repo tool and then

play14:52

I'll just walk you through how the

play14:54

invoice and weather data tool are

play14:55

implemented they're pretty similar to

play14:57

get a Breo um but we'll only implement

play14:59

the GitHub repo tool so in your backend you

play15:02

should navigate to tools

play15:22

github.io input with two Fields owner

play15:25

and repo the owner will be the name of

play15:27

the repository owner and repo is the

play15:29

name of the repository like Lan chain AI

play15:32

Lan graph and these are the fields that

play15:35

the GI of API requires in order to fetch

play15:37

data about a given repo next we're going

play15:40

to want to define the actual tool for a

play15:43

GitHub tool so we can we're going to

play15:45

import tool from Lang chain core. tools

play15:49

so from Lang chain core. tools import

play15:54

tool we're going to add this decorator

play15:56

on top of our GitHub repo um method

play15:59

we're setting the name to get a repo

play16:01

which we also have here obviously so we

play16:04

can map it properly and then the schema

play16:07

for this tool and return direct tool

play16:09

true and then our GI a repo tool takes

play16:12

in the same inputs as here owner and

play16:14

repo and it returns let's add these

play16:18

Imports object and string so now we can

play16:21

implement the core logic here which is

play16:23

going to uh hit the GI of API if it

play16:26

returns an error then we'll return a

play16:27

string and if it does not return return

play16:29

eror we're going to return the data that

play16:30

the API gave us so first things first

play16:33

we'll add our um documentation string

play16:37

and then implement or import OS to get

play16:40

the GI of token from your environment I

play16:42

have a read me in this repo if you want

play16:43

to use the tools that we provided or

play16:45

that we've yeah we've provided in this

play16:46

repo pre-built um you're going to need a

play16:48

GitHub token and then for the weather

play16:50

tool you're going to want this geoc code

play16:52

API key they're all free to get and I've

play16:54

added instructions in the repo on how

play16:56

how to get them but then you should set

play16:58

them in your environment and inside this

play17:00

tool we're going to want to confirm that

play17:01

this token is set before calling the get

play17:03

up

play17:04

API then we will Define our headers with

play17:07

our environment token and the API

play17:10

version and the URL for the GI up API

play17:12

passing in the owner and repo because

play17:15

this is an FST string um and now we can

play17:17

use requests to actually hit this URL

play17:20

and hopefully get back the data from our

play17:22

repo if the user and the LM provided the

play17:24

proper owner and repo for a given

play17:27

repository so what we'll do is we will

play17:30

wrap our request in a try and accept so

play17:35

if an error is thrown we can return a

play17:36

string and just log the error instead of

play17:38

killing the whole thing what this is

play17:40

going to do is it's going to try to make

play17:42

a get request to this URL with these

play17:45

headers raise for status get the data

play17:48

back and then return the owner repo

play17:50

description stars and language this is

play17:52

going to be the owner of the repo the

play17:54

name of the repo description if the

play17:56

description is set how many stars uh are

play17:59

on that repo and then the primary

play18:00

language like python this is the end of

play18:04

the get a repo tool and now we can

play18:05

quickly go and look at the invoice and

play18:07

weather tool as we can see they're

play18:09

pretty much the same the invoice tool

play18:11

has a bit or is much more complex with

play18:14

the schema and that's because um it's

play18:17

going to extract these fields from any

play18:19

image you could upload uh and then it's

play18:21

going to use our pre-built invoice

play18:24

component on the front end to fill out

play18:26

any Fields like you know the line items

play18:29

or the total price um shipping address

play18:31

from an invoice image that you update

play18:33

and then it just returns these

play18:35

fields for the weather tool just going

play18:39

to hit three

play18:41

apis um in order to get the city the

play18:43

weather for your city state country and

play18:46

then today's forecast which is the

play18:48

temperature and then the schema is also

play18:50

simple city state optional countries

play18:53

defaults to USA now that we've Define

play18:55

our tools we can Define our laying serve

play18:58

and end point which we'll use as the

play19:00

backend server endpoint that our front

play19:02

end will actually connect

play19:03

to for the L serve server you're going

play19:06

want to go to your geni backend and then

play19:08

the server.py file and then the first

play19:10

thing we're going to want to do here is

play19:12

load any environment variables using

play19:15

thein um dependency and this will load

play19:18

any enironment variables from your INF

play19:19

file like your open API key or open AI

play19:22

API key your GI up token yada y y now to

play19:26

implement our um fast API for a l serve

play19:29

endpoint if you've ever worked with Lang

play19:31

serve this should be pretty familiar U

play19:33

but we're going to have this start this

play19:35

should be named start start cly does not

play19:38

make much sense um and then we're going

play19:40

to Define new instance of fast API which

play19:43

is going to return this app we're going

play19:44

to give it a title of genui backend and

play19:46

then this is you know just the default

play19:48

for um Lang

play19:51

serve since our backend API is going to

play19:54

be hosted on locally Local Host 8000 and

play19:57

then our front end is Local Host 3,000

play19:59

we need to add some code for Cores so

play20:01

that it can accept our requests um we're

play20:04

going to add this import as

play20:08

well once we've added cores we can go

play20:11

and add our route which is going to

play20:12

contain our runnable which we defined

play20:15

inside of our chain. piy file this

play20:18

create graph

play20:20

function so we will create a new

play20:26

graph add in types so L serve knows what

play20:29

the input and output types are we're

play20:31

going to add a route SL chat it's going

play20:35

to be a chat type and then passing in

play20:37

our runnable in our app this runnable is

play20:39

going to be what's called when you hit

play20:40

the endo and then finally start the

play20:44

server here at Port

play20:47

8000 as you can see we have this chat

play20:50

input type here which is going to define

play20:52

the input type for our chat um so we're

play20:54

going to want to go to back end/ types

play20:56

and Define this type this type is fairly

play20:58

simple it's our chat input type which

play21:01

contains a single input which is a list

play21:03

of human message AI message or system

play21:06

messages and these are going to be our

play21:07

input and chat history um that we are

play21:11

compiling on the client and sending over

play21:13

the API to the back end once this is

play21:16

done your server is finished and you can

play21:18

go to

play21:22

your

play21:23

console and

play21:25

run or poetry Run start and this should

play21:31

start your

play21:33

a that's right we updated that name so

play21:35

we need to update this file as our

play21:38

poetry or Pi Project sorry to instead of

play21:42

trying to call

play21:44

the start cly it should just call start

play21:48

so now but if we go back here and we run

play21:50

po Run start our length serve server has

play21:53

started um and then we can go to our

play21:55

browser and go to localhost 8000 docs and

play21:59

we can see all the a automatically

play22:02

generated Swagger docs for API endpoint

play22:05

and this is the stream events endpoint

play22:06

which we are going to be using now that

play22:08

we've done this we have one thing left

play22:10

to do or which is add the remote

play22:12

runnable to our client so we can connect

play22:15

to this and then using our uh UI chat

play22:18

box which this repo already pre-built

play22:20

out you just clone the repo and you can

play22:21

use that then we can actually start

play22:23

making API requests and check out the

play22:25

demo so for our remote runnable you're

play22:28

want to go back to to the front end

play22:28

directory app and agent. TSX we're then

play22:31

going to import server only because this

play22:33

is should only run in the server and

play22:35

then add our API URL obviously if you're

play22:38

into production this should not be Local

play22:39

Host 8000 but for us in this demo it is

play22:42

and SL chat which is

play22:44

this chat end point we defined here once

play22:48

we've done that we can Define our agent

play22:50

function which takes in some inputs your

play22:52

input your chat history and any images

play22:54

are uploaded and designate this as a

play22:56

server function this is similar to the

play22:59

or this is the inputs we saw here and

play23:01

then we're going to want to create a

play23:02

remote runnable so we'll say const

play23:05

remote runnable equals new remote remote

play23:10

runnable from Lan chain core runnable

play23:12

remote passing in the URL as the API URL

play23:16

here and this is how we will have a

play23:18

runnable that can then connect to our

play23:20

Lang serve API in the back end um but

play23:22

since it's a runable we can use all the

play23:24

nice Lan chain types and invoke and

play23:26

stream events that we implemented in our

play23:29

stream runnable UI function here so this

play23:31

remote runnable is what we'll pass to

play23:33

this function and then we'll call stream

play23:35

events on so now we can import stream

play23:38

runnable

play23:40

UI import stream runnable UI from u/s

play23:44

server and then we can return stream

play23:47

runnable UI with the remote runnable

play23:49

inputs but then we need to also update

play23:51

these inputs to match the proper type

play23:54

that the backend is

play23:56

expecting so we iterate over our chat

play23:59

history creating a new object with a

play24:01

type rooll and content of the content

play24:04

and then finally the input from the user

play24:07

should be type human and content is

play24:09

inputs. input once this is done we'll be

play24:13

able to use this agent function on the

play24:15

client um but first we need to add our

play24:17

or export our context so this is be

play24:20

going to be able to be used so export

play24:25

const ends Point endpoints context

play24:27

equals Expos end points passing our

play24:29

agent and this is using that same

play24:31

function we defined in our server. TSX

play24:33

file which is going to add this agent

play24:35

function to the react context so now in

play24:38

our chat. TSX file which you should use

play24:42

um from the repo and not really updated

play24:45

at all we have our use actions hook

play24:47

passing our end points context which we

play24:50

defined

play24:52

here and then since we're using reacts

play24:55

create context it knows it can call an

play24:57

agent

play24:59

it's then going to push these elements

play25:00

to a new array with the UI that was

play25:03

returned from the uh stream and then

play25:05

finally parse out our invoke model or

play25:10

invoke tools um into the chat history so

play25:14

the LM has the proper chat history this

play25:17

is obviously implementation specific so

play25:18

if you're updating this for your own app

play25:20

with your own um Lang graph back end you

play25:23

should update these to match your nodes

play25:25

and kind of how you want to update your

play25:27

chat history

play25:29

finally we clean up the

play25:30

inputs uh resetting our input text box

play25:33

and any files that were uploaded um and

play25:36

then this is just the jsx which we'll

play25:38

render in our chat poot go to the

play25:40

frontend utils server. TSX file and this

play25:43

is where we will Implement all the code

play25:45

around uh streaming UI components that

play25:48

we get back from the server to the

play25:49

component and calling the servers um

play25:53

runnable via stream

play25:55

events so first thing to do in this file

play25:58

is

play26:01

import import server only and that's

play26:03

going to tell let's say you're using

play26:05

forell forell that this file should only

play26:07

be ran on the server next we are going

play26:09

to implement this with resolvers

play26:11

function um essentially this has a

play26:14

resolve reject function those are then

play26:15

assigned to a resolve and a reject

play26:17

function in a new promise and then it's

play26:19

all returned and we have to TS ignore

play26:21

this because

play26:23

typescript thinks that resolve is being

play26:25

used before it's assigned um and

play26:27

technically in the context of just this

play26:30

function that's correct however we know

play26:33

that we will not use this resolve reject

play26:37

function before we use this promise so

play26:39

in practice this is not the

play26:42

case next we're going to implement this

play26:44

expose endpoints function this is going

play26:46

to take in a generic type which will

play26:47

then be assigned to actions this action

play26:49

in practice will be our lane graph agent

play26:52

which we will then invoke or the remote

play26:54

remote runnable which will call this L

play26:55

graph agent on the server and then it

play26:58

returns um a jsx

play27:00

element this jsx element is going to be

play27:05

a function called AI which takes in

play27:08

children um of type react node so any

play27:10

react node children and then it passes

play27:13

the actions variable here as a prop to

play27:16

the AI provider which we'll look at in a

play27:17

second and then any children and this AI

play27:20

provider is essentially going to use

play27:22

react create context to give context to

play27:25

our children which will be the elements

play27:27

that we are pass passing back to the

play27:28

client and any actions that we want to

play27:32

use on the client which will be our

play27:34

agent action which will then call the

play27:35

server um and it uses reacts create

play27:38

context to give context to these files

play27:41

um if we look inside of

play27:44

our app SL layout. TSX file we see we

play27:48

are also wrapping the page in this end

play27:50

endpoint context variable which we will

play27:52

Implement in just a minute uh now that

play27:55

these two are implemented we can go and

play27:57

implement the the function which will

play27:58

handle actually calling the server

play28:00

calling stream events on that and then

play28:02

processing each of the

play28:04

events so this function is going to be

play28:06

called stream runnable UI we will add

play28:09

our

play28:11

Imports

play28:13

import

play28:15

runnable

play28:16

[Music]

play28:18

from

play28:20

score runnables and then also

play28:24

import it's not getting it import

play28:28

compiled State graph from Lang chain

play28:30

SL Lang graph so our runnable will be

play28:34

our remote runnable which we'll use to

play28:36

hit our server endpoint uh this remote

play28:39

runnable we're going to call stream

play28:40

events on so we get each of the events

play28:42

or all the events that our server

play28:43

streams back and then we're going to

play28:45

have a set of inputs these inputs are

play28:47

going to be things like the user input

play28:49

and chat history which will then pass to

play28:50

a runable when we invoke it the first

play28:53

thing we want to do in this function is

play28:55

create a new streamable UI which we can

play28:57

import this fun function from the aisk

play29:00

this create streamable UI function is

play29:01

what we will use to actually stream back

play29:04

these components from a react server

play29:06

component to the client and then we're

play29:08

going to use our with resolvers function

play29:10

we defined to get our last event and

play29:12

resolve which we will resolve and await

play29:15

a little bit later next we're going to

play29:18

implement this ASN function which we're

play29:20

calling let's add our Imports this has a

play29:24

last event value which we will assign at

play29:26

the end of each stream event we it over

play29:28

so that this will always contain the

play29:30

last event we're then going to use this

play29:31

a little bit later on um after we

play29:34

resolve our promise on the client so we

play29:35

know when the last event is resolved

play29:37

because this function will resolve

play29:40

before add this import this function or

play29:43

this asnc function that is returned will

play29:46

resolve um before the actual API call is

play29:49

finished so we need to assign each of

play29:53

the events to that so that the last

play29:54

event will be in this variable and then

play29:56

when we await our last event will be to

play29:58

access our last event on the client even

play30:00

though the async function would have

play30:01

already

play30:02

resolved we also have this callbacks

play30:05

object which is an object containing a

play30:07

string and then either create runnable

play30:09

UI or sorry create streamable UI or

play30:12

create streamable value this is going to

play30:14

be an object which tracks which streamed

play30:16

events we've um processed already the

play30:19

string will be the ID of that stream

play30:21

event and the return type will be the UI

play30:23

stream which is getting sent back to the

play30:25

client which corresponds to that event

play30:27

so could be a tool call or it could be

play30:30

um just a plain text llm

play30:32

response after this we need to go up

play30:35

above this function and Define two types

play30:37

and then one object map let's add our

play30:40

Imports

play30:42

first why is that

play30:45

deprecated that's because we import it

play30:46

from the wrong place um we need to add

play30:50

this here as well so these are some

play30:52

pre-built components rendering like a

play30:54

GitHub repo card we have a loading

play30:57

component for that as well

play30:58

and then the actual component which

play30:59

takes them props um these are all just

play31:02

normal react components even though

play31:04

we're using them on the react Ser

play31:06

components on the server they're normal

play31:08

react components that'll get streamed

play31:09

back to the client so you can

play31:10

essentially stream back any component

play31:12

that you would build in react and they

play31:14

can have state they can connect apis um

play31:17

and that's kind of what makes this so

play31:18

powerful is you can use actual react

play31:20

components that can have their own life

play31:23

inside of them so you can stream this

play31:24

back to the client you get a new UI

play31:26

component on your client that user or

play31:28

CES and that UI component can be very

play31:30

Dynamic and stateful and

play31:32

whatnot um but those are pre-built and

play31:34

we have this map here tool component map

play31:36

we will use this as our tool component

play31:39

map here so when we get an event back

play31:41

which matches the name of our tool we

play31:43

can then map it to the Loading component

play31:45

and the final component um there will be

play31:47

a different event which we'll Implement

play31:48

in a second which checks if it's a if it

play31:50

should be the loading component get

play31:51

stream back or the final component gets

play31:53

stream back and then you can pass any

play31:55

props to these components

play31:59

now we're going to Define two variables

play32:01

selected tool component and selected

play32:02

tool UI these are going to keep track of

play32:05

the individual component and the UI

play32:08

stream which we've implemented to stream

play32:10

the components back to the client that's

play32:12

because after this we're going to be

play32:13

iterating over stream events and we need

play32:15

these variables to be outside of each

play32:17

event so we have access to them in all

play32:19

the subsequent events after they've

play32:20

already been assigned um but now we can

play32:23

implement the stream

play32:24

events that's just going to call

play32:26

runnable Dost stream events with the V1

play32:28

version passing any inputs this runnable

play32:31

is the same runable that gets passed in

play32:32

here which will be well we'll implement

play32:35

we will implement this in a second but

play32:36

it's essentially going to be a remote

play32:37

runnable function which calls our Lang

play32:40

serve python server um and now we can

play32:42

iterate over all of the stream events

play32:45

and extract the different events that we

play32:47

want to then either update our UI or

play32:50

update these variables or callbacks and

play32:53

whatnot so really quick we're going to

play32:56

extract the output and and the Chunk

play32:58

from our stream event. data and then the

play33:00

type of event which we will use a little

play33:02

bit later

play33:04

on now we're going to implement our

play33:06

handle invoke model event this handles

play33:09

the invoke model event by checking for

play33:10

the tool calls in the output if a tool

play33:12

call is found and no tool component is

play33:13

selected yet it selects the selected

play33:15

tool component based on the tool type

play33:17

and Depends the loading state to the UI

play33:19

so what this is going to do is we will

play33:21

call this if the streamed event is the

play33:24

invoke model um node when we do

play33:27

implement our python backend one of the

play33:29

nodes in our lane graph graph is going

play33:31

to be invoke model and this is the

play33:33

function which is going to process any

play33:35

events streamed after um that invoke

play33:38

model is

play33:41

called now for the body of this function

play33:44

we first check to see if tool calls is

play33:46

in the output um and if output. tool

play33:49

calls length is greater than zero so if

play33:50

there are more than if there are one if

play33:53

there is one tool call then we're going

play33:55

to extract that tool call this is the

play33:57

invoke model so it's going to be this

play33:58

first step um and this conditional node

play34:01

will either return a tool or string if

play34:03

returns a tool then this should get

play34:05

caught we extract that tool and then if

play34:08

these two variables have not been

play34:09

assigned yet then we're going to find

play34:11

the component in the component map

play34:14

create a new stream whe UI passing in

play34:16

the initial value as the loading

play34:19

component for that component and this is

play34:21

going to then update we're then then

play34:23

going to pass the stream streamable ui.

play34:26

value to our

play34:28

our create streamable UI which is

play34:30

getting which is going to get sent back

play34:32

to the

play34:33

client um with the value of our new

play34:36

great streamable

play34:37

UI which will be our loading component

play34:39

for the first

play34:41

event the next function we want to

play34:43

process or sorry the event we want to

play34:45

process is the invoke tools event um

play34:47

we're going to update the selected tools

play34:49

UI with the final State sorry with the

play34:53

final State and Tool result data that

play34:54

will be from this node um and it takes

play34:57

an input handle invoke tools event so

play35:00

now it's going to be pretty similar to

play35:02

this where we're going to take the event

play35:03

of this tool node and update the UI but

play35:06

using these already defined uh

play35:09

variables so if selected tool UI is true

play35:12

and selected tool component are true

play35:14

which they should always be because the

play35:16

invoke tool node should never be called

play35:19

until the invoke model tool is called

play35:21

which we'll see when we pl our python

play35:22

server then we're going to want to get

play35:25

the data from the output here via the

play35:28

tool result and then tool ui. done with

play35:31

the selected component which we assigned

play35:33

here and then the final version of that

play35:36

component passing in any props so for

play35:38

example let's say we have our weather

play35:40

tool it's then going to use the uh UI

play35:44

stream for the weather tool find the

play35:46

final version of that component which is

play35:49

the current weather pass in any props to

play35:51

it and then update that stream and call

play35:54

done to end the stream um updating the

play35:57

weather component that is already being

play35:58

rendered on the

play35:59

UI now the last function we want to

play36:02

implement is going to be handle chat

play36:04

model stream event and that's going to

play36:06

be if the language model just um does

play36:09

not pick a tool and is only stream back

play36:11

text it's going to stream back all of

play36:13

those text Chunk chunks and we're going

play36:15

to want to extract those to then stream

play36:17

them again to our

play36:19

UI so handles the on chat mod stream

play36:21

event by creating a new text stream from

play36:23

the for the AI message if one does not

play36:25

already exist and for the current ID

play36:28

then it pends the chunk to the cont

play36:30

content um and then app pends the chunk

play36:33

content to the corresponding text Stream

play36:35

So the value of this function is going

play36:37

to be this we're going to use our

play36:40

callbacks object here after we add our

play36:44

import and we're going to say if

play36:46

callbacks um if the Run ID for the

play36:49

stream event does not exist in our

play36:50

callback object then create a new text

play36:53

stream we want to create a text stream

play36:55

because this bypasses some um back in

play36:58

that the create runnable UI does uh

play37:01

because we're only stream back text so

play37:02

we create our text stream use our stream

play37:05

or sorry create streamable UI and add

play37:09

our AI message which will look like our

play37:12

you know AI message text bubble and the

play37:15

value of that is going to be the text

play37:16

stream and then we are going to set this

play37:18

callback object with the Run ID to this

play37:21

value of the text stream then if we set

play37:23

that or if it was already set then we're

play37:26

going to check make sure it's it exists

play37:28

and then append any of the content from

play37:31

the Stream So each chunk of the LM

play37:33

streams will be chunk. content and we

play37:35

will append that to our text stream

play37:38

value which will then stream each text

play37:40

and update the UI message as those

play37:43

chunks come in now we've implemented

play37:45

these functions we're going to want to

play37:46

implement our if else statements um on

play37:50

the different stream events so we can

play37:53

get the proper events and up call the

play37:56

the functions which are required for

play37:57

those events so the first one we want to

play38:00

implement is if the type is end so that

play38:03

means if the chain has ended and the

play38:05

type of output as an object we first

play38:07

check to see if the stream event. name

play38:08

is invoke model if it was invoke model

play38:11

then we want to handle the invoke model

play38:12

event passing in the output and if this

play38:15

or if the stream event was invoked tools

play38:17

then we call the invoked tools event

play38:20

makes sense passing in the object the

play38:23

last function we need we need to add an

play38:24

if statement for is the chat model

play38:26

stream so those are not going to be tool

play38:28

nodes instead they're going to be on

play38:30

chunk model streams so we're going to

play38:32

say if the event is on chat model stream

play38:35

the chunk is true and the type of Chunk

play38:37

is an object then handle the chat model

play38:39

stream and then finally at the end of

play38:42

our let me collapse these once we're at

play38:45

the end of our stream event iteration we

play38:48

assign the last of value to the stream

play38:50

event and this is so this value is

play38:52

always going to be the last stream once

play38:55

the stream exits

play38:57

finally we're going to clean all this up

play38:59

so using our resolve function return

play39:01

from our with resolvers we're going to

play39:03

pass in the data. output from the last

play39:06

event so this is going to be the last

play39:08

value from our stream um if it was text

play39:10

it's going to be text if it was the

play39:11

result of a tool it's going to be a tool

play39:13

that data we will set when we Implement

play39:15

our python backend we're then going to

play39:17

iterate over all of our

play39:20

callbacks and call done on each of them

play39:24

which is going to call this stream do

play39:27

sorry stream. even though we're calling

play39:30

UI and that's just so this um create

play39:33

streamable value stream finishes and

play39:36

then call UI Doone and that's for this

play39:40

create streamable UI and it's going to

play39:42

end the stream streaming UI components

play39:44

back to the

play39:46

client finally outside of this async

play39:48

function we're going to want to return

play39:50

the value of our UI stream this is going

play39:52

to be the jsx element which we'll render

play39:54

on the client and then the last event

play39:57

right here which is that promise that we

play39:58

can resolve once our stream events have

play40:02

finished resolving and then get the

play40:03

value of the last event now everything

play40:07

is finished we can go back to our

play40:08

terminal and we can run yarn

play40:11

Dev this will start up a server at

play40:13

localhost 3000 we can go to our UI reload

play40:17

this page and we should see our

play40:19

generative UI application that we just

play40:21

built and we say something like what's

play40:24

the weather in SF

play40:28

send that over boom we get back our

play40:30

loading component it recognized that it

play40:31

was in San Francisco California um as we

play40:34

saw it selected the tool sent that back

play40:36

to the client that was a map to our

play40:37

loading component that was rendered here

play40:39

and then once the weather API was or had

play40:43

resolved it then sent that data back

play40:45

again and it updated this component with

play40:47

the proper data so we can also say

play40:49

something like what's the info on

play40:53

langchain-ai/lang

play40:57

graph we send that over it should select

play40:59

our GitHub tool we saw it was loading

play41:01

for a second and now we have our GitHub

play41:03

um repo component here which has the um

play41:07

description and the language and all the

play41:09

Stars this is you know react component

play41:11

so it's interactable we can click on the

play41:13

star button and it takes us to the L

play41:16

graph repo and we see that the um

play41:19

description and stars all

play41:22

matches so before we finish the last

play41:24

thing I want to do is show you the Lang

play41:26

Smith Trace as we see this is a link

play41:28

serve endpoint / chat it passes in the

play41:31

input the tool calls and then the most

play41:34

recent input as we can see the output

play41:36

contains tool calls and Tool result

play41:38

which we use to update our um chat

play41:41

message history but it calls invoke

play41:43

model as the first node in Lang graph as

play41:46

we can see obviously there's no inputs

play41:47

for these because they have not been

play41:48

called yet um but it does contain the

play41:51

messages input field that then calls our

play41:54

chat model our chat model is Prov

play41:57

provided with some tools it's selected

play41:58

to get a repo tool which is what we want

play42:00

because we asked about to get a repo

play42:02

return the values for that that then got

play42:05

par passed to our output parser and then

play42:08

our invoke tools or return uh

play42:10

conditional Edge which obviously we

play42:13

invoke tools so it's then going to call

play42:14

the invoke tools node which invoked our

play42:17

tool was while it was invoking our tool

play42:20

it was stringing back the name of the

play42:21

tool which we used to send the loading

play42:22

component to the client then after it

play42:24

hit the Gib API it streamed back the fin

play42:27

result of our tool as we can see here

play42:30

and then that on our client was used to

play42:32

um update the component with the final

play42:34

data and then since invoke tools was the

play42:36

last node it finished and that is it for

play42:40

this demo on building Lang graph or

play42:42

sorry generative UI with python and

play42:44

react front end um if you are interested

play42:47

in the types video which is just the

play42:49

same demo as this but with a full

play42:51

typescript app that'll Link in the

play42:53

description and I hope you all have a

play42:54

better understanding of how to build

play42:56

generative applications with LangChain now


Related tags
Generative UI, Python, React, Tutorial, Chatbot, LangChain, API, GitHub, Weather, Interaction