Building a Generative UI App With LangChain.js

LangChain
11 Jun 2024 · 39:23

Summary

TLDR: This video is the second in a three-part series on building generative UI applications with LangChain. The series introduces the concept of generative UI, its use cases, and the high-level architecture of the chatbot. A companion video covers an implementation with a Python-powered backend and a Next.js frontend, while this one walks TypeScript developers through implementing streaming UI components with React Server Components.

Takeaways

  • 😀 This video is the second in a three-part series on building generative UI applications with LangChain.
  • 😀 The first video covered the basic concepts of generative UI, its use cases, and the high-level architecture.
  • 😀 The next video covers a chatbot with a Python-powered backend and a Next.js frontend.
  • 😀 If you are not a TypeScript developer, you can wait for the next video, though some topics overlap, so today's video is still worth watching.
  • 😀 Today's video walks through the internals of a chatbot that sends user input, images, and chat history to the LLM and uses a conditional edge to stream responses back to the client.
  • 😀 A utility server.tsx file is implemented to stream UI components from the server back to the UI using React Server Components.
  • 😀 LangChain's streamEvents endpoint is used to stream all events to the client.
  • 😀 A link is provided to a LangGraph video covering how to build the LangGraph agent used by this chatbot.
  • 😀 Three tools are implemented: one that fetches GitHub repository details, one that extracts invoice details, and one that retrieves weather information.
  • 😀 Finally, the application is started for a demo to confirm the chatbot works correctly.

Q & A

  • What does this video cover?

    - This video explains how to build a generative UI application with LangChain.

  • What concepts were covered in the previous video?

    - The previous video explained what generative UI is, its use cases, and the high-level architecture.

  • What will the next video cover?

    - The next video will cover a Python chatbot backend paired with a Next.js frontend.

  • What technologies are used in this video?

    - This video uses React Server Components, LangChain, and the AI SDK, among other technologies.

  • What happens in the server component?

    - The server component handles streaming UI components and invoking the runnable.

  • What are stream events?

    - Stream events are a way to stream intermediate steps from a LangChain application back to the client.

  • What is the key file for creating the runnable UI?

    - The utils/server.tsx file contains the logic for streaming UI components and invoking the runnable.

  • What is the role of the GitHub repository tool?

    - The GitHub repository tool uses the GitHub API to fetch repository details and display them in the UI.

  • How is chat history managed?

    - Chat history is sent to the LLM along with the user's input and images, and responses are generated using the appropriate tools.

  • What is the purpose of the agent executor function?

    - The agent executor function creates a state graph and defines edges between nodes to manage tool calls and response generation.

Outlines

00:00

😀 Introduction to the second video in the series

In this section, Brace introduces the second video in a three-part series. The series focuses on building generative UI applications with LangChain. The previous video explained the concept of generative UI, its use cases, and the chatbot architecture. The next video, releasing tomorrow, will cover a Python-powered backend with a Next.js frontend. If you are not a TypeScript developer, you can either wait for the next video or keep watching, since topics such as the Next.js frontend and the streaming code overlap.

05:00

🛠️ High-level architecture refresher

This section refreshes the high-level architecture. It explains how the chatbot works: user input and chat history are sent to the language model (LLM), which generates a response. If the response is plain text only, it is streamed back to the client. If a tool is used, the tool is executed and the UI components are updated. The flow is built with LangChain and LangGraph, and concepts such as conditional edges and stream events are introduced.

10:03

📝 Implementing the utils/server.tsx file

This section explains the implementation of the utils/server.tsx file, a React Server Component module containing the logic for streaming UI components and invoking the runnable. The important imports and types are defined, and a withResolvers function is created. This function returns a promise together with its resolve/reject functions, which are used to signal when UI component streaming is complete.
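
For reference, here is a minimal sketch of the withResolvers pattern described above (a userland version of Promise.withResolvers); the tuple shape and the suppressed TypeScript error follow the video's description and are assumptions:

```typescript
export function withResolvers<T>() {
  let resolve: (value: T) => void;
  let reject: (reason?: unknown) => void;
  const innerPromise = new Promise<T>((res, rej) => {
    resolve = res;
    reject = rej;
  });
  // TypeScript cannot prove the Promise executor ran synchronously, so it
  // flags resolve/reject as "used before being assigned"; in practice they
  // are always assigned by the time the caller uses them.
  // @ts-expect-error
  return [innerPromise, resolve, reject] as const;
}
```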

15:04

🌐 Exposing endpoints and streaming UI components

This section explains how the API endpoints are exposed and how UI components are streamed. The exposeEndpoints function provides the proper context to the actions, using the AIProvider React context to supply that context to client-side files. The createRunnableUI and streamRunnableUI functions are also implemented; they handle invoking the runnable and streaming UI components.
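
A minimal sketch of what exposeEndpoints might look like, assuming an AIProvider context component exported from the client.tsx module mentioned above (the prop names are assumptions):

```tsx
import "server-only";
import { type ReactNode } from "react";
import { AIProvider } from "./client"; // assumed client-side React context provider

// Returns a server component that wraps children in the AIProvider, so any
// client component below it can reach the actions through a useActions() hook.
export function exposeEndpoints<T extends Record<string, unknown>>(actions: T) {
  return async function AI(props: { children: ReactNode }) {
    return <AIProvider actions={actions}>{props.children}</AIProvider>;
  };
}
```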

20:05

🔗 Implementing stream events and the runnable UI

This section walks through the detailed implementation of the streamRunnableUI and createRunnableUI functions, which stream UI components back from a runnable and handle tool calls. streamRunnableUI calls streamEvents on the runnable and is responsible for updating the UI and streaming text. createRunnableUI creates a streamable UI and wraps it in a RunnableLambda so it shows up as a stream event.
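
A sketch of createRunnableUI under those assumptions: it wraps the AI SDK's createStreamableUI in a RunnableLambda with a fixed run name so the streamEvents loop can later identify it and pull the UI out of the event stream. The run name and exact signature are assumptions, not a published API:

```tsx
import { type ReactNode } from "react";
import { RunnableLambda, type RunnableConfig } from "@langchain/core/runnables";
import { createStreamableUI } from "ai/rsc";

export const createRunnableUI = async (
  config: RunnableConfig | undefined,
  initialValue?: ReactNode,
): Promise<ReturnType<typeof createStreamableUI>> => {
  const lambda = RunnableLambda.from((init?: ReactNode) =>
    createStreamableUI(init),
  ).withConfig({ runName: "createRunnableUI" }); // stable name for event lookup
  // Invoking with the config ties this run into the same trace/event stream.
  return lambda.invoke(initialValue, config);
};
```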

25:06

🛠️ Implementing tools that use the runnable UI

This section explains how the tools are implemented and how the runnable UI is used. Using the GitHub repository tool as an example, a schema is defined with Zod, the GitHub API is called via Octokit, and UI components are streamed with createRunnableUI. The tools are defined as dynamic structured tools: they call APIs using the parameters extracted by the language model and update the UI components.
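
A condensed sketch of that flow; the Octokit request and Zod schema are real APIs, while createRunnableUI and the GithubLoading/Github components are assumptions carried over from the walkthrough:

```tsx
import { z } from "zod";
import { Octokit } from "octokit";
import { DynamicStructuredTool } from "@langchain/core/tools";

const githubRepoSchema = z.object({
  owner: z.string().describe("The repository owner, e.g. langchain-ai"),
  repo: z.string().describe("The repository name, e.g. langgraphjs"),
});

export const githubTool = new DynamicStructuredTool({
  name: "github_repo",
  description: "A tool to fetch details of a GitHub repository.",
  schema: githubRepoSchema,
  func: async (input, _runManager, config) => {
    // Stream a loading component immediately so the user sees progress.
    const stream = await createRunnableUI(config, <GithubLoading />); // hypothetical helper/component
    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
    const { data } = await octokit.request("GET /repos/{owner}/{repo}", input);
    // Replace the loader with the final card and close the UI stream.
    stream.done(
      <Github {...input} description={data.description ?? ""} stars={data.stargazers_count} />,
    );
    return JSON.stringify(data, null, 2); // string result is folded back into chat history
  },
});
```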

30:08

📈 Introducing the weather and invoice tools

This section gives an overview of the weather and invoice tools. Both call APIs using parameters extracted by the language model and update UI components. The weather tool accepts a city, a state, and an optional country (defaulting to USA), and uses the weather.gov API to fetch current conditions. The invoice tool extracts invoice details from an uploaded image and renders a UI component.
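
A sketch of the weather tool's data path. The weather.gov endpoints shown are the real public ones; geocode is a hypothetical helper standing in for whichever geocoding API you use:

```typescript
import { z } from "zod";

const weatherSchema = z.object({
  city: z.string().describe("The city, e.g. San Francisco"),
  state: z.string().describe("The US state, e.g. California"),
  country: z.string().optional().default("usa"),
});

async function weatherData(input: z.infer<typeof weatherSchema>) {
  // 1) Geocode city/state into coordinates (hypothetical helper).
  const { latitude, longitude } = await geocode(input);
  // 2) Resolve the weather.gov gridpoint for those coordinates...
  const points = await fetch(
    `https://api.weather.gov/points/${latitude},${longitude}`,
  ).then((res) => res.json());
  // 3) ...then fetch the forecast it points to and take the current period.
  const forecast = await fetch(points.properties.forecast).then((res) => res.json());
  return forecast.properties.periods[0];
}
```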

35:08

🔄 Implementing the chat UI and running the demo

The final section implements the chat UI and runs a demo. The chat UI is a client-side component that calls the server-side runnable UI. It provides an interface for chat history, sending messages, and uploading files. The demo exercises the implemented tools: fetching the weather, extracting invoice details, and retrieving GitHub repository information.
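
A sketch of the client-side submit flow inside the chat component; useActions, convertFileToBase64, HumanMessageText, and the state setters are assumptions mirroring the walkthrough:

```tsx
async function onSubmit(input: string) {
  const newElements = [...elements]; // copy so later updates don't interleave
  const base64File = selectedFile ? await convertFileToBase64(selectedFile) : undefined;

  // Call the server action; it returns a streamed UI element plus a promise
  // (lastEvent) that resolves with the final LangGraph state.
  const element = await actions.agent({ input, chat_history: history, file: base64File });

  newElements.push(
    <div key={history.length}>
      <HumanMessageText content={input} />
      {element.ui}
    </div>,
  );

  // Once the run finishes, fold the outcome back into chat history so the
  // model knows a past request was completed (plain text vs. tool result).
  void element.lastEvent.then((lastEvent: any) => {
    if (lastEvent?.invokeModel?.result) {
      setHistory((prev) => [...prev, ["human", input], ["ai", lastEvent.invokeModel.result]]);
    } else if (lastEvent?.invokeTools?.toolResult) {
      setHistory((prev) => [
        ...prev,
        ["human", input],
        ["ai", `Tool result: ${JSON.stringify(lastEvent.invokeTools.toolResult)}`],
      ]);
    }
  });

  setElements(newElements);
  setInput("");
}
```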

Keywords

💡Generative UI

Generative UI refers to user interface components that are created or modified dynamically by a system, typically using machine learning models. In the video, generative UI is the central theme, where the process of building such interfaces using LangChain is explored. The concept is vital as it allows for more responsive and adaptive user interfaces based on user interactions and inputs.

💡LangChain

LangChain is a framework mentioned in the video for building generative UI applications. It enables the creation of dynamic and interactive user interfaces by leveraging language models and other tools. The video series uses LangChain to construct a chatbot with sophisticated backend processes that enhance user experience by dynamically updating UI components.

💡Next.js

Next.js is a React framework used for building server-rendered React applications with support for static site generation. In the video, Next.js is employed to develop the front end of the generative UI application, highlighting its importance in handling the dynamic content rendered by the backend processes.

💡React Server Components

React Server Components are a feature of React that allows developers to render components on the server, which can then be streamed to the client. This concept is crucial in the video as it describes how UI components are generated and sent from the server to the client, ensuring efficient updates and interactions in the generative UI application.

💡AI SDK

AI SDK refers to a software development kit used to integrate artificial intelligence capabilities into applications. In the video, the AI SDK is used to manage the streaming of UI components from the server to the client, facilitating the integration of dynamic content generated by the backend processes.

💡Stream Events

Stream Events are mechanisms used to stream intermediate steps or data from a server to a client in real-time. In the context of the video, Stream Events are utilized to transmit UI components and data dynamically from the LangChain application back to the client, enabling real-time updates and interactions.
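
For context, a minimal sketch of consuming streamEvents from a compiled LangChain/LangGraph runnable; the event names follow the streamEvents v1 schema, and `agent` plus `inputs` are assumed to exist:

```typescript
const eventStream = agent.streamEvents(inputs, { version: "v1" });
for await (const streamEvent of eventStream) {
  if (streamEvent.event === "on_chat_model_stream") {
    // Token-by-token text chunks from the LLM.
    process.stdout.write(String(streamEvent.data.chunk?.content ?? ""));
  } else if (streamEvent.event === "on_chain_end") {
    // A runnable finished; its name and output are available here, which is
    // how the video identifies the UI lambda by its run name.
    console.log(streamEvent.name, streamEvent.data.output);
  }
}
```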

💡Runnable Lambda

Runnable Lambda is a function or component that can be executed to produce or modify UI components dynamically. The video explains how Runnable Lambda functions are created and used within LangChain to handle the streaming and updating of UI components, making the UI adaptive to user interactions.
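
A minimal sketch of a RunnableLambda: it wraps a plain function so the function participates in LangChain tracing and streamEvents like any other runnable, and a runName makes it identifiable in the event stream:

```typescript
import { RunnableLambda } from "@langchain/core/runnables";

const shout = RunnableLambda.from((text: string) => text.toUpperCase())
  .withConfig({ runName: "shout" }); // named so it can be found in stream events

const result = await shout.invoke("hello"); // "HELLO"
```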

💡Conditional Edge

A Conditional Edge is a concept from LangGraph, the graph library used alongside LangChain, that determines the next step in a process based on runtime conditions. The video illustrates how conditional edges decide whether to stream text responses directly to the client or to invoke additional tools for more complex processing, enhancing the chatbot's functionality.
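
A minimal sketch of the conditional edge described above, assuming the state field names from the video's agent (toolCall/result) and a `graph` with invokeModel and invokeTools nodes:

```typescript
import { END } from "@langchain/langgraph";

// After the model node: route to the tools node if a tool was called,
// finish if a plain-text result exists, and otherwise fail loudly.
const invokeToolsOrReturn = (state: { toolCall?: unknown; result?: string }) => {
  if (state.toolCall) return "invokeTools";
  if (state.result) return END;
  throw new Error("Neither toolCall nor result is defined.");
};

graph.addConditionalEdges("invokeModel", invokeToolsOrReturn);
```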

💡Multimodal Inputs

Multimodal Inputs refer to inputs that involve multiple forms of data, such as text and images. In the video, the chatbot is designed to handle multimodal inputs, processing and responding to user queries that may include text and images, demonstrating the flexibility and advanced capabilities of the generative UI application.
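
A minimal sketch of a multimodal (text plus image) message in LangChain.js, matching the base64 image flow the video describes; `base64` is an assumed variable holding the encoded upload:

```typescript
import { HumanMessage } from "@langchain/core/messages";

const message = new HumanMessage({
  content: [
    { type: "text", text: "What is the total on this invoice?" },
    { type: "image_url", image_url: { url: `data:image/png;base64,${base64}` } },
  ],
});
```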

💡Agent Executor

Agent Executor is a component in LangChain that manages the execution of tasks and processes within the application. The video details how the Agent Executor handles the flow of data and interactions, invoking appropriate tools and managing the state of the application to ensure seamless user experiences.
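
A minimal sketch of wiring the two-node graph the video describes with LangGraph.js; the channel definitions, node implementations, and the AgentExecutorState type are elided/assumed, and the constructor style matches the StateGraph API from around the time of the video:

```typescript
import { StateGraph, START, END } from "@langchain/langgraph";

const workflow = new StateGraph<AgentExecutorState>({ channels })
  .addNode("invokeModel", invokeModel)
  .addNode("invokeTools", invokeTools)
  .addEdge(START, "invokeModel")
  .addConditionalEdges("invokeModel", invokeToolsOrReturn)
  .addEdge("invokeTools", END);

export const agentExecutor = workflow.compile();
```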

Highlights

The second video in a three-part series on building generative UI applications

The previous video covered the basic concepts of generative UI, its use cases, and the chatbot architecture

The next video, releasing tomorrow, builds a Python-powered backend with a Next.js frontend

Non-TypeScript developers may want to hold out for the Python-backed Next.js video

Explains how to stream UI components using React Server Components and LangChain

Implements event streaming from the server to the UI using LangChain and LangGraph

Uses the streamEvents endpoint to stream intermediate steps from the LangChain application to the client

Walks through the utils/server.tsx file, which contains the React Server Component logic

Introduces the AI SDK imports and how they are used to stream UI components

Explains the withResolvers function, its promise, and the role of the resolve/reject functions

Shows how the exposeEndpoints function provides the proper context to actions

Explains the createRunnableUI function and how a RunnableLambda streams UI components

Explains the streamRunnableUI function, which invokes the agent and streams UI components

Creates the LangGraph agent and explains the chatbot's internal architecture

Explains the roles and implementations of the invoke model and invoke tools nodes

GitHub tool implementation example: defining a schema with Zod and using the Octokit SDK

Implements the tool logic and streams the API response into UI components

Walks through the chat JSX component: the client-side implementation and how it connects to the server

Demos the application and showcases the features of the chatbot that was built

Announces the Python video and its upcoming release

Transcripts

play00:00

what's up everyone it's brace and this

play00:02

is the second video in a three-part

play00:04

series on building generative UI

play00:05

applications with LangChain if you

play00:07

haven't seen the first video already you

play00:08

should go back and watch that in that

play00:10

video we cover some high-level concepts

play00:12

such as what is generative UI we cover

play00:15

use cases and then also we go over the

play00:18

high-level architecture of how we're going

play00:19

to build this chatbot and then also in

play00:21

the next video which releases tomorrow

play00:23

the python chatbot uh which will be a

play00:26

Python-powered back end and Next.js front

play00:28

end that being said if you are not a

play00:30

typescript developer you should probably

play00:32

hold out for tomorrow's video or you can

play00:35

still watch today's video because some

play00:36

of the topics will overlap with the

play00:38

Next.js front end um and the AI SDK which

play00:41

we will use to power some of the uh code

play00:44

which sends the UI component from the

play00:46

server or in this video's case the react

play00:48

server component back to the UI so as a

play00:51

quick refresher this is the high-level

play00:55

architecture of what the chatbot we're

play00:57

going to be building today will look

play00:59

like on the inside so it takes some

play01:00

inputs user input any images some chat

play01:03

history sends that to an LLM the LLM has

play01:06

some tools bound to it then using that

play01:09

response uh we have this conditional

play01:11

edge if you're familiar with LangGraph you

play01:12

should know what a conditional edge

play01:14

is if not I'm going to link a uh video

play01:17

going over LangGraph which you should

play01:19

watch because that is what we're going

play01:20

to be using to build our LangGraph agent

play01:22

in this video so the conditional Edge

play01:25

says if there's only a plain text

play01:26

response then stream those chunks back

play01:28

to the client and then as this those

play01:30

chunks come in from open AI or whatever

play01:32

model provider we use we render them on

play01:34

the client if a tool is used that gets

play01:37

sent to our invoke tool section where we

play01:39

first stream the initial component back

play01:41

to the UI which could be some sort of

play01:43

loading component or UI element that

play01:47

tells the user hey we have processed your

play01:49

request we're using this tool um and

play01:51

we're going to get back to you in a

play01:52

second so it just allows for a quicker

play01:54

time to First interaction we then can

play01:57

execute some arbitrary tool function

play01:59

which is just a generic typescript

play02:01

JavaScript function so in our case it'll

play02:03

typically be hitting an a an external

play02:04

API and then once we get the response

play02:06

back from that we update our UI

play02:08

component stream with the final

play02:09

component and close it uh you can call

play02:12

update or append as many times you would

play02:14

like to update or append UI components

play02:17

um to the UI your user see

play02:19

sees the way we're able to stream all

play02:22

these events back to the client is via

play02:24

the stream events endpoint in LangChain

play02:26

stream events essentially is a way to

play02:28

stream intermediate steps from your

play02:30

LangChain application back to the

play02:34

client or just send them in a in a

play02:35

response object um since this is all

play02:38

using LangChain and LangGraph stream

play02:40

events is able to access every single um

play02:43

yield that a function might yield so in

play02:46

our case we're going to be yielding UI

play02:47

components we're going to be yielding

play02:48

data and then once again yielding UI

play02:50

components again or yielding text um in

play02:53

stream events is able to just capture

play02:55

all of those streams and then forward

play02:58

them back to the client okay so the

play03:00

first file we're going to want to

play03:01

implement is our utils/server.tsx oh

play03:04

and if you want to follow along the link

play03:06

to this GitHub um repo will be linked in

play03:08

the description you can clone it and

play03:09

then you can go through and read each of

play03:11

the files as we as we code them um the

play03:14

code will all be there in that repo but

play03:16

this first server.tsx file this file is

play03:19

what's going to contain all of our logic

play03:21

around streaming UI components and

play03:23

invoking our runnable so let's add our

play03:27

Imports the first line you can see it's

play03:29

server only the all of the streaming UI

play03:32

components and invoking the runnable

play03:34

will have will happen inside of a react

play03:36

server component so we want to make sure

play03:38

this file is only used on the server

play03:40

next we're going to import react node

play03:42

and is valid element from react we're

play03:43

going to use uh the this for typing as

play03:46

this is a type and then is valid element

play03:49

we're going to use to make sure the

play03:51

elements inside of our stream from our

play03:53

runnable are UI elements before sending

play03:55

them back via the AI SDK which leads us

play03:58

into this next import the AI SDK

play04:00

Imports um we use the AI SDK under the

play04:02

hood to handle streaming back UI

play04:05

components because they do a lot of the

play04:06

heavy lifting with react server

play04:08

components next we have some imports

play04:10

from runnables or sorry from LangChain

play04:13

core runnables uh this will we're going

play04:15

to be using as a type and then also

play04:17

runnable Lambda is what we're going to

play04:19

use to wrap our one of our streaming UI

play04:21

components back so we can um essentially

play04:24

upsert the UI components into the stream

play04:27

event next these are all going to be for

play04:29

types we have our stream event type

play04:32

and then we have our AI provider which

play04:33

is going to provide context we're going

play04:35

to use this to wrap um our children so

play04:37

that all of the UI components have the

play04:40

proper context and then this we're also

play04:42

going to use as a

play04:43

type so the first function we're going

play04:45

to implement is the with resolvers

play04:48

function this function is going to

play04:50

return um a promise and then a resolve

play04:52

and a reject function we're going to use

play04:55

these so that when you are streaming

play04:57

back from the UI component or sorry from

play05:00

when you're streaming the UI component

play05:01

and any other values from our chain

play05:03

we're able to properly resolve the final

play05:06

promise and know when our UI is done um

play05:10

streaming inside of this component we

play05:12

have a resolve reject function a new

play05:15

promise which then assigns the resolve

play05:18

and the reject function to the resolve

play05:20

and reject function from the promise and

play05:22

then we return it we need to expect the

play05:24

error here because typescript does not

play05:26

think that these have been assigned

play05:29

technically that true however the way

play05:31

we're going to use this um we will

play05:33

always use this promise

play05:35

before calling these resolve and reject

play05:38

functions so although typescript thinks

play05:40

they're not used yet in practice we will

play05:43

use them after calling our

play05:46

promise next we're going to implement

play05:49

our expose endpoints function this

play05:51

function is what we're going to use to

play05:53

return the proper actions and provide

play05:55

context to these actions so they can

play05:57

call our agent and then also so our UI

play06:00

components have the proper context we're

play06:02

using this AI

play06:05

provider react context that we imported

play06:07

from client.tsx which is in

play06:10

the same utils file if you're following

play06:13

along and then as we see here we also

play06:15

have this use context hook which uses

play06:18

the use context from react and it's

play06:20

going to provide context to our client

play06:22

side uh files so they can properly

play06:24

invoke the

play06:26

agent here it's pretty simple returns a

play06:28

new function AI which contains an AI

play06:31

provider um react context or JSX

play06:34

component which you saw in the other

play06:36

file the next function we're going to

play06:38

want to implement is the one which will

play06:40

handle streaming back all the UI

play06:42

components so handling this stream

play06:44

handling this stream and then it's what

play06:46

passes the UI components from our tool

play06:49

call functions up into our stream events

play06:52

call so we'll implement the stream

play06:54

events function after this one but this

play06:56

function we're going to implement will

play06:58

handle we will

play07:00

is essentially wrapping the AI SDK and a

play07:02

runnable Lambda and is uh streaming and

play07:05

yielding these UI components so that we

play07:07

can access them inside of our stream

play07:10

events we'll Implement that here it's

play07:12

going to be called create runnable UI

play07:14

takes in two args one is required the

play07:16

second is optional the config argument

play07:18

this is going to be used to we're going

play07:21

to pass in our config config values here

play07:23

and this is so these stream events um

play07:26

when we invoke stream events it's going

play07:27

to have access to this runnable Lambda

play07:30

function next we take in a an optional

play07:34

initial value this gets passed to the

play07:36

create streamable UI function so you see

play07:39

right down here we have our runnable

play07:41

Lambda this Lambda takes in a single

play07:43

input of initial value which should

play07:46

actually be

play07:49

optional um this initial value is going

play07:51

to get passed to the create streamable

play07:53

UI function from the AI SDK and then we're

play07:55

going to return this UI value this UI uh

play07:58

function is going to have a value which

play08:00

is the jsx element this is what we're

play08:02

going to use to actually send back to

play08:04

our client and it's going to contain the

play08:05

stream which can update uh anytime we

play08:08

call update aen error done um and this

play08:11

is what we're going to render on the

play08:13

client we then attach a config so that

play08:16

this runnable Lambda always has the same

play08:18

name stream UI Lambda and this is what

play08:20

we'll use later on to identify this

play08:22

runnable Lambda and extract the UI value

play08:24

from it and then we're going to return

play08:27

us invoking that function passing in the

play08:29

proper config so that using the we pass

play08:32

in the config so that stream events is

play08:34

able to find this runnable Lambda inside

play08:36

the stream

play08:39

event the next function we're going to

play08:40

want to implement is the function which

play08:42

we'll we'll use to invoke our agent this

play08:45

is also going to call stream events and

play08:47

extract any UI values we return from

play08:49

here so when we invoke our agent it's

play08:51

going to be using calling stream events

play08:53

inside of this function so let's paste

play08:56

that in and then let's walk through

play08:58

exactly what this does

play09:00

so it's called stream runnable

play09:02

UI the first line we're creating a new

play09:04

streamable UI from the AI SDK and then using

play09:07

that with resolvers function we

play09:09

implemented below uh the last event

play09:11

which is our promise and then our

play09:13

resolve function we then have an async

play09:16

function which executes in here uh we

play09:18

have some callbacks which contain a

play09:20

string and then either the return type

play09:22

of create streamable UI or create

play09:23

streamable value we'll use those in a

play09:26

second and then as you can see our first

play09:28

input is a runnable

play09:29

we're going to call stream events on

play09:31

this runnable to stream back all the

play09:33

events this runnable will be our

play09:35

agent we iterate over each event and the

play09:38

first thing we do is we check to see if

play09:40

it's a UI value which was returned from

play09:42

this Lambda so as we can see our run

play09:44

name is here we're checking to see if

play09:46

the stream event. name is that run name

play09:49

and it's after that Lambda has finished

play09:51

so on chain end should be the event if

play09:55

it is that event and and runnable Lambda

play09:58

we're then going to check and make sure

play09:59

that the value of the output is a valid

play10:02

UI element if it is then we're going to

play10:05

append that UI element to our UI which

play10:09

we created via the great streamable

play10:11

UI next we're going to check to make

play10:14

sure that it's a stream and it's not a

play10:16

chain um this is what we're going to use

play10:18

to extract any text values from the uh

play10:22

from the our agent or our Ling graph

play10:24

graph that will be this part of our

play10:27

diagram where the LM returns just text

play10:29

I'm going to send it right

play10:31

back so if it

play10:34

is text and type of chunk. text is

play10:38

string we want to make sure we have not

play10:40

already processed this run

play10:43

event because your LLM could in theory

play10:45

invoke a language model twice and get

play10:48

two sets of text streams back so we make

play10:50

sure that we've not already processed it

play10:52

if we haven't then as this comment says

play10:54

the create streamable value / use

play10:56

streamable value is preferred as the

play10:58

stream events are updated immediately in the

play10:59

UI rather than being batched by react

play11:02

via create streamable UI so if we're

play11:04

just updating text we want to use this

play11:06

create streamable value and not create

play11:08

streamable UI then we're going to append

play11:11

our create streamable UI with a generic

play11:16

react or jsx function this could be you

play11:20

know this is customizable for what you

play11:21

want we have ai message text which is

play11:23

going to be our text bubbles but you

play11:24

should probably replace this with

play11:26

whatever UI you want and here you say

play11:28

you see using Create streamable value

play11:31

which we mentioned there and then the

play11:33

value of that is the value of our text

play11:34

stream and we're using text stream so we

play11:36

can bypass any sort of batching that

play11:38

react does and instantly update our UI

play11:40

with the stream as it comes

play11:43

in next if the run ID is true which it

play11:46

is because we just said it we're going

play11:47

to append the text from the stream event

play11:50

chunk this will be the text the language

play11:52

model will stream back and then we're also

play11:54

going to be updating our last event

play11:56

value of the stream event this happens

play11:58

at the end of each stream event so we

play11:59

know that the last event value will

play12:02

always be the last stream event finally

play12:04

when our stream events has resolved

play12:06

we're going to resolve our promise which

play12:09

we implemented in the with resolvers

play12:11

function with the output of our stream

play12:13

event this will typically be some sort

play12:14

of string but it could also be um an

play12:17

object with say our tool

play12:19

call we're then going to iterate over

play12:22

all of our callbacks and call done on

play12:24

them to finish our callbacks and then

play12:26

call UI Doone which is going to close

play12:29

the UI stream between the server and the

play12:31

client via the the aisk finally we're

play12:34

going to return these values in the last

play12:36

event which we will use in our client

play12:38

when we Implement

play12:39

that now that we've implemented our

play12:42

stream runnable UI and create runnable

play12:44

UI we can go ahead and Implement our

play12:46

graph this is going to be this language

play12:48

model graph with the Edge invoking tools

play12:51

or sending back the response we're going

play12:53

to go into the AI graph file and first

play12:56

things first we're going to add our

play12:57

Imports as you can see we're not adding

play12:59

any server-only um text because since this

play13:03

is only going to be used inside of our

play13:06

server-only code here it'll already be

play13:08

server only we're importing some prompt

play13:11

templates um start and end uh variables

play13:14

from LangGraph State graph which is what

play13:16

we're going to use to create our LangGraph

play13:18

chat OpenAI you can obviously

play13:20

replace this with any language model

play13:21

which supports tool calling from the the

play13:23

lane chain Library um our GitHub tools

play13:26

which we'll Implement later or after

play13:29

after this I guess we're going to find

play13:31

our base message which we use for types

play13:33

and then runnable config which we

play13:34

will also use for types once we've added

play13:37

our Imports we're going to want to add

play13:39

our type for our LangGraph agent so we're

play13:42

going to name it agent executor State

play13:44

the first value is a is the input this

play13:46

is going to be the input that the User

play13:48

submitted right here next is chat

play13:51

history once again the chat history from

play13:53

their previous conversations and then we

play13:56

have some optional values these are

play13:57

optional because they're only going to

play13:58

be populated later on in our graph so

play14:01

result the plain text result of the LLM if no

play14:03

tools used that's going to be this part

play14:05

so if the LLM does not invoke a tool and

play14:07

only returns some text that's going to

play14:09

populate this value next is the parse

play14:12

tool result that was called if the LLM

play14:14

does call a tool we're going to parse

play14:16

that and return it that will be pointing

play14:18

to this conditional Edge and then

play14:20

finally the result of a tool that will

play14:22

be anything that this arbitrary function

play14:24

returns um and we're going to uh

play14:26

actually include the result of that

play14:28

because we want to update that in our

play14:30

chat history later on so the language

play14:31

model knows that it did in fact um

play14:34

invoke and complete any tool requests

play14:37

that the user asked

play14:38

for after that we can skip adding our

play14:41

nodes for now because we're want to con

play14:42

we're going to want to construct the uh

play14:45

graph

play14:46

first so we're going to create our agent

play14:48

executor function this is going to

play14:50

create a new state graph passing in our

play14:52

state and then also creating these

play14:54

channels um here we're going to add just

play14:56

two nodes one for invoking the model and

play14:58

one for for invoking the tools invoke

play15:00

model will obviously be this part make

play15:03

that a little bigger and then invoke

play15:05

tools will be what happens when we pick

play15:07

a tool and we want to invoke that tool

play15:10

invoke model will then always call

play15:12

invoke tools or return which is this

play15:14

conditional Edge this conditional Edge

play15:16

will just check to see if the tool is

play15:17

used if it is used then this function

play15:20

will then return invoke tools so invoke

play15:22

tools is called if it's not used then

play15:25

it's going to return end which is this

play15:27

end variable and that indicates to LangGraph

play15:29

that it should always finish and

play15:31

that's what this is where it finishes

play15:32

and sends the response back invoke tools

play15:35

will also then always end so that once

play15:38

the tool is done invoking it returns and

play15:40

finishes the LangGraph graph and responds

play15:44

back to the UI and obviously start is

play15:46

always going to call invoke model

play15:49

because that's the first thing we want

play15:50

to do in all of our graphs we're then

play15:52

going to compile it and return our graph

play15:54

and then we're going to use this a

play15:55

little bit later on when we're passing

play15:57

it to our stream runnable UI

play15:59

and this graph right here will be the

play16:02

runnable that is in is invoked via

play16:05

stream

play16:06

events so if we go back to our graph

play16:08

file the first node we're going to want

play16:10

to implement is the invoke model as

play16:11

that's going to be the first node which

play16:12

is always called so if we paste that in

play16:15

let make this a little bit bigger we can

play16:16

see it takes in two values state which

play16:19

is our agent executor State this is kind

play16:21

of the magic behind LangGraph where

play16:23

it'll always pass the state to every

play16:25

single node even though we're not

play16:28

returning the full state here so LangGraph

play16:30

can recognize what values in your

play16:33

state that you returned it just appends

play16:35

that to the total State and then passes

play16:37

the complete State through to each node

play16:39

so we take in our state and then also

play16:41

our config value this is what we're

play16:43

going to pass to invoke so that out our

play16:45

Lang Smith traces and the stream events

play16:47

all have the same all contain the same

play16:50

runs in the single trace so stream

play16:52

events can get back all the

play16:53

values first thing inside our functions

play16:55

we Define our prompt you're a helpful

play16:57

assistant you're given a list of tools and need

play16:59

to determine which tool is best to

play17:00

handle the user input or respond with

play17:01

plain text we then have a message

play17:03

placeholder for the chat history this is

play17:05

where our chat history will go so the

play17:07

language model has access to all of our

play17:09

history it's obviously going to be

play17:11

optional because there will be no

play17:13

history on the very first invocation and

play17:15

then we have our human message with just

play17:17

a plain input uh

play17:20

argument next we Define our tools we'll

play17:23

Define those after this function but

play17:25

we're going to provide three Tools in

play17:26

language model uh GitHub tool invoice tool

play17:28

and weather tool when we define these

play17:30

tools we'll talk about what each of them

play17:32

do next we're going to Define our llm

play17:35

we're not going to give it we're going

play17:36

to give it a temperature of zero so it's

play17:38

more predictable and not

play17:40

as um creative I guess we're going to

play17:43

use GPT-4o because that's their newest

play17:45

fastest model which can give us text

play17:47

back super quickly and also process our

play17:49

images um and then we're going to bind

play17:51

tools to this model so all of the the

play17:53

model has access to all the tools we've

play17:55

defined next we're going to use the LangChain

play17:58

Expression Language to

play17:59

pipe our prompt to our language model

play18:01

and create a chain and then we're going

play18:02

to invoke our chain passing in our input

play18:05

from the user input and the chat history

play18:07

and then also our config object once

play18:10

this is finished invoking we're going to

play18:12

check to see if any tool calls were on

play18:14

the model or if the model uses any tool

play18:16

calls and if they are we're going to

play18:18

return this tool call Value which will

play18:20

populate this

play18:22

field if the model did not use any tool

play18:24

calls we're just going to return the

play18:26

content now that we've defined our first

play18:29

node we're going to want to Define our

play18:31

conditional Edge which will always be

play18:32

called after this

play18:34

node invoke tools or return this takes

play18:37

in the state and it essentially says if

play18:39

tool calls are defined then you want to

play18:42

call the invoke tools node next and if

play18:44

it's not defined but the result field is

play18:47

defined then we're going to end because

play18:49

that's just the string that was returned

play18:51

and then this should never happen but if

play18:53

for some reason neither of these are

play18:55

defined um then it's going to throw an

play18:57

error but we're never going to get this

play18:59

so that should not cause an issue for us

play19:02

finally we're going to Define our last

play19:03

node invoke

play19:05

tools this takes in the same input

play19:07

arguments as the invoke model our state

play19:10

and our config um and it's going to

play19:12

first check to make sure there's a tool

play19:14

call once again this should never happen

play19:16

because it should only call invoke tools

play19:18

if tool calls are defined but because of

play19:20

typescript we need to add this here but

play19:22

we should never see this error next

play19:24

we're going to Define our tools map

play19:25

which is going to be a map containing

play19:27

key value pairs each each key is going

play19:29

to be the name of the tool and then the

play19:31

value is going to be the actual tool

play19:34

using this map and our tool input we're

play19:37

going to try and find the tool in there

play19:39

once again this should never happen

play19:41

because our language model especially if

play19:43

you're using a um state-of-the-art

play19:45

language model it should never pick a

play19:46

tool which doesn't exist you know that

play19:49

you didn't provide to it um but we have

play19:51

this here once again for typescript once

play19:53

we have our selected tool we're going to

play19:54

then invoke that tool passing the

play19:56

parameters that the tool called for that

play19:58

the language model provided to us in our

play20:00

config and then finally once we get a

play20:03

result back this result will always be a

play20:04

string but usually we're returning an

play20:06

objects we're going to parse that result

play20:08

and then return it in our tool result

play20:10

value which will populate this

play20:13

field now that we've done this we've

play20:15

implemented our entire agent and we can

play20:17

go and Implement our tools which we will

play20:19

then provide to the agent these tools

play20:21

are going to contain all the logic

play20:22

around streaming back UI components and

play20:24

hitting any external

play20:26

apis so for our tools we have this tools

play20:28

folder which contains some files uh get

play20:31

a repo weather and invoice we're just

play20:33

going to implement the get a repo tool

play20:35

and then I'll quickly walk through the

play20:36

other tools because they're all pretty

play20:37

redundant and contain kind of the same

play20:39

logic uh but for for our GitHub repo

play20:41

tool we're going to want to add our

play20:43

Imports first we're going to be using

play20:45

Zod Zod is what we're going to use to

play20:46

define the schema so language model

play20:47

knows what parameters to pass or extract

play20:50

from the input and then pass to our tool

play20:53

um Octokit which is the GitHub API wrapper

play20:57

SDK which is what we're going to use

play20:59

to actually call the GitHub API create

play21:01

runnable UI which we defined in our

play21:02

server.tsx file which is going to wrap

play21:05

that Lambda create a new streamable UI

play21:08

and that's what we're going to use to

play21:09

actually stream back these UI components

play21:12

our tool from LangChain and then our

play21:14

pre-built components which we can

play21:16

quickly look at we have a loading

play21:18

component which just contains some

play21:19

skeletons to show that we're loading and

play21:21

then the actual component which contains

play21:23

a card um this card is going to have a

play21:25

link to the GitHub repo it's going to

play21:27

show how many stars they have the repo

play21:29

description and other things like that

play21:31

which we'll get back from the GitHub

play21:33

API the first thing we're going to want

play21:35

to do is Define our schema our schema is

play21:37

going to be the owner and the repo which are

play21:39

the fields that the GitHub

play21:40

API requires in order to fetch details

play21:42

about a repository um if the language

play21:45

model sees that a user is submitted an

play21:47

input with an owner and repo fields that

play21:49

look like a get of repo it'll likely

play21:51

call this tool and provide us the name

play21:53

of the repository and the repository

play21:56

owner next we're going to define our function

play21:58

which will actually call the GitHub API

play22:00

we're going to call it the GitHub repo

play22:02

tool the input is going to be the type

play22:05

of our Zod schema so z. infer type of

play22:07

our schema and that will infer the type

play22:11

here next we're going to make sure you

play22:13

you have your GitHub API token in your

play22:16

environment if it's not we're going to

play22:17

throw an error obviously if you're

play22:19

going to use this tool you should set

play22:21

that in the readme of this repo I add

play22:22

instructions on how to get all the API

play22:24

Keys you need for free for the different

play22:27

tools um except for your language model

play22:30

API key which will obviously cost money

play22:32

uh when you invoke the language model

play22:34

we're then going to instantiate our Octokit

play22:36

SDK passing in our GitHub token and

play22:38

that's going to return an instance of

play22:40

the GitHub client then we can just call

play22:42

our GitHub client calling the repos um

play22:45

with a get request and that's going to

play22:46

get us the information on this repo and

play22:49

then we return the data from the um from

play22:53

the API response which is the input so

play22:56

we have the owner and the repo and then

play22:57

the get

play22:59

repo description how many stars they

play23:00

have and the primary programming

play23:02

language if there's an error we're just

play23:04

going to return a string and then we'll

play23:05

process this inside of our tool to

play23:08

return either the GitHub final component

play23:11

or an error component if an error

play23:14

occurred now we can Implement our tool

play23:16

it's going to be a dynamic structured

play23:18

tool with a name GitHub repo a

play23:20

description tool to fetch details of

play23:22

GitHub repository um we're going to pass

play23:24

in our schema so language model knows

play23:26

what fields to provide and then we have

play23:28

our function inside this function we see

play23:30

it takes two arguments input which

play23:32

should be the schema we defined and then

play23:35

config we're going to first create a new

play23:37

runnable UI stream passing in

play23:40

our initial value which is going to be

play23:42

this loading component and this is going

play23:44

to tell the user that as soon as the

play23:45

language model picks this tool and this

play23:47

function is invoked it's going to the

play23:48

user is instantly going to get back

play23:50

their

play23:51

first loading component so they see that

play23:53

we're working on something that's this

play23:55

step here next we're going to hit our GitHub

play23:58

API with this function we defined

play23:59

above passing in our input then if they

play24:02

GitHub API returned a string there's an

play24:03

error and we're just going to return um

play24:06

a P tag with the error message calling

play24:09

stream.done if the type was not a string

play24:13

so this object here then we're going to

play24:15

return our GitHub component which is

play24:17

that jsx component which will actually

play24:19

show all the data passing our data and

play24:21

finally we're going to return the result

play24:22

of all result of our tool you can also

play24:26

call stream. update or append as as many

play24:28

times you would like if you want to say

play24:31

hit an API update your tool hit another

play24:34

API add some more values to that UI

play24:36

element you can really call update um

play24:38

and append as many times you would like

play24:40

to keep interacting or updating that

play24:42

interactable UI component with the user

play24:45

the nice thing about this as well is

play24:46

since it takes in a react node our

play24:48

GitHub component these don't have state

play24:51

however they could be stateful they

play24:52

could contain some button which hits an

play24:54

API or makes another language model call

play24:56

which then updates the UI again and

play24:58

they're really just generic react

play24:59

components so everything that you could

play25:01

do before with react components and make

play25:04

them super Dynamic and interactable you

play25:06

can do that here as well because it's

play25:07

any sort of react component you want and

play25:09

you can pass props to it so they can

play25:11

take in these different inputs and be

play25:13

dynamic and customizable for the user so

play25:15

now that we've implemented these tools we can

play25:17

quickly look at our invoice and weather

play25:19

tool invoice is just a schema this is

play25:22

because we're going to want the language

play25:23

model to extract these fields from any

play25:25

uploaded image and then it just creates

play25:28

a own UI with this initial loading which

play25:31

is kind of redundant because then it

play25:32

instantly turns around and um updates it

play25:35

with the final component but just to

play25:37

show the same thing and then it Returns

play25:38

the input and then for the weather

play25:40

component or for the weather tool same

play25:43

thing our weather schema is a city and

play25:44

state and then an optional country which

play25:47

which it defaults to USA and then it

play25:49

hits a couple apis if you want this API

play25:51

key I've added some instructions in the

play25:53

readme on how to get it it's totally free

play25:56

um it's going to get the uh longitude

play25:59

and latitude from this API and then it's

play26:01

going to use the weather.gov API which

play26:03

is free passing in these values and then

play26:06

it's going to extract the current

play26:08

weather for your location the tool also

play26:12

the same thing creates a stream passes

play26:14

back that loading weather component then

play26:16

it invokes our weather data function to

play26:19

actually get the weather data from these

play26:20

apis and finally it updates the weather

play26:23

component with our um with the data that

play26:27

the API returns

play26:30

finally we're going to want to implement

play26:31

our chat jsx component which is going to

play26:33

be the chat you can interact with if you

play26:36

want to follow along you should go to

play26:37

components/prebuilt/chat this is going

play26:39

to be a client component because it

play26:41

is not actually uh using

play26:44

anything in the server instead it's

play26:45

going to call our server component we're

play26:47

going to have a state input which are

play26:49

just some shadcn components that

play26:52

they built uh endpoints endpoints

play26:55

context which we already defined

play26:59

here or sorry we need to Define this

play27:02

after this um our use actions which we

play27:04

saw how that was defined that's going to

play27:05

provide the agent act action for us and

play27:09

then our context which is going to

play27:10

provide context to our action and also

play27:14

um render our UI

play27:16

elements these are just some util

play27:18

functions on converting files to base 64

play27:21

we need to convert them to a base64

play27:23

string on the client because react

play27:25

server components don't allow for any

play27:27

arbitrary function to be passed over

play27:29

the or sorry any arbitrary object to be

play27:32

passed over the wire only specific

play27:33

objects and obviously uh key value pairs

play27:36

where they're both strings and we would

play27:38

need to convert it to base64 in the

play27:39

server anyway so we just do it on the

play27:41

client U because they don't allow for

play27:43

file objects to get passed over the wire

play27:45

so it's now string and we send it over

play27:46

and then we use it to invoke our

play27:48

language

play27:49

model for our chat function we're going

play27:52

to want to add our state variables first

play27:55

our use actions hook providing our

play27:57

endpoint cont text which is our agent

play27:59

takes in our inputs which we also see

play28:01

here this is how we're going to then

play28:03

invoke our agent we then have a few

play28:05

State variables Elements which is a list

play28:08

of jsx elements these are going to be

play28:09

the elements that the UI returns to be

play28:12

rendered uh your chat history chat

play28:15

history input and then a selected file

play28:17

you've selected here we see we're

play28:19

wrapping our elements with this local

play28:20

context.Provider um context JSX uh

play28:24

element and that's going to provide

play28:26

context to our react elements

play28:28

and then we have a simple form as we

play28:30

would see right here just for submitting

play28:33

your inputs and any uh images you might

play28:36

upload next we're going to want to

play28:38

implement our on submit function which

play28:39

is going to actually call our

play28:41

agent so this on submit function does a

play28:44

few things we're going to paste it in

play28:45

and then walk through it so first it

play28:47

makes a copy of our elements array so in

play28:50

case this somehow gets updated later on

play28:52

it's not going to mix and match them so

play28:54

it's always going to be the same at the

play28:56

beginning of the function it's then

play28:57

going to convert your file to base 64

play29:00

string format if you did upload one and

play29:02

then it's going to use our action our

play29:05

agent action to invoke our agent this is

play29:07

going to return an element which is UI

play29:09

and last event we saw that here it's

play29:13

essentially calling this function which

play29:15

returns our UI and last

play29:18

event passes any necessary inputs and

play29:21

then updates our element array with the

play29:23

UI value from our return value from our

play29:26

agent and then also so the human message

play29:29

that the User submitted and then if they

play29:30

uploaded a file the

play29:33

file finally we saw in our server

play29:36

function the first function we

play29:37

implemented was was this with resolvers

play29:39

we're going to use that here so element.

play29:43

last event which once again is

play29:46

the last event here from with from our

play29:50

with resolvers

play29:52

function it's then going to check and

play29:54

see if it's an object this right here is

play29:57

specific to this chatbot we've

play29:59

implemented so you're obviously going to

play30:01

want to update these to reflect your LangGraph

play30:04

node but we have or nodes we have

play30:07

invoke model and invoke tools as we saw

play30:09

we defined in our graph so if it's an

play30:12

object then we're going to want to see

play30:15

if last event. invoke model. result is

play30:17

true that would be this plain text

play30:19

string if it is then we update the

play30:21

history with that plain text and if it's

play30:24

not true that means that tool was used

play30:26

so we see invoke tool

play30:28

and then we update our history um but

play30:30

make the assistant message be this tool

play30:33

result and then the result of our tools

play30:34

and that's so the the assistant knows

play30:36

it's it's successfully completed the

play30:39

request we made in the past using the

play30:42

result of our tool API um and that's so

play30:44

you can say something like uh you know

play30:46

what's the details on this GitHub repo it

play30:49

sends the response it didn't actually

play30:50

give you any text there wouldn't be any

play30:52

text in the chat history but since we

play30:54

are creating this assistant message when

play30:56

you send a followup it's able to see

play30:58

your question and see that it was

play31:00

submitted or resolved and in those it

play31:02

should ignore any questions in your chat

play31:05

history because it's already resolved

play31:06

them here finally we clean up by setting

play31:09

our elements State value and then

play31:12

clearing any

play31:14

inputs finally we can go and Implement

play31:16

our agent wrapper which is going to uh

play31:19

use this stream runnable UI function and

play31:22

call our graph agent executor function

play31:27

um and and that's what we're going to

play31:28

use to invoke our agents let's go

play31:29

Implement that now so if you're

play31:32

following along you should go to app/agent

play31:34

and you're going to want to paste

play31:36

in our Imports once again server only

play31:38

because we're going from this client

play31:40

component to our server so we want to

play31:42

make sure it's only invoked on the

play31:43

server we're then going to import our

play31:45

agent executor from our graph which we

play31:47

built a little while ago and our expose

play31:49

endpoints and stream runnable UI from our

play31:51

server file which we invoked in the

play31:53

first part finally we're going to want

play31:55

to import our AI message and human

play31:57

message uh LangChain message types

play31:59

we're going to use this when we're

play32:00

constructing our chat history to give

play32:02

the proper message types to our um chat

play32:06

history and this will also when we look

play32:08

at the LangGraph trace you'll be able

play32:10

to see human message AI message human

play32:13

message AI message um in the proper

play32:15

types uh so the language model knows what or

play32:18

who said

play32:19

what now we can implement this little

play32:22

util function for converting our chat

play32:24

history type from chat role and

play32:28

content into a proper list of human

play32:31

message and AI message so iterate over

play32:34

if it was a human then return a human

play32:36

message or if the user is AI return an AI

play32:39

message and then by default we're

play32:40

returning a human message this could

play32:42

also be say chat message from LangChain

play32:45

core and this would just be a generic

play32:51

um we also need a role so you could say

play32:54

you

play32:56

know role um and this would just be a

play32:59

generic chat message and it wouldn't be

play33:00

specifically human or AI but in our case

play33:03

this will probably never happen because

play33:04

we know we're always adding this role

play33:06

and assistant but we're adding it just

play33:08

to make our switch case

play33:09

happy next since we're doing image um

play33:13

inputs we need this process file

play33:15

function this is essentially going to

play33:16

process any files you upload and convert

play33:18

them to the proper message type for

play33:21

passing multimodal inputs to our

play33:23

language model so if file is defined

play33:25

it's going to create a new human message

play33:28

um passing into content which is a list

play33:30

of Type image URL and then image URL

play33:33

with our base

play33:34

64 uh conversion of our file and this is

play33:37

the right or this is the proper uh

play33:40

template format for uploading multimodal

play33:42

types to language models I'll add a link

play33:45

in the description to our multimodal

play33:48

how-to guide in the JS documentation if

play33:50

you're interested in that and then

play33:51

finally we return our input and our chat

play33:53

history which matches as we saw the

play33:58

input in chat history from our LangGraph

play34:01

agent if you didn't upload a file then

play34:03

we just need input in chat history and

play34:05

don't need to do anything with image

play34:06

prompt

play34:08

templates now we need to implement our

play34:10

agent function this is going to be the

play34:12

function that when you

play34:14

call where is it actions.agent um this

play34:18

is going to be the function that's going

play34:19

to that it's going to invoke so it's

play34:21

going to take in as we see the same

play34:23

inputs we were passing in here input

play34:25

chat history and file

play34:27

um use server so we make sure that this

play34:29

is always executed on the server as in a

play34:32

react server component we're then going

play34:34

to this does not need to be

play34:38

async um we're then going to process the

play34:41

file to get the process inputs which is

play34:43

input in chat history that's going to do

play34:45

what we saw up here and then finally

play34:47

it's going to return a new stream

play34:48

runnable UI passing in our agent

play34:50

executor which is our LangGraph agent

play34:52

and any inputs um we saw this already

play34:55

but this is just going to call stream

play34:56

events on our runnable which we pass into

play35:00

it which is our agent executor function

play35:03

and the last thing we need to do is

play35:05

actually create uh expose the context

play35:08

for this agent so that our actions.

play35:10

agent um and with our use actions hook

play35:13

has the proper context available to

play35:15

invoke

play35:17

this so as we see here there's an error

play35:20

because exposed context does not exist

play35:22

yet but once we expose our context

play35:25

passing in our agent function and Export

play35:27

this the error goes away and as we

play35:30

see with our actions.agent it has access

play35:34

to our agent function with the same

play35:36

inputs we just defined here we see we're

play35:38

passing expose uh endpoints this agent

play35:42

function with the same inputs and this

play35:44

just gives the proper context to um our

play35:47

CL C client component so it knows that

play35:49

it can invoke this agent now that we've

play35:52

implemented all this we can go and

play35:54

actually demo our application

play35:59

restart

play36:38

go now that we've implemented all this

play36:41

we can go and start our Dev server and

play36:43

actually check out our demo so we're

play36:44

going to want to navigate to your

play36:45

terminal go into the proper directory

play36:47

and run yarn dev or you can build it run

play36:50

yarn start kind of the same once it's

play36:52

running we're going to go to localhost

play36:55

3000 and we will see our website so

play36:58

generative UI with LangChain we see our chat

play37:00

bot and our inputs um we can say

play37:02

something like what's the weather in

play37:07

SF it then streamed back our text if we

play37:10

go back here we can see that the

play37:12

language model decided did not have

play37:14

enough um inputs from the user to select

play37:17

the weather tool or any other any of the

play37:19

other tools so it just sent back some

play37:21

text but since we've implemented chat

play37:23

history it said can you please

play37:25

specify the state so we can just say

play37:28

California and it's going to use our

play37:30

chat history and our current message to

play37:32

then invoke the weather tool and we see

play37:34

right there there's a loading component

play37:36

because it picked the weather tool sent

play37:38

us back our loading component and then

play37:39

went and actually hit the weather apis

play37:41

and then updated this component to show

play37:44

the actual weather we can also say

play37:46

something like what's the deal with my

play37:51

invoice and we can upload an image let's

play37:54

say we upload our receipt we submit that

play37:56

and this is going to use GPT-4o's

play37:58

multimodal capability to read our image

play38:01

and then send us back this nice fully

play38:04

interactable if you implemented this

play38:05

component it wasn't just a demo um uh

play38:08

receipt component or invoice component

play38:11

um and this is all populated with the

play38:13

receipt image I uploaded so it extracted

play38:15

the fields and then passed in the proper

play38:18

properties to this component so it could

play38:20

render and then finally we can check out

play38:22

our GitHub component which we

play38:23

implemented so what the info on

play38:28

langchain-ai/langgraph and our

play38:32

language model is going to be able to

play38:33

recognize that this is a GitHub

play38:35

repo so we submit that and then error

play38:40

doing that so I wonder if I spelled

play38:42

something wrong yes I spelled LangGraph

play38:44

wrong try langchain-ai

play38:48

/langgraph

play38:51

so as we saw there it gave us our

play38:54

loading component but then the GitHub

play38:55

API returned an error so it just

play38:57

responded with this string when I did it

play38:59

properly it gave us the proper component

play39:01

and then obviously it's fully

play39:02

interactable so you can click on this

play39:04

and it'll bring you to the LangGraph

play39:06

repository that's it for this video um

play39:09

if you're interested in the python video

play39:10

it's going to come out tomorrow or if

play39:12

it's already released then we will link

play39:13

in the description where we will

play39:15

implement this exact same uh chatbot

play39:17

demo website but with a full python

play39:20

backend I will see you all in the next

play39:22

video


Related Tags
generative UI, development tutorial, series, React, LangChain, tool calling, server-side, client, API integration, user interface