Building a Generative UI App With LangChain Python
Summary
TLDR: This video shows how to build a generative UI chatbot with a Python backend and a Next.js frontend. It is the third installment in the series; it reviews the concept of generative UI and its benefits, then walks step by step through building the application. You will learn how to use the LangChain and LangGraph libraries to handle user input, images, and chat history, and the video also covers the server/client architecture in detail, including tool calling and UI component updates.
Takeaways
- 🌟 This video is the third in a series on building generative UI applications with LangChain.
- 🔗 It recommends reviewing the previous video, which covers the concept of generative UI, its use cases, and its advantages over earlier approaches.
- 💬 This video explains how to build a generative UI chatbot with a Python backend and a Next.js frontend.
- 🔧 The LangGraph library is used to build the graph and bind the tools that correspond to UI components.
- 📈 LangGraph builds the application flow out of nodes and edges, with a conditional edge deciding whether tools are called.
- 🛠️ The chatbot's architecture is split into two sections, server and client: the server holds the Python code and the client holds the Next.js code.
- 🔄 LangServe runs the Python backend, and the stream events endpoint updates UI components in real time.
- 📝 The video also explains how to implement a GitHub repository tool, an invoice parsing tool, and a weather tool.
- 🌐 On the frontend, React Context is used to process stream events from the server and update the UI dynamically.
- 🔑 Environment variables manage credentials such as the GitHub token and API keys.
- 🎉 Finally, the finished generative UI application is demoed, rendering UI components dynamically in response to user input.
Q & A
What is this video about?
-It explains how to build a generative UI chatbot with a Python backend and a Next.js frontend.
What is generative UI, and why is it better than previous approaches?
-Generative UI is an interface that generates UI components dynamically in response to user input. It is better because it is more flexible than previous approaches and easier to tailor to each user's needs.
What is LangChain?
-LangChain is the library used in the Python backend; together with LangGraph it is used to build the graph that manages the application's flow.
What are the basic building blocks of LangGraph?
-The basic building block of LangGraph is the node: each node is a function that gets invoked with the current state passed to it.
What does the chatbot's architecture diagram show?
-It shows two distinct sections, server and client: the server holds the Python code and the client holds the Next.js code.
What is a conditional edge in LangGraph?
-A conditional edge routes the application flow between different nodes based on a condition.
What is the main file used on the chatbot's server side?
-The main server-side file is the `chain.py` file, where the LangGraph chain is implemented.
How are tools parsed and invoked?
-Tool calls are parsed with a JSON parser for OpenAI tool output. The model then decides whether to call a tool, and the tool is invoked as needed.
How do the client-side React components work with the server-side logic?
-The React components on the client work with the server-side logic, updating the UI in response to tool calls and streamed results.
Can you give an example of how the demo generative UI application behaves?
-In the demo, when the user asks "What's the weather in SF?", the chatbot selects the weather tool, shows a loading component, then fetches data from the API and updates the UI.
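The tool-invocation step described in the answers above can be sketched as a plain Python function. This is a minimal, dependency-free sketch: the tool name, the state shape, and the stubbed weather tool are assumptions for illustration, not the repo's actual code.

```python
# Minimal sketch of the invoke-tools step: look up the selected tool by
# name in a tools map and invoke it with the model-supplied arguments.
def weather_tool(args: dict) -> dict:
    return {"temperature": 72, "city": args["city"]}  # stubbed API result

TOOLS_MAP = {"weather-data": weather_tool}

def invoke_tools(state: dict) -> dict:
    tool_calls = state.get("tool_calls")
    if tool_calls is None:
        raise ValueError("invoke_tools called without any tool calls")
    tool = tool_calls[0]                # handle only the first tool call
    selected = TOOLS_MAP[tool["type"]]  # "type" holds the tool's name
    return {"tool_result": selected(tool["args"])}
```

The result dict is what gets streamed back to the client and mapped to a final UI component.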
Outlines
😀 Overview and series introduction
This video is the third in a series on building UI applications, showing how to build a generative UI application with LangChain. It recaps the generative UI concepts and benefits covered in the previous video and outlines the app being built this time. A link to the JavaScript version of the video is also provided.
🛠️ Application architecture overview
The video presents the chatbot's architecture diagram with its two sections, server and client, explains what each section does, and covers the use of the Python backend and Next.js frontend. It also touches on how LangChain and LangGraph are used and their APIs.
🔄 LangGraph basics and tool binding
Explains the basic structure of LangGraph and the flow of a graph built from nodes. Details how tools are bound, and how the language model selects the tools that correspond to UI components and calls them when needed.
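The node-and-state behavior described in this section can be sketched without LangGraph itself. A minimal sketch, assuming a dict-based state and shallow merging (LangGraph also supports custom reducers for combining updates):

```python
# Sketch of LangGraph's state handling: each node receives the full
# state and returns a partial update, which the graph merges back in,
# replacing only the fields the node returned.
def run_node(state: dict, node) -> dict:
    update = node(state)
    return {**state, **update}

def retrieve(state: dict) -> dict:
    # A stub retrieve node: reads the question, sets only the docs field.
    return {"docs": [f"doc for {state['question']}"]}
```

After `run_node`, the original `question` field is untouched and only `docs` has been added, which is the replace-one-field behavior the video describes.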
🌐 Tool execution and streaming events
Explains how tool calls are handled and how streaming events are used to return data to the client in real time. Also shows how tool execution results are reflected in the UI.
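A minimal sketch of mapping streamed events to UI components, assuming events shaped like those yielded by LangChain's `astream_events` (`on_tool_start` / `on_tool_end`); the component names here are hypothetical:

```python
# Map each tool to a loading component so something renders as soon as
# the tool is selected, then swap in a final component once it finishes.
LOADING_COMPONENTS = {
    "github-repo": "GithubLoading",
    "weather-data": "WeatherLoading",
}

def render_events(events) -> list:
    ui = []
    for event in events:
        if event["event"] == "on_tool_start":
            # Render a loading component immediately for a quick first interaction.
            ui.append(LOADING_COMPONENTS.get(event["name"], "GenericLoading"))
        elif event["event"] == "on_tool_end":
            # Replace the loading state with the final, populated component.
            ui.append(("FinalComponent", event["data"]["output"]))
    return ui
```

This is the pattern the video describes: the tool name streams back almost instantly, so the loading component appears long before the tool's API call completes.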
📝 Building the Python backend
Walks through building the Python backend in detail: defining the state, creating the graph, implementing the nodes, and invoking the language model with its bound tools.
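The state and conditional edge described here could look roughly like the following; the field names follow the video's description, and `END` stands in for LangGraph's sentinel:

```python
from typing import List, Optional, TypedDict

class GenerativeUIState(TypedDict, total=False):
    """State passed between nodes; total=False lets a node return
    only the fields it actually sets."""
    input: object                     # the user's HumanMessage
    result: Optional[str]             # plain-text answer when no tool is called
    tool_calls: Optional[List[dict]]  # parsed tool calls, if any
    tool_result: Optional[dict]       # output of the invoked tool

END = "__end__"  # stand-in for langgraph's END sentinel

def invoke_tools_or_return(state: GenerativeUIState) -> str:
    """Conditional edge: end the graph on a plain-text result,
    otherwise route to the invoke_tools node."""
    if isinstance(state.get("result"), str):
        return END
    if isinstance(state.get("tool_calls"), list):
        return "invoke_tools"
    raise ValueError("Expected either a result string or a list of tool calls")
```

The final branch should never be reached in practice, but, as the transcript notes, it keeps the type checker happy.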
🛠️ Example tool implementation
As a concrete example, shows how to build a tool that fetches GitHub repository information and how it compares with the other tools. Also covers obtaining API keys and importing the libraries the tools need.
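A hedged, stdlib-only sketch of the GitHub repository tool (the video uses the `requests` library and LangChain's `@tool` decorator; both are omitted here to keep the example self-contained, and the `GITHUB_TOKEN` variable name is an assumption):

```python
import json
import os
import urllib.error
import urllib.request

def github_repo(owner: str, repo: str):
    """Fetch basic metadata for a repository, returning a dict on
    success or an error string if the request fails."""
    token = os.environ.get("GITHUB_TOKEN")
    if not token:
        raise ValueError("GITHUB_TOKEN must be set in the environment")
    request = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}",
        headers={
            "Authorization": f"Bearer {token}",
            "X-GitHub-Api-Version": "2022-11-28",
        },
    )
    try:
        with urllib.request.urlopen(request) as response:
            data = json.load(response)
    except urllib.error.URLError as err:
        # Return a string instead of raising, so the graph keeps running.
        return f"There was an error fetching the repository: {err}"
    return {
        "owner": data["owner"]["login"],
        "repo": data["name"],
        "description": data["description"],
        "stars": data["stargazers_count"],
        "language": data["language"],
    }
```

Returning an error string rather than raising mirrors the video's choice: a bad owner/repo pair degrades gracefully instead of killing the whole graph run.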
🌐 Setting up the LangServe server
Shows how to set up the LangServe server and implement the API endpoint with FastAPI, including loading environment variables, adding the runnable route, and starting the server.
📜 Connecting the client and updating the UI
Explains in detail how the client makes API requests to connect to the server and update the UI, including managing data with React Context and processing streamed UI components.
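One concrete piece of that client-server handshake is converting the client's chat state into the input shape the `/chat` endpoint expects: a list of typed messages. The video does this in TypeScript; this Python sketch follows its description, and the tuple-based `chat_history` shape is an assumption for illustration:

```python
# Convert client-side chat state into the ChatInputType the LangServe
# /chat route expects: a list of {type, content} messages.
def to_backend_input(inputs: dict) -> dict:
    messages = [
        {"type": role, "content": content}
        for role, content in inputs.get("chat_history", [])
    ]
    # The new user input is appended last, as a human message.
    messages.append({"type": "human", "content": inputs["input"]})
    return {"input": messages}
```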
🎉 Demo and wrap-up
Finally, demos the finished generative UI application, showing tools being selected in response to user input and the UI updating in real time. Also shows how to visualize the server-side processing flow with a LangSmith trace.
Mindmap
Keywords
💡Generative UI
💡LangChain
💡Python backend
💡Next.js frontend
💡Tools
💡LangGraph
💡Streaming events
💡Chatbot
💡API
💡React components
Highlights
This is the third video in a series on building generative UI applications with LangChain.
Shows how to build a generative UI chatbot with a Python backend and a Next.js frontend.
If you haven't watched the first video, it's worth going back to it, since it covers the high-level concepts and use cases of generative UI.
Presents the chatbot's architecture diagram, with Python code on the server and Next.js code on the client.
The server receives user input and passes it to the language model (LLM), which interacts with UI components through its bound tools.
Introduces the LangGraph library, used to construct graphs for anything an agent might have been used for previously.
LangGraph lets you build flows that can make decisions on their own while staying fenced within predictable bounds.
Demonstrates streaming events in real time with LangChain's stream events endpoint.
Explains in detail how the LangGraph chain is implemented for the LangServe server file.
Shows how to define the state, the tool parser, the prompt, and how to bind tools to the model.
Shows how to implement a conditional edge that calls different nodes depending on whether the model used a tool.
Explains the invoke tools function, which handles tool calls and sends the data back to the client.
Provides an example GitHub tool implementation, showing how to fetch data from the GitHub API.
Discusses implementing the LangServe endpoint and loading environment variables.
Demonstrates connecting the frontend to the backend LangServe API with a remote runnable.
Shows how the client uses the UI chat box to interact with the backend.
Introduces the stream runnable UI function, which handles UI components streamed from the server.
Finally, demonstrates the full application running, including interacting with the chatbot and using the tools.
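The overall flow these highlights describe (model node, then a conditional edge to either a tool node or the end) can be sketched as a minimal, dependency-free graph runner; all node behavior is stubbed and the node names follow the video:

```python
END = "__end__"  # stand-in for LangGraph's END sentinel

def invoke_model(state: dict) -> dict:
    # Stub: pretend the model chose the weather tool for weather questions.
    if "weather" in state["input"].lower():
        return {"tool_calls": [{"type": "weather-data", "args": {"city": "SF"}}]}
    return {"result": "plain text answer"}

def invoke_tools(state: dict) -> dict:
    tool = state["tool_calls"][0]
    return {"tool_result": {"tool": tool["type"], "output": "72F"}}  # stubbed

def route(state: dict) -> str:
    # Conditional edge: plain-text result ends the graph; tools run otherwise.
    return END if isinstance(state.get("result"), str) else "invoke_tools"

def run_graph(user_input: str) -> dict:
    state = {"input": user_input}
    state.update(invoke_model(state))
    if route(state) == "invoke_tools":
        state.update(invoke_tools(state))
    return state
```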
Transcripts
what's up everyone it's Brace and this
is the third video in our generative UI
series on building generative UI applications
with LangChain in this video we are
going to walk through how to build a
generative UI chatbot with a python back
end and then a Next.js front end um if
you've not seen the first video you
should go back and watch that because
that's where we cover some high level
Concepts like what is generative UI uh
some different use cases why it's better
than previous methods and then we go
into a little bit of detail into the
apps we're going to build today um if
you're looking for the JavaScript
version that's going to be linked in the
description that in that video we build
the same chatbot that we built here um
but we built it with a full JavaScript
typescript stack uh this video is going
to have a python backend uh but we're
still going to be using some JavaScript
for the nextjs front end so for a quick
refresher if you watch the first video
this is the architecture diagram of the
chat bot we're going to be building
today and we can see we have two
distinct sections the server which is
where our python code will live and then
the client which is where our nextjs
code will live uh so the server takes in
some inputs some user input any images
chat history those then get passed to an
LM and the LM has a few tools bound to
it these tools all correspond to UI
components which we have on the
client this LM is then invoked with
these tools um it can either select a
tool to call if the user's input
requires it and if not then the LM will
just return plain
text um we're going to be using Lang
Graph for our python back end and that's
where this conditional edge goes to um
if you're not familiar
with LangGraph I'm going to add a link
somewhere on the screen to our Lang
Graph playlist where we go into detail
on LangGraph and all of its apis um but
as a quick refresher we can take a look
at this simple diagram LangGraph is
essentially um one of our libraries
which you can use to construct graphs um
or we like to use them for anything we
would have used an agent for in the past so
this simple diagram shows you um
what a LangGraph application
consists of so you take an input each of
these circles is a node in LangGraph a
node is just a function that gets
invoked and some state is passed to it
so the question gets passed to the
retrieve node um and then at the end of
each node so in the beginning all the
state or your current state gets passed
into the node that could be a list of
messages it could be a dictionary with
you know five Keys um or whatever you
want your State can be really whatever
you want so that your state always gets
passed into the node and then uh when
you return that node you can return in
an individual item or the entire State
and L graph will just combine um what
you returned with the state so if you
just returned one item in your
dictionary it's just going to replace
that field or there's some more
complexities you can go into where you
can make them like combine or add or you
know have a custom function deal with
you combining State um but for now we'll
just think about it it gets all the
state to the input and whatever you
return just replaces that field in the
state so we have a retrieve node the
results of that get then get passed to
our grading node uh the results of our
grading node get passed to this
conditional Edge we also have a
conditional edge here um and this
conditional Edge essentially says are
the documents relevant if they're not
relevant or sorry are any docs
irrelevant if they're all relevant then
it goes right to the generate node and
then the generate node returns an answer
if they're irrelevant then it gets
routed to the rewrite query node the
results of the rewrite query node go to
the web search and then finally we go
back to the generate node and then to
the answer so LangGraph essentially as we
can see here allows you to have a series
of nodes and then route your application
flow between these nodes um without
having it be say an agent which could
pick any node and it's not very
predictable and it could you know go
right from retrieve to generate or
something um or an llm chain which will
always do the same flow so with LangGraph
you're able to construct your graph in a
way which it can be somewhat smart and
make decisions on its own but it's still
somewhat fenced in um so it can't just
do whatever it
wants so if we go back here we see our
llm is our first node that gets invoked
and the results of that get passed to
our conditional Edge um if no tool was
called then we just stream that text
right back to the to the UI and as these
chunks are coming in then they get
rendered on the UI if a tool is used it
gets passed to our invoke tools node
here you see we stream back the name of
the tool that was used we then execute
some tool function this is any arbitrary
python function in our case it's
typically hitting an API um and then
after that we uh invoke our or we return
our function results which then get
streamed back to the client we're going
to be using the stream events endpoint
from Lang chain which essentially allows
you to stream back every event which is
yielded inside of a function in your
LangChain runnable in our case our LangGraph
graph so one of these events that'll be
yielded back is the name of the tool we
then send that back to the client as
soon as it gets selected so we can map
that to a loading component or some sort
of component to let the user know that
we're processing the request we've
selected this tool um and that gets
rendered on the UI right away so instead
of having to wait until the entire LangGraph
graph uh finishes and we have the
results we can select the tool that
usually happens pretty quickly and then
instantly renders something on the page
so the user knows we're working on their
request um and has a much quicker
time to First interaction then while
their loading component is being shown
to the user we're executing our tool
function in the background and then once
the results come in we then stream those
back to the client and map our tool to
our component and this will then be our
final component on our loading component
we'll then populate that component with
whatever fields are returned from our
function and then we update the UI um
and this updating and appending the UI
process can happen or sorry we can
update it or append the UI as many times
as we would like in our case we're
only going to update it once and then
finish it with a final component uh but
you could update and append your UI as
many times as you would like
you could have a much more complex LangGraph
graph like this where the retrieve
node updates the UI and then you let
them know you're grading it and then you
let them know the result of the
conditional Edge um so since we're using
stream events we're able to get all
those events and render them on the UI
as they happen on our server so for our
python backend you're going to want to
go into the
backend folder and then gen UI backend
and find the chain.py file this is the
file where we will be implementing our
LangGraph chain um and the first
thing you want to do here is Define the
state of the chain which can be passed
through to each of the
nodes so we're going to name our state
generative UI State at our Imports uh we
will use this AI message later but for
now we just need the human message our
state contains the input which will be a
human message um and that's going to be
the user's input
it will also contain the result which is
optional because this will only be set
if the llm does
not call a tool and only responds with a
string so it's the plain text response
if no tool was used we also have an
optional tool calls um list of objects
so a list of parse tool calls if the LM
does call a tool or tools we're going to
parse it and set that value before we
invoke the tool and then the result of a
tool call if the LM does call A tool
we'll call invoke tools and then this
will return this tool res result value
which will then use on the client to
update the chat history so the LLM sees the
user input and then the result of a tool
so it knows it properly processed that
tool now we can Implement our create
graph function we have not implemented
our nodes yet but this will give us an
idea about the different nodes and the
flow our graph is going to take uh we're
going to want to implement or import our
state graph and compiled graph um this
we're going to use as a type or type
hint and this is going to be the state
graph we're going to use for LangGraph
uh as you can see it's pretty simple
there's two nodes invoke model which
will be this model or this node and then
invoke tools which will be here you see
we don't have a node for plain text
response because this conditional Edge
which is this part will essentially say
if the model use a tool then call the
invoke tools node and if it didn't use a
tool it's just going to end and end the
graph and send the response back or
sorry the result back to the client
our entry point is going to be invoke
model and our finish point is going to
be invoke tools or the end variable
which this conditional Edge will return
if um no tools were called then we're
going to compile the graph and return it
and then inside of our LangServe server
file when we import this um this is
going to be the runnable which
LangServe can call now that we've defined
our graph structure we can Define our
first model so that or sorry our first
node which is going to be invoke model
is going to take in two inputs one for
state which is going to be the full
generative UI state that we've defined since this
will be the first node that's called it
will only have the input um and then
nodes that are called after this will
have these different state values
populated if the model called a tool or
return a string or you know whichever
one the model uses then we have a config
object which will pass to um the llm
when we invoke it and then finally it's
going to return an instance of generative
UI state and as we see we have total false
and that's so we don't have to return
all of the different values in this in
this uh class now that we defined the
structure we can go ahead and Define the
first part of our invoke model node
we're going to have a tool parser which
is a Json output tools parser from the
open AI tools output parsers and then a
prompt this prompt is going to be pretty
simple you're a helpful assistant you've got
some tools you need to determine whether
or not a tool can handle the user's
input or return plain text and then we
have a messages placeholder for the
input where the input in chat history
will
go after defining our tools parser in
our prompt we can go and Define our
model and all the other tools we will
assign to it so we can paste that in as
you can see we imported our github repo
tool our invoice tool and our weather
data tool um we will implement these
in a second uh and we've also imported
our ChatOpenAI class so we define our
model ChatOpenAI gpt-4o uh temperature
zero and streaming is true we then
define our list of tools which is the
github repo tool invoice parser tool and
weather data tool next we're going to
bind the tools to the model so we Define
a new variable model with tools and then
we're binding these tools to the model
and finally we use
the LangChain expression language to
pipe the initial prompt all the way to
the model with tools and then invoke it
passing in our input and our config and
we get this result which will either
contain the tool calls or it will
contain just a plain text
response now we can Implement our
parsing logic so first we make sure that
the result is an instance of AI message
it should always do that but we have
this checked here just so we get this
typed down here um this should in theory
never throw then we check to see if
result. tool calls is a list and if
there are more than zero or if there is
a tool call there if a tool call does
exist then we're going to parse this
tool call passing in our result from the
chain. invoke and the config and then
we're going to return tool calls with
parse tools which will populate this
field um if tool calls were not called
then we're just going to return the
content as a string in the result field
which will populate this um and then now
we can implement our
conditional edge which will say if
result is defined and if tool calls
are defined then uh call our invoke
tools node which we'll implement after
our conditional edge so for our invoke
tools or
return method it takes in the state and
returns a string so if result is in
the state and it is an instance of
string which means it would have been
defined because we returned it then
return end and this end variable is a
special variable from LangGraph which
indicates to LangGraph to finish and not
call any more um nodes it's essentially
like calling set finish
point but you can dynamically call it
because if LangGraph sees end returned
from a conditional edge it's just going to
end uh if result is not defined but tool
calls are defined and they are an
instance of list then return tool calls
LangGraph will read this and then it
will call the
invoke tools
node in theory this will never happen
because we should always either return a
string via result or tool calls but we
add this just to make it happy in
case there is somehow a weird Edge case
where that happens now that we've
implemented our conditional Edge we can
implement the invoke tools function
which will then process or handle
invoking these tools and sending the
data back to the client where we can
process it and send the UI components
over to the UI so for the invoke tools
function this is somewhat similar to
what we saw in the server.tsx file
where we're mapping or adding the
tool map here
um it basically has a tool map with the
same names of the tools and then those
tools and we're going to use the state
to find the tool that was requested and
then we can invoke
it so what we do after this is we say if
tool calls is not none which means that
tool calls have been returned here and
our conditional edge called tool calls
which they should never be none um
but once again linting issue got to make
it happy uh because invoke tools should
in theory never be called unless they're
already an instance of a list uh but
yeah we need to make it happy by
confirming that they are defined we will
then extract the tool from State tool
calls and then just the zeroth item
you could update this to process
multiple tools that your language model
returns for this demo we're only going
to handle a single tool that the
language model selects then via our
tools map tool.type is always
going to be the name of the tool um we
can use our tools map to find the proper
tool so now we have our selected tool
and then we return tool result with the
selected tool.invoke with the args the language
model supplied and that's going to
populate this field and then since
invoke tools is our finish point the
LangGraph graph will end now we can
implement our github repo tool and then
I'll just walk you through how the
invoice and weather data tool are
implemented they're pretty similar to
the github repo tool um but we'll only implement
the github repo tool so in your backend you
should navigate to tools
github.py and define an input schema with two fields owner
and repo the owner will be the name of
the repository owner and repo is the
name of the repository like langchain-ai
langgraph and these are the fields that
the GitHub API requires in order to fetch
data about a given repo next we're going
to want to define the actual tool for a
GitHub tool so we're going to
import tool from langchain_core.tools
so from langchain_core.tools import
tool we're going to add this decorator
on top of our github repo um method
we're setting the name to github-repo
which we also have here obviously so we
can map it properly and then the schema
for this tool and return direct set to
true and then our github repo tool takes
in the same inputs as here owner and
repo and it returns let's add these
Imports object and string so now we can
implement the core logic here which is
going to uh hit the GitHub API if it
returns an error then we'll return a
string and if it does not return an
error we're going to return the data that
the API gave us so first things first
we'll add our um documentation string
and then import os to get
the GitHub token from your environment I
have a read me in this repo if you want
to use the tools that we provided or
that we've yeah we've provided in this
repo pre-built um you're going to need a
GitHub token and then for the weather
tool you're going to want this geocode
API key they're all free to get and I've
added instructions in the repo on how
how to get them but then you should set
them in your environment and inside this
tool we're going to want to confirm that
this token is set before calling the
GitHub
API then we will define our headers with
our environment token and the API
version and the URL for the GitHub API
passing in the owner and repo because
this is an f-string um and now we can
use requests to actually hit this URL
and hopefully get back the data from our
repo if the user and the LM provided the
proper owner and repo for a given
repository so what we'll do is we will
wrap our request in a try and except so
if an error is thrown we can return a
string and just log the error instead of
killing the whole thing what this is
going to do is it's going to try to make
a get request to this URL with these
headers raise for status get the data
back and then return the owner repo
description stars and language this is
going to be the owner of the repo the
name of the repo description if the
description is set how many stars uh are
on that repo and then the primary
language like python this is the end of
the github repo tool and now we can
quickly go and look at the invoice and
weather tool as we can see they're
pretty much the same the invoice tool
has a bit or is much more complex with
the schema and that's because um it's
going to extract these fields from any
image you could upload uh and then it's
going to use our pre-built invoice
component on the front end to fill out
any Fields like you know the line items
or the total price um shipping address
from an invoice image that you upload
and then it just returns these
fields for the weather tool just going
to hit three
apis um in order to get the city the
weather for your city state country and
then today's forecast which is the
temperature and then the schema is also
simple city state optional countries
defaults to USA now that we've defined
our tools we can define our LangServe
endpoint which we'll use as the
backend server endpoint that our front
end will actually connect
to for the LangServe server you're going to
want to go to your gen UI backend and then
the server.py file and then the first
thing we're going to want to do here is
load any environment variables using
the dotenv um dependency and this will load
any environment variables from your .env
file like your OpenAI
API key your GitHub token yada yada yada now to
implement our um FastAPI for a LangServe
endpoint if you've ever worked with LangServe
this should be pretty familiar um
but we're going to have this start function this
should be named start, start_cli does not
make much sense um and then we're going
to Define new instance of fast API which
is going to return this app we're going
to give it a title of genui backend and
then this is you know just the default
for um Lang
serve since our backend API is going to
be hosted locally on localhost 8000 and
then our front end is localhost 3000
we need to add some code for CORS so
that it can accept our requests um we're
going to add this import as
well once we've added CORS we can go
and add our route which is going to
contain our runnable which we defined
inside of our chain.py file this
create graph
function so we will create a new
graph add in types so LangServe knows what
the input and output types are we're
going to add a route /chat it's going
to be a chat type and then passing in
our runnable in our app this runnable is
going to be what's called when you hit
the endpoint and then finally start the
server here at port
8000 as you can see we have this chat
input type here which is going to define
the input type for our chat um so we're
going to want to go to backend/types
and Define this type this type is fairly
simple it's our chat input type which
contains a single input which is a list
of human message AI message or system
messages and these are going to be our
input and chat history um that we are
compiling on the client and sending over
the API to the back end once this is
done your server is finished and you can
go to
your
console and
run poetry run start and this should
start your
app ah that's right we updated that name so
we need to update this file our
pyproject sorry to instead of
trying to call
start_cli it should just call start
so now if we go back here and we run
poetry run start our LangServe server has
started um and then we can go to our
browser and go to localhost 8000 /docs and
we can see all the automatically
generated Swagger docs for the API endpoint
and this is the stream events endpoint
which we are going to be using now that
we've done this we have one thing left
to do or which is add the remote
runnable to our client so we can connect
to this and then using our uh UI chat
box which this repo already pre-built
out you just clone the repo and you can
use that then we can actually start
making API requests and check out the
demo so for our remote runnable you're going to
want to go back to the front end
directory app and agent.tsx we're then
going to import server only because this
should only run on the server and
then add our API URL obviously if you're
in production this should not be local
host 8000 but for us in this demo it is
and /chat which is
this chat endpoint we defined here once
we've done that we can Define our agent
function which takes in some inputs your
input your chat history and any images
are uploaded and designate this as a
server function this is similar to the
or this is the inputs we saw here and
then we're going to want to create a
remote runnable so we'll say const
remote runnable equals new remote
runnable from langchain core runnables
remote passing in the URL as the API URL
here and this is how we will have a
runnable that can then connect to our
LangServe API in the back end um but
since it's a runnable we can use all the
nice LangChain types and invoke and
stream events that we implemented in our
stream runnable UI function here so this
remote runnable is what we'll pass to
this function and then we'll call stream
events on so now we can import stream
runnable
UI import stream runnable UI from utils
server and then we can return stream
runnable UI with the remote runnable
inputs but then we need to also update
these inputs to match the proper type
that the backend is
expecting so we iterate over our chat
history creating a new object with a
type role and content of the content
and then finally the input from the user
should be type human and content is
inputs. input once this is done we'll be
able to use this agent function on the
client um but first we need to add our
or export our context so this is
going to be able to be used so export
const endpoints context
equals expose endpoints passing our
agent and this is using that same
function we defined in our server.tsx
file which is going to add this agent
function to the react context so now in
our chat.tsx file which you should use
um from the repo and not really updated
at all we have our use actions hook
passing our end points context which we
defined
here and then since we're using reacts
create context it knows it can call an
agent
it's then going to push these elements
to a new array with the UI that was
returned from the uh stream and then
finally parse out our invoke model or
invoke tools um into the chat history so
the LM has the proper chat history this
is obviously implementation specific so
if you're updating this for your own app
with your own um Lang graph back end you
should update these to match your nodes
and kind of how you want to update your
chat history
finally we clean up the
inputs uh resetting our input text box
and any files that were uploaded um and
then this is just the jsx which we'll
render in our chat bot go to the
frontend utils server.tsx file and this
is where we will Implement all the code
around uh streaming UI components that
we get back from the server to the
component and calling the servers um
runnable via stream
events so first thing to do in this file
is
import server only and that's
going to tell let's say you're using
Vercel that this file should only
be run on the server next we are going
to implement this with resolvers
function um essentially this has a
resolve reject function those are then
assigned to a resolve and a reject
function in a new promise and then it's
all returned and we have to TS ignore
this because
typescript thinks that resolve is being
used before it's assigned um and
technically in the context of just this
function that's correct however we know
that we will not use this resolve reject
function before we use this promise so
in practice this is not the
case next we're going to implement this
expose endpoints function this is going
to take in a generic type which will
then be assigned to actions this action
in practice will be our LangGraph agent
which we will then invoke or the remote
runnable which will call this LangGraph
agent on the server and then it
returns um a jsx
element this jsx element is going to be
a function called AI which takes in
children um of type react node so any
react node children and then it passes
the actions variable here as a prop to
the AI provider which we'll look at in a
second and then any children and this AI
provider is essentially going to use
react create context to give context to
our children which will be the elements
that we are passing back to the
client and any actions that we want to
use on the client which will be our
agent action which will then call the
server um and it uses reacts create
context to give context to these files
um if we look inside of
our app/layout.tsx file we see we
are also wrapping the page in this
endpoints context variable which we will
implement in just a minute uh now that
these two are implemented we can go and
implement the function which will
handle actually calling the server
calling stream events on that and then
processing each of the
events so this function is going to be
called stream runnable UI we will add
our
imports
import
runnable
from
langchain core runnables and then also
import it's not getting it import
compiled state graph from langchain
langgraph so our runnable will be
our remote runnable which we'll use to
hit our server endpoint uh this remote
runnable we're going to call stream
events on so we get each of the events
or all the events that our server
streams back and then we're going to
have a set of inputs these inputs are
going to be things like the user input
and chat history which will then pass to
a runable when we invoke it the first
thing we want to do in this function is
create a new streamable UI which we can
import this function from the AI SDK
this create streamable UI function is
what we will use to actually stream back
these components from a react server
component to the client and then we're
going to use our with resolvers function
we defined to get our last event and
resolve which we will resolve and await
a little bit later next we're going to
implement this ASN function which we're
calling let's add our Imports this has a
last event value which we will assign at
the end of each stream event we it over
so that this will always contain the
last event we're then going to use this
a little bit later on um after we
resolve our promise on the client so we
know when the last event is resolved
because this function will resolve
before add this import this function or
this asnc function that is returned will
resolve um before the actual API call is
finished so we need to assign each of
the events to that so that the last
event will be in this variable and then
when we await our last event will be to
access our last event on the client even
though the async function would have
already
resolved we also have this callbacks
object which is an object containing a
string and then either create runnable
UI or sorry create streamable UI or
create streamable value this is going to
be an object which tracks which streamed
events we've um processed already the
string will be the ID of that stream
event and the return type will be the UI
stream which is getting sent back to the
client which corresponds to that event
so could be a tool call or it could be
um just a plain text llm
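The withResolvers helper referenced above is defined earlier in the series rather than shown here. A minimal sketch of what it might look like, mirroring the shape of the now-standard Promise.withResolvers(), is:

```typescript
// Hypothetical sketch of the withResolvers helper: it exposes a promise
// together with its resolve/reject functions, so streamRunnableUI can
// resolve `lastEvent` from outside the promise executor.
function withResolvers<T>() {
  let resolve!: (value: T) => void;
  let reject!: (reason?: unknown) => void;
  const promise = new Promise<T>((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}
```

The non-null assertions (`!`) are safe here because the Promise executor runs synchronously, so both functions are assigned before the return.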
After this, we need to go up above this function and define two types and one object map. Let's add our imports first. Why is that deprecated? That's because we imported it from the wrong place; we need to add this here as well. These are some pre-built components for rendering things like a GitHub repo card: we have a loading component for each, and then the actual component which takes in props. These are all just normal React components. Even though we're using them in React server components, on the server, they're normal React components that will get streamed back to the client, so you can essentially stream back any component you would build in React. They can have state, they can call APIs, and that's what makes this so powerful: you can use real React components that have their own life inside of them. You stream one back to the client, you get a new UI component on your client that the user sees, and that UI component can be dynamic, stateful, and so on.

Those are pre-built, and we have this toolComponentMap, which we will use as our tool component map here: when we get an event back which matches the name of one of our tools, we can map it to the loading component and the final component. There will be a different event, which we'll implement in a second, that checks whether the loading component or the final component should get streamed back, and you can pass any props to these components.
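A simplified sketch of what such a map could look like, with plain string-returning functions standing in for the real React components and illustrative tool names (not necessarily the video's exact identifiers):

```typescript
// Maps a tool name (as streamed back by the server) to its loading and
// final components. Plain functions stand in for React components here;
// the tool names are illustrative.
interface ToolComponent<Props> {
  loading: () => string;
  final: (props: Props) => string;
}

interface GithubProps { repo: string; stars: number }
interface WeatherProps { city: string; temperature: number }

const toolComponentMap: Record<string, ToolComponent<any>> = {
  "github-repo": {
    loading: () => "[GithubLoading]",
    final: (p: GithubProps) => `[Github repo=${p.repo} stars=${p.stars}]`,
  },
  "weather-data": {
    loading: () => "[CurrentWeatherLoading]",
    final: (p: WeatherProps) => `[CurrentWeather city=${p.city} temp=${p.temperature}]`,
  },
};
```

In the real app the values would be JSX components, and the keys must match the tool names streamed back by the Python server.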
Now we're going to define two variables, selectedToolComponent and selectedToolUI. These keep track of the individual component and the UI stream we've created to stream components back to the client. After this we're going to be iterating over stream events, and we need these variables to live outside each event so we have access to them in all subsequent events once they've been assigned. Now we can implement the stream events: that's just going to call runnable.streamEvents with the v1 version, passing in any inputs. This runnable is the same runnable that gets passed in here, which we'll implement in a second; it's essentially going to be a RemoteRunnable which calls our LangServe Python server. Now we can iterate over all of the stream events and extract the ones we want, to then either update our UI or update these variables and callbacks. Really quick, we're going to extract the output and the chunk from our streamEvent.data, and then the type of event, which we will use a little bit later
on. Now we're going to implement our handleInvokeModelEvent. This handles the invoke_model event by checking for tool calls in the output; if a tool call is found and no tool component is selected yet, it selects the tool component based on the tool type and appends the loading state to the UI. We call this if the streamed event is the invoke_model node: when we implement our Python backend, one of the nodes in our LangGraph graph will be invoke_model, and this function processes any events streamed after that invoke_model is called. For the body of this function, we first check whether tool_calls is in the output and output.tool_calls has a length greater than zero, so there is at least one tool call; if so, we extract that tool call. This is the invoke_model node, so it's the first step, and the conditional edge will return either a tool or a string. If it returns a tool, this should get caught: we extract the tool, and if our two variables have not been assigned yet, we find the component in the component map and create a new streamable UI, passing in that component's loading component as the initial value. We then pass the new streamable UI's value to the createStreamableUI stream that is being sent back to the client, so its value is our loading component for the first event.
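The steps just described can be sketched roughly as follows. This is a simplified stand-in, not the video's exact code: the UIStream type fakes createStreamableUI, and the selection object replaces the two outer variables so the sketch stays self-contained.

```typescript
// Sketch of the invoke_model handler: if the model emitted a tool call and
// no tool has been selected yet, pick the matching entry from the component
// map and open a "stream" seeded with the loading component.
interface ToolCall { type: string; [key: string]: unknown }
interface InvokeModelOutput { tool_calls?: ToolCall[] }
interface ToolComponent { loading: () => string; final: (data: unknown) => string }
interface UIStream { value: string }

interface Selection {
  component: ToolComponent | null;
  ui: UIStream | null;
}

function handleInvokeModelEvent(
  output: InvokeModelOutput,
  componentMap: Record<string, ToolComponent>,
  selection: Selection,
  sendToClient: (value: string) => void,
): void {
  // The conditional edge may return a plain string instead of a tool call.
  if (!output.tool_calls || output.tool_calls.length === 0) return;
  const toolCall = output.tool_calls[0];
  if (!selection.component && !selection.ui) {
    selection.component = componentMap[toolCall.type];
    // Real code: createStreamableUI(<LoadingComponent />)
    selection.ui = { value: selection.component.loading() };
    sendToClient(selection.ui.value); // the loading component goes down first
  }
}
```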
The next event we want to process is the invoke_tools event, where we update selectedToolUI with the final state and tool result data coming from this node; the handler is handleInvokeToolsEvent. It's pretty similar to the previous one: we take the event from this tool node and update the UI, but using the already-defined variables. If selectedToolUI and selectedToolComponent are truthy, which they should always be because the invoke_tools node should never be called before the invoke_model node (as we'll see when we build our Python server), then we get the data from the output via the tool result, and call toolUI.done with the final version of the selected component, passing in any props. For example, say we have our weather tool: it will use the UI stream for the weather tool, find the final version of that component, which is the CurrentWeather component, pass any props to it, update that stream, and call done to end the stream, updating the weather component that is already being rendered in the UI.
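A rough sketch of that handler, with UIStream and ToolComponent again as simplified stand-ins for the real ai/rsc types:

```typescript
// Sketch of the invoke_tools handler: once the tool node finishes, close
// out the tool's UI stream with the final component rendered from the
// tool result data.
interface UIStream { done: (finalValue: string) => void }
interface ToolComponent { final: (data: unknown) => string }

function handleInvokeToolsEvent(
  output: { tool_result: unknown },
  selectedToolUI: UIStream | null,
  selectedToolComponent: ToolComponent | null,
): void {
  // Both should always be set: invoke_tools only runs after invoke_model.
  if (selectedToolUI && selectedToolComponent) {
    const toolData = output.tool_result;
    // Swap the loading component for the final one and end the stream.
    selectedToolUI.done(selectedToolComponent.final(toolData));
  }
}
```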
The last function we want to implement is handleChatModelStreamEvent, for when the language model does not pick a tool and only streams back text: it streams back all of those text chunks, and we want to extract them and stream them on to our UI. This handles the on_chat_model_stream event by creating a new text stream for the AI message if one does not already exist for the current run ID, and then appending the chunk content to the corresponding text stream. The body of this function uses our callbacks object (after we add our import): if the run ID for the stream event does not exist in the callbacks object, we create a new text stream. We want a text stream because it bypasses some of the overhead that createStreamableUI has; since we're only streaming back text, we create our text stream, use createStreamableUI to add our AI message, which will look like an AI message text bubble whose value is the text stream, and then set the callbacks entry for that run ID to the text stream. Then, whether we just set it or it was already set, we check that it exists and append any content from the stream: each chunk the LLM streams will be chunk.content, and we append that to our text stream value, which streams each piece of text and updates the UI message as the chunks come in. Now we've implemented
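The per-run-ID bookkeeping can be sketched like this; TextStream is a stand-in for createStreamableValue, and the real code also sends an AI-message bubble wrapping the stream to the UI when the stream is first created.

```typescript
// Sketch of the on_chat_model_stream handler: lazily create one text
// stream per run ID in the callbacks map, then append each chunk's
// content to it.
interface TextStream {
  value: string;
  append: (text: string) => void;
}

function handleChatModelStreamEvent(
  runId: string,
  chunk: { content: string },
  callbacks: Record<string, TextStream>,
): void {
  if (!callbacks[runId]) {
    // First chunk for this run: open the text stream.
    const stream: TextStream = {
      value: "",
      append(text: string) {
        this.value += text;
      },
    };
    callbacks[runId] = stream;
  }
  callbacks[runId].append(chunk.content);
}
```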
these functions, so we want to implement our if/else statements on the different stream events, so we can catch the proper events and call the functions required for each. The first one: if the type is end, meaning the chain has ended, and the type of output is an object, we first check whether streamEvent.name is invoke_model; if it was, we handle the invoke_model event, passing in the output. If the stream event was invoke_tools, we call the invoke_tools handler, again passing in the output. The last if statement we need to add is for the chat model stream: those are not tool nodes, they're on_chat_model_stream events, so we say: if the event is on_chat_model_stream, the chunk is truthy, and the type of chunk is an object, then handle the chat model stream. Finally (let me collapse these), at the end of our stream event iteration we assign the stream event to lastEventValue, so that this variable always holds the last event once the stream exits.
Finally, we're going to clean all this up. Using the resolve function returned from withResolvers, we pass in the data.output from the last event; this is the last value from our stream. If it was text, it's going to be text; if it was the result of a tool, it's going to be that tool's data, which we'll set when we implement our Python backend. We then iterate over all of our callbacks and call done on each of them: stream.done so each createStreamableValue stream finishes, and ui.done for the createStreamableUI, which ends the stream of UI components going back to the client.
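That cleanup can be sketched as a small function; Closeable is a stand-in for the done() method on the ai/rsc streams.

```typescript
// Sketch of the cleanup step: resolve the lastEvent promise with the final
// output, then close every per-run stream and the top-level UI stream.
interface Closeable { done: () => void }

function finishStreams(
  resolve: (value: unknown) => void,
  lastEvent: { data?: { output?: unknown } },
  callbacks: Record<string, Closeable>,
  ui: Closeable,
): void {
  // Text output, or the tool's result data set by the Python backend.
  resolve(lastEvent.data?.output);
  for (const stream of Object.values(callbacks)) {
    stream.done(); // finish each createStreamableValue
  }
  ui.done(); // finish the createStreamableUI sending components to the client
}
```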
Finally, outside of this async function, we return the value of our UI stream, which is the JSX element we'll render on the client, together with the lastEvent promise, which we can resolve once our stream events have finished to get the value of the last event. Now everything
is finished, so we can go back to our terminal and run yarn dev. This starts up a server at localhost:3000; we go to our UI, reload the page, and we should see the generative UI application we just built. We say something like "what's the weather in SF" and send that over, and boom, we get back our loading component. It recognized that it was San Francisco, California; as we saw, it selected the tool and sent that back to the client, which mapped it to our loading component that was rendered here. Then, once the weather API had resolved, it sent that data back again and updated this component with the proper data. We can also say something like "what's the info on langchain-ai/langgraph". We send that over and it should select our GitHub tool: we saw it loading for a second, and now we have our GitHub repo component here, with the description, the language, and all the stars. This is a real React component, so it's interactive; we can click on the star button and it takes us to the LangGraph repo, and we see that the description and stars all match. Before we finish, the last
thing I want to do is show you the LangSmith trace. As we can see, this is a LangServe endpoint, /chat; it passes in the input, the tool calls, and the most recent input, and the output contains the tool calls and tool result, which we use to update our chat message history. It calls invoke_model as the first node in LangGraph; obviously there are no inputs for these yet because they have not been called, but it does contain the messages input field. That then calls our chat model, which is provided with some tools; it selected the get-repo tool, which is what we want because we asked about a repo, and returned the values for that. Those then got passed to our output parser, and then to our invoke-tools-or-return conditional edge; since we do invoke tools, it then calls the invoke_tools node, which invoked our tool. While it was invoking our tool, it was streaming back the name of the tool, which we used to send the loading component to the client; then, after it hit the GitHub API, it streamed back the final result of our tool, as we can see here, and on our client that was used to update the component with the final data. Since invoke_tools was the last node, it finished.

And that is it for this demo on building generative UI with a Python backend and React frontend. If you're interested in the TypeScript video, which is the same demo but as a full TypeScript app, there'll be a link in the description. I hope you all have a better understanding of how to build generative UI applications with LangChain now.