Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!
Summary
TLDR: This video tutorial shows viewers how to build an advanced AI agent that works with multiple language models (LLMs). It first outlines the required tools and technologies, including Python, LlamaIndex, and Ollama, the latter of which lets you run open-source LLMs on your local machine. It then walks step by step through creating an agent that can read PDF documents and Python code files and generate code from them. It also covers using LlamaParse to improve parsing accuracy for complex documents such as PDFs. Finally, an interactive loop lets the user enter prompts, which the agent answers using its tools, demonstrating the agent in action. The project showcases the potential of AI development and how different tools and models can be combined to solve real problems.
Takeaways
- 🚀 Shows how to build an advanced AI agent that uses multiple AI models, and how to run it locally.
- 📚 Demonstrates the potential of AI development through a series of steps, including connecting agents and parsing output.
- 🛠️ Uses tools called Ollama and LlamaIndex, which are approachable for beginner and intermediate programmers.
- 📈 Demonstrates in VS Code how the AI agent reads a code file and generates unit tests based on the existing code.
- 🔄 Shows how the AI agent uses different tools to fetch the right information and generate code.
- 📋 Introduces the LlamaIndex framework for processing data and passing it to different AI models.
- 🔧 Explains how to use Ollama to run open-source AI models on your local machine, with no external API required.
- 🔍 With LlamaParse, complex documents such as PDFs containing tables and charts can be parsed and understood more accurately.
- 📝 Shows how to write AI-generated code out to a file, including error handling and file-saving logic.
- 🔄 Implements the full pipeline from reading code to generating and saving it, using multiple AI models and tools.
- 🌐 Emphasizes the value of open-source models and frameworks, and their practicality and accessibility in AI development.
- 📌 Provides a complete tutorial on building and using an advanced AI agent, including the required technologies and tools.
Q & A
How does the AI agent shown in the video work?
-The AI agent in the video performs tasks by using multiple tools and models. First, it loads and parses data, such as PDF files, through tools like LlamaIndex and LlamaParse. The agent then uses that data to generate code or answer questions. Given a prompt, it chooses the appropriate tool to complete the task and can pass the result to another model for further processing.
Why does the video use LlamaIndex and LlamaParse?
-LlamaIndex is an open-source framework that handles data loading, indexing, querying, and evaluation, and is well suited to building applications on top of large language models (LLMs). LlamaParse brings production-grade context augmentation; for complex documents with embedded objects such as tables and figures (PDF files, for example), it delivers much better parsing results.
What does "RAG" mean in the video?
-RAG stands for retrieval-augmented generation, a technique that combines retrieval (finding information in a large body of data) with augmented generation (producing a response based on the retrieved information). In the video, the AI agent uses its RAG capability to improve the accuracy of the code it generates and the questions it answers.
How is the code generated by the AI agent written to a file?
-The video shows a multi-step process: one model generates the code, another model parses that output and formats the parsed data as a Python dictionary, and finally Python file operations write the code to the target file, making sure existing files are never overwritten and new files are created in an output folder.
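The save step described above can be sketched in plain Python. This is a minimal sketch, not the video's exact code: the `filename` and `code` dictionary keys and the `output` folder name are illustrative assumptions about the parsed output's shape.

```python
import os

def save_generated_code(parsed, output_dir="output"):
    """Write parsed["code"] to parsed["filename"] inside output_dir, adding a
    numeric suffix instead of overwriting a file that already exists."""
    os.makedirs(output_dir, exist_ok=True)
    base, ext = os.path.splitext(parsed["filename"])
    path = os.path.join(output_dir, parsed["filename"])
    counter = 1
    while os.path.exists(path):  # never clobber an earlier result
        path = os.path.join(output_dir, f"{base}_{counter}{ext}")
        counter += 1
    with open(path, "w") as f:
        f.write(parsed["code"])
    return path

saved = save_generated_code({"filename": "test_api.py", "code": "def test(): pass\n"})
print(saved)
```

Running it twice with the same filename yields `test_api.py` and then `test_api_1.py`, matching the no-overwrite behavior the video describes.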
Which techniques does the AI agent use to generate code?
-Several. It uses LlamaIndex to load and index the data in the PDF file, with LlamaParse doing the PDF parsing. A dedicated code-generation model (such as Code Llama) then generates the code, and a general-purpose model parses and formats the generated code so it can be written to a file.
Why might the AI agent's generated code contain errors?
-The agent runs open-source models locally, and these don't have the compute and sophistication of large commercial models like ChatGPT. The generated code may therefore need small fixes, such as adding missing parentheses or removing escape characters, before it is fully functional.
How is the AI agent's generated code validated?
-The video uses an additional model to analyze the generated result and decide whether it is valid code. Error handling and retry logic keep the generation process stable: if an error occurs, the agent retries up to three times, and if it still fails, the user is prompted to enter a new prompt.
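The retry behavior just described can be sketched generically. This is an illustrative sketch, not the video's code: the `generate` callable stands in for the real LLM call, and JSON parsing stands in for whatever validity check the pipeline applies.

```python
import json

MAX_RETRIES = 3  # the video retries up to three times before re-prompting the user

def generate_with_retries(generate, prompt):
    """Call `generate` (a stand-in for the real LLM call, assumed to return a
    JSON string) and parse its output, retrying when the output is invalid."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            raw = generate(prompt)
            return json.loads(raw)  # invalid output raises and triggers a retry
        except (json.JSONDecodeError, ValueError):
            print(f"attempt {attempt} produced invalid output, retrying...")
    return None  # give up; the caller should ask the user for a new prompt

# demo: a fake model that fails twice, then returns well-formed output
replies = iter(["not json", "{broken", '{"filename": "demo.py", "code": "pass"}'])
result = generate_with_retries(lambda prompt: next(replies), "write me a unit test")
print(result)
```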
What is the "output pipeline" mentioned in the video?
-The output pipeline (query pipeline) is a LlamaIndex concept that chains multiple query-processing steps into one coherent flow. In the video, it routes the generated code through a series of parsing and formatting steps, ending in a format that can be written to a file.
How can the AI agent shown in the video generate unit tests?
-By giving the agent a prompt such as "read the contents of test.py and write me a simple Python unit test." The agent uses the code reader tool to read the file's contents, the Code Llama model to generate the code, and another model to parse the output and save it to a file.
How does the AI agent decide which tools to use?
-The agent decides based on the given prompt and context, using its built-in logic and its understanding of the tool descriptions to pick the most suitable tool. For example, if it needs to read API documentation, it may choose the API documentation tool; if it needs to read a code file, it may choose the code reader tool.
What is the GitHub repository mentioned in the video for?
-It holds the sample project's data and code: a data directory containing the demo PDF and Python files, and a requirements.txt file listing all the required Python libraries and their versions. The repository also provides instructions for installing and setting up the agent.
Outlines
😀 Getting Started with AI Development: Building an Advanced AI Agent That Uses Multiple Models
This section introduces how to build an advanced AI agent that uses multiple models. Through a series of steps, including connecting agents, parsing output, and using locally run LLMs (via Ollama) together with the LlamaIndex framework, it shows an advanced application of AI development and stresses that even beginner or intermediate programmers can follow along and build the project.
🛠️ Installation and Setup: Configuring the Environment and Dependencies for the Agent
Explains how to install and set up the required dependencies, including creating a Python virtual environment, installing the necessary Python libraries, and getting the sample code and data files from GitHub. Also covers installing and using Ollama to run AI models locally.
📚 Parsing PDF Documents with LlamaParse
This section shows how to use LlamaParse to parse PDF documents into a format AI models can understand more easily. It covers signing up for and using LlamaParse, and how to supply the API key to the code through an environment variable.
🔍 Creating a Vector Index and Query Engine to Query the API Documentation
Describes how to create a vector index that stores and queries the data from the PDF, and how the query engine answers questions from that context. Also shows how to wrap the query engine as a tool the AI agent can use.
🤖 Building the AI Agent: Integrating Tools and Testing Functionality
Covers building an AI agent that uses the previously created tools to read and analyze code, generate code, and answer questions about it. Shows how to pass the tools and models to the agent and test that the agent uses them correctly.
📝 Output Parsing: Writing AI-Generated Code to a File
This section explains how to process the code output by the AI agent, including using a Pydantic model and an output parser to format the output, and how to save the formatted code to a file. It also covers combining the steps with a query pipeline and handling the errors that can occur.
🏁 Wrap-Up: Saving the Generated Code and Reviewing the Project
Finally, the generated code is saved to a file without overwriting existing files. The project is reviewed, the limitations of local models are noted, and viewers are encouraged to try different prompts to test the agent. The video closes by thanking LlamaIndex for sponsoring and inviting viewers to comment, like, and subscribe if they enjoy this kind of content.
Keywords
💡AI Agent
💡LlamaIndex
💡Local Model
💡Retrieval-Augmented Generation (RAG)
💡PDF Parsing
💡Code Generation
💡Vector Store Index
💡Query Engine
💡Environment Variable
💡Error Handling
💡File Output
Highlights
Introduces how to build an advanced AI agent that uses multiple AI models.
Runs the AI agent through a series of steps, including connecting two agents and parsing output.
Even beginner or intermediate programmers can follow along.
Uses the open-source tools Ollama and LlamaIndex for local development.
Demonstrates having the AI agent read the test.py file and generate Python unit test code.
The AI agent can read data files and generate new code based on the existing code.
Uses different tools for different tasks, such as reading files and parsing PDFs.
Shows how the LlamaIndex framework loads, indexes, queries, and evaluates data.
Uses Ollama to run open-source LLMs on the local machine at no extra cost.
With LlamaParse, complex documents with embedded objects such as tables and charts can be parsed much better.
Creates a new LlamaCloud account and obtains an API key in order to use LlamaParse.
Shows how to create an AI agent in code and provide it with multiple tools.
The AI agent can generate code and answer questions based on the provided tools and context.
Creates an output pipeline that passes one LLM's result to another LLM for formatting.
Uses the Pydantic library to define the structure of the output data and parse it into a Python dictionary.
Error handling and retry logic keep code generation and file saving stable.
In the end, the AI agent can generate code from the user's prompt and save it to a file.
Transcripts
If you're interested in AI development, then you're in the right place.
Today I'm going to be showing you how to build an advanced
AI agent that uses multiple LLMs.
We're going to run our agent through a series of steps.
We're going to connect the two agents together.
We're going to parse the output.
And you're really going to see how to get a bit more advanced here
and how to do some really cool stuff with LLMs running on your own computer.
We're going to be doing everything locally.
We're going to use something known as Ollama and LlamaIndex.
And you're really going to get a taste of what AI development is capable of
in the short video
that even beginner or intermediate programmers can follow along with.
So with that said, let's get into a quick demo.
Then I'll walk you through a step by step tutorial on how to build this project.
So I'm here inside of VS code, and I'm going to give you a demo of how
this works.
And the concept is that we're going to provide some data to the model.
This data will be a coding project that we've worked on.
We've kept it simple for this video.
But if you scaled this up you could provide it a lot more code files
and it can handle all of those potentially at the same time.
So you can see that we have a Readme dot pdf file in our data directory.
And this is a simple kind of documentation for an API that we've written.
We then have a test.py file.
And this is the implementation of that API in Python.
So the idea here is we want our agent to be able to read in this information
and then generate some code based on the existing code that we already have.
So I've just run the agent here,
and I've given this a sample prompt that says read the contents of test.py
and write me a simple unit test in Python for the API.
Now the idea here
is that we've provided this agent some different tools that can utilize,
and it will decide when it needs
to utilize the tools and use them to get the correct information.
So in this case, it will use a tool to read in the test.py file.
It will then use a tool
to actually parse and get access to the information in the readme dot pdf.
It's then going to generate some code for us.
And then what we'll do
is use another model to parse that output and save it into a file.
So you can see that when I ran this, it did exactly that.
Now it doesn't always give us the best result
because we are running these models locally.
And I don't have a supercomputer here, so I can't run the best possible models,
but it's showing you what's possible.
And obviously all of this is free.
You don't need to pay for anything
because we're running it locally using all open source models.
So what this did here is generate this test API file.
You can see there's a few minor mistakes, but if I fix these up by just adding
the correct parentheses and removing the escape characters here you see that
we actually have a functioning unit test written for our flask API.
So this is pretty cool.
We might need to change this a little bit
to make it work, but it just outputted all of that for us
based on the context of the files that we provided to it.
Now, if we go here and read through
kind of the output we're getting from the model,
you can see that it's actually sharing with us its thought process.
So it says the current language of the user is English.
I need to use this tool to help me answer the question.
It's using the Code Reader tool.
It passes the file name equal to test.py.
It reads in the contents of test.py and then it says okay,
I can answer without using any more tools.
I'll use the user's language to answer,
to write a simple unit test, blah blah blah.
You do this and then write all of this code.
Now behind the scenes,
what actually happens is we pass the output of this using a secondary model
and take just the code and then generate it into a file,
and that file name will be generated by a different LLM
to make sure it's appropriate for the type of code that we have.
So it is a multi-step process here.
It worked quite well.
And you can see it also gave us a description of what the finished code was.
So that's what I'm going to be showing you how to build.
I know that it might seem simple, but it's actually fairly complex,
and it uses a lot of different tools which are really interesting to learn about.
With that said, let's get into the full tutorial here
and I'll explain to you how we can build this out completely from scratch.
So let's get started by understanding the tools
and technologies that we need to use to actually build out this project.
Now, what we need to do for this project is we need to load some data.
We need to pass that to our LM.
We then need to take the result of one LLM, pass it to another LLM,
and then we need to actually use a tool and save this to a file.
There's a few different steps here.
And if we wanted to build this out
completely from scratch that would take us a very long time.
So instead we're going to use a framework.
Now of course we're using Python, but we're also going to use LlamaIndex.
Now we've teamed up with them for this video.
But don't worry, they are completely free and they provide an open source framework
that can handle a lot of this heavy lifting and specifically is really good
at loading in data and passing it to our different LLMs.
I'm on their website right now just because it explains it nicely, but
you can see LlamaIndex is the leading data framework for building LLM applications.
It allows us to load in the data which you'll see right here,
index the data, query it and then evaluate it.
And also gives us
a bunch of tools to connect different LLMs together and to parse output.
You're going to see a bunch of advanced features in this video.
Now, as well as using LlamaIndex, we're going to use something called Ollama.
Now Ollama allows us to run open-source
LLMs locally on our computer.
That means we don't need to pay for ChatGPT.
We don't have to have an open AI API key.
We can do all of this locally.
So the summary here is that we're using Python, LlamaIndex, and Ollama.
We're also going to throw in another tool which is new from LlamaIndex called
LlamaParse. Don't worry, it's free as well.
And all of that is going to allow us to build out this advanced AI agent
that has RAG capabilities, RAG meaning retrieval-augmented generation.
Whenever we're taking this extra information
and passing it into the model, that's what's known as RAG.
I have an entire video that discusses how RAG works.
You can check that out here.
But for now, let's get into the tutorial and let's see exactly how we can
build this application.
So I'm back on my computer and we're going to start by installing
all the different dependencies that we need.
We're then going to set up Ollama.
And then we'll start writing all of this code again.
Ollama is how we run the models locally.
We need to install that first before we can start utilizing it.
Now there are some prerequisites here.
So what we're going to do
is I have a GitHub repository that I'll link in the description.
And in that GitHub repository you'll find a data directory
that contains a Readme file and a test.py file.
Now you don't need to use the specific piece of code, but
what you should do is create a data directory inside a project folder in VS Code.
So I've just opened up a new one here in VS code, made a new folder called data,
and then put these two files inside of here.
Specifically we want some kind of PDF file and some kind of Python file.
We can have multiple of them if you want,
but the concept is this is the data that we're going to use
for the retrieval-augmented generation.
So you need something inside of here. So
either take it from the GitHub repository or populate it with your own data.
Now from the GitHub repository as well there is a requirements.txt file.
Please copy the contents of that file and paste it into a requirements.txt file
in your directory, or simply download that file.
This is just going to save you a lot of headache,
because it has all of the specific versions of Python libraries
that we need in order for this project to function properly.
So really again, we want to get this data
directory populated with some kind of PDF and some kind of Python file.
We then want this requirements.txt file.
You can find it from the link in the description.
I'll put a direct link to the requirements.txt
so you can just copy the contents of the file.
Or you can download the file directly
and put it inside of your VS Code folder.
Now that we have that,
what we're going to do is make a new Python virtual environment.
That's where we're going to install
all these different dependencies that we have this isolated on our system.
So to do that we're going to type the command python3 -m venv
and then I'm going to go with the name of ai.
You can name this anything that you want.
If you're on Mac or Linux this is the correct command.
If you're on windows you can change this to simply be Python.
And this should
make a new virtual environment for you in the current directory.
This is where we'll install all of the different Python dependencies.
Now once we've done that, we need to activate the virtual environment.
The command will be different depending on your operating system.
If you're on Mac or Linux,
here you can type source,
then the name of your virtual environment, which in this case is ai, then /bin,
and then /activate, and you'll know this is activated
if you see the (ai) prefix. You can ignore this base prefix.
For me it's just because I have kind of a separate installation in Python.
But you should see this prefix in your terminal indicating
that this is activated.
Now if you're on windows what you're going to do is open up PowerShell
and you should be able to type .\ai\
and then this, I believe, is Scripts and then \activate.
And that should activate the virtual environment for you.
Otherwise you can just look up how to activate a virtual environment on windows.
And again make sure you have this prefix once the virtual environment is activated.
If we want to deactivate it we can type deactivate.
We're not going to do that though, and we can install the different packages
that we need.
So what we can do is type pip3 install -r and then requirements.txt.
Notice this is in the current directory where we are.
When I do that, it's going to read through all of the different requirements
and then install them in this Python installation
or in this virtual environment.
So we'll do that.
It's going to read through requirements.txt
and install everything for you. This is going to take a second.
For me it's already cached so it's going pretty quickly.
And that should be it.
So once this is finished I'll be right back
and then we can move on to the next steps.
All right. So all of that has been installed.
And the next thing we need to do is install Ollama.
Again, let's just run all of this locally.
So in order to install Ollama I'm just going to clear my terminal here.
And I'm going to go to a new browser window and paste in this URL here.
I'm going to leave it in the description.
Now this is the GitHub page for Ollama.
And it shows you the installation setup steps here.
So again this will be linked in the description.
So if you're on Mac or Windows you can see the download buttons right here.
Linux.
This will be the command I'm on Mac.
So I'll just click on download.
When I do that it's going to download the Ollama installation for me.
Once that's done I'm going to unzip this.
And then I'm going to run the installer.
Now I've already installed it, but I will still run you through the steps.
And then on windows same thing.
You're going to download this and then run through the installer.
And what this will do is download a kind of terminal tool for you
that you'll be able to utilize to interact with Ollama.
So you can see it says ollama run
and then something like llama2.
And this will actually download the Llama 2 model for you.
And then allow you to utilize it and interact with it.
In our case, we're actually going to use the Mistral model.
But there's a bunch of different models here that you could install,
and there's a bunch of other ones as well.
These are just some examples of ones that you can use.
Okay.
So what we'll do here is unzip this folder.
And once it's unzipped we're going to run the installer.
So you can see here that I can click on Ollama.
That's going to load the installation tool for me.
So I'm going to go ahead and click on open.
We're going to move it into applications.
And then we should be good to go with Ollama.
So we'll go next.
And then it says install the command line okay.
So we're going to go ahead and install it. Again, I already have this installed.
So I'm not going to run through this tool.
But once you do that you should be able to run
the ollama command, which we'll do in just one second.
All right.
So once you've gone
through that installer, what you can do is simply open up a terminal,
which we're going to do here.
And we can type.
Let me just zoom in here so we can read this
ollama, and just make sure that that command works.
So if we get some kind of output here we're good.
And then we can type ollama run.
And then we're going to run the Mistral model.
Okay.
So you can see here it shows you all the different models you can potentially run.
In this case this one is seven billion parameters.
It's 4.1GB.
There's a lot larger models here which obviously would perform better.
But they need some more intense hardware to actually run properly.
So we're going to install Mistral
which is only four gigabytes by doing ollama run mistral.
It's going to then download that model for you and then we can utilize it.
Now in my case it's already downloaded. So what I can do
is start interacting with it by typing something like Hello World.
And then it's going to give me some kind of output.
Perfect.
So what I'm going to do now is I'm going to quit this.
So I think I can just type quit or something.
or Ctrl+C, Ctrl+D. Okay, let's get out of that, Ctrl+D.
I'm going to close this terminal and I'm going to show you
now how we can run this from code.
So let it go through, let it install.
It is going to take a second because that's the download all of this stuff.
And then we'll go back to VS Code and see how we interact with Ollama
from our code.
All right. So I'm back inside of VS code here.
And I'm going to continue here by creating a file.
Now this file will be main.py.
And the idea here
is just to initially test out Ollama and make sure that it's working.
So I'm going to say from, and this is going to be llama
underscore index dot llms
dot ollama, if we type this correctly,
like that, we're going to import Ollama.
And then we're going to say llm is equal to Ollama.
And inside of here we're going to say the model is equal to Mistral
because this is the one that we want to use.
And we can provide a request
timeout equal to something like 30s just so that it doesn't take too long.
Now that we have the LLM, we should be able to do llm.run,
and then we can say something like hello world.
If we say the result is equal to this,
we should be able to print out the result.
So let's see if that's going to work.
I can type Python three and then main.py.
We'll give this a second
and we'll see
if that was a valid command or not, or if we need to use, a different one.
So actually my bad here guys,
rather than llm.run, this is going to be llm.complete.
And then we can type in something like hello world.
And if we run this, we should see that we get some kind of output.
Give this a second. It says, hello.
Here's a simple hello, World program.
Gives us the output and there we go.
So this is just a simple test to make sure that Ollama
was running locally on our computer.
We also can run different types of local models.
For example, in a second we're going to run a code generation one.
but now you can see that this is indeed working.
And we didn't need to have any API key use ChatGPT, etc..
This is a local model running on our own computer.
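The quick test just described looks roughly like this. A sketch, not the video's verbatim code: it assumes the llama-index package is installed and a local Ollama server has pulled the mistral model, and the guard lets the file load even without them (the exact import path varies by llama-index version).

```python
# Sketch of the first Ollama test. Assumptions: llama-index is installed and
# an Ollama server with the "mistral" model is running locally.
PROMPT = "hello world"

try:
    try:
        from llama_index.llms import Ollama  # path used in older llama-index
    except ImportError:
        from llama_index.llms.ollama import Ollama  # newer versions moved it here

    llm = Ollama(model="mistral", request_timeout=30.0)  # 30s so it doesn't hang
    result = llm.complete(PROMPT)  # one-shot completion against the local model
    print(result)
except Exception as err:  # no package installed, or no Ollama server running
    print(f"sketch only, not executed here: {err}")
```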
So now what we want to do is set up a little bit
of the RAG application so we can load in some files,
pass that to the LLM and see how it can query based on that.
All right.
So let's go to the top of our program here.
And we're going to import a few things that we need in order
to load our Python file as well as to load our documentation.
We're going to start by looking at how we load in our PDF, which is
unstructured or semi-structured data, depending on the way that it's set up.
And we're going to use something
known as LlamaParse, which can give us a much better parsing of this file.
So what I'm going to do is say from llama
underscore parse, we are going to import
LlamaParse, with a capital L, like that. We'll talk about this in one second.
Don't worry. I'm then going to say from llama underscore index dot core
we're going to import the VectorStoreIndex
and the SimpleDirectoryReader as well as the PromptTemplate.
While we are here we're then going to say
from llama underscore index dot core dot
embeddings
we are going to import the resolve embed model.
And I believe for now that is actually all that we need.
So let me break down what we're about to do here.
We need to load in our data.
Now in this case we're loading in PDF documents.
But with Lama index we can load in really any type of data we want.
And in the previous video I showed you how to load in, for example, CSV data.
But in this case we have a PDF.
Now what we need to do is we need to parse the PDF into logical portions in chunks.
For example, if the PDF had something like a chart, we'd want to extract that
because that's some structured data that we could be able to look at.
Then once we have that, we need to create a vector store index.
Now a vector store index is like a database
that allows us to really quickly find the information that we're looking
for, rather than having to load the entire PDF at once.
What's going to happen is our LLM is going to utilize this database and extract
just the information that it needs to answer a specific query or a prompt.
Now, the way that we'll build
this vector store index is by creating something known as vector embeddings.
Vector embeddings take our textual data or whatever type of data it is.
And they embed it into multidimensional space, which allows us to query for it
based on all different types of factors, based on the context,
based on the sentiment, we don't really know exactly how it works.
It's handled by LLMs and some machine learning models in the background, and I'm
not quite qualified to talk about it in this short section of the video.
But the point is that rather than loading all of the data at once,
we're going to query this vector store index,
which is like a really, really fast database.
It's going to give us the information we need injected into the LLM.
And then the LLM will use that information to answer the prompt.
So really all that means for us is we've got to create this index.
And I'm going to show you how to do that. All right.
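The retrieval idea behind the vector store index can be illustrated with plain cosine similarity. This is an illustrative sketch only: the three-dimensional vectors and chunk texts below are made up, whereas a real index uses a learned embedding model with hundreds of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy "index": each document chunk gets an embedding vector (made up here)
index = {
    "GET /items returns all items": [0.9, 0.1, 0.0],
    "POST /items creates an item": [0.6, 0.6, 0.1],
    "The server runs on port 5000": [0.1, 0.1, 0.9],
}
query_embedding = [0.85, 0.15, 0.05]  # pretend embedding of "what routes exist?"

# retrieve only the best-matching chunk instead of loading the whole document
best = max(index, key=lambda chunk: cosine(index[chunk], query_embedding))
print(best)
```

Only the best-matching chunk would then be injected into the LLM's prompt, which is the "query this fast database instead of loading the whole PDF" idea described above.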
So to do this we're going to delete these two lines
because these were really just for testing.
But we will leave this LM.
And what we're going to do is start by setting up a parser.
Now the parser is going to be a LlamaParse.
And then we can specify what we want.
The result type to be, which in this case is markdown.
Now LlamaParse is a relatively new product that's provided by LlamaIndex.
What this will do is actually take our documents
and push them out to the cloud.
They'll then be parsed, and that parsing will be returned to us.
Now, the reason we use something like this
is because it gives us significantly better results when we are trying to query
pieces of data from something like a PDF, which is typically unstructured.
I'll talk more about it in a second, because we do need to make an account
with LlamaParse. But again, it's totally free.
You don't need to pay for it. So we're going to make this parser.
And then what we're going to do is we're going to create a file extractor.
Now the extractor is going to be a dictionary.
And we're going to specify a file extension, which in this case is dot PDF.
And we're going to say whenever we find a PDF
we want to use this parser, which is a LlamaParse, to parse
through the PDF, and then give us back some results that we can then load.
Next we're going to say documents is equal to.
And this is going to be the simple directory reader.
And inside of here we're going to specify the directory
that we want to read from which is the data directory.
And then we're going to specify our file extractor here.
So file extractor is equal to the file extractor that we specified.
And then we're going to say dot load data okay.
So let's write that
now if we hover over this you can see that what this is doing is loading
data from the input directory.
So we're using llama index.
We have something called a simple directory reader.
Well what this will do is go look in this directory.
Grab all of the files that we need
and then load them in and use the appropriate file extractor.
Now that we've done that, what we can do is
we can pass these different documents which have been loaded to the vector
store index and create some vector embeddings for them.
So we're going to say the embed underscore model is equal
to the resolve embed model.
And then this is going to look a little funky.
But we're going to type local colon.
And then BAAI/bge-m3.
Now this is a local model that we can use because by default
when we create a vector store
index it's going to use the OpenAI model, like something like ChatGPT.
We don't want to do that.
We want to do this locally instead.
So what we're doing is we're getting access to a local model.
And this model will be able to create the different vector embeddings for us
before we inject this data into the vector store index.
So I know it seems a bit weird, but we're just grabbing that model.
This is the name of it here and we're specifying.
We want it locally,
which means the first time
we run this, it's going to download that model for us and then use it
okay.
We're then going to say the vector index is equal to the vector store
index and then dot from documents.
And then we're going to pass the documents
that we've loaded here with the simple directory reader.
And we're going to specify manually the embed model is equal
to the embed model that we got above which is our local embedding model.
Now that we've done that we're going to wrap this in
something known as a query engine.
So we can actually utilize it to get some results.
So we're going to say query engine is equal to the vector indexed as
query engine.
And the LLM that we're going to use is going to be the Ollama LLM.
Now what this means is that I can now utilize this
vector index, as kind of like a question and answer bot.
So what I can do is I can ask it a question like,
what are the different routes that exist in the API?
And it will then go utilize the documents that we've loaded in which
in this case are the PDF documents in this readme pdf file,
and it will give me results back based on that context.
Now, in order to test that we can say query engine Dot,
and then we can actually send this a query and we can say
what are some of the routes in the API question mark?
And then it should give us back some kind of reasonable response.
Now we do need to print that. So we'll say result
is equal to this.
And we will be able to run this code in one second
once we get access to the API key for LlamaParse.
So let me show you how we do that.
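Putting the steps above together, the indexing code ends up looking roughly like this. A condensed sketch under stated assumptions, not the video's verbatim file: it assumes the llama-index and llama-parse packages are installed, LLAMA_CLOUD_API_KEY is set, Ollama is serving the mistral model, and ./data holds the files; the guard lets it load without them, and exact import paths vary by llama-index version.

```python
# Condensed sketch of the indexing pipeline described above (assumptions noted
# in the text; uses the llama-index "core" import paths of newer versions).
DATA_DIR = "./data"

try:
    from llama_parse import LlamaParse
    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
    from llama_index.core.embeddings import resolve_embed_model
    from llama_index.llms.ollama import Ollama

    llm = Ollama(model="mistral", request_timeout=30.0)
    parser = LlamaParse(result_type="markdown")  # PDFs get parsed in the cloud
    documents = SimpleDirectoryReader(
        DATA_DIR, file_extractor={".pdf": parser}  # only PDFs go through LlamaParse
    ).load_data()

    # local embedding model instead of the default OpenAI embeddings
    embed_model = resolve_embed_model("local:BAAI/bge-m3")
    vector_index = VectorStoreIndex.from_documents(documents, embed_model=embed_model)
    query_engine = vector_index.as_query_engine(llm=llm)

    print(query_engine.query("What are some of the routes in the API?"))
except Exception as err:  # missing packages, API key, or Ollama server
    print(f"sketch only, not executed here: {err}")
```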
So I am going to show you how to use LlamaParse here.
But I quickly want to break down what it actually is.
So on February 20th, 2024, LlamaIndex released LlamaCloud and LlamaParse.
Now this brings production
grade context augmentation to your LLM and RAG applications.
Now LlamaParse specifically is a proprietary parser for complex documents
that contain embedded objects such as tables and figures.
In the past, when you were to do some kind of querying over this data,
you get really, really bad results when you have those embedded objects.
So LlamaParse is kind of the solution to that,
where it will do some parsing and actually break out these embedded objects
into something that can be easily ingested and understood by your model.
This means you'll be able to answer
complex questions that simply weren't possible previously.
As you can see, RAG is only as good as your data. If the data is not good,
if the vector index isn't good, then we're not going to get good results.
So the first step here is that we parse out our documents into something
that's more effective to pass into the vector index.
So when we eventually start using it,
we get better results which drastically affect the accuracy,
in a good way.
So you can kind of read through here and you can see exactly what it does.
I'll leave this link in the description.
But just understand that what this does is give us much better results
when we are parsing more complex documents, specifically things like PDFs.
So what we're going to do here is create a new LlamaCloud account.
You can do that just by signing in with your GitHub.
I'll leave the link below.
And this will give us access to a free API key so we can use the LlamaParse tool.
All right.
So once you've created that account or signed in
you can simply just click on Use LlamaParse.
It's pretty straightforward here.
And then what you can do is you can use this with a normal API.
Or you can use it
directly with LlamaIndex, which is exactly what we're doing here.
So what we want to do is just get access to an API key here.
So we can click on API key and we can generate a new key.
I'm just going to call this tutorial.
I'm going to press on Create new key.
Can read through the docs if you want.
But I'm just going to copy this key.
And we're going to go back into our Python file.
And I'm going to make a new dot env file here.
And then I'm going to create an environment
variable that stores this key that we'll have access to in our code.
So we're going to say that this is LLAMA
underscore CLOUD underscore API underscore KEY.
This is going to be equal to.
And then I'm going to paste in that API key.
Obviously make sure you don't leak this I'm just showing it to you for this video.
And I'll delete it afterwards.
Then we're going to go into main.py and we're just going to load
in that environment variable.
And it will automatically be detected by our parser.
So I know I went through that quickly, but the basic idea is
we're going to make a new account here on LlamaCloud.
We're then just going to use the free parser.
So we're going to go and generate an API key.
Once we generate the API key we're going to paste that inside
of an environment variable file with the variable LLAMA_CLOUD_API_KEY.
We're going to close this.
Then we're going to go to our Python file.
And we're going to write the following code
which will allow us to automatically load in this environment variable.
So we're going to say, in lowercase, from dotenv
import load underscore dotenv.
We need to spell these correctly though.
And then we're going to call this function.
And what the load_dotenv function will do is look for the presence
of a dot env file, and then simply load in all of those variables
which will give this line right here the parser access to that variable.
So we can use LlamaParse. Great.
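What load_dotenv does is simple enough to sketch by hand. This is a minimal re-implementation for illustration only (real projects should use the python-dotenv package as in the video, which also handles quoting and comments); the `llx-demo-key` value is an obviously fake placeholder, not a real key.

```python
import os
import tempfile

def load_env_file(path):
    """Minimal re-implementation of what load_dotenv does: read KEY=VALUE
    lines from a file and put them into os.environ (without overriding
    variables that are already set, mirroring python-dotenv's default)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# demo with a throwaway .env file and a fake placeholder key
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("LLAMA_CLOUD_API_KEY=llx-demo-key\n")
load_env_file(f.name)
print(os.environ["LLAMA_CLOUD_API_KEY"])
```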
So now that we've written this code we can test this out.
So what I'm going to do is type Python three
and then main.py.
We're going to wait a second for it to install everything that we need.
And then we're going to see if we get some kind of output.
All right. So this is finished running.
And you'll see then
what happened here is it started parsing this file using LlamaParse.
It actually pushed that out to the cloud.
Which means if we had hundreds of files, thousands of files, etc.,
we could actually handle all of those,
push them all out at once and get the results back.
Then what it did is gave us the result here after querying that PDF.
So it says the API supports several routes for performing various operations.
These include routes like items and items/ID, etc.,
so it used that context to actually answer the question for us.
So now that we've created this, this is going to be
one of the tools that we provide to our AI agent.
The idea here is that we're going to have this,
and we're going to have a few other tools.
We're going to give it to the agent, and the agent can use this vector index
and this query engine to get information
about our PDF, or about our API documentation.
And then using that information,
it can generate a new response and answer questions for us.
So the agent is going out there utilizing multiple different tools.
Maybe it combines them together, maybe it uses one, maybe it uses two three, etc.
and then it aggregates all of the results there and gives us some kind of output.
So now let's look at how we start building out the agent.
And we'll build out another tool that allows us to read in the Python file.
Because right now we're just loading in the PDF.
All right.
So we're going to go back to the top of our program here.
And we're going to say from llama_index.core.tools
we are going to import the QueryEngineTool
and then the ToolMetadata.
Then what we're going to do is we're going to take this query engine.
So I'm going to delete this right here.
And we're going to wrap it in a tool that we can provide to an AI agent.
So I'm going to say tools are equal to.
And then this is going to be query engine tool for the query engine.
This is going to be the query engine for our PDF or for API documentation.
We're then going to say
metadata is equal to and then this is going to be the tool metadata.
And here we're going to give this a name and a description.
Now the name in the description will tell our agent when to use this tool.
So we want to be specific.
So I'm going to call this API documentation.
And then we want to give this a description.
So we're going to say the description is equal to.
And I'm just going to paste in the description
to save us a little bit of typing here okay.
So let's paste it in.
And this says this gives documentation about code for an API.
Use this for reading docs for the API
okay.
So we're just giving some information about the query engine tool.
Now we are going to write another tool in here in a second.
But for now I want to make the agent and then show you how we utilize the agent.
So we need to import another thing in order to use the agent here.
So we're going to go up to the top and we're going to say from
llama_index.core.agent
we're going to import the ReActAgent, okay.
We're now going to make an agent.
So we're going to say agent is equal to
ReActAgent.from_tools.
And we're going to give it a list of tools that it can use okay.
So we're going to say tools.
And then we need to give it an LLM, which we're going to define in one second.
We're going to say verbose equals true.
If you do this it will give us all of the output
and kind of show us the thoughts of the agent.
If you don't want to see that, you can make this false.
And then we're going to provide some context to this,
which for now will be empty, but we'll fill in in one second.
Okay. So this is great.
But what I want to do now is I want to make another LLM
that we can use for this agent, because we want this to generate
some code for us rather than just be a general kind of question answerer.
So what I'm going to do here is I'm going to say my code_llm
is equal to Ollama.
And we're going to use a different LLM.
And this is going to be model equal to codellama.
Now Code Llama is something that does code generation specifically.
So rather than using the normal Mistral model which we had here,
we're just going to use Code Llama because we want to do code generation.
So now I'm going to pass code_llm here.
And you can see how easy it is to utilize multiple
LLMs locally on your own computer.
Again, all of these are open source, and when you do this
it should automatically download it for you.
Okay.
Last thing we want to do is provide a bit of context to this model.
So just to clean up our code a bit we're going to make a new file.
We'll call this prompts.py.
And inside of prompts
I'm just going to paste in a prompt for the context for this model okay.
You can find this from the link in the description from the GitHub.
But it says purpose.
The primary role of this agent is to assist users by analyzing code.
It should be able to generate code and answer questions about code provided.
Now you can change that if you want, but that's really what it's doing, right?
It's going to read code, analyze it, and then generate some code for us.
So that's what we're doing.
So now what I want to do is import that context.
So I'm going to go to the top and I'm going to say from prompts
import and then context.
And then down here I'm going to pass that context variable
okay. So now we have made an agent.
And we can actually test out the agent and see if it utilizes these tools.
So let's do a simple while loop here.
And let me just make this a bit bigger so we can see it.
We're going to say while prompt := input.
And we're going to say enter a prompt.
And we're going to say Q to quit.
So if you type in Q then we're going to quit.
And we're going to say while all of that does not equal Q,
okay, so let's type this correctly.
Then we are going to do the following, which is result equal to agent dot query.
And then we're just going to pass in the prompt and then print
the result okay.
So you might be wondering what we just did.
Well we simply wrote an inline
variable here using something known as the walrus operator in Python.
This just means it's only defined in the while loop,
and it will get redefined each time the while loop runs.
Just make things a little bit cleaner and we say, okay, let's get some prompt.
When it's not equal to Q,
then we'll simply take the prompt, pass it to our agent.
The agent will then utilize any tools it needs to,
and then it will print out the result.
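That loop can be sketched like this, with a stubbed agent and a scripted reader standing in for the real agent.query and input(), so the sketch runs non-interactively:

```python
def run_prompt_loop(agent_query, read_input):
    """Keep asking for prompts until the user types 'q';
    pass everything else to the agent and print the result."""
    results = []
    # the walrus operator := assigns and tests in one expression,
    # so 'prompt' is redefined on every pass through the loop
    while (prompt := read_input("Enter a prompt (q to quit): ")) != "q":
        result = agent_query(prompt)
        print(result)
        results.append(result)
    return results

# stand-ins for the real agent and for input()
fake_agent = lambda p: f"answer to: {p}"
scripted = iter(["hello", "make an item", "q"])
out = run_prompt_loop(fake_agent, lambda _msg: next(scripted))
```

In the real script, `agent_query` is `agent.query` and `read_input` is just the built-in `input`.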
So let's test this out and see if it's working.
So let's clear and let's run.
And this time we're not just going to be using the API documentation vector index.
We'll use an agent and it will decide when to utilize that tool.
I know it seems a bit weird because we only have one tool right
now, but imagine we had 20 tools, 30 tools, 100 tools, and the agent would pick
between all of them and have the ability to do some really complex stuff.
All right, so this is running. Now I'm going to give it a prompt.
I'm going to say something like send a Post request to make
a new
item using the API in Python.
Okay.
Let's see what this is going to give us here.
And if this is going to work or not okay. Sweet.
So it looks like that works.
If we go here we can see that we get
I need to use a tool to help me answer this question API documentation.
It's looking for post items okay.
The API documentation provides information on how to create an item
using the post method.
I can answer the question to create a new item, blah blah blah.
And then it generates the response and then it gives it to us here, right?
To create a new item we do this import requests
URL payload response okay that actually looks good.
And then come down here and says this will create a new item.
And then we can ask it another question or hit Q to quit.
Perfect. So that is working.
However I want to add another tool to this agent
that allows it to load in our Python files.
So llama parse itself can't handle Python files.
It's actually not what it's designed for.
But what we'll do
is we'll write a different tool that can just read in the contents
of any code file that we want, and then give that into the LM.
So this way it can have access to the API documentation.
And it can also have access to the code itself.
So it can read both of them.
So let's start doing that.
And the way we'll do that is by writing a new file here called Code reader.py.
So let's go inside of Code Reader.
And we're going to make a new tool that we'll then pass to our LLM.
So we're going to say from llama_index.core.tools
import the FunctionTool.
Now this is really cool because what we can do is wrap any Python
function as a tool that we can pass to the LLM.
So any Python code that you'd want the model to be able to execute
it could do that.
You just have to give it a description of the tool,
and it can actually call that Python function with the correct parameters.
This to me is super cool and it really has a lot of potential and possibilities.
Now I'm also going to import OS, and then I'm going to define a function
which will act as my tool.
So I'm going to say the code reader function.
And we're going to take in a file name okay.
Now what we're going to do is say path is equal to OS dot path dot join.
And we're going to join the data directory and the file name
because we want to look just inside of data here.
Perfect. And then we're going to try to open this.
So we're going to say try.
We're going to say with open
okay.
And this is going to be the path.
And then we're going to try to open this in read mode as F.
Then we're going to say the content equals f dot read.
And then we can simply return
the file underscore content.
And this will be the content okay.
Then we're going to have our except Exception
as e; we've got to spell except correctly.
And then what we're going to do here instead is we're going to return
some kind of error.
And that error is going to be the string of E.
And that's it.
That's actually all that we need for this function.
Now this is something that we can wrap in a tool and we can provide to the agent.
It can then call this function
and get back either the file content or the error that occurred okay.
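Put together, the tool function looks roughly like this (the data_dir parameter is added here so the sketch is easy to test; in the video the directory is hard-coded as data):

```python
import os

def code_reader_func(file_name, data_dir="data"):
    """Read a code file from the data directory and return its
    contents, or the error message if it could not be read."""
    path = os.path.join(data_dir, file_name)
    try:
        with open(path, "r") as f:
            content = f.read()
        return {"file_content": content}
    except Exception as e:
        # surface the error to the agent instead of crashing
        return {"error": str(e)}
```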
So we're going to say the code_reader is equal to FunctionTool,
and this is going to be .from_defaults.
And then we're going to say fn standing for function is equal to the code
reader func.
And then what we need to do similar to before is we need to give this a name.
And we need to give this a description so the agent knows what to use
or when to use this.
So we're going to call this the code reader.
And for the description I'm just going to paste in a description like I had before.
Let me see
if I can move this on to separate lines okay.
Let's just do it like this so that you guys can read it.
Okay.
So it says this tool can read the contents of code files and return the results.
Use this when you need to read the contents of a file.
Perfect. That looks good to me.
Hopefully that's going to work for us.
And now we can go to Main.py and we can import the code reader tool.
So we're going to say from code_reader import
code_reader, which is our tool, our function tool.
And now we can just simply pass that in our list of tools.
So imagine right you can write any Python function you want.
Just wrap it like I said or I did here with the function tool
and then just pass it in this list.
And now all of a sudden your agent has access to this
and it can start manipulating things on your computer, interacting
with Python functions.
This really makes the possibilities of agents quite unlimited.
And that's what I really like about this.
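Conceptually, FunctionTool.from_defaults just bundles a callable with the name and description the agent reads when deciding which tool to use. A toy stdlib imitation of that idea (not the real llama_index API) might look like:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SimpleTool:
    """Toy stand-in for a function tool: a callable plus the
    metadata an agent would use to decide when to invoke it."""
    fn: Callable
    name: str
    description: str

    def __call__(self, *args, **kwargs):
        # the agent would call the tool with arguments it inferred
        return self.fn(*args, **kwargs)

# hypothetical example: wrapping a reader function as a tool
code_reader_tool = SimpleTool(
    fn=lambda file_name: f"(contents of {file_name})",
    name="code_reader",
    description="Reads the contents of code files and returns the result.",
)
```

An agent framework would match the prompt against each tool's description, then invoke the chosen tool's function with the parameters it extracted.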
Okay, so we have the code reader tool now.
And we also have the API documentation.
So now our agent should work exactly as before.
And we can simply read the contents of that file.
So let's try running this and see what result we get
when we give it a prompt that asks it to say read that file.
Okay.
So start parsing the file
and then we'll write a prompt and we'll say something like read
the contents of test.py and generate some code okay.
So read
the contents of test.py
and give me the exact same code back.
Now remember we're running some local models
that don't have a ton of parameters and aren't the best one,
so we're not always going to get the best result.
But I hope this is going to work,
or it should give us at least some kind of result.
So you can see it says, okay, I need to use a tool.
So it says we're going to use the tool code reader file name test.py.
And then it gets the contents of the file.
And then what it says here is you provide a Python script
that contains an in-memory database for simplicity,
which implements a list called items.
The script defines four endpoints one for creating new items,
blah blah, blah and then gives us this whole result.
So it didn't give us the code that we wanted,
but it did actually give us a description
of what was inside of that file, which to me says this is indeed working.
And notice it only used this tool.
It didn't use the other tool because it didn't need that for that specific prompt.
Okay. So I think that's good.
That means that it's working and we're able to utilize both the tools.
Now the next thing that we need to do is we need to take any code
that this model is generating for us, and we need to actually write that
into a file.
Now this is where we're going to use another LLM.
So what we want to do is we want to get the result that we just saw there.
We want to determine if it's a valid code,
and then we want to take that code and write it into a file.
Now in order to do that, we need an LLM to analyze the result of this output.
So what we're really going to do, right, is we're gonna take this result.
We're going to pass it to a different LLM, and that LLM is going to have
the responsibility of taking that result and formatting it
into something that we can use to write the code into a file.
Now, I'm not sure if I'll be able to show this here,
because I think I cleared the console.
Yeah, I did, but you would have seen before that it gives us some code output,
but the code is mixed with like descriptions
and other information that we don't want to write into the file.
So the other LLM's job is going to be to parse that output
into a format where we can take it and we can write it into the file.
So let's start writing that.
This is where it gets a little bit more complicated,
but I also think it's where it gets quite cool.
So we're going to go to the top of our program here.
And we're going to start
importing some things that can do some output parsing for us.
So we're going to say from pydantic import the BaseModel.
We're then going to say from
llama_index.core.output_parsers
(and I believe this is with an underscore)
we are going to import the PydanticOutputParser.
We're then going to say from llama_index.core.query_pipeline
import the QueryPipeline,
which allows us to kind of combine multiple steps in one.
So now we're going to scroll all the way down.
And after we create our agent
and our code_llm, we're going to start handling the output parsing.
So we're going to make a class here.
And we're going to say class CodeOutput.
And this is going to be a BaseModel from pydantic.
Then what we're going to do is define
the type of information that we want our output to be parsed into.
Now this is super cool because we can use LlamaIndex and these output parsers
to actually convert a result from an LLM into a Pydantic object.
So we can specify the type that we want in the Pydantic object.
And then LlamaIndex
and another LLM can actually format the result to match this Pydantic object.
Super cool.
So I'm going to say code and this is going to be type string.
I'm going to say description.
And I want this to be a string as well.
But we could make it other types.
But in this case we're just going to need strings.
And then I'm going to say file name is a string okay.
So I've just made a Pydantic object.
This is just a class that we're going to use to do our formatting.
And then we're going to write some things for our query.
So we're going to say parser is equal to the PydanticOutputParser,
if I can write this here, and we're going to pass the CodeOutput,
which is specifying we want to use this Pydantic output parser
to get our result and pass it into this CodeOutput object.
We're then going to have a json_prompt_string.
And this is going to be equal to parser.format.
And we're going to format the code parser template
which is a variable that I'm going to write in one second.
So now what we're going to do is go to prompts.
And I'm going to write in a prompt here I'll explain how this works.
We just got a bear with me, because there is a bit of code that we need to write.
Okay. So let me copy this in again.
You can find this from the GitHub.
Or you can just write it out yourself.
And what this says is: parse the response from a previous LLM into a description
and a string of valid code, and also come up with a valid file name
this can be saved as
that doesn't contain special characters.
Here is the response.
And then this is the response from the previous LLM.
You should parse this into the following JSON format.
Okay, so this seems weird, but this is what I'm providing
to my output parser to tell it how to take the result
from this LLM and parse it into the format that I want.
So let's import that and then we'll look at how this works.
So from here we're going to do the code parser template.
And then I'm going to pass the code parser template here.
Now with the parser dot format will do is it will take this string.
And it will then inject at the end of that string.
the format from this Pydantic model.
So I've written my Pydantic output parser,
passed the CodeOutput.
It's saying hey, this is the format we want:
code string, description string, file name string.
What the output parser will do when I do parser.format is
it will take this format, find the JSON representation of it,
and then pass it, or inject it, into the code parser template.
So then when I start using this template on the next step,
it knows the type of format we want the output to be in.
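You can imitate that format step with the stdlib: take the prompt template and append a JSON description of the fields you expect back, which is roughly what parser.format does with the Pydantic schema (the field list mirrors the CodeOutput class; the helper name here is made up):

```python
import json

def format_template(template, fields):
    """Append the expected output schema to a prompt template,
    mimicking what a Pydantic output parser's format() step does."""
    schema = json.dumps(fields, indent=2)
    return template + "\n\nRespond in the following JSON format:\n" + schema

# abridged version of the code parser prompt from the video
code_parser_template = (
    "Parse the response into a description and a string of valid code, "
    "and come up with a valid file name. Here is the response: {response}"
)
prompt = format_template(
    code_parser_template,
    {"code": "string", "description": "string", "file_name": "string"},
)
```

The downstream LLM then sees both the instructions and the exact field names it should emit.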
So now I say my json_prompt_template is equal to,
and this is a PromptTemplate,
and I simply pass my json_prompt_string.
Now at this stage what we do is we write a kind of wrapper on the prompt template,
so we can actually inject inside of here
the response.
So the response if we look here is this okay.
Just bear with me. This will make sense as we get there.
And then lastly we're going to make an output
pipeline.
And the pipeline is going to look like this.
It's a query pipeline.
And the query pipeline is going to have a chain.
And the chain is going to go that we first need to get the Json prompt template.
We're then going to get whatever we need in the template.
And then we're going to pass that to our LLM.
And notice this time I'm using my normal Mistral LLM, which is this one right here.
I'm not using the code LLM, because I want a different LLM for this
task, a more general purpose one, not one specifically for code.
Okay. So now we have our output pipeline.
So the idea here is that what we want to do
is we want to take the output pipeline and we want to pass this result to it.
And then we're going to get the result back,
which is going to be that formatted object that we want to look at.
So I know this is a bit complicated, but that was kind of
the point of this video was to make it a bit more advanced.
And you're going to see now how we do this.
So we have the result from agent.query(prompt).
Now let's take that result and pass it to our next agent.
So we're going to say next result is equal to the output pipeline dot run.
And we're going to pass the response equal to the result.
So whatever the result was from the first agent, that's now what we're passing
as this variable right here in the code parser template prompt.
Then we can print out the next result.
And we can see what we get.
Now keep in mind this doesn't always work.
There is sometimes some errors based on how the parsing occurs.
But overall it's pretty good.
So let's go up here and let's run our agent again.
And let's get it to do a similar thing that it did before where it calls
like a post endpoint or something.
Okay.
So a similar prompt as before, read the contents of test.py
and write a Python script that calls the post endpoint to make a new item.
Let's type enter in here and let's see what we get.
All right.
So we can see that we get the kind of thought process here of the first LLM,
which is that it needs to use the code reader tool,
which it does, and then it generates this code.
And then what happens is
we actually get output here from our second LLM that says assistant.
And then it gives us this Python object or this Json object really where
we have the code which is all of the code that it generated, which was this.
Right.
And then it has, what else, the description.
Use the request library in Python to do this.
And then it has a file name which is this.
So now that we have this kind of output what we want to do
is take this output and load it in as valid data in Python.
I'm going to show you a fancy way to do that.
Once we have that, we can
then access all the different fields like code description and file name.
And we can utilize those to save a new file on our computer.
So again we've gone through we've generated the code.
We've used our different tools from the Rag pipeline.
And then we've now parsed our output into this format where we're able
to utilize these different fields.
We just now need to load this in,
so that it's kind of valid for us to be able to view.
Okay.
So now that we have that, what we're going to do is the following.
We're going to say our cleaned Json
is equal to and we're going to go up to the top of our program.
And we're going to import ast.
Now ast is a module that's going to allow us to actually load in Python code.
So what we'll do is we'll take the output from here
and we'll load it in as a Python dictionary.
So we're going to say ast.literal_eval.
And then we're going to convert the next result into a string
because it's actually a response object.
And we're going to replace the assistant which was kind of
what was leading here with an empty string.
So all we're doing
is removing that assistant that came before that valid Python dictionary,
and then we're loading in the rest of this as a Python dictionary object.
So now this is actually going to give us a Python dictionary.
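A minimal sketch of that cleanup step (the exact "assistant" prefix is just what the local model happened to emit in this run, so treat the string being replaced as an assumption):

```python
import ast

def clean_llm_json(raw_output):
    """Strip the model's leading 'assistant' prefix and safely
    evaluate the remaining text as a Python dict."""
    text = str(raw_output).replace("assistant:", "").strip()
    # literal_eval only accepts literals, so it is much safer than eval
    return ast.literal_eval(text)

# hypothetical model output mimicking what the video shows
sample = (
    'assistant: {"code": "print(1)", '
    '"description": "demo", "file_name": "demo.py"}'
)
cleaned = clean_llm_json(sample)
```

`ast.literal_eval` raises `ValueError`/`SyntaxError` on anything that is not a literal, which is exactly why the retry logic in the next step is needed.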
And what I can do is I can print code
generated and then I can print what it is.
So I can say my cleaned Json and then access the code.
And then I can print my description.
So I'm going to go here and go backslash n backslash n
description.
And this needs to be a backslash not a forward slash okay.
And then the description will be
the cleaned Json of the description.
I can then say my file name is equal to the cleaned Json.
And this will be my file name okay.
So let's run this now and see if we get the correct output.
And then we're just going to add some error handling here.
And we're actually going to save the file
because it is possible that some errors could occur.
So let's save and let's bring this up and let's quit.
And let's copy this prompt because this one worked.
And we will paste it again.
And we'll see if we're able to actually get all of those different fields.
And if we load in the Python object properly.
All right.
So you can see here that we're getting
our result code generated which is this create item.
And then it has some lambda function here.
And then we have the description.
And then we didn't print out the file name.
But we would have had the file name as well.
So it gave us a different result than we had last time.
but you can see this is indeed working.
And now we can quit and we can move to the next step,
which is a little bit of error handling and then actually saving the file.
So what we want to do now is just make sure that this works before
we move forward.
Because it's possible that we could get an error.
So what we're going to do is retry this prompt, or this sequence of steps,
a few times, to just make sure it's working properly,
sorry, before we move on to the next step.
So we're going to say retries are equal to zero.
And we're going to say while retries is less than three.
So we'll just retry this three times.
Then inside of here we're going to do this.
And we're going to say try.
And we're going to try the following.
We're then going to say except,
and this is going to be Exception
as e, and we're going to say retries plus equals one here.
And then we're going to break if this happened successfully.
So what's going to happen now is we're going to retry this up to three times.
So every time we fail something happens here that's wrong.
We simply retry it and it will go and do this again
with the same prompt that we typed.
Now we can also do an error message here.
We could say print error occurred retry number.
And then we can make this an F string.
And we can put in the number of retries.
And then we can print out what the error
actually was, okay.
So that should be our retry block.
I'm now going to come down here and I'm going to say okay
if retries is greater
than or equal to three which means this block actually failed.
We never successfully generated this cleaned Json.
Then I'm simply going to say continue, which means we're going to go back up here
and ask for another prompt.
And I'm going to say print.
Unable to process
the request, try again.
Okay.
Now you've probably seen this sequence
before, but pretty much we'll try to do this.
If it doesn't work, we'll just say, hey,
you know, that prompt didn't work for some reason.
Okay, give us another one because it's possible
they ask us to do something we're not able to do or the outputs not possible.
there's all kinds of errors that could occur here.
So we're just kind of handling that and cleaning it up a bit with this logic.
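The retry logic described above can be sketched as a small helper, with a stubbed flaky step standing in for the agent-plus-parsing sequence:

```python
def run_with_retries(step, max_retries=3):
    """Try a step up to max_retries times; return its result,
    or None if every attempt raised an exception."""
    retries = 0
    while retries < max_retries:
        try:
            return step()  # break out by returning on success
        except Exception as e:
            retries += 1
            print(f"Error occurred, retry #{retries}: {e}")
    print("Unable to process the request, try again.")
    return None

# a stub that fails twice, then succeeds on the third attempt
attempts = {"n": 0}
def flaky_step():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ValueError("bad parse")
    return {"code": "print(1)"}

result = run_with_retries(flaky_step)
```

In the real script the step is "query the agent, run the output pipeline, clean the JSON", and a None result maps to the continue that asks for a fresh prompt.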
Now, if we do get down to this point here,
that means that we were able to actually generate this code.
So what we can do is we can save it to a file.
So we can write a little try here and we can say try with open
and we can say open the file name which we have here.
And then we can say that we want to open this in
W as f, and we can say f.write.
And we can write the cleaned JSON code into that file, okay.
And then we can say print
saved file.
And we can just print out the file name.
And then we can have an accept here.
And we can say print
error saving file.
Okay.
Now just to make sure that we're not going to overwrite a file name
that we already have, what we can do is we can do an os.path
.join and we can make an output folder here.
So we can say output and then file name.
Now we need to import OS.
So we'll go to the top of our program and import OS.
Sorry I know I'm jumping around all over the place here.
Then we can go here and we can make a new folder called output.
So now all of the output we'll just go inside of this folder.
So we don't accidentally override any files that we already have.
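The saving step, with the output folder, looks roughly like this (os.makedirs is added here so the sketch runs standalone; in the video the folder is created by hand):

```python
import os

def save_generated_code(file_name, code, output_dir="output"):
    """Write generated code into the output folder so we don't
    overwrite files elsewhere; return the path, or None on error."""
    try:
        os.makedirs(output_dir, exist_ok=True)
        path = os.path.join(output_dir, file_name)
        with open(path, "w") as f:
            f.write(code)
        print("Saved file", file_name)
        return path
    except Exception:
        print("Error saving file")
        return None
```

In the main loop this would be called with `cleaned_json["file_name"]` and `cleaned_json["code"]` once the parsing step succeeded.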
Okay. So kind of final run here.
Let's give this a shot and see if this works.
So let's bring up the code.
Let's clear and let's run.
And then we'll
enter our prompt and we'll see if it gives us that generated code okay.
So we're going to use the same prompt as before.
And now what we're looking for is that we actually get a generated file
inside of this output directory.
Okay. So it seemed this did actually work.
You can see it has the code generated here.
And then if we go into our output we have this create item file.
Now we do need to remove this because there was a few characters
that I guess it left in here.
But what it's doing is looking for an access token.
It opens up test.py, okay, does f.read, then a requests.post
with the access token, and checks response.status.
So it's not perfect.
There's a few things that it probably shouldn't be doing here.
but overall it gave us kind of some good starting code,
or at least kind of prove the point that, hey, we can generate some code.
It is attempting to call the API, it is calling it in the correct way.
So yeah, I mean, I would call that a success.
Obviously we can mess around with a bunch of different prompts.
We can see which ones it works for and which ones it doesn't work for.
Remember we're using these local models which are quite small,
which don't have the same capabilities of something like ChatGPT.
If we did this at an enterprise level with the best hardware,
with the best models, obviously we'd get some better results.
But for now, I'm
going to quit out of that model and I am going to wrap up the video here.
All of this code will be available to download from the link in the description.
A massive thank you to Lama index for sponsoring this video.
I love working with them.
Their framework is incredible and it really just opens my imagination
and eyes to what's possible with these LLMs.
I mean, look what we were able to create in about 30 or 45 minutes.
Obviously I was walking you through it step by step.
I was going slower than I would normally code this out.
And we have an advanced AI agent that can do some code outputting and parsing.
We're using multiple different agents locally,
and we can continue to train these and do some really cool things.
Anyways, if you guys want to see more videos
like this, definitely leave a comment down below.
Like the video, subscribe to the channel, and I will see you in the next one.