AutoGen Studio Tutorial - NO CODE AI Agent Builder (100% Local)
Summary
TLDR: AutoGen Studio, a Microsoft Research-backed project, is an open-source tool that lets users create AI agent teams effortlessly. It supports both hosted models like GPT-4 and local models, handling tasks from stock chart plotting to trip planning and coding. This video tutorial walks viewers through installing, setting up, and using AutoGen Studio, demonstrating how to harness its capabilities with GPT-4 and with local models. It covers creating environments, setting up API keys, defining agents and skills, and constructing workflows. The video also showcases real-time agent interactions and task completion, highlighting the platform's flexibility and potential for complex, multi-agent operations.
Takeaways
- AutoGen Studio is a new tool released by Microsoft Research, enabling users to create AI agent teams with ease.
- It's an open-source project that can be run locally and supports integration with both GPT and local models.
- Users can perform a variety of tasks with AutoGen Studio, such as plotting stock charts, planning trips, and writing code.
- To use AutoGen Studio with GPT, you need to install Conda for managing Python environments and create a new Conda environment.
- An OpenAI account and API key are required to power AutoGen Studio with GPT.
- AutoGen Studio includes a user interface that simplifies setting up and managing AI agents and their tasks.
- Skills in AutoGen Studio are tools or pieces of code that AI agents can use to accomplish tasks, such as generating images or finding papers.
- Agents are individual AI entities with roles and tools; they can be set up to use different models, including local models.
- Workflows in AutoGen Studio combine agents and tasks, allowing for complex interactions and the creation of agent teams.
- The platform supports local model usage, which can be set up using tools like Ollama and LiteLLM for on-premise model execution.
- AutoGen Studio also allows for creating custom skills and assigning different tools to different agents for specialized tasks.
Q & A
What is Autogen Studio?
-AutoGen Studio is a tool developed by Microsoft Research that allows users to create sophisticated AI agent teams with ease. It is fully open source, runs locally, and can be powered by models like GPT-4 or by local models.
How can Autogen Studio be installed and set up?
-To install AutoGen Studio, you create a new Conda environment with Python 3.11, then install AutoGen Studio using pip. After setting up the environment and installing the packages, you can start using it.
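Concretely, the setup from the video boils down to three commands (the environment name `ag` is just what the presenter chose):

```shell
# Create and activate a dedicated Conda environment, then install
# AutoGen Studio (which pulls in AutoGen itself plus the web UI).
conda create -n ag python=3.11
conda activate ag
pip install autogenstudio
```

Note that the install is scoped to the `ag` environment; if you deactivate or switch environments, `autogenstudio` will not be on your path.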
What is the role of conda in setting up Autogen Studio?
-Conda is used to manage Python environments, which simplifies the process of setting up the required environment for Autogen Studio. It allows users to create a new environment and install the necessary packages without affecting the system's global Python installation.
How do you integrate Autogen Studio with GPT-4?
-To integrate Autogen Studio with GPT-4, you need to create an API key from your OpenAI account and export it in your environment. This allows Autogen Studio to access the GPT-4 model for its AI agent teams.
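The two commands involved, as shown in the video (the key value below is a placeholder, paste your own):

```shell
# Make the key available to AutoGen Studio, then launch the UI.
# 8081 is the port used in the video; any free port works.
export OPENAI_API_KEY="sk-..."
autogenstudio ui --port 8081   # then open http://localhost:8081
```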
What are skills in the context of Autogen Studio?
-Skills in Autogen Studio are tools that AI agents can use. They are usually written in code and can be anything from generating images to fetching data. Skills allow AI agents to perform specific tasks.
What is an agent in Autogen Studio?
-An agent in Autogen Studio is an individual AI with a role, tools, and the capability to perform tasks. It can be configured to use different models and can be part of an AI agent team.
How can you create a new skill in Autogen Studio?
-To create a new skill, you go to the 'Build' tab, click 'New Skill', give it a name, and write out the code for the skill. This code defines the functionality that the AI agents can use.
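For a concrete sketch: a skill is just a Python function with a descriptive docstring that agents can call. The function below is hypothetical (not one of the built-in skills) but follows the same pattern as the defaults, and matches the kind of task the video later gives its agents:

```python
def save_numbers_to_file(start: int, end: int, filename: str = "numbers.txt") -> str:
    """Write the integers from start to end (inclusive), one per line.

    Returns the filename so the calling agent can report where the
    output landed. Code like this is what you would paste into the
    'New Skill' editor under the Build tab.
    """
    with open(filename, "w") as f:
        for n in range(start, end + 1):
            f.write(f"{n}\n")
    return filename

print(save_numbers_to_file(1, 100))  # numbers.txt
```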
What is a workflow in Autogen Studio and how is it used?
-A workflow in Autogen Studio puts everything together, including the team and the task to be accomplished. It defines the interaction between agents, the summary method for conversations, and the sequence of tasks to be executed.
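The pieces a workflow ties together can be pictured as plain data. The field names below mirror the options the UI exposes (summary method, sender, receiver, group chat manager settings) and are illustrative, not AutoGen Studio's actual storage format:

```python
# Illustrative sketch of a group-chat workflow definition.
workflow = {
    "name": "travel_agent_group_workflow",
    "summary_method": "last",          # one of: "last", "none", "llm"
    "sender": "user_proxy",            # usually the user proxy agent
    "receiver": "group_chat_manager",  # used once more than two agents are involved
}

# The group chat manager holds the agent roster and its own settings.
group_chat_manager = {
    "agents": ["primary_assistant", "local_assistant", "language_assistant"],
    "max_consecutive_auto_reply": 10,  # illustrative value
    "human_input_mode": "NEVER",       # or "TERMINATE", "ALWAYS"
}

print(workflow["receiver"])  # group_chat_manager
```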
How can Autogen Studio be used with local models instead of online models like GPT-4?
-To use AutoGen Studio with local models, you can use Ollama to download and run models locally, and LiteLLM to expose an OpenAI-compatible API for them. You then configure AutoGen Studio to point at these local APIs instead of the hosted models.
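The local setup from the video, in command form (run each in its own terminal tab after installing Ollama and `pip install litellm`):

```shell
# Download and run a model locally with Ollama (~4 GB for Mistral),
# then expose it through LiteLLM as an OpenAI-style API.
ollama run mistral
litellm --model ollama/mistral   # serves on http://localhost:8000 by default
```

The LiteLLM URL is what you paste as the "base URL" of a model entry in AutoGen Studio; no API key is required for a local model.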
What is the Playground in Autogen Studio and how is it used?
-The Playground in Autogen Studio is where you test different agent teams. You can create a session, assign a task, and see the agents interact to accomplish the task. It also allows you to publish sessions to the web for further analysis.
How can you switch between different models for different agents in Autogen Studio?
-You can switch between different models for different agents by creating separate agents for each model and configuring their respective workflows. This allows for a flexible setup where each agent can be optimized for specific tasks using the most suitable model.
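One way to picture this: each agent carries its own ordered list of model entries, where the first entry is the default and later entries are fallbacks. The dictionaries below follow the OpenAI-compatible fields the UI asks for (model name, API key, base URL); the exact values are illustrative:

```python
import os

# Hosted model entry: uses the API key exported earlier.
gpt4_model = {
    "model": "gpt-4",
    "api_key": os.environ.get("OPENAI_API_KEY", ""),
}

# Local model entry: points at the LiteLLM proxy, no real key needed.
local_mistral_model = {
    "model": "ollama/mistral",
    "base_url": "http://localhost:8000",
    "api_key": "not-needed",
}

# Different agents get different model lists; the first entry is the default.
coding_agent_models = [gpt4_model]
local_agent_models = [local_mistral_model]
print(local_agent_models[0]["model"])  # ollama/mistral
```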
Outlines
Introduction to AutoGen Studio
AutoGen Studio, developed by the Microsoft Research team, is a platform for creating AI agent teams. It's open source and can be run locally, with support for both ChatGPT and local models. The video guides viewers through installation, setup, and usage with both GPT-4 and local models. The presenter emphasizes the need for Conda to manage Python environments and demonstrates creating a new Conda environment and installing AutoGen Studio. It also covers setting up an OpenAI API key for AutoGen to access.
Setting Up AutoGen Studio with ChatGPT
The video provides a step-by-step guide on setting up AutoGen Studio with ChatGPT. It starts with activating the Conda environment and installing AutoGen Studio. The process involves creating an API key in the OpenAI account, setting it in the environment, and starting AutoGen Studio on a specified port. The presenter then introduces the user interface, explaining terminology like "skills", "agents", and "workflows": skills are tools for AI agents, agents are individual AIs with roles and tools, and workflows combine agents and tasks.
Using AutoGen Studio with Local Models
The presenter demonstrates how to set up AutoGen Studio with local models, specifically using Ollama and LiteLLM. The process includes installing Ollama, downloading a local model, and setting up a server with LiteLLM. The video shows how to create a new agent powered by a local model and configure a workflow to use this agent. It also highlights the ability to run multiple agents with different local models simultaneously, showcasing the flexibility of AutoGen Studio.
Advanced Configuration and Testing in AutoGen Studio
The video concludes with advanced configuration tips and testing the setup. It shows how to create different agents with various local models and incorporate them into workflows. The presenter also discusses the ability to publish sessions to a gallery for review and the potential for custom authentication logic within AutoGen Studio. The video ends with a call to action for feedback and suggestions for future content.
Keywords
AutoGen Studio
AI Agent Teams
OpenAI API Key
Local Models
Skills
Workflows
GPT-4
Conda
Group Chat Manager
Playground
Mistral
Highlights
AutoGen Studio, a project by Microsoft Research, allows users to create AI agent teams with ease.
It's a fully open-source project that can be run locally and powered by various models.
AutoGen Studio can be used for a wide range of tasks, from plotting stock charts to planning trips and writing code.
The video tutorial guides viewers on how to install, set up, and use AutoGen Studio with both GPT-4 and local models.
Conda is required to manage Python environments when using ChatGPT as the powering model.
AutoGen Studio comes with a user-friendly interface that simplifies the creation and management of AI agents.
Skills in AutoGen Studio are tools that AI agents can use, usually written in code and accessible to any agent.
Agents are individual AIs with roles, tools, and the ability to perform tasks, and can be powered by different models.
Workflows in AutoGen Studio combine agents and tasks to accomplish specific objectives.
The Playground feature allows users to test different agent teams and their performance on given tasks.
AutoGen Studio enables the creation of custom skills through code, expanding the capabilities of AI agents.
The video demonstrates how to use AutoGen Studio with local models, showcasing its flexibility.
Ollama and LiteLLM are the tools used to power local models, providing an alternative to cloud-based AI services.
AutoGen Studio supports using different models for different agents, allowing for tailored AI solutions.
The video notes that AutoGen Studio's sign-out stub lets you implement your own authentication for team sharing.
The tutorial concludes with a demonstration of AutoGen Studio handling complex tasks like generating images and plotting stock data.
Transcripts
AutoGen Studio is here. The Microsoft Research team behind AutoGen, the revolutionary AI agent project, has finally released AutoGen Studio, which allows you to create sophisticated AI agent teams with ease. It's a fully open-source project, you can run it locally, you can power it with ChatGPT, and you can also power it with local models: everything from plotting stock charts to planning trips to writing code. This is what ChatGPT's custom GPTs were supposed to be. So in this video I'm going to show you how to install it, how to set it up, and how to use it, both with GPT-4 and with local models. Let's go.

First, the only thing you're going to need to get this working with ChatGPT as the powering model is Conda. If you don't already have Conda installed, it's a super easy way to manage Python environments, which is always a headache otherwise, so go ahead and install it now; it's very easy. The first command we're going to run creates a new Conda environment: `conda create -n ag` (AG for AutoGen) with `python=3.11`, then hit enter. It's going to ask if you want to proceed; hit enter again and it installs all the packages we need. Once you're done, highlight the code it prints to activate the new environment, paste it (`conda activate ag`), and hit enter. You can tell it's activated because it says `ag` right there.

Next, and this really couldn't be easier, we're going to install AutoGen Studio, and by installing AutoGen Studio we get everything with AutoGen as well as the user interface. It's `pip install autogenstudio`, and remember this installs into the environment we just created, so if you deactivate this environment or switch environments, it's not going to be available there. Hit enter and it installs everything we need.

Next, open up your OpenAI account, go to the API keys section, and create a new key. I'm going to call it AG for AutoGen, then create the secret key (I'm going to revoke this key before publishing the video). Click copy, switch back to your terminal, and now we're going to export the OpenAI key, setting it in our environment so AutoGen can access it: type `export OPENAI_API_KEY=`, all capitalized, then paste in your newly created key and hit enter. Next we spin up AutoGen Studio, and we're pretty much done: just type `autogenstudio ui --port 8081` and hit enter. It spins up AutoGen Studio and provides a URL, localhost:8081. Copy that URL, switch over to your browser, and here is AutoGen Studio. It is absolutely gorgeous and super easy to use, and I'm going to show you how to do all of it. That's really all you need to do to get AutoGen Studio working with ChatGPT; a little later in this video I'll show you how to set it up with local models, including powering individual agents with different models, and it's pretty amazing.

The first tab we're going to start on is Build, just so I can walk you through the terminology. First, let's talk about skills. Skills are tools that you can give your AI agents and AI agent teams. They can be anything, but they're usually written in code, and there are three by default. "Generate images": if I click into it, this is actually just the code for generating images. It sets up a method that hits the OpenAI DALL·E endpoint, generates an image, and returns it, and that's it; now any agent can use this generate-images tool. We also have "find papers on arXiv", which is exactly what it sounds like: it accepts a query and returns papers found on the arXiv website. You can probably imagine how amazingly powerful this is. You can give it tools for anything: any API you can connect with, and not only APIs, you can give it instructions to accomplish pretty much any task. Where my mind starts going is connecting it to a service like Zapier, and all of a sudden you have integrations into so many different applications; you can mix and match them and accomplish incredibly sophisticated tasks by giving Zapier integrations as tools to your agents. So here's what you do: click "New Skill", and it's as simple as that. You give it a name and write out the code for your skill. We're not going to do that now, but that's how you would accomplish it.

Next are agents, and this is the most obvious one: an agent is just an individual AI that has a role and tools and can perform tasks. By default it comes with two, "primary assistant" and "user proxy", which mimics the AutoGen framework from before we had a user interface. The user proxy you can think of as you, the user; the user can jump in and give input, or give no input at all and let the AutoGen team accomplish the task completely autonomously. The primary assistant is also exactly what it sounds like: another AI agent that doesn't represent the actual user and is completely autonomous. It can write and run code, it can use tools, and it takes on a role, a description, and everything. I'll show you how to create a new agent in a little bit, and by the way, this is also where you specify which model the agent uses, whether that's GPT-4 or a local model.

Next is workflows, and a workflow puts everything together, including the team and the task you want to accomplish. Here's a travel agent group workflow; let's click it. We name it, it's a group chat workflow, and we have some options. The summary method just defines how to summarize the conversation: "last", "none", or "llm". Then we have the sender and the receiver, and this is really important. The sender is usually going to be the user proxy, although that can change with more complex teams, and the receiver is going to be the group chat manager. Whenever you have more than two agents, more than just a user and an assistant agent, that's when you start to use the group chat manager. I can even click into the group chat manager, and this is where I add all of the agents into the group chat; I have the primary assistant, the local assistant, and the language assistant. I give the group chat manager a name, a description, and a max number of consecutive auto-replies. If a lot of this seems foreign to you, check out my video where I break down AutoGen in detail and show how to use all of these settings, because they are important; I'll drop that link in the description below. For human input mode we have "never", "only on terminate", or "always", on every step. Here we have our system message, where we can just say "group chat manager" or define a more complex system message, which helps control the agent's behavior. Then here is where we define our model. We can add multiple models and it will daisy-chain them together: it starts with GPT-4 here, and if I added another one, it would fall back to that if for whatever reason GPT-4 didn't work. Remember, whatever the first model in the list is, that's your default model, and unfortunately I couldn't figure out a way to drag and reorder the models, so you'll have to delete them and re-add them in order. Down here is where you add skills, and remember, skills are pieces of code the agents can run. Click that and we have, for example, generate images; we can add the skill just like that, and now this agent, the group chat manager, has the generate-images skill. Below that it says "or replace with an existing agent", so we can select that and it fills everything out for us. When we're done we'd click OK, but I'm not going to, because I don't want to save it.

Next we have the Playground, and this is where you'll be testing out the different agent teams. You can think of a session as a fixed window in which an agent team goes to accomplish a task, and the cool thing is I believe this is asynchronous. Let's click to create a new session. I'll show you the Mistral workflow in a bit, since that's a local model, but let's do the visualization agent workflow; we can see all agent workflows here if we want, but let's choose the visualization agent workflow and click create, and there we go. From here we can publish it to the web, which is really cool, we can delete it, and this is where we give it the task we want completed. Let's say: plot a chart of Nvidia and Tesla stock prices for 2023 and save the result to a file named nvidia_tesla.png. Now it's pinging GPT-4 to do that, and you can see it's working by this little waiting icon. One thing I would have liked is streaming the results into this window, but it seems to wait until it's completely done before showing the result. That's one thing I'd like to see done a little differently: I want each step output as it happens, because the only way to really tell anything is happening is to switch over to my terminal and watch the output. You can see all the output here; this is what AutoGen typically looks like, and the UI just puts it in a really pretty interface.

Let's scroll to the top: "Sure, here's the result of your request." We can see the different agent messages going back and forth. The user proxy, the agent representing me, says plot the chart of Nvidia and Tesla. Then the visualization assistant creates the plan to do that and writes the code: here's the code it just wrote, and it says please run the above script to fetch the stock data; once the data is fetched, save it to stock_data.csv. It does that, the user proxy reports done, fetched, and saved, the visualization assistant says great, now we can run the visualization, and the user proxy runs that code and saves it to nvidia_tesla.png. Then here are the results: the stock_data.csv file, the PNG with the actual visualization of the stock price over time, the plot_stock_chart.py file with the code to do it, and fetch_stock_data.py as well. The nice thing is you can easily turn these into tools so it doesn't have to recreate them next time. Currently it doesn't look like you can do that in one click; what you would do is go back to Build, go to Skills, create a new skill, and essentially copy-paste what's in here back into a skill on that page. But that's it, and it is incredible. I find AutoGen Studio makes it a lot easier to manage your tools most of all; I always found tool usage from the AutoGen code a little difficult.

Let's try one more thing. Create new, and I'm going to use the travel agent group workflow, create, and click "paint"; let's see what it does: paint a picture of a glass of Ethiopian coffee, freshly brewed, in a tall glass cup. Obviously this is going to use DALL·E. I'll switch over to the terminal and watch it actually work. Here's the user proxy agent saying what it's going to do... okay, it's saying it's unable to physically paint a picture. What that tells me is that my agent team doesn't have the right tool, so let's give it that tool. To fix this, we're going to use a different agent team that actually has the paint skill. Go back to Build and look at the general agent workflow: it has a sender of user proxy and a primary assistant receiver with two skills, and if I click in, one of those skills is generate images, so that should be able to generate a picture. We can also see the daisy chain of models it uses: first GPT-4. Let's try it: go back to the Playground, create a new session with the general agent workflow, and say "paint...", and hopefully this works. Switching over to the terminal, it does look like it generated the image, and there it is: perfect, exactly what I asked for. So you can see it's important to think about which tools are assigned to which agents and agent teams when you're asking for specific things that use those tools. We can also look at the .py file and see the code it wrote to generate that image. Now that I like that one, I click publish, it says "session successfully published", and over in the Gallery I can find that session, open it up, and see exactly what happened.

Now I want to show you how to use this completely locally, and you're going to need two things for that: Ollama and LiteLLM. Ollama is a wonderful tool that lets you run models locally super easily, and LiteLLM is a wrapper that exposes an API for them; even if you don't understand what any of that means, it doesn't matter, I'm going to show you how to use it, and it's dead simple. Switch back to the terminal, create a new tab, and use the same Conda environment: `conda activate ag`, hit enter, and it's activated. The first thing to do is install Ollama, and it really could not be easier: go to the Ollama website, click download, and go through the installation process. I've already done it, so I won't do it now, but when you're done you should see a little llama icon in your task tray, and that's it. Now, to download a model, type `ollama run mistral` and it downloads the Mistral model. I already have it downloaded, so it won't download again for me, but when you run this it will, and it's about 4 gigabytes. I hit enter just to make sure it's working, and there it is; I can test it with "tell me a joke". Perfect, so now we know this is Mistral running completely locally.

Now open up another tab, again using the same Conda environment (by the way, you don't actually need to keep that Ollama instance open anymore, but it doesn't matter if you do). Over here, `conda activate ag`, perfect, and now we install LiteLLM: `pip install litellm --upgrade`, just in case you already have it and need to upgrade, then hit enter. One issue I ran into, which I've already fixed and probably won't hit again, but I want to show you: it said a module was missing, and the missing module is called gunicorn. I don't know why it wasn't installed as part of the LiteLLM package, but it wasn't, and all I had to do to fix it was `pip install gunicorn`. So if you get an error where gunicorn can't be found, that's the fix. Now, to set up a server running Mistral powered by Ollama, this is all you do: `litellm --model ollama/mistral`, then hit enter, and we're ready to go. We can see the server spun up at localhost:8000, so go ahead and copy that URL.

Now switch back to AutoGen Studio, go to the Build tab, then to Agents. I already have a Mistral assistant, so I'm going to delete that, and now I'll create a new agent. It's going to be a Mistral agent, so: name "mistral assistant", agent description "helpful assistant powered by Mistral locally", max consecutive auto-replies left as is, human input mode "never", and for the system message I'll keep it simple: "you are a helpful assistant". Right here you can see it's defaulting to GPT-4, so get rid of that and click add; this is where we actually tell it to be powered by the local Mistral model. I'll call it "mistral", you don't need an API key, and for the base URL, paste the local URL we just copied; everything else we don't need, then click "add model". We won't give it any skills now, but feel free to when you're testing. Click OK, and now we have a Mistral assistant powered by Mistral. The cool thing is I could also have a Mixtral assistant and a Nous Hermes assistant, and they can all run at the same time; it is truly incredible.

Now let's go to Workflows and create a new workflow; this is going to be the Mistral workflow. Workflow description we'll leave the same, summary method is fine, user proxy is fine, and the user proxy is GPT-4 powered, so if you wanted it locally powered too, go to the user proxy agent and swap GPT-4 for Mistral. As the receiver, we change it to the Mistral assistant: agent name "mistral assistant", description left blank for now (feel free to customize this as much as you want), human input mode "never", and this is the system message used by AutoGen, so I won't touch it. Then delete GPT-4 here, add a new model, again called "mistral" with the same localhost:8000 base URL, click "add model", give it no skills, and click OK, then OK again. Now we have a Mistral agent and a Mistral workflow, and we should be able to use it powered by Mistral. Go to the Playground, click new, select the Mistral workflow, create, and say "tell me a joke" just to see if it works. Hit enter, and we can see that it worked: there's the POST to /chat/completions, that's LiteLLM, so it should have worked, and it did, although it didn't tell me anything good. That's fine; let's see it accomplish something a little more difficult: write code to output the numbers 1 to 100. There it is, and it was extremely fast. Switching back to the terminal, we can see it did POST to chat completions, so it worked, there's the code, and here are the termination messages: the user proxy says write code to output numbers 1 to 100, the Mistral assistant writes the code and sends TERMINATE, which ends everything. And that's it: now you know how to power AutoGen Studio with a local model.

What if you wanted different models for different agents? To do that, come back over to Ollama, exit out of there, and run `ollama run llama2`, which initiates the download. Once it's done downloading, we can leave the LiteLLM instance running Mistral up, create a new tab, `conda activate ag`, then `litellm --model ollama/llama2`, hit enter, and it gives you a new URL. Then you do the exact same thing: come in here, go to Build, set up a new agent as the Llama assistant, input the URL as normal, set up the same workflow as normal, and you're done. Now you have different assistants powered by different local models, and you can plug and play as you see fit; the best part is you can find the right fine-tuned model for the right task. One last thing I want to mention: it actually has sign-out functionality, but when you click it, it says "please implement your own logout logic", which means you can set up your own authentication within AutoGen Studio, so if you wanted to share this amongst your team, you could set it up to do that. I am so impressed by AutoGen Studio. Let me know what you think in the comments, and if you want me to do any kind of follow-up or deeper dive into AutoGen Studio, let me know what you want to see. If you liked this video, please consider giving a like and subscribing, and I'll see you in the next one.