AutoGen Studio Tutorial - NO CODE AI Agent Builder (100% Local)

Matthew Berman
15 Jan 2024 · 18:33

Summary

TL;DR: Autogen Studio, a Microsoft Research project, is an open-source tool that lets users create AI agent teams effortlessly. It supports hosted models like GPT-4 as well as local models, handling tasks from stock chart plotting to trip planning and coding. This video tutorial guides viewers through the installation, setup, and use of Autogen Studio with both GPT-4 and local models. It covers creating environments, setting up API keys, defining agents and skills, and constructing workflows. The video also showcases real-time agent interactions and task completion, highlighting the platform's flexibility and potential for complex, multi-agent operations.

Takeaways

  • 🚀 Autogen Studio is a new tool released by Microsoft Research, enabling users to create AI agent teams with ease.
  • 💡 It's an open-source project that can be run locally and supports integration with both GPT and local models.
  • 🛠️ Users can perform a variety of tasks with Autogen Studio, such as plotting stock charts, planning trips, and writing code.
  • 💻 To use Autogen Studio with GPT, you need to install Conda for managing Python environments and create a new Conda environment.
  • 🔑 An OpenAI account and API key are required to power Autogen Studio with GPT.
  • 📦 Autogen Studio includes a user interface that simplifies setting up and managing AI agents and their tasks.
  • 🛠️ Skills in Autogen Studio are tools or pieces of code that AI agents can use to accomplish tasks, such as generating images or finding papers.
  • 🤖 Agents are individual AI entities with roles and tools; they can be set up to use different models, including local models.
  • 🔄 Workflows in Autogen Studio combine agents and tasks, allowing for complex interactions and the creation of agent teams.
  • 🌐 The platform supports local model usage, which can be set up using tools like Ollama and LiteLLM for on-premise AI model execution.
  • 🔍 Autogen Studio also allows for the creation of custom skills and the ability to assign different tools to different agents for specialized tasks.

Q & A

  • What is Autogen Studio?

    -Autogen Studio is a tool developed by Microsoft Research that allows users to create sophisticated AI agent teams with ease. It is fully open source and can be run locally, powered by models like GPT or local models.

  • How can Autogen Studio be installed and set up?

    -To install Autogen Studio, you need to create a new conda environment with Python 3.11, then install Autogen Studio using pip. After setting up the environment and installing the necessary packages, you can start using it.
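The steps above can be sketched as the following commands (the environment name `ag` matches the one used in the video; adjust to taste):

```shell
# Create and activate an isolated Conda environment with Python 3.11
conda create -n ag python=3.11
conda activate ag

# Install AutoGen Studio (pulls in the autogen framework plus the web UI)
pip install autogenstudio
```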

  • What is the role of conda in setting up Autogen Studio?

    -Conda is used to manage Python environments, which simplifies the process of setting up the required environment for Autogen Studio. It allows users to create a new environment and install the necessary packages without affecting the system's global Python installation.

  • How do you integrate Autogen Studio with GPT-4?

    -To integrate Autogen Studio with GPT-4, you need to create an API key from your OpenAI account and export it in your environment. This allows Autogen Studio to access the GPT-4 model for its AI agent teams.
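In a Unix-like shell, the export-and-launch sequence looks like this (`sk-...` is a placeholder for your own key):

```shell
# Make the key available to AutoGen Studio in this shell session
export OPENAI_API_KEY="sk-..."

# Launch the web UI on port 8081, then open http://localhost:8081
autogenstudio ui --port 8081
```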

  • What are skills in the context of Autogen Studio?

    -Skills in Autogen Studio are tools that AI agents can use. They are usually written in code and can be anything from generating images to fetching data. Skills allow AI agents to perform specific tasks.

  • What is an agent in Autogen Studio?

    -An agent in Autogen Studio is an individual AI with a role, tools, and the capability to perform tasks. It can be configured to use different models and can be part of an AI agent team.

  • How can you create a new skill in Autogen Studio?

    -To create a new skill, you go to the 'Build' tab, click 'New Skill', give it a name, and write out the code for the skill. This code defines the functionality that the AI agents can use.
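Default skills such as `generate_images` are plain Python functions. As a hedged illustration of the shape a custom skill might take (the function name and behavior here are invented for the example, not part of Autogen Studio), a minimal skill could be:

```python
def save_text_file(content: str, filename: str) -> str:
    """Minimal example skill: write text to a file and report what was done.

    An agent given this skill could call it to persist results,
    e.g. a generated report or fetched data.
    """
    with open(filename, "w", encoding="utf-8") as f:
        f.write(content)
    return f"Saved {len(content)} characters to {filename}"
```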

  • What is a workflow in Autogen Studio and how is it used?

    -A workflow in Autogen Studio puts everything together, including the team and the task to be accomplished. It defines the interaction between agents, the summary method for conversations, and the sequence of tasks to be executed.

  • How can Autogen Studio be used with local models instead of online models like GPT-4?

    -To use Autogen Studio with local models, you can use tools like Ollama to download and run models locally, and LiteLLM to expose an API for these models. You then configure Autogen Studio to use these local APIs instead of online models.
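A sketch of the command sequence, following the video (the exact LiteLLM invocation may differ between versions):

```shell
# Pull and run a local model with Ollama (~4 GB download for Mistral)
ollama run mistral

# In a second terminal: install LiteLLM and expose an OpenAI-compatible API
pip install litellm --upgrade
litellm --model ollama/mistral
# LiteLLM prints the local endpoint URL to point Autogen Studio at
```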

  • What is the Playground in Autogen Studio and how is it used?

    -The Playground in Autogen Studio is where you test different agent teams. You can create a session, assign a task, and see the agents interact to accomplish the task. It also allows you to publish sessions to the web for further analysis.

  • How can you switch between different models for different agents in Autogen Studio?

    -You can switch between different models for different agents by creating separate agents for each model and configuring their respective workflows. This allows for a flexible setup where each agent can be optimized for specific tasks using the most suitable model.

Outlines

00:00

🚀 Introduction to Autogen Studio

Autogen Studio, developed by the Microsoft Research team, is a platform for creating AI agent teams. It's open-source and can be run locally, with support for both ChatGPT and local models. The video guides viewers through installation, setup, and usage with both GPT-4 and local models. The presenter emphasizes the need for Conda for managing Python environments and demonstrates creating a new Conda environment and installing Autogen Studio. It also covers setting up an OpenAI API key for Autogen to access.

05:00

🛠 Setting Up Autogen Studio with ChatGPT

The video provides a step-by-step guide to setting up Autogen Studio with ChatGPT. It starts with activating the Conda environment and installing Autogen Studio. The process involves creating an API key in the OpenAI account, setting it in the environment, and starting Autogen Studio on a specified port. The presenter then introduces the user interface, explaining terminology like 'skills', 'agents', and 'workflows': skills are tools for AI agents, agents are individual AIs with roles and tools, and workflows combine agents and tasks.

10:02

🎨 Using Autogen Studio with Local Models

The presenter demonstrates how to set up Autogen Studio with local models, specifically using Ollama and LiteLLM. The process includes installing Ollama, downloading a local model, and setting up a server with LiteLLM. The video shows how to create a new agent powered by a local model and configure a workflow to use this agent. It also highlights the ability to have multiple agents with different local models running simultaneously, showcasing the flexibility of Autogen Studio.

15:04

🔧 Advanced Configuration and Testing in Autogen Studio

The video concludes with advanced configuration tips and testing the setup. It shows how to create different agents with various local models and incorporate them into workflows. The presenter also discusses the ability to publish sessions to a gallery for review and the potential for custom authentication logic within Autogen Studio. The video ends with a call to action for feedback and suggestions for future content.

Keywords

💡Autogen Studio

Autogen Studio is an open-source project developed by Microsoft Research, which enables users to create sophisticated AI agent teams with ease. It is a platform that allows for the integration of different AI models and tools to perform a variety of tasks. In the video, the presenter demonstrates how to install and use Autogen Studio, highlighting its user-friendly interface and the ability to connect with models like GPT-4 or local models.

💡AI Agent Teams

AI Agent Teams refer to a collection of individual AI agents that work together to accomplish tasks. These agents can be programmed with specific roles and tools, allowing them to perform complex operations autonomously or with user input. The video script discusses setting up AI agent teams in Autogen Studio, showing how they can be used for tasks like plotting stock charts or planning trips.

💡OpenAI API Key

An OpenAI API Key is a unique identifier used to access OpenAI's services programmatically. In the context of the video, the presenter guides viewers on how to create an API key for Autogen Studio to interact with OpenAI's services, such as generating images or accessing GPT models. The API key is essential for enabling Autogen Studio to perform tasks that require access to external AI services.
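Programs typically read the key from the environment rather than hard-coding it. A minimal sketch of that pattern (the helper name is illustrative, not part of Autogen Studio):

```python
import os


def require_api_key() -> str:
    """Return the key exported earlier with `export OPENAI_API_KEY=...`,
    failing fast with a clear message if it is missing."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before starting Autogen Studio"
        )
    return key
```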

💡Local Models

Local models refer to AI models that are hosted and run on the user's own infrastructure, as opposed to cloud-based services. The video explains how to set up Autogen Studio to use local models like Mistral, which can provide faster and more private interactions. The presenter demonstrates the process of downloading and integrating local models into Autogen Studio, allowing for greater control and customization.

💡Skills

In Autogen Studio, 'skills' are tools or functionalities that AI agents can utilize to perform specific tasks. These skills are often coded scripts that agents can execute when needed. The video script includes an example where the presenter creates a skill for generating images, showcasing how skills can be added to agents to expand their capabilities.

💡Workflows

Workflows in Autogen Studio are the sequences of actions that define how a task is to be accomplished by the AI agent team. They include the agents involved, the order of operations, and the tools or skills to be used. The video demonstrates setting up workflows for different tasks, such as creating visualizations or group chats, to show how tasks can be automated through the platform.

💡GPT-4

GPT-4 is OpenAI's most capable large language model at the time of the video. The presenter uses GPT-4 as the powering model for Autogen Studio's agents, demonstrating how the platform can leverage cutting-edge AI capabilities.

💡Conda

Conda is a package manager and environment management system used for installing and managing software packages and their dependencies. In the video, the presenter uses Conda to create a new environment for Autogen Studio, ensuring that all required packages are installed and isolated from other projects.

💡Group Chat Manager

The Group Chat Manager is a component within Autogen Studio workflows that handles communication between multiple AI agents. It is especially useful when more than two agents are involved in a task, facilitating the coordination and exchange of information. The video script describes how to set up a group chat manager and add agents to it, which is crucial for complex agent interactions.

💡Playground

The Playground in Autogen Studio is an area where users can test and experiment with different agent teams and workflows. It allows for the creation of sessions where tasks are assigned to AI agents, and their execution can be monitored. The video demonstrates using the Playground to create sessions and assign tasks like plotting stock charts or generating images.

💡Mistral

Mistral, in the context of the video, refers to a local AI model that can be used with Autogen Studio. The presenter shows how to download and set up Mistral using tools like Ollama and LiteLLM, which allow for the local hosting of AI models. Mistral is used as an example of how Autogen Studio can be powered by different local models, providing flexibility in choosing the right AI for the task.

Highlights

Autogen Studio, a project by Microsoft Research, allows users to create AI agent teams with ease.

It's a fully open-source project that can be run locally and powered by various models.

Autogen Studio can be used for a wide range of tasks, from plotting stock charts to planning trips and writing code.

The video tutorial guides viewers on how to install, set up, and use Autogen Studio with both GPT-4 and local models.

Conda is required to manage Python environments when using ChatGPT as the powering model.

Autogen Studio comes with a user-friendly interface that simplifies the creation and management of AI agents.

Skills in Autogen Studio are tools that AI agents can use, often written in code and accessible to any agent.

Agents are individual AIs with roles, tools, and the ability to perform tasks, and can be powered by different models.

Workflows in Autogen Studio combine agents and tasks to accomplish specific objectives.

The Playground feature allows users to test different agent teams and their performance on given tasks.

Autogen Studio enables the creation of custom skills through coding, expanding the capabilities of AI agents.

The video demonstrates how to use Autogen Studio with local models, showcasing its flexibility.

Ollama and LiteLLM are tools used to power local models, providing an alternative to cloud-based AI services.

Autogen Studio supports the use of different models for different agents, allowing for tailored AI solutions.

The video discusses the potential for custom authentication logic within Autogen Studio for team sharing.

The tutorial concludes with a demonstration of Autogen Studio's ability to handle complex tasks like generating images and plotting stock data.

Transcripts

play00:00

autogen studio is here the Microsoft

play00:03

research team behind autogen the

play00:06

Revolutionary AI agent project has

play00:09

finally released autogen Studio which

play00:12

allows you to create sophisticated AI

play00:15

agent teams with ease it's a fully open

play00:18

source project you can run it locally

play00:20

you could power it with chat GPT and you

play00:22

can also power it with local models

play00:24

everything from plotting stock charts to

play00:26

planning trips to writing code this is

play00:28

what chaty PT's cust gpts were supposed

play00:31

to be so in this video I'm going to show

play00:33

you how to install it how to set it up

play00:35

and how to use it both with GPT 4 and

play00:38

local models so let's go first the only

play00:41

thing you're going to need to get this

play00:42

to work with chat GPT as the powering

play00:45

model is cond so if you don't already

play00:47

have cond installed it's a super easy

play00:49

way to manage python environments which

play00:51

is always a headache otherwise so if you

play00:54

don't already have it installed go ahead

play00:55

and install it now it's very easy so the

play00:57

First Command we're going to run is to

play00:58

create a new cond in environment and I'm

play01:00

going to say condac create DN AG for

play01:03

autogen and then we're going to use

play01:04

Python equals 3.11 and then just hit

play01:07

enter it's going to ask you if you want

play01:08

to proceed just hit enter again and it's

play01:10

going to install all the packages that

play01:11

we need all right once you're done there

play01:12

you're going to highlight this code

play01:14

right here to activate the new

play01:15

environment so go ahead and highlight it

play01:17

paste it and it's cond to activate Ag

play01:19

and then hit enter and you could tell

play01:21

it's activated now because it says AG

play01:23

right there next and this really

play01:25

couldn't be easier we're going to

play01:26

install autogen studio and by installing

play01:29

autogen Studio we get everything with

play01:31

autogen as well as the user interface so

play01:34

it's pip install autogen studio and

play01:36

remember this installs to the

play01:37

environment that we just created so if

play01:38

you deactivate this environment or you

play01:40

switch environments it's not going to be

play01:41

available there so hit enter and it

play01:43

installs everything we need next open up

play01:45

your open AI account go to the API key

play01:47

section and we're going to create a new

play01:49

key and I'm going to call it AG for

play01:51

autogen and then create secret key I am

play01:53

going to revoke this key before

play01:54

publishing the video go ahead and click

play01:56

copy switch back to your terminal and

play01:58

now we're going to export the open AI

play02:00

key setting it in our environment which

play02:02

allows autogen to access it and to do

play02:04

that we're going to type export open

play02:06

aore API key all capitalized equals and

play02:09

then you're going to paste in your newly

play02:11

created key then just hit enter next

play02:13

we're going to spin up autogen studio

play02:15

now we're pretty much done so just type

play02:17

autogen Studio space ui-- port 8081 and

play02:21

then hit enter and then it's going to

play02:23

spin up autogen studio for us and

play02:24

provide us with a URL and here we go

play02:26

it's Local Host 8081 so we're going to

play02:28

copy this URL right here switch over to

play02:30

your browser and now this is autogen

play02:32

Studio it is absolutely gorgeous and

play02:35

it's super easy to use and I'm going to

play02:36

show you how to do all of it now that's

play02:38

really all you need to do to get autogen

play02:40

Studio working with chat GPT a little

play02:42

bit later in this video I'm going to

play02:43

show you how to set it up with local

play02:45

models including powering individual

play02:47

agents with different models it's pretty

play02:50

amazing and the first tab we're going to

play02:52

start on is built just so I can tell you

play02:54

about all the different terminology so

play02:56

first let's talk about skills skills are

play02:59

tools that that you can give your AI

play03:00

agents and AI agent teams they can be

play03:03

anything but they're usually written in

play03:05

code so there are three by default here

play03:08

generate images and if I click into this

play03:10

we can see this is actually just the

play03:12

code for generating images and what it

play03:14

does is it sets up a method that hits

play03:16

the open aai Dolly endpoint and

play03:19

generates an image and then it Returns

play03:21

the image and that's it and now any

play03:24

agent can use this generate images tool

play03:27

we also have find papers on archives so

play03:29

if I click into there it's exactly what

play03:31

it sounds like it's going to accept a

play03:33

query and it's going to return papers

play03:35

that are found on the archive website

play03:38

now you can probably imagine how

play03:39

amazingly powerful this really is you

play03:41

can give it tools for anything any API

play03:44

that you can connect with you can give

play03:46

it tools for and not only apis you can

play03:48

give it instructions to accomplish

play03:50

pretty much any task and where my mind

play03:52

starts going is connecting it to a

play03:54

service like zapier and then all of a

play03:56

sudden you have Integrations into so

play03:58

many different applic appliations and

play04:00

you can mix and match them and

play04:01

accomplish incredibly sophisticated

play04:04

tasks through giving zappier

play04:05

Integrations as tools to your agents so

play04:08

this is what you do so let's click new

play04:11

skill and it's as simple as that you

play04:13

give it a name and then you write out

play04:15

the code for your actual skill so we're

play04:17

not going to do that but that's how you

play04:19

would accomplish it next are agents and

play04:21

this is the most obvious one an agent is

play04:23

just an individual AI that has a role

play04:26

tools and can perform tasks so by

play04:28

default it comes with two primary

play04:30

assistant and user proxy this mimics the

play04:32

autogen framework for when we didn't

play04:35

have a user interface so user proxy you

play04:37

can think of as you the user and the

play04:40

user can actually jump in and give input

play04:42

or not give input at all and let the

play04:44

autogen team accomplish the task

play04:46

completely autonomously the primary

play04:48

assistant is also exactly what it sounds

play04:50

like it's another AI agent that doesn't

play04:52

represent the actual user and it's

play04:54

completely autonomous it can write and

play04:56

run code it can use tools it takes on a

play04:58

roll a description and everything and

play05:00

I'll show you how to create a new agent

play05:01

in a little bit and by the way this is

play05:03

also where you specify which model the

play05:06

agent is going to use whether you want

play05:07

to use chat GPT 4 or you want to use a

play05:10

local model next is workflows and a

play05:12

workflow puts everything together

play05:14

including the team and the task you want

play05:17

to accomplish so here's a travel agent

play05:19

group workflow let's click that so we

play05:22

name it a group chat workflow so we have

play05:24

some options for summary method which is

play05:26

just defining the method to summarize

play05:28

the conversation and we can just say

play05:30

last none or llm and then we have the

play05:32

sender and the receiver and this is

play05:34

really important the sender is usually

play05:36

going to be user proxy although it can

play05:38

change if you get more complex teams and

play05:40

then the receiver is going to be the

play05:41

group chat manager so whenever you have

play05:43

more than two agents more than just a

play05:45

user and an assistant agent that's when

play05:47

you start to use the group chat manager

play05:49

and I can even click into the group chat

play05:51

manager and here is where I can add all

play05:53

of the agents into this group chat so I

play05:55

have the primary assistant the local

play05:57

assistant and the language assistant I

play05:58

give the group chat manager a name I can

play06:01

give it a description I can give it Max

play06:03

consecutive auto replies and if a lot of

play06:05

this stuff seems forign to you check out

play06:07

my video where I break down autogen in

play06:10

detail and Define and show you how to

play06:11

use all of these different settings cuz

play06:13

they are important I'll drop that link

play06:15

in the description below human input

play06:17

mode so we have never only on terminate

play06:20

or always on every step here we have our

play06:22

system message where we can actually

play06:24

just say group chat manager or we can

play06:26

define a more complex system message

play06:28

which helps control the agent Behavior

play06:31

then here is where we can Define our

play06:32

model we can add multiple models and

play06:34

it's going to daisy chain them together

play06:36

so it's going to start with GPT 4 here

play06:38

and if I added another one it would fall

play06:40

back to that if for whatever reason GPT

play06:42

4 didn't work so remember whatever the

play06:44

first model is here in the list that's

play06:46

going to be your default model and

play06:48

unfortunately I couldn't figure out a

play06:49

way to drag and reorder the models so

play06:52

you'll actually have to just delete it

play06:54

and add it in order then down here is

play06:56

where you can add skills and remember

play06:58

the skills are pieces is a code that the

play07:00

agents can run so go ahead and click

play07:01

that and we have for example this

play07:03

generate images and we can just add the

play07:05

skill like that and now this agent the

play07:08

group chat manager has the skill of

play07:10

generate images and then down here it

play07:12

says or replace with an existing agent

play07:14

so we can just say that and it'll fill

play07:15

out everything for us so then when we're

play07:17

done with that we click okay but I'm not

play07:20

going to do that because I don't want to

play07:21

save it next we have the playground and

play07:23

this is actually where you're going to

play07:24

be testing out the different agent teams

play07:26

so you can think of a session as a fixed

play07:28

amount of time where an a agent team

play07:30

goes to accomplish a task and the cool

play07:32

thing is I believe this is asynchronous

play07:35

so let's go ahead and click and create a

play07:36

new session I'll show you how to do the

play07:38

mistal workflow in a bit cuz that's a

play07:40

local model but let's do visualization

play07:42

agent workflow and we can see all agent

play07:44

workflows here if we want but let's go

play07:47

back and let's choose the visualization

play07:48

agent workflow then we click create and

play07:50

there we go now from here we can publish

play07:52

it to the web which is really cool we

play07:54

can delete it and this is where we give

play07:56

it the task we want it to complete so

play07:57

let's say stock price plot h chart of

play07:59

Nvidia and Tesla stock price for 2023

play08:02

save the result to a file named Nvidia

play08:04

Tesla PNG so now it's pinging gp4 to do

play08:07

that and you can see it's working by

play08:09

this little waiting icon right here now

play08:12

one thing I would have liked is if it

play08:13

streamed the results to this window but

play08:16

it seems like it waits till it's

play08:17

completely done before showing the

play08:18

result all right there so that is again

play08:21

one thing that I'd like to see a little

play08:23

bit different is I want to see each step

play08:25

being output because the only way to

play08:27

really tell anything is happening is if

play08:29

I switch over to my terminal and I

play08:31

actually watch the output so you can see

play08:33

all the output here this is what autogen

play08:35

typically looks like and then the UI

play08:37

just puts it in a really pretty

play08:39

interface so let's scroll to the top

play08:41

sure here's the result of your request

play08:43

okay so we can see the different agent

play08:45

messages going back and forth so the

play08:47

user proxy says so this is the user

play08:49

agent representing me so plot the chart

play08:52

of Nvidia and Tesla then we have the

play08:54

visualization assistant which creates

play08:56

the plan to actually do that writes the

play08:58

code so here here's the code that I just

play09:00

wrote so the visualization assistant

play09:02

says please run the above script to

play09:03

fetch the stock data once the data is

play09:05

fetched save it to stock data.csv so it

play09:08

does that the user proxy says okay done

play09:10

and fetched and saved then the

play09:11

visualization assistant says great now

play09:14

we can run the visualization of it and

play09:16

then the user proxy runs that code and

play09:18

saves it to Nvidia Tesla PNG and then

play09:21

here's the results so we have the stock

play09:23

data.csv file we have the PNG which is

play09:27

the actual visualization of the stock

play09:29

price over time we have the plot stock

play09:32

chart. py file which is the code to do

play09:35

it and we have the fetch stock data. py

play09:38

also and the nice thing is you can

play09:40

easily turn these into tools so that it

play09:43

doesn't have to recreate these tools

play09:44

next time and So currently it doesn't

play09:46

look like you can do it in a one-click

play09:48

way what you would do is you would go

play09:50

back to build go to skills create a new

play09:52

skill and essentially copy paste what's

play09:54

in here back into a skill on this page

play09:57

right here but that's it that shows how

play09:59

to do it and it is incredible I find

play10:01

that autogen studio makes it a lot

play10:03

easier to manage your tools most of all

play10:06

I always found that tool usage from the

play10:08

autogen code was a little bit difficult

play10:10

and let's try one more thing let's

play10:12

create new and I'm going to do travel

play10:15

agent group workflow create and I'm

play10:18

going to click paint here let's see what

play10:19

it does paint a picture of a glass of

play10:21

Ethiopian coffee freshly brewed and a

play10:23

tall glass cup so obviously this is

play10:25

going to be using Dolly I'm going to

play10:27

switch over to terminal and we're going

play10:28

to watch it actually work so here it is

play10:30

the user proxy agent saying that's what

play10:33

it's going to do okay so it's saying I'm

play10:35

unable to physically paint a picture so

play10:37

what that is telling me is that my agent

play10:40

team doesn't actually have the right

play10:41

tool to do that so let's give it that

play10:43

tool so to fix this problem what we're

play10:45

going to do is we're actually going to

play10:46

use a different agent team that actually

play10:49

has the paint skill so let's go back to

play10:51

build and let's look at the general

play10:53

agent workflow now we can see it has a

play10:56

sender of user proxy and a primary

play10:59

assistant receiver with two skills if I

play11:01

click into there I can see one of those

play11:03

skills is generate images so that should

play11:05

be good to actually generate a picture

play11:07

and we can see the daisy chain of

play11:08

different models that it's using so

play11:10

first it's using GPT 4 so let's go ahead

play11:12

and try it now so go back to playground

play11:14

I'm going to create a new session I'm

play11:16

going to use the general agent workflow

play11:18

create and then I'm going to say paint

play11:20

and now hopefully this works all right

play11:22

switching over to the terminal it does

play11:24

look like it generated the image and

play11:27

let's see what happened there it is

play11:29

perfect that's exactly what I asked for

play11:31

wonderful so you can see it is important

play11:33

you think about which tools are assigned

play11:35

to which agents which agent teams if

play11:37

you're asking it to do specific things

play11:40

that using that tool and we can also

play11:42

look at the pi file and we can actually

play11:44

see the code that it wrote to generate

play11:46

that image and so now I like that one if

play11:48

I just click publish it says session

play11:51

successfully published I go over to

play11:52

gallery and now I can find that session

play11:54

I just click here and open it up and I

play11:56

can actually see what just happened all

play11:58

right now now I want to show you how to

play12:00

use this completely locally and what

play12:02

you're going to need for that is two

play12:03

things olama and light llm oama is a

play12:07

wonderful tool that allows you to power

play12:09

models locally super easily and light

play12:12

llm is a wrapper to expose an API even

play12:15

if you don't understand what any of that

play12:16

means it doesn't matter I'm going to

play12:17

show you how to use it it's dead simple

play12:19

So we're going to switch back to our terminal and create a new tab right here. We're going to use the same conda environment, so type conda activate AG and hit enter; now it's activated. The first thing you're going to do is install Ollama, and it really could not be easier: go to the Ollama website, click Download, and go through the installation process. I've already done it, so I'm not going to do it right now, but it's dead simple. When you're done, you should see a little llama icon in your task tray, and that's it.

Now, to download a model, what we're going to do is type ollama run mistral, and it will download the Mistral model. I already have it downloaded, so it's not going to download it again for me, but when you run this it will download it, and it's about 4 gigabytes. I hit enter just to make sure it's working, and there it is. I can test it with "tell me a joke." Okay, perfect, there it is. Now we know this is Mistral running completely locally.
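For reference, the Ollama side of the setup is just a couple of commands. This is a minimal sketch, assuming mistral is the model tag you want (on the Ollama registry it maps to the default Mistral 7B build):

```shell
# Pull and chat with Mistral 7B; the first run downloads roughly 4 GB of weights.
ollama run mistral

# Or download the model without starting an interactive chat session:
ollama pull mistral

# Confirm which models are now available locally:
ollama list
```

Once ollama run drops you into a prompt, typing any message (like "tell me a joke") confirms the model is serving locally.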

Okay, now we're going to open up another tab; again, we're going to use the same conda environment. By the way, you don't actually need to keep this Ollama session open anymore, but it doesn't matter if you do or don't. So over here we're going to run conda activate AG again. Perfect. Now we're going to install LiteLLM: pip install litellm --upgrade, just in case you already have it and need to upgrade it. Hit enter.

Now, one issue that I ran into, that I already fixed and probably won't run into again, but I want to show you: it said it was missing a module, and the module that was missing is called gunicorn. I don't know why it wasn't installed as part of the LiteLLM package, but it wasn't, so all I had to do to fix that was pip install gunicorn. If you get an error saying gunicorn can't be found, that's how to fix it.

Now, to set up a server with Mistral running, powered by Ollama, this is all you do: litellm --model ollama/mistral, then hit enter, and there we go, we're all ready to go. We can see the server spun up right here; it's localhost:8000, so we're going to go ahead and copy that URL.
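Putting the LiteLLM steps together, the terminal side looks roughly like this. This is a sketch, not the only way to do it: the port (8000 here) and whether the gunicorn workaround is needed depend on your LiteLLM version, so use whatever URL the server actually prints at startup.

```shell
# Install or upgrade LiteLLM inside the same conda environment.
pip install litellm --upgrade

# Only needed if LiteLLM errors out with a missing gunicorn module:
pip install gunicorn

# Start an OpenAI-compatible server in front of the local Mistral model.
# Watch the startup output for the serving URL, e.g. http://0.0.0.0:8000
litellm --model ollama/mistral
```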

Now switch back to AutoGen Studio. We're going to go back to the Build tab, then I'm going to go to Agents. I already have a Mistral assistant, so I'm going to delete that as well, and now I'm going to create a new agent. It's going to be a Mistral agent, so I'm going to name it "mistral assistant," with the description "helpful assistant powered by Mistral locally." Max consecutive auto-replies I'm going to leave as is, human input mode is "never," and for the system message I'm just going to keep it simple: "You are a helpful assistant."

Now, right here you can see it's defaulting to GPT-4, so go ahead and get rid of that. We're going to click Add, and this is where we actually tell it to be powered by the local Mistral model. Here I'm going to call it "mistral"; you don't need an API key, and for the base URL we're going to click Paste; this is that local URL we just copied. Everything else we do not need, and then we're going to click Add Model. We're not going to give it any skills now, but feel free to do that when you're testing. Then click OK. Now we have a Mistral assistant powered by Mistral, and the cool thing is I can also have a Mixtral assistant and a Nous Hermes assistant, and they can all run at the same time. It is truly incredible. Now let's go to Workflows.

We're going to create a new workflow, and we're going to say this is a "mistral workflow." The workflow description we'll leave the same, the summary method is fine, and the user proxy is fine, though note it's going to be GPT-4 powered; if you did want the user proxy locally powered too, go to the user proxy agent and switch out GPT-4 for Mistral. Then, as the receiver, we're going to make a change: set the receiver to be the Mistral assistant. For the agent name we're going to call it "mistral assistant"; the agent description I'm going to leave blank for now, but again, feel free to customize this as much as you want. Human input mode is "never," and this is the system message used by AutoGen, so I'm not going to touch it. Then we're going to delete GPT-4 here and add a new model; again we're going to say it's "mistral," same thing, localhost:8000 right there, and then click Add Model. We're not going to give it any skills, and we just click OK, then OK again.
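If the workflow doesn't respond, it's worth sanity-checking the base URL outside AutoGen Studio first. LiteLLM serves an OpenAI-compatible API, so a curl like the following should return a chat completion (this assumes the server really is on localhost:8000; depending on the LiteLLM version the path may instead be /v1/chat/completions):

```shell
# Call the same endpoint AutoGen Studio will use; no API key is needed locally.
curl -s http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ollama/mistral",
       "messages": [{"role": "user", "content": "Say hello"}]}'
```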

Now we have a Mistral agent and a Mistral workflow, so we should be able to use it, powered by Mistral. Let's go to the Playground; we're going to click New, select the Mistral workflow, and then Create. Then let's say "tell me a joke," just to see if it works. Hit enter, and there we can actually see that it worked: there's the POST to /chat/completions, and this is LiteLLM, so it should have worked, and it did, although it did not tell me anything good. That's fine. Let's see it accomplish something a little more difficult: "write code to output numbers 1 to 100." Okay, there it is.

It was extremely fast. Switching back over to the terminal, we can see that it actually did POST to chat completions, so it worked; there's the code, and here are the termination messages. The user proxy says "write code to output numbers 1 to 100," the Mistral assistant writes the code and sends TERMINATE, which terminates everything. And so that's it: now you know how to power AutoGen Studio with a local model. And what if you did want to have different models for different agents?

To do that, we come back over to Ollama and exit out of there. What you would do is ollama run llama2; that'll initiate the download. Once it's done downloading with Ollama, we can leave this LiteLLM instance that's running Mistral up. Then we create a new tab, conda activate AG, then litellm --model ollama/llama2 and hit enter. It'll give you a new URL, and you do the same exact thing: you come in here, go to Build, set up a new agent as a "llama assistant," input the URL as normal, then set up the same workflow as normal, and you're done. Now you have different assistants powered by different local models, and you can plug and play as you see fit. The best part is you can find the right fine-tuned model for the right task.
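The two-model setup just described can be sketched as below. Each LiteLLM instance needs its own port so the second server doesn't collide with the Mistral one on 8000; the --port value and the llama2 tag here are assumptions to adapt to your setup.

```shell
# Terminal 1 (already running): the Mistral proxy on port 8000.
#   litellm --model ollama/mistral

# Terminal 2: pull Llama 2, then start a second proxy on another port.
ollama pull llama2
conda activate AG
litellm --model ollama/llama2 --port 8001

# In AutoGen Studio, give the llama assistant agent a model whose base URL
# points at the new server (http://localhost:8001), while the Mistral
# agent keeps pointing at port 8000.
```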

One last thing I want to mention: it actually has sign-out functionality, but when you click it, it says "please implement your own logout logic," which means you can set up your own authentication within AutoGen Studio. So if you wanted to share this amongst your team, you could set it up to do that. I am so impressed by AutoGen Studio. Let me know what you think in the comments, and if you want me to do any kind of follow-up or deeper dive into AutoGen Studio, let me know what you want to see in the comments. If you liked this video, please consider giving a like and subscribe, and I'll see you in the next one.
