Big AutoGen UPDATE 0.2.28 | Databricks Integration 🎉

Tyler AI
2 Jun 2024 · 15:49

Summary

TL;DR: The video walks through the AutoGen 0.2.28 update, covering the GPT Assistant Agent, group-chat resumption, a text-compression tool, and a Databricks integration. The GPT Assistant Agent is backed by OpenAI's Assistants API, supports built-in tools such as the code interpreter and file search, and uses threads to store and manage message history. Group-chat resumption lets users continue an earlier group chat by passing in the messages from the previous conversation. The LLMLingua text-compression tool aims to improve the efficiency and cost-effectiveness of LLM operations. The Databricks integration shows how to use Databricks' general-purpose LLM with AutoGen, giving users more model choices and integration options. Together, these updates broaden the framework's range of applications and its flexibility.

Takeaways

  • 🆕 AutoGen released version 0.2.28, bringing many new features and improvements.
  • 🤖 Added the GPT Assistant Agent, backed by the OpenAI Assistants API, which can use built-in tools such as the code interpreter, file search, and function calling.
  • 📑 The assistant agent can generate files, such as images and spreadsheets, and uses threads to automatically store message history and adapt to the model's context length.
  • 🔄 Introduced group-chat resumption, which lets you resume an earlier group chat by passing in the messages from the previous conversation.
  • 🔗 Demonstrated, with code examples, how to continue a terminated conversation, including how to load a JSON string or a list of messages.
  • 🗜️ Introduced text compression with LLMLingua, which improves the efficiency and cost-effectiveness of LLM operations.
  • 📚 Showed an example of compressing the AutoGen research paper with LLMLingua, saving nearly 20,000 tokens.
  • 🔧 Provided an example of integrating the text compressor with an AutoGen agent, showing how to use it in a researcher agent.
  • 👨‍💻 AutoGen's support for .NET developers is clear, with multiple .NET-related updates and samples.
  • 🔗 AutoGen is adding integrations with other services, such as Databricks, giving users more model choices and integration options.
  • 📈 Databricks' DBRX is a general-purpose LLM that sets a new standard for open LLMs, with open models available on Hugging Face.

Q & A

  • What is the latest AutoGen release?

    -The latest AutoGen release is version 0.2.28.

  • What is the GPT Assistant Agent, and what can it do?

    -The GPT Assistant Agent is an agent backed by the OpenAI Assistants API. It can use built-in tools such as the code interpreter, file search, and function calling. It can also generate files, such as images and spreadsheets, and it uses threads to automatically store message history and adapt to the model's context length.

  • How do you set up a GPT Assistant Agent?

    -Setting up a GPT Assistant Agent is straightforward: create a GPTAssistantAgent and define an assistant config specifying the tools or built-in functionality you want the agent to have, as in the sketch below.
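
A minimal sketch, assuming AutoGen 0.2.28 (`pyautogen`) and an OpenAI API key; the model name, key, and instructions are placeholders:

```python
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]}  # placeholders

# assistant_config declares which OpenAI built-in tools the assistant may use,
# e.g. the code interpreter; use {"type": "file_search"} for file search instead
assistant_config = {"tools": [{"type": "code_interpreter"}]}

assistant = GPTAssistantAgent(
    name="coder_assistant",
    instructions="You are a helpful coding assistant.",  # placeholder instructions
    llm_config=llm_config,
    assistant_config=assistant_config,
)
```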

  • What is the group-chat resume feature, and how does it work?

    -The resume feature lets you continue an earlier group chat by passing the messages from the previous conversation to the group chat manager's resume function. The function returns the last message and the last agent, which can be used to restart the chat; a sketch follows.
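
A minimal sketch of the resume flow described above, assuming agents recreated with the same names as in the saved conversation; the agent names, llm_config, and saved-state string are placeholders:

```python
import autogen

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "sk-..."}]}  # placeholders

# Recreate the group chat with agents whose names match the saved conversation
planner = autogen.AssistantAgent(name="Planner", llm_config=llm_config)
engineer = autogen.AssistantAgent(name="Engineer", llm_config=llm_config)
user = autogen.UserProxyAgent(name="User", code_execution_config=False)

groupchat = autogen.GroupChat(agents=[user, planner, engineer], messages=[], max_round=20)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# previous_state: messages saved from the earlier chat, as a JSON string
# (or a list of message dicts); truncated placeholder shown here
previous_state = '[{"content": "Finalize the paper...", "role": "user", "name": "User"}]'

# resume() returns the last agent and the last message from the saved conversation
last_agent, last_message = manager.resume(messages=previous_state)

# Restart the conversation from where it left off
result = last_agent.initiate_chat(manager, message=last_message, clear_history=False)
```

If the saved last message still contains the termination string, passing `remove_termination_string="TERMINATE"` to `resume` strips it so the chat continues instead of ending again immediately.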

  • How do you use the LLMLingua text-compression tool?

    -LLMLingua is a tool designed to compress prompts, which improves the efficiency and cost-effectiveness of LLM operations. You instantiate a text message compressor object and apply it to the extracted text; this can save a large number of tokens, cutting cost and leaving more room in the context window. A sketch follows.
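
A minimal sketch of standalone compression, assuming AutoGen 0.2.28 installed with its long-context extra (e.g. `pip install "pyautogen[long-context]"`; the extra's name is an assumption) and text already extracted from the PDF:

```python
from autogen.agentchat.contrib.capabilities.text_compressors import LLMLingua
from autogen.agentchat.contrib.capabilities.transforms import TextMessageCompressor

pdf_text = "..."  # placeholder: the text extracted from the research paper PDF

# Wrap the LLMLingua prompt compressor in a message transform
llm_lingua = LLMLingua()
text_compressor = TextMessageCompressor(text_compressor=llm_lingua)

# Transforms operate on lists of message dicts
compressed_messages = text_compressor.apply_transform([{"content": pdf_text}])

# Reports the effect of the transform, e.g. how many tokens were saved
print(text_compressor.get_logs([], []))
```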

  • How do you integrate LLMLingua text compression into an AutoGen agent?

    -You add context handling to the agent's setup: wrap the text compressor in a TransformMessages capability and attach it to the agent, so incoming messages are compressed automatically. See the sketch below.
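
A minimal sketch, reusing `llm_config` and `text_compressor` from the previous sketches; the researcher agent's name and system message are placeholders:

```python
import autogen
from autogen.agentchat.contrib.capabilities import transform_messages

# The agent whose incoming context should be compressed
researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="You are a research assistant.",  # placeholder
    llm_config=llm_config,  # assumed defined as in the earlier sketches
)

# Attach the compressor so messages are compressed before the agent sees them
context_handling = transform_messages.TransformMessages(transforms=[text_compressor])
context_handling.add_to_agent(researcher)
```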

  • Does AutoGen support .NET developers?

    -Yes. AutoGen supports .NET developers, and this update includes multiple .NET-related changes and samples.

  • What integration examples does AutoGen provide?

    -AutoGen provides a variety of integration examples, including the Databricks integration, a crypto transactions agent, and a virtual focus group.

  • What is Databricks' DBRX, and how does it integrate with AutoGen?

    -DBRX is a general-purpose LLM from Databricks that sets a new standard for open LLMs. The integration example covers setting an API token and configuring access through a Databricks host or an Azure Databricks workspace to get basic chat working; a sketch follows.
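
A minimal sketch of pointing AutoGen at a Databricks model-serving endpoint through its OpenAI-compatible API, assuming the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are set; the endpoint path and model name follow the pattern in AutoGen's Databricks example but should be treated as assumptions:

```python
import os
import autogen

llm_config = {
    "config_list": [
        {
            # Databricks exposes an OpenAI-compatible serving endpoint
            "model": "databricks-dbrx-instruct",
            "api_key": os.environ["DATABRICKS_TOKEN"],
            "base_url": os.environ["DATABRICKS_HOST"] + "/serving-endpoints",
        }
    ]
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(name="user_proxy", code_execution_config=False)

# Hello-world: a short round trip through the DBRX endpoint
user_proxy.initiate_chat(assistant, message="Say hello, world!", max_turns=2)
```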

  • How can you use different models or services through the AutoGen framework?

    -By integrating different services and models, such as Databricks' DBRX, AutoGen gives users the choice of which model or service to run. Depending on preference, you can use OpenAI's Assistants API or try other kinds of models.

Outlines

00:00

🔄 Overview of the AutoGen 0.2.28 update

This update introduces AutoGen 0.2.28, which includes several new features and improvements. First is the GPT Assistant Agent, backed by the OpenAI Assistants API, which supports built-in tools such as the code interpreter, file search, and function calling. The release also brings in "threads", which automatically store message history and adapt to the model's context length, and assistants can generate files such as images and spreadsheets. Another highlight is group-chat resumption, which lets users continue an earlier group chat by passing in the messages from the previous conversation. The example shows how to set up the group chat objects and use the resume function to continue an earlier conversation.

05:02

📄 Compressing text with LLMLingua to improve efficiency

Introduces prompt compression with the LLMLingua tool, which can make large language model (LLM) operations more efficient and cost-effective. The example compresses the text of a research paper: applying LLMLingua's text compressor saved nearly 20,000 tokens. This is especially useful for models with smaller context windows, since it helps users keep more information within a limited context. The video also shows how to integrate the text compressor into an AutoGen agent so compression happens automatically, preserving the key information while cutting costs.

10:02

🛠️ Example of integrating AutoGen with Databricks

Discusses AutoGen's integration with Databricks, a platform offering a general-purpose large language model (DBRX) whose open models are available on Hugging Face. The documentation provides examples of integrating Databricks with AutoGen, including setting up an API token and a basic "hello world" example. There is also a simple coding-agent example showing basic use of a Databricks-backed assistant agent. This shows AutoGen expanding its integration capabilities, giving users more options and flexibility.

15:04

🎉 New opportunities from the AutoGen update

Sums up the opportunities this update opens, emphasizing the framework's openness to trying different models and tools. AutoGen is starting to integrate more services, such as Databricks, giving users more choice and flexibility. The video also notes AutoGen's support for .NET developers and the applications and integration examples shared by the community, such as a crypto transactions agent and a virtual focus group. Finally, it encourages viewers to try the updates and links to a beginner AutoGen course to help them get a better grasp of the framework.

Keywords

💡Autogen

AutoGen is a framework for creating and running intelligent agents. It is the core topic of the video, which covers several features and improvements in its latest release, 0.2.28, such as the GPT Assistant Agent backed by the OpenAI Assistants API and group-chat resumption.

💡GPT Assistant Agent

The GPT Assistant Agent is part of the AutoGen update. It uses the OpenAI Assistants API to provide tools such as the code interpreter, file search, and function calling. These tools are built into OpenAI and let agents carry out complex tasks, such as generating files. The video shows how to define these tools through the assistant config.

💡Code interpreter

The code interpreter is a tool mentioned in the update that lets an agent execute and test code. The video lists it as one of the tools the GPT Assistant Agent can use, so users can leverage it to strengthen their automation scripts and agents.

💡File search

File search is another tool introduced in the update; it lets an assistant search and retrieve content from uploaded files. The video notes that this capability can be enabled on the GPT Assistant Agent, improving its ability to process and answer user requests.

💡Function calling

Function calling is a feature in the update that lets users define specific functions and expose them to an agent through an API schema. The video gives an example of defining a get_current_weather function and setting it as the agent's tool in the assistant config, as sketched below.
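
A minimal sketch of the pattern described above, assuming AutoGen 0.2.28; get_current_weather is the hypothetical example function from the video, and its body here is a placeholder:

```python
from autogen.function_utils import get_function_schema

def get_current_weather(location: str) -> str:
    """Return the current weather for a location (placeholder implementation)."""
    return f"The weather in {location} is sunny."

# Build an OpenAI-style schema from the Python function
api_schema = get_function_schema(
    get_current_weather,
    name="get_current_weather",
    description="Get the current weather for a given location.",
)

# Register the schema as a tool in the assistant config
assistant_config = {"tools": [api_schema]}
```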

💡Group-chat resume

Group-chat resumption is a new feature in the update that lets users resume a previously terminated group chat. The video walks through passing the messages from the earlier conversation to the group chat manager's resume function, so users can pick up a prior discussion without starting over.

💡LLMLingua

LLMLingua is the text-compression tool mentioned in the video. It aims to improve the efficiency and cost-effectiveness of large language model (LLM) operations: by compressing prompts, it reduces the number of tokens required, which cuts costs and helps fit prompts into models with smaller context windows.

💡Text compression

Text compression is the capability LLMLingua provides; it optimizes LLM operations by shrinking text. In the video's example, compressing the AutoGen research paper with LLMLingua saved nearly 20,000 tokens, which helps keep more of a conversation within a model's limited context window.

💡Databricks

Databricks is a data engineering and analytics platform, mentioned in the video because of its integration with AutoGen. Databricks offers a general-purpose large language model called DBRX, and AutoGen provides an example of integrating with it, so users can power their AutoGen agents with Databricks models.

💡Integration

Integration is a key concept in the video: combining different systems, services, or models to deliver a more complete solution. For example, the AutoGen-Databricks integration lets users pair Databricks models with AutoGen's features to build more capable agents.

Highlights

AutoGen released version 0.2.28, bringing many changes.

Added the GPT Assistant Agent, backed by the OpenAI Assistants API, with built-in tools such as the code interpreter, file search, and function calling.

Through threads, the assistant agent automatically stores message history and adapts to the model's context length.

The assistant agent can generate files, such as images and spreadsheets.

Showed how to set up a GPT Assistant Agent, including creating the agent and configuring the assistant.

Showed how to define the agent's tools or built-in functionality, such as the code interpreter and file search.

Introduced group-chat resumption, which continues an earlier group chat by passing in the messages from the previous conversation.

The group chat manager gained a resume function that returns the last message and the last agent.

Demonstrated continuing a terminated conversation, including loading the previous messages and calling resume.

Showed how to keep a group chat going past a termination message by removing the termination string.

Introduced text compression with LLMLingua, which improves the efficiency and cost-effectiveness of LLM operations.

Demonstrated compressing the text of the AutoGen research paper with LLMLingua.

Showed how to integrate the text compressor with an AutoGen agent to save tokens and make better use of the context window.

Noted the support for .NET developers, with .NET updates and samples.

Introduced the Databricks integration; Databricks' DBRX is a general-purpose LLM that sets a new standard for open LLMs.

Showed how to set up the Databricks integration with AutoGen, including obtaining an API token, plus a basic example.

Noted the openness of the AutoGen framework, which lets users try different models and integrations.

Finally, mentioned a beginner-oriented AutoGen course to help users understand and get started with AutoGen.

Transcripts

00:00

It's probably been a little over a month since the last update we got from AutoGen, but we finally got one, and there's a lot to it. This is version 0.2.28, and as you can see there are a lot of changes in this update. We're not going to go over all of them, but we'll cover the highlights. For the first update we have the GPT Assistant Agent, which is an agent backed by the OpenAI Assistants API, and we can use multiple tools such as the code interpreter, file search, and function calling; those are built-in tools from OpenAI. Whenever you use this assistant you also get the benefit of what they call threads, which automatically store message history and adjust based on the model's context length, and we can also have agents generate files such as images and spreadsheets. If you visit this page, they have examples of function calling, a code interpreter, and a group chat with the GPT Assistant Agent that you can look at. It's pretty simple to set up: you just create a GPT Assistant Agent, and the only other thing here that looks a little different is that along with the instructions, the name, and the llm_config, there's also an assistant_config. They have it right here; it's not defined yet, but when we scroll down there are different ways you can define it, and it gets added to the GPT Assistant Agent so you can define what tools or built-in functionality you want the assistant to have. For instance, they have an assistant_config here for a code interpreter, and here one set up for file search. Another example, for function calling: you define a function, in this case get_current_weather, and then you have an API schema. You use get_function_schema, give it the actual function you want to use, the name, and a description, and then in the assistant_config, under tools, you just say you want this API schema to be the function for the assistant agent.

01:49

For this next one, I thought this was pretty interesting: resuming a group chat. It may be kind of how you're thinking right now: whenever you end a group chat, normally we set a variable, which I'll show you in just a minute. Whenever you set that variable and terminate the chat, you're done. Well, whenever you want to have that chat again, you can take the group chat you had, basically get the last messages, and then resume it with another group chat. Let's see how they define that and how it works. Like I said, we can resume a previous group chat by passing the messages from that conversation to the group chat manager's resume function. They've added a function to the group chat manager: the resume function returns the last agent in the messages as well as the last message itself, and these can be used to run initiate_chat. The messages passed into the resume function can be a JSON string or a list of dictionary messages. So here's what we want to see: an example of how to actually continue a terminated conversation. We have the basic setup for AutoGen, and then we create the group chat objects. One thing to note here is it says they should have the same names as the original group chat; I'll probably come out with an example as I explore this a little more. They have the typical agents here, about five of them, nothing crazy. Then we have the group chat with the agents (if you've done group chats before, this is nothing new) and then the manager: you call autogen.GroupChatManager and give it the group chat and the llm_config. Nothing new so far. But now we want to load previous messages from a JSON string or a list of dictionaries, and I think what they've done is use the two methods we saw above to produce this whole line here. I put it in a quick formatter: the initial message was to finalize a paper on GPT-4 on arXiv and its potential applications, and then this right here is all of the conversation from all of the agents in the group chat. The user said "agree," and the planner said "great, let's proceed," so I think they terminated the chat there. Now when we come back, the way they set this up is: manager.resume returns two variables, last_agent and last_message. You call resume on the manager, the group chat manager from up here, with messages set to the previous state; here it's a JSON string. Then you say result equals last_agent.initiate_chat, so the last agent that was part of that chat initiates the chat from there. And as you can see here, it says "great" from the planner; that was the last one we just saw. We come back here, the name was the planner: "great, let's proceed with the plan outlined earlier." That's exactly what this is: the previous state is where we ended, or terminated, the chat, and now we're simply resuming it from there. Then they continue, the engineer creates some code, we come down some more, and now all the agents are just continuing this chat, and we have the output.

05:32

It looks like that case was when you just close the chat yourself, because they also have an example of resuming a terminated group chat. That's basically when, at the end, one of the agents says TERMINATE, the user says okay, and it cuts off the conversation; we're done. What this is doing: when they go ahead and initiate the chat again with the last message here, resuming from the previous state, this warning says the last message from that previous group chat meets a termination criterion, meaning the termination string was there. Then this is what we get: the last message was from the digital marketer to the chat manager, and if we go all the way to the end here, yep, there's a TERMINATE, so that's done. This time they're going to remove that message by using the remove_termination_string parameter and then resume: the same manager.resume, except now with this parameter. After that, the digital marketer says the same thing, and if we go all the way to the end, see, it removed it. This time it's the same text as above, but they got rid of the TERMINATE, so now it continues the conversation, because it's still looking for a terminate message, and then the chief marketing officer finally comes in, and they're the ones that actually terminate the conversation. I think that's a pretty interesting update. Let me know what you think the use cases are for being able to resume a group chat, and potentially extending a group chat after it terminated, which is what we just talked about.

07:26

For the next one we have compressing text with LLMLingua, which is a tool designed to compress prompts, effectively enhancing the efficiency and cost-effectiveness of LLM operations. In this first example, they compress the AutoGen research paper using this library's text compression. Let's go through this real quick: we have the arXiv link for the paper, and this extract_text_from_pdf function basically returns the text within the PDF. We come down here: we have the PDF text, we instantiate LLMLingua, and we also instantiate the text compressor, a TextMessageCompressor object wrapping LLMLingua. Then we apply the text compressor to the PDF text we extracted, and we print the logs, which say we've saved almost 20,000 tokens. What this does is save room in the context window. I know that Gemini 1.5 just had, what, a one million token context window, or is it the one with two million? So it's not like the end of last year, when I first started this; I remember that with GPT-3.5 we were working with about 4,000 tokens, and we were constantly trying to figure out how we could fit more tokens inside of that, what we could remember, how to preserve conversations through long-term memory. I understand that's not as much of an issue as it was then, but there are still good models out there that maybe don't have a great deal of context window just yet, so you still want to save tokens if you can, so that you can fit more in your context window. Maybe you need to give it, say, 50,000 tokens of context, but the model you want to use for your use case only holds 32,000; something like this can still really help you out. So that was a quick example. Now, how do we actually integrate this with an AutoGen agent? All of this is the same setup, and now we want to add the context handling to the researcher agent right up here. This "transform messages with the text compressor" is what allows us to use LLMLingua with this agent. We basically want to research this paper and include the important information, so we add the context with the whole PDF and let the text compressor handle all of it. For the result, the user initiates the chat with the researcher right here, and then we print the chat history. It says almost 20,000 tokens were saved, and it still describes the paper and gives the key components. So it's going to help with saving cost on tokens (per million, however much they cost, if you're using OpenAI) and with the context window for whatever model you're using. In this last example they just give you ways you can modify it: they instantiate the TextMessageCompressor, specifically say it's LLMLingua (which I think it is by default), and you can just modify some of the parameters.

10:54

One thing I want to note before I get into my next one, because I'm not going to go over all of these: it really seems like they are supporting you if you're a .NET developer. Look at how many .NET updates there were. These are all the commits, and there's a good many, even though some of them might just be readme sections; they're probably not all code changes, and I haven't gone through them, so I could be wrong about that. Here's one with .NET, "add Ollama sample," so those of you who are wondering if you can get Llama working locally with .NET, here you go: they are working on it and making it happen. And just as a side note, in their gallery they have some other things, and they update this, so if you're curious about coding something, or how something works that other people have worked on, it may give you some insight or inspiration to develop something similar. Just come here; they've updated it. Here's a crypto transactions agent, here's a virtual focus group (an application with a group of agents built with Streamlit, I believe), so check that out. And look, somebody here has created an AutoGen robot; that's pretty cool. So come here and make sure you check this out.

12:09

The last one I'm going to talk about is an integration with Databricks. For those of you who have used LangChain, one of the nice things about LangChain (and they actually just updated their documentation, which is better; it was lacking) is that they have all of these integrations. But things in the AI world are constantly being updated, so some of them were maybe deprecated and not quite working the way their documentation said, which can make it frustrating to use. Still, they do have a lot of ways to make integration simpler. One thing I'm working on, and I actually have a video coming out, is using AutoGen and integrating other services; my next one will be Airtable, which is an online database among other things, and then also a Wikipedia search, but you can do a lot more with it than what LangChain does. In this update we have a Databricks integration. In 2024 they released DBRX (I guess that's how you would spell Databricks in a model name), a general-purpose LLM that sets new standards for open LLMs. They have open-source models on Hugging Face, so you can check those out here. What they have here is examples of integrating Databricks with AutoGen. If we scroll down a little, we have the setup first; for any integration you're going to have to do the setup. They give you examples of different ways you can set it up, with your Azure Databricks workspace or straight from the Databricks host. You have to get an API token, and then you can set that up here, so it'll probably take a couple of minutes. They give you the hello-world example, which is about as basic as you can get with an AutoGen example: we have an assistant agent, a user proxy agent, and then they just initiate the chat. This isn't something groundbreaking, but Databricks has really grown, and I like the idea that AutoGen is starting to integrate more things with itself; it's really going to open up more possibilities. I think one of the things people have trouble with, at least from what I understand from my comment section, is: "if I want to do this with AutoGen, how do I do that?" It can be something as simple as reading a PDF and adding its context into whatever you want to ask the agent about. So being able to introduce other models, or other companies, and use their models in here is amazing. They also have a simple coding agent: they come down here with examples from the Databricks assistant agent, and you can come here and try it out for yourself. So it's pretty cool that they added this, so that there are other integrations we can try with AutoGen.

15:04

I hope you're as excited about this update as I was. It's always nice that more tools and more things are being involved with this AutoGen framework, because not everybody wants to run OpenAI. Maybe somebody wants to try different types of models or just run things locally, or maybe you don't want to run things locally; maybe you like OpenAI's Assistants API and that's what you want to use. Well, now they're integrating more with AutoGen so you can do that. The idea is that this framework is opening up for more people to try what you like. Thank you for watching, and I have a beginner course right here for AutoGen so you can understand it and get a better grasp of what it is before you try these updates. Thank you for watching, and I'll see you in the next video. Bye!


Related Tags
AutoGen update · GPT Assistant · group chat resume · text compression · LLM integration · developer tools · API calling · file generation · efficiency · cost savings · tutorial