The power of AI and Copilot for Azure Databases | BRK171
Summary
TLDR: This video script focuses on how Microsoft's Azure Databases integrate artificial intelligence to shape the future of databases. Shireesh Thota discusses Microsoft's role in bringing intelligence to every corner of the data layer and providing a platform for developers to build intelligent applications. It also covers vector search and the RAG design pattern in services such as Azure Cosmos DB, Azure Database for PostgreSQL, and Azure Database for MySQL, along with new Copilot capabilities.
Takeaways
- 🌐 Microsoft is expanding what AI makes possible for databases and building Azure Databases for this new era.
- 🤖 AI's key role is to improve the functionality and capability of databases, enabling developers to build more advanced AI applications.
- 🔍 Vector search strengthens database search, returning semantically similar responses.
- 📚 The RAG design pattern combines large language models with external data sources to deliver more accurate responses.
- 📈 Azure Cosmos DB integrates AI and databases with vector search capabilities through its NoSQL API.
- 📊 Vector embeddings convert complex information into numerical representations whose similarity can be scored.
- 🛠️ Azure Database for PostgreSQL supports in-database embedding generation, providing lower latency and predictable cost.
- 🔧 The index recommendation feature in Azure Database for PostgreSQL optimizes query performance, automatically suggesting indexes to create or drop.
- 🗣️ Azure's Copilot features simplify database management and development, translating natural language into queries.
- 🛡️ Copilots are built in accordance with responsible AI principles, emphasizing security and privacy.
Q & A
What does a hybrid session mean?
-A hybrid session is one with both online and in-person participants. In this script, there are virtual attendees as well as attendees on site.
What position does Shireesh Thota hold?
-Shireesh Thota is Corporate Vice President for Azure Databases.
What possibilities does AI bring to databases?
-AI brings intelligent capabilities, integration, and trustworthiness to databases, expanding what is possible across every corner of the data layer.
How are Azure Databases adapting to the AI era?
-Azure Databases are adapting to the AI era with features such as performance, simplified data management, and durability, while incorporating new capabilities like vector search and the RAG design pattern.
What is vector search, and what are its benefits?
-Vector search is a way to get similar responses in the realm of semantic search, letting you retrieve relevant information without knowing specific parameters.
What is RAG, and what role does it play?
-RAG is a design pattern that combines the power of a pre-trained large language model with an external data source to provide more accurate, contextual responses.
What kind of database is Azure Cosmos DB?
-Azure Cosmos DB is Microsoft's flagship NoSQL database, providing capabilities for building new AI-integrated applications.
What are vector embeddings, and what role do they play?
-Vector embeddings are a mechanism for converting complex information into numerical representations, enabling many use cases such as RAG patterns, natural language processing, and AI search.
What are the recent updates to Azure Database for PostgreSQL?
-Azure Database for PostgreSQL announced in-database embedding generation, with Microsoft's open source E5 text small model built into the database engine.
What is the Azure AI extension, and what benefits does it offer?
-The Azure AI extension provides a simplified interface for calling a broad range of Azure AI services from inside PostgreSQL, including Azure AI services, Azure Machine Learning, AI Language, and AI Translator.
What are Copilot's capabilities, and what help does it provide?
-Copilot enables dynamic application scaling and intelligent, automated data management, letting developers focus more on the top of the stack.
How does the Copilot in Azure SQL Database work?
-The Copilot in Azure SQL Database operates in two forms, a chat window and natural language querying, assisting users as they ask questions about and operate on their databases.
What capabilities does the Copilot for SQL Server provide?
-The Copilot for SQL Server surfaces information about on-premises SQL Server instances and can detect missing security updates, out-of-support versions, resource utilization problems, wait statistics, and availability group health.
Outlines
🎤 Welcome and Hybrid Session Logistics
Speaker 1 welcomes attendees, explains the hybrid format, and encourages them to scan the QR codes to join the Q&A. The expert meet-up on Level 5 is also mentioned. Shireesh Thota then takes the stage as Corporate Vice President for Azure Databases to talk about the promise of AI and how it is being brought into databases.
🌟 AI Meets Databases: A Vision of the Future
Shireesh Thota explains the impact of AI on databases and how Microsoft is building its platform. He discusses AI-powered database enhancements, simplified developer experiences, and the importance of self-optimizing, self-repairing, self-securing databases. He also touches on the two key applications, vector search and RAG (Retrieval-Augmented Generation), that could change the future of databases.
🔍 Vector Search and RAG: Why They Matter and How They Work
This section explains how vector search and RAG work and how they are integrated into databases. Vector embeddings convert complex information into numerical representations whose similarity can be scored. RAG is a design pattern that combines a pre-trained large language model with external data sources to provide more accurate responses. The section emphasizes how these techniques help build the next generation of applications.
🚀 Azure Cosmos DB: Evolution and New Features
The addition of vector search to Azure Cosmos DB and the introduction of the DiskANN algorithm are announced. DiskANN builds an efficient graph index, delivering high-accuracy, low-latency search. Cosmos DB also supports vector search in its NoSQL API, dramatically reducing the complexity developers face when building AI applications.
📈 Cosmos DB Performance and Scalability
Cosmos DB's performance, scalability, and availability are discussed, along with how the DiskANN algorithm strengthens these characteristics. The flexibility of vector indexing and support for a variety of data types are also covered.
📊 New Features in Azure Database for PostgreSQL
New capabilities coming to Azure Database for PostgreSQL are introduced, with emphasis on in-database embedding generation and the PG Vector extension. Developers can generate embeddings locally, preserving data privacy and security while delivering a high-performance search experience.
🛠️ Index Recommendations and a Simplified Developer Experience
An index recommendation feature is added to Azure Database for PostgreSQL that automatically understands query workloads and suggests indexes to create or drop. The section also covers how the Azure OpenAI service, Azure Machine Learning, and Copilots simplify the developer experience.
🤖 Introducing Copilots and Their Capabilities
Copilot capabilities coming to Azure SQL Database are introduced, including a chat window and natural language query support, enabling database administrators to manage their databases more efficiently.
🔎 Database Troubleshooting and Performance Tuning
Bob Ward demonstrates how Copilots help identify and resolve database performance problems. Copilots leverage the SQL engine's rich telemetry to address a wide range of issues such as blocking chains, CPU pressure, and query optimization.
🎉 Introducing Copilots for SQL Server
Bob Ward announces that Copilots for SQL Server are in development and will use Azure Arc to manage on-premises SQL Server instances, giving database administrators more efficient insights and simplifying SQL Server operations.
🌍 AI and Databases: Summary and Outlook
Shireesh Thota wraps up with the impact of AI on databases and the evolution of Microsoft's Azure Database services, emphasizing that support for Copilots, vector indexing, and the RAG design pattern will play a key role in shaping the future of databases.
Keywords
💡Azure Databases
💡AI
💡Hybrid session
💡Q&A
💡Vector search
💡RAG
💡Azure Cosmos DB
💡Semantic Kernel
💡Azure Database for PostgreSQL
💡Copilot
Highlights
The session opens with an introduction to its hybrid format; attendees can join the Q&A by scanning QR codes.
Shireesh Thota, Corporate Vice President for Azure Databases, welcomes attendees and emphasizes AI's impact on every industry.
Microsoft is committed to building an intelligent, integrated, and trusted platform that supports every corner of the data layer.
Key questions are raised about AI's role in databases, including whether the right features and capabilities are being built for general-purpose AI applications.
An overview of how Azure Databases are built in and tuned for the new AI era.
Discussion of the self-optimizing, self-repairing, and self-securing capabilities that databases of the future will need.
Explanation of the importance of vector search and the RAG design pattern, and how they enhance database capabilities.
Demonstration of how Azure Databases use vector embeddings to power use cases such as natural language processing and AI search.
Introduction of Azure Cosmos DB's vector search capability, including its public preview and use of the DiskANN algorithm.
Discussion of how vector indexing supports data types such as images, audio, and video, going beyond traditional keyword search.
How the RAG design pattern overcomes the limitations of large language models by combining them with databases to deliver more accurate responses.
How Azure Databases integrate into the Semantic Kernel and LangChain frameworks to simplify the developer experience.
Azure Database for PostgreSQL offers in-database embedding generation to improve performance and reduce latency.
Demonstration of building interactive applications with local embeddings in PostgreSQL and improving query performance.
Introduction of automatic index recommendations in Azure Databases to optimize query performance and reduce maintenance burden.
Discussion of how the Azure OpenAI service, Azure Machine Learning, and Copilot simplify the developer experience.
Demonstration of Copilot capabilities in Azure SQL Database, including the chat window and natural language queries.
Bob Ward demonstrates Copilot capabilities for SQL Server, including resource utilization and availability group management.
A summary of all the announcements and innovations around AI integration in Azure Databases, highlighting an exciting new era for databases.
Transcripts
[MUSIC]
SPEAKER 1: Welcome, everybody. We're about
to get our session started.
Hey, Day 3, Build.
Awesome. So real quick just remember,
this is going to be a hybrid session.
We do have people joining us
virtually as well as we have our
attendees here in person.
If you just take a quick moment to scan those QR codes on
your left and right-hand sides
just to go ahead and join in the Q&A.
Go ahead and ask your questions there.
We also have our expert meet-up on Level 5,
so definitely stop by there.
Check out some cool demos and
some cool stuff that's up there too.
Without further ado, let's take it away.
[applause]
SHIREESH THOTA: Hello, everyone.
[applause]
SHIREESH THOTA: Hello and welcome.
I am Shireesh Thota.
I'm Corporate Vice President for Azure Databases.
Thank you-all for starting your day with us today here.
I really hope you had a fantastic Build session so far.
There's still a lot of great sessions. Let's dive in.
There's one concept that has taken center stage of late,
certainly this week, and for very good reasons.
There's no device, no role or function,
no industry that has been untouched by the promise of
AI. This is really important
for all of you and for
all the organizations that we represent.
We at Microsoft have
a very important role here to make sure
that we're building a platform that is ready.
It's going to accompany you for your growth.
We want to make sure that our platforms are intelligent,
are integrated, and trusted,
touch every corner of your data layer.
Now, with the arrival of AI,
we could certainly question the art of possible.
We could question the status quo in terms of what
the Cloud providers are going to offer
you in terms of every aspect of your data,
be it the database, the applications
that you build on top of the databases,
or the analytics that kind of connects it all.
In this session, I'm going to
walk you through our vision of what
the role of AI is going to be in
terms of databases and how
Azure Databases are built-in and tuned for this new era.
Let's dive in. The most important thing
for us is to ask the right questions.
The first one here is,
are we building the right features and
capabilities for you to go build the Gen AI applications?
Features such as performances,
simplified data management,
and all the goodness of databases, durability,
really efficient way of retrieving data,
these are all super important.
We all know that, and there are table stakes.
The database developers do expect that,
but they want more than that.
They want to make sure that their applications
can be seamlessly built,
which essentially means that they need to
have capabilities such as vector search,
RAG design patterns that
are deeply built inside the database.
I'm going to walk through some of
these examples and some of
the services that we're bringing in here.
We'll talk a lot about the first pillar.
The second one, briefly,
this is the goal of simplifying developer experiences.
Databases absolutely should have
all the good features that I talked about,
but they need to become intelligent in themselves.
The database of the future needs to power and empower
you basically to do less so
that you can go focus on what matters the most for you.
You should be spending all your time building
the great applications higher up the stack.
Databases need to become self-optimized.
They need to become self-repairing.
They need to become self-secured.
Let's walk through these two aspects,
two goals that we have here,
starting with the first one, how
do we power the intelligent applications.
Then the question is,
what are the key applications that
the database is going to really go empower?
There's lots of them, but it comes down
to these two primitives, vector search.
Vector search is effectively a way for you to get
similar responses
in the semantic search, semantic realm.
You could be asking questions such as, hey,
give me boots of certain color,
of certain cut, etc.
These are specific predicates,
specific range filters,
and you would get the specific answers.
This is an important part of database querying,
not going away, absolutely not.
We nurture that. We're going to invest in it.
But now with vector search,
you could ask questions such as,
I got these other boots, can you give me something
like this without knowing all these parameters?
It figures out all the parameters itself.
It does a lot more complicated math on top of it.
We'll dive into it in a minute.
RAG, on the other hand,
is this incredible design pattern that combines
the power of a pre-trained large language model like
Chat GPT with that of
an external data source for
enhancing contextually more accurate responses.
Since databases are these sources
for the enterprise and structured data,
they're often invaluable tools in
terms of augmenting your responses from
the large language models and factually ground them.
These two applications together make up
for the most important aspects of the future databases.
Let's look into some of these workhorses
that make up the vector search
here starting with vector embeddings.
Vector embeddings really are the ones that can
empower use cases such as RAG patterns,
natural language processing, agents, AI search, etc.
Put simply, this is
a mechanism for you to go take complex information,
audio, video, images,
etc, and transform them into
numerical representation that then
can be stored and retrieved efficiently.
You can effectively take two pieces
of information and basically ask,
"Hey, just tell me the score of similarity between them."
Because of this, vector embeddings can help you go
build applications such as recommendations,
answers, anomaly detections, search, and many more.
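The similarity scoring described here is commonly implemented as cosine similarity between embedding vectors. A minimal sketch, using toy hand-made vectors rather than real model outputs:

```python
import math

def cosine_similarity(a, b):
    # Similarity score between two embedding vectors: 1.0 means
    # identical direction, near 0.0 means unrelated (orthogonal).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real models emit hundreds of
# dimensions; the values here are invented for illustration).
boots_a = [0.9, 0.1, 0.3, 0.0]
boots_b = [0.8, 0.2, 0.4, 0.1]
toaster = [0.0, 0.9, 0.0, 0.8]

print(cosine_similarity(boots_a, boots_b))  # high: semantically close
print(cosine_similarity(boots_a, toaster))  # low: unrelated items
```

Higher scores mean the items are semantically closer; a vector index makes finding the highest-scoring neighbors fast without comparing against every stored row.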
Vector search then comes on top of vector embeddings
and make it possible for you
to retrieve relevant information,
get the precise search aspect possible.
It helps you build applications that can
get valuable information between many pieces of the data.
The way it does is by connecting the nuances,
understanding the dimensionality between
these two pieces of data through
what we call as vector indexing.
Because of the applicability of this,
it has to serve the needs of various types.
It supports images, audio,
video, graphs, sensors, etc.
It goes much further than the usual keyword search.
It enables your application to run similarity queries so
that you could basically go do more than just the
term-by-term mapping and given all this stuff.
It really helps developers build applications that can
give you natural responses in the natural language.
It can identify patterns.
It can help you detect fraud detection, many such things.
RAG, I touched upon it briefly.
It's, again, a very important design pattern.
Let me spend a few minutes here about RAG.
It basically can help you offset
the limitations of large language models.
The way it does is by helping you customize,
improving the performance without
you having to retrain on new data,
without you having to fine-tune,
which both are time expensive and costly.
You don't need to do all that stuff when you
combine your data with RAG.
It helps you avoid misleading information,
inaccurate responses because it factually
grounds the response in some contextual information.
Let's take an example.
If you're basically going up to
an application and sending a prompt,
the application can directly send
the prompt to the large language model like Chat GPT.
If it does that, most of the times,
if you're asking for some specific information
that may not be available for a large language model,
the response is not going to be that great.
Instead, application can send that prompt
first to a database
where there is contextual information for that prompt.
Now, the database, thankfully,
has vector indexing in it like Azure Managed Databases,
so it can retrieve the right information for you,
send it to the application.
The application can then augment
the prompt that the user gave with
the information that it got from the database.
Now, take this augmented prompt and then send it
to the large language model like Chat GPT.
Your responses are going to be a lot more accurate.
Imagine an employee in
a private company asking questions about their benefits.
Now, if that query directly goes to Chat GPT,
the odds are that Chat GPT has no idea about
your private company's private benefits information.
It's not trained on that. It doesn't know it.
Instead, you build an application that basically
combines the power of your local database,
whether you're using any of Azure Managed Databases,
SQL, Cosmos, Postgres, MySQL,
etc, and it combines that power,
augments the prompt,
and then sends it to the Chat GPT,
you're going to get a lot better answers.
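The augmented-prompt flow above can be sketched in a few lines of Python. This is an illustrative skeleton, not any Azure API: `retrieve_context` stands in for a vector query against the database (replaced here by naive keyword overlap), and the actual call to the large language model is omitted.

```python
def retrieve_context(prompt, documents, top_k=1):
    # Stand-in for a vector search against the database: score
    # documents by naive keyword overlap with the prompt.
    words = set(prompt.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def augment_prompt(prompt, context_docs):
    # Ground the user's question in retrieved enterprise data
    # before sending it to the large language model.
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {prompt}"

docs = [
    "Employees receive 20 vacation days and full dental benefits.",
    "The cafeteria opens at 8am on weekdays.",
]
prompt = "What dental benefits do employees receive?"
augmented = augment_prompt(prompt, retrieve_context(prompt, docs))
print(augmented)  # the benefits document is injected ahead of the question
```

In a real RAG application, the retrieval step would run a vector similarity query against a database with vector indexing, and `augmented` would then be sent to the model.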
All these cool things,
the vector search, vector embeddings, RAG,
etc, they're all going to come together to
build the next generation of applications.
I'm excited to talk about how Azure Databases are
really answering the call
to get these features in your hands.
Earlier this week, I walked you
through our vision of Azure Managed Databases.
If you were there in that session, you have seen this.
We were talking about how
Azure Databases are going to be built in and
tuned for generative AI applications
with hyperscale performances.
We're going to have Copilots to boost your productivity.
Our managed databases are
going to do a lot more on your behalf.
If you have investments and experiences in on-prem,
they need to seamlessly scale to the Cloud.
All that is possible because
of our intelligent databases.
They've got to be integrated because we'll
have a ton of fabric integrations.
You can harness all that power to focus on innovation.
You can trust our platform because we have
unprecedented privacy,
security, sustainability guarantees.
All this will help you go
really power your applications of the future.
Now, here, I'm going to pivot
a little bit and walk you through some of
our core databases and how they're
helping you build these new applications.
Let's start with Cosmos DB.
Azure Cosmos DB is our flagship NoSQL database.
How many of you have used or played with
Cosmos DB so far? Quite a lot.
Well, thank you. In the last month,
we have run our fourth worldwide annual conference.
This is a virtual event where we had
12,000 views across the globe from
growing community of developers who building
their new apps powered with AI and Azure Cosmos DB.
I am beyond thrilled to announce
that vector search capabilities are coming to
Azure Cosmos DB's NoSQL API. We did announce it.
It's in public preview since Tuesday,
and it is possible because we brought in the
state-of-the-art very
powerful algorithms known as DiskANN.
I'm going to dive in a little bit about DiskANN,
so bear with me a little bit technical here.
To start with, what we are doing here is to bring
the transactional data and vector data
together in one simplified data engine,
which is Cosmos DB here.
By doing so, we are massively reducing the complexity
of building intelligent applications.
This is going to be low cost.
It's actually going to be super intuitive for you
because you're not dealing with
lots of different engines.
It's not just about combining the data.
We're actually combining the power of
different engines and index engines and query engines.
For that matter, you could use
Azure Cosmos DB's ability to filter on equality,
ranges, and even spatial filters,
a completely different index, in
conjunction with the vector indexing.
Because you could do
these filters, you get
what we call hybrid queries.
You could absolutely go work
on really large massive datasets.
To make it easier for you,
you probably have different knobs,
different requirements.
We have indexing that is a lot more flexible.
You could index with absolutely exact queries,
like you don't want approximations here,
you want exact, we got that.
If you want to quantize before doing the exact queries,
and I'll touch upon quantization in a minute,
it's effectively a way to compress.
You'll get some really fast answers,
and then you can re-rank again. We got that.
All the way to a very scalable solutions
such as DiskANN so we've got different knobs here.
I highly encourage you to go check it out.
The great thing about all this stuff is
that this is working on
top of everything that you
care and love about Azure Cosmos DB.
Azure Cosmos DB, as you all know,
has industry leading SLAs,
single digit millisecond latencies,
financially backed throughput guarantees
and high availability up to five nines.
Everything that I talked about is working in
conjunction with all the goodness that we have here.
In fact, we're not just
focused on the large scale workloads.
We're also bringing this to
your serverless workloads, as well.
This is all possible because
Microsoft research and Azure Cosmos DB,
and all the Azure managed databases,
we're all collaborating together to bring the state of
the art powerful set of algorithms known as DiskANN.
This is going to give you
some amazing characteristics across,
recall, latency, cost,
robustness to changes, and scalability.
The quick gist of
DiskANN is that it's just building a graph index for you,
where the compressed vectors are in memory,
the full fidelity graph is in SSD.
Hence, a vector search
effectively becomes a graph traversal problem.
The notion of graph index here
brings in a lot of cool advantages
as a slide calls out here.
It has high accuracy because we have what we call
as directionally diverse edges in the graph,
so we can go and find the accurate information.
We, in fact, search for the compressed data in memory,
and then for accuracy,
we re-rank them on the high-fidelity graph on the SSDs.
It has low latency because in the graph,
we store both the short edges as well as
long edges so that we converge faster,
and the number of hops that you need
to get to the answer is much lower,
so it gives you low latency.
Since the compressed data is in memory,
we're not as memory bound as other state of
the art vector indexes that are out there right now.
That makes the cost of
doing vector indexing much cheaper.
Better yet, it's going to do that at
scale because the full index is actually on the disc.
You don't pay and you're not
constrained by the memory challenges
that you typically have
to if you go to the other indexes.
The full index is on disc.
Of course, everything about
Azure Cosmos DB is about elasticity.
Now, vector indexing is
going to go in conjunction with that property.
That's why it's one of the most scalable,
most highly available, high-accuracy,
and low-latency vector index out there.
It's an industry first.
The great news here is that DiskANN is
not new in Microsoft.
We've been basically incorporating
it into many of our products.
Bing, for instance, has been using
it in a family of offerings.
It uses almost 400 billion vectors
with trillions of points in the graph,
and relies on it for its accuracy and latency promises.
It's also used in our M365 applications such as
Outlook, SharePoint, even Teams.
A little bit deep dive into the how part,
there is a property called quantization.
I referred to that a moment ago.
Quantization effectively, think of it
as you have this large vectors.
If you all use OpenAI,
you probably know that it has 1,536
dimensions for any embedding.
That's a large data point. It's almost 6k.
If you put all that data in memory, it's really hard.
It's going to be memory constrained.
What we do here is there's
a property called quantization where it compresses.
It reduces the dimensions into groups,
where each group effectively has
an approximately equal number of points.
Because these compressed data points are in memory,
it's easier and faster to go search them
while making sure that
the full truth is still on SSDs.
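The two-stage idea (search compressed vectors in memory, then re-rank the candidates against the full-fidelity vectors) can be illustrated with a crude scalar quantizer. This is a toy sketch under simplifying assumptions, not the actual DiskANN or Vamana implementation:

```python
def quantize(vec, step=0.5):
    # Crude scalar quantization: snap each dimension to a coarse grid.
    # Real systems use product quantization, but the effect is the
    # same: a smaller, lossy representation that fits in memory.
    return tuple(round(x / step) for x in vec)

def dist(a, b):
    # Squared Euclidean distance.
    return sum((x - y) ** 2 for x, y in zip(a, b))

full_vectors = [[0.1, 0.9], [0.12, 0.88], [0.9, 0.1], [0.5, 0.5]]
compressed = [quantize(v) for v in full_vectors]  # lives "in memory"

def search(query, k=2, candidates=3):
    cq = quantize(query)
    # Stage 1: cheap scan over compressed vectors picks candidates.
    cand = sorted(range(len(compressed)),
                  key=lambda i: dist(compressed[i], cq))[:candidates]
    # Stage 2: re-rank only the candidates against the full-fidelity
    # vectors (the "SSD" copy) for accuracy.
    return sorted(cand, key=lambda i: dist(full_vectors[i], query))[:k]

print(search([0.11, 0.9]))  # → [0, 1]: the two closest vectors
```

The coarse stage keeps the memory footprint small; the re-rank stage restores accuracy by consulting the full vectors only for a handful of candidates.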
There's a family of algorithms working
together here to make the optimal search possible.
Vamana is the biggest one here.
It has a property called building
a relative neighborhood graph
that takes the most relevant edges.
It just takes the most relevant edges for you to go
search instead of just
completely spanning the whole thing.
That makes the search super optimal.
The pruning algorithms over there will help
create an optimal graph in
what is called as a low diameter graph.
If you're interested, we'll deep dive more into this.
There's a lot of literature about DiskANN there.
There's lots of good papers
that research team has published here.
This is truly one of the most important vector indexes
out there, and it has the best recall capability.
Perhaps, this is really my favorite feature of DiskANN.
Often, what happens is that as you keep
inserting and deleting the vector indexes,
the recall regresses significantly.
As you've seen, HNSW in this case
just went down massively
as the insertions and deletions happened.
This is bad because you have to now re-index.
The efficiency of that goes down.
It shows up in your latency,
shows up in the accuracy, etc.
Not with DiskANN.
It continues to have the same recall.
That's because of all the powerful
set algorithms that we brought in here.
That, in essence,
is really what I wanted to present about DiskANN.
It is today with Azure Cosmos DB.
We're going to bring it to all Azure managed databases,
but this is in public preview today.
I highly request you to go play with this.
The next announcement that I want to talk about
with Cosmos DB is
the integration into frameworks such as Semantic Kernel,
in .NET and Python,
and LangChain, in Python and Java.
Semantic Kernel and LangChain are frameworks that
basically help you go build these applications in
conjunction with LLMs.
LangChain is an open source framework that is, of course,
thriving on the community's tools
and the integrations that are out there
in the open source community.
Today, you have it now with
Azure Cosmos DB for NoSQL as well.
Semantic Kernel, on the other hand,
is a lightweight framework that was built by Microsoft.
If you're looking for something simple and efficient,
Semantic Kernel is definitely
worth the consideration here.
Again, vector search,
all the integrations are
going to help you build applications,
super easy. Go check it out.
By the way, all these frameworks are
going to be available for
all the other databases as well.
In fact, LangChain has
integrations to open source databases,
PostgreSQL, MySQL.
SQL DB already has connections to Semantic Kernel.
We're going to bring LangChain as well pretty soon.
This is all about Azure Cosmos DB.
Let me switch gears and talk about another database here.
Azure Database for PostgreSQL.
Now, this is one of
the most popular open source databases out
there, rapidly being adopted.
It is front and center for
application developers because the community
has been investing quite a lot.
It already has a natural way of storing and
retrieving vectors through what
we call the PG Vector extension.
We are contributing quite a lot here.
In fact, if you don't know,
Microsoft does quite a bit here in terms of
our contributions to the open source database.
Thrilled to announce in-database embedding generation
for Azure Database for PostgreSQL.
With this, we are bringing Microsoft's open source
E5 text small model, embedded inside the engine.
You could generate
the embeddings locally within the database.
You don't have to make a call to an external model like
OpenAI or any other model, though you still could.
There are slightly different accuracy-latency tradeoffs.
But since the embeddings are local,
you'd get really great latency.
Single digit millisecond latency
for you to generate these embeddings,
because they're all being generated locally.
It's all packaged inside of
Azure Database for PostgreSQL here,
so you get predictable cost and very high throughput.
Finally, you could keep the data compliant.
You're not moving anything outside the database,
so you can bring in something that requires you to
have private or highly confidential information.
That is announcing today.
If you are aware,
PG has this extension called PG Vector.
I referred to it earlier.
It has a way to store and
retrieve vector indexes locally.
The breaking news is that the local
embeddings that I just mentioned
work in conjunction with this extension.
You could continue to use PG Vector to store
and retrieve vectors while you use
the local embeddings that can
help you generate the embeddings that then can be
stored into the vector type using PG Vector.
We support various indexes.
Thanks to the power of the OSS community here,
we support the inverted file flat index, IVF Flat. This is good
when you have a limited number of data points,
something like, let's say, a million data points.
When the data points are not
changing often and you're not doing too many insertions,
etc, this is a great option.
It has amazing memory and speed characteristics.
It basically partitions the data
into different clusters and then
tries to help you get to the query fast.
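The clustering behavior described here can be sketched as a toy inverted-file (IVF) index. The fixed centroids below are assumptions for illustration; a real IVF Flat index learns its centroids with k-means over the stored vectors:

```python
def sq_dist(a, b):
    # Squared Euclidean distance.
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Toy centroids; a real IVF index learns these from the data.
centroids = [[0.0, 0.0], [1.0, 1.0]]

def build_ivf(vectors):
    # Partition vectors into inverted lists keyed by nearest centroid.
    lists = {i: [] for i in range(len(centroids))}
    for idx, v in enumerate(vectors):
        c = min(range(len(centroids)),
                key=lambda i: sq_dist(centroids[i], v))
        lists[c].append((idx, v))
    return lists

def ivf_search(lists, query, nprobe=1):
    # Probe only the nprobe closest clusters instead of scanning
    # everything; that's where the speed comes from.
    order = sorted(range(len(centroids)),
                   key=lambda i: sq_dist(centroids[i], query))
    cands = [item for c in order[:nprobe] for item in lists[c]]
    return min(cands, key=lambda iv: sq_dist(iv[1], query))[0]

vectors = [[0.1, 0.2], [0.9, 0.8], [0.05, 0.1], [1.1, 0.9]]
index = build_ivf(vectors)
print(ivf_search(index, [1.0, 1.0]))  # → 3: nearest vector in the probed cluster
```

The tradeoff the speaker describes follows directly: if the data changes a lot, the fixed partitions drift away from the data and the index needs rebuilding, which is why IVF Flat suits relatively static datasets.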
On the other hand, we also support HNSW.
Hierarchical Navigable
Small World, it's a mouthful.
The way that this data structure works
is very similar to a skip list,
if you remember skip lists from
the databases world.
It has different layers
where the graph is layered and each layer's
edges are in that layer
as well as across the layers effectively.
If you have more data points,
if you're trying to do more CRUD, more inserts,
deletes, etc, it has
slightly different characteristics than IVF Flat,
but it is also a pretty efficient index.
It's one of the most popular ones out there today,
but they all have some limitations.
As I said, recall is one of them that I mentioned here.
Hence, we are bringing all the power of
DiskANN that I mentioned a while ago
to Azure Database for PostgreSQL.
This is the goodness of basically
betting on Microsoft and Azure
databases because we bring in the best of our
Microsoft research to all our managed databases.
We're going to commit this to PostgreSQL as well.
This is coming before the end of summer.
Just a few more months.
It's coming to Azure Database for PostgreSQL.
We are super committed to making sure that this
comes to MySQL databases as well.
To show all this goodness,
to show how all this works in PostgreSQL,
I'm going to call on the stage Charles Feddersen,
our product leader who's going to
show all this cool work
coming into action. Charles, please, take it away.
[applause]
CHARLES FEDDERSEN: Thank you, Shireesh.
Good morning,
everybody. My name's Charles Feddersen.
Today, I'd like to show you how
the new local embeddings in Postgres enable you to build
incredibly interactive applications for
vector search on Postgres that can then power
those RAG based applications that Shireesh mentioned.
I'm going to start in a basic web app.
This is a travel site, and effectively,
what I'm doing here is I'm
using search as we're all familiar
with from search engines to
find somewhere that I'd like to stay.
I'm going to go and search for somewhere that allows
small dogs because I'm traveling with my pet.
If I go run this search,
it's going to take a little while to
return the 90-odd results.
It's not bad, but it's not
the interactive experience that we'd expect.
Let me show you how it's built
today on PostgreSQL and how you can
now build it using
the local embeddings and
the performance that that provides.
Here I am in my favorite Postgres editor,
PG Admin and I'm going to go
ahead and run a simple select query.
I'm taking four columns,
and this order by clause is really important.
This is where I'm invoking
the Azure AI model that's making a remote call.
There are 384 dimensions,
and this single query comes back in
about 45, 46 milliseconds.
It's not too bad,
but we can do better because we're
running a SQL query where we
expect really fast performance,
but we're making a call to a remote service.
Let's zoom out again. Now I'll show you how
the new local embeddings can work
to make these applications really quick.
Same four columns.
The subtle difference is the Azure
Local AI call in the order by clause.
I'll go ahead and run this.
What's happening is that the "allows
small dogs" text that I put in
my search is being converted
in the database to an embedding that I can then use to do
the comparison against all of the data in my database.
Like Shireesh mentioned, this is where we're
using your enterprise data.
Think about all the applications you
run that have got embeddings created
for them in each row to go do
a similarity search using vector comparisons.
But to do that, I had to create the vector
first for the search that I put in.
You need to do this for every
different search that you run.
That's exactly what's happening here,
and now that query is down to about four milliseconds.
But we really want to test this at scale.
Let's go and run now my remote embeddings in
the left hand side and
the local embeddings in
the right hand side, as you can see here.
What I'll do is we'll kick this off on
the left hand side to run a series of transactions.
Because we're making these calls out to a remote service,
we're only able to sustain
around about 15-20 transactions
per second out of the database.
The right hand side is already finished.
We asked it to run 800 queries,
and it ran all of them in
a little over three seconds with
a sustained rate of about 242 transactions per second.
That's the performance of running locally in Postgres.
The left one ultimately finished,
and you can see the average latencies there.
Now, the important thing about this performance
is the data is always changing as well.
If the data that I'm
changing is based on the embedding I've created,
I need to recreate the embedding as well.
Here's a standard,
simple SQL statement that's going to run
an update for where I may have added
new product reviews or
changed the description of a product text.
I can create those five embeddings
faster than I could create a single remote embedding.
Now, remote embeddings are no less
powerful despite the latency;
they provide greater flexibility for all of
the models that are available in Azure AI services.
But as Shireesh mentioned at Build,
we're now shipping Microsoft's E5 model
locally in Postgres out of the box,
and it just works for running SQL queries like this.
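The in-database embedding refresh Charles describes might look like the following. This is a hedged sketch following the documented shape of the azure_local_ai extension, which packages Microsoft's E5 small model in the engine; the table, column, and row IDs are illustrative assumptions.

```sql
-- Sketch: regenerate embeddings in-database for rows whose text changed,
-- using the locally hosted E5 model (no remote service call).
UPDATE products
SET embedding = azure_local_ai.create_embeddings(
        'multilingual-e5-small:v1',   -- E5 model shipped inside the engine
        description)::vector
WHERE id IN (101, 102, 103, 104, 105);  -- the five changed rows, for example
```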
Now, if I come back to my app and we go and run
the small dog search
again, it's effectively instantaneous.
We've localized the embedding creation and comparison,
and now we can go power those LLMs with the context
of our data to give you
a really rich and interactive response.
Thank you, everybody. Back to you Shireesh.
SHIREESH THOTA: Well, thank you, Charles.
I want to move on a little bit
to other announcements here.
We are bringing in index recommendation
for Azure Database for PostgreSQL.
Indexing, of course, super important.
It's really essential for performance.
As time goes by, as
cardinality of your data changes, your schema changes,
etc, it's often really
hard to keep up with the accuracy of the indexing.
We are bringing in support
for automatically understanding all your query
workloads and detecting what needs to
happen to make sure that
your workload has the right indexes.
Sometimes you have to create new indexes,
sometimes you have to drop some indexes.
Obviously, it's a tax for insertions.
We take care of all this stuff,
even tell you which queries are going to be impacted.
We can even forecast
the performance improvements for this.
This is what I wanted to talk about in
terms of powering the intelligent applications.
I want to move on to the next pillar
and talk a little bit about
how we are doing in terms of
simplifying the developer experience.
Here, we got to bring the power of
Azure OpenAI service, Azure Machine Learning,
and Copilots to streamline
the experience for your developers,
to make sure that your developers and
database administrators can focus
more on top of the stack.
Again, they are very comfortable
with databases taking the backstage.
We want the database to be
the humble supporter from
behind and let the app take the center stage.
That is really the promise
of what we are trying to do here.
We have lots of announcements in this direction.
Azure Database for
Postgres's Azure AI extension
is going generally available.
This is something that we've shipped at Ignite.
This enables you to have cool integrations,
integrations with a broad array
of all of the Azure AI services,
Azure OpenAI, Azure Machine Learning,
AI language, AI translator.
There's a consistent, simplified interface,
to be invoked from
within the SQL functions of PostgreSQL.
Because of this, you can build
rich generative AI experiences for
your PostgreSQL workflows and applications.
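The consistent interface described here can be sketched roughly as follows, based on the azure_ai extension's documented functions. The endpoint, key, and deployment names are placeholders, and the table is an illustrative assumption.

```sql
-- Sketch: calling Azure OpenAI from within PostgreSQL via the azure_ai
-- extension. Configure the connection once, then invoke it from SQL.
CREATE EXTENSION IF NOT EXISTS azure_ai;

SELECT azure_ai.set_setting('azure_openai.endpoint',
                            'https://<resource>.openai.azure.com');
SELECT azure_ai.set_setting('azure_openai.subscription_key', '<key>');

-- Generate an embedding for a product description inside a SQL query.
SELECT azure_openai.create_embeddings('<embedding-deployment>', description)
FROM products
LIMIT 1;
```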
Now, let's talk a little bit about Copilots.
Now you've been waiting for Copilots.
There's lots of Copilots here.
We want to bring it all to our databases.
Copilots, we believe,
are going to be very important to make sure that we can
help you build applications that have dynamic scale.
They need to make sure that they have
intelligence built in so that you're productive.
They will enable you to have
great automation to do data management.
All of our databases are going to
have lots of Copilot capabilities,
starting with Azure SQL Database here.
Here, Azure SQL Database is
bringing in the power of Copilots in two form factors.
One is in the chat window.
The second is natural language to query language.
Let's dive in a little bit here.
The chat window.
You could totally ask and receive
helpful contextual information inside
the context of the Azure portal blade.
You could ask questions such as,
hey, give me the size of my SQL database.
Give me the active connections.
In fact, what were
the most resource consuming queries just last week?
You could open up the chat window
in the context of the database that
you're operating in and just
start asking questions such as,
why is my database slow today?
We want the Copilot to be collaborative.
We want it to be working in conjunction with you to make
sure that you could gather
insights and troubleshoot them.
The great news is that this Copilot works in the context
of the database whose portal blade you opened,
and it obviously understands what's happening with
the database and starts enumerating lots
of details for you to go work on.
Great news, it works
within the confines of the permissions that you have.
We obviously take security very seriously.
We don't want unnecessary information
to be exfiltrated, so this is pretty safe.
Natural language to query.
Again, we want you to be able to
interoperate with the SQL database intuitively,
with natural language queries.
We take care of understanding your tables,
schemas, column names,
your primary keys, foreign keys,
all the metadata, and generate highly accurate queries.
You can look at it, you can
edit if you want to, and then execute it.
It's not just for some simple queries.
We're doing it to make sure that we
support really complicated ones,
things such as multi-table joins,
aggregates, pivot tables, CTEs.
All those things are going to be possible because of
all the cool things that we're doing with
the Copilot in SQL Database.
To show more of this, I'm going to call on
the stage a star architect Bob Ward.
Bob, please take it away.
BOB WARD: Thank you, Shireesh.
Folks, some time ago,
our data science team and the SQL team said,
"Bob, we're going to build a Copilot.
You've been around for a while,
what do you think we should do for self help?"
I said, "Hey, for decades,
we have rich telemetry built
into the SQL engine for performance,
for everything you need,
and you can run SQL queries on top of it.
So why don't you start with that?"
They're like, "That sounds pretty cool."
So we spent time taking what we call Bob in the box,
putting inside the Copilot a deep set
of knowledge of how to tackle really complex problems.
The team said to me sometime
ago, "Hey, why don't you take a look at it?"
So I'm going to show you what it looks like.
First of all, what I told the team was,
"What can this thing do?"
I'm in the context of my database,
that's what I love about this whole thing.
I'm in a natural thing in
the portal here in the context of my database,
and I'm asking, what is it
possible that you can do as an actual Copilot?
I'm going to ask this question, what can you actually do?
In the chat window, I'm just going to say,
what things can you do to help me with my database?
I'm actually using a Hyperscale database,
which is a great way
to start a database that can scale up,
and Copilot is going to actually tell
me specific things it can do
to help me in the context of the database I've deployed.
Now, what I love about the top list here is
these two areas:
troubleshooting and diagnostics and performance tuning,
things that are very, very difficult
to do for a lot of customers,
developers, even really expert DBAs.
The team said to me, "What's the litmus test for this?"
I said, "Well, if I can take a
performance problem," which they
didn't tell me what it was,
"and I can ask Copilot something very vague.
This is the promise of GenAI. My database is slow.
Who can do this today in a very, very fast way?
We've got all the telemetry in the engine,
but you've got to know how to go navigate all that."
So Copilot is going in and looking at,
based on your permissions,
things you can look at yourself
that may be difficult to navigate,
and it already knows, "Hey,
you've got a high CPU problem here,
and by the way, it's been going on for a
while," and I'm like, "Okay, well, that's great.
What's the next step?" Well, probably,
I need to figure out what query is involved.
Everybody wants to know what's the query as part of it.
So I can scroll down and say, "Yeah, you know what?
I can actually tell you the query."
Well, this is great, I'm already on
the path to figuring out what the problem is.
What typically happens in
these high CPU scenarios is maybe I'm missing an index.
Instead of asking the Copilot,
it's already prompted me and said,
"Hey, maybe I can help you optimize this query."
I'll just click this prompted button
here already and say,
let's see what you have.
Sure enough, it can go look at inside
the telemetry in what we call
the query store, and say, "You know what?
As it turns out, there is an index you could be using."
It's already told me, "Hey, here's the query.
Here's the actual index you could use,"
solving a problem in minutes that could take hours.
Then the team said like, "Well, what else can you do?"
I said, "Well, there's other types of
performance problems where the database
is slow that are not as obvious."
I'm like, "Okay, well, I'm going to ask
the same question in a different scenario,
'Hey, my database is slow'."
I'm like, "Okay,
I'll wait and see what Copilot has to say."
Now, at this point, I still don't know what's going on.
Maybe it's the same high CPU problem
or maybe it's something I call a waiting problem.
In here, the Copilot has already
determined, "Hey, you know what?
There's not a CPU issue going on."
But as you scroll down,
it said, "You know what I detected though?
I detected a blocking chain."
It's like, "I can even tell you the specific sessions in
SQL that are causing the blocking
and the query that is part of this."
Now, I know performance
troubleshooting a little bit and I'm testing the system,
and I'm like, "Well,
I got the blocking chain."
Maybe in the scenario,
which is very common for applications,
it's got a transaction open.
It said session 110 was my problem.
Again, I'm having a conversation with
this Copilot based on my telemetry saying,
"Hey, is session 110,
does it have an open transaction?"
I asked this question, Copilot can go look at
again this telemetry based on what's in SQL already,
and say, "As it turns out,
there is a session that has an open transaction."
I want to go deeper, even what Copilot is telling me.
I'm going to use what I call my perf mod in the Cloud,
Database Watcher, something
that's in public preview today.
It's storing data in a Kusto database,
which I can get deep information in
dashboards or even in the Microsoft fabric.
I'm using the actual dashboard scenario
here going, "Hey, why don't you tell me over the last 15 minutes
what does the activity look like, and what's blocking?"
Look at the deep information I already have from Watcher.
I can see in here now the blocking chain,
I can see the lead blocker,
I can scroll over to see what is being blocked,
and then I can scroll to the right and get
more deep information like, as it turns out,
there is open transactions here I need to deal with
or even other telemetry like, what's the program name?
Who's the user involved?
This is really powerful stuff. Here's the thing.
It's happening right now,
but maybe what happened in the past.
Is this a pattern that's going on?
Watcher stores historical information
even for blocking problems.
I'm going to go up here and say, "You know what?
What's happening over the last,
say, I don't know, four hours?"
I'll go historical mode,
pick the last four-hour window, and see what I get.
Look at this. It said,
there wasn't a blocking chain just now.
There was a blocking chain several hours ago.
I can then go into the graph, historically,
drill into it, and see
the same blocking problems occurring.
I'm telling you right now,
this is a problem that no one can solve today.
Usually, they have a blocking problem
that's happening right now. Yeah, there we go, thank you.
Not me, it's the team that built it.
Because normally people have a blocking problem,
but they don't know what happened in the past.
The combination of Copilot and Watcher gave that to me.
This is a fun one, and it's a fun database name.
The team said like, "Okay, we're
going to stump you on this one, Bob.
We're not going to tell you even what's going on here.
Don't say slow." I'm like,
"All right, can I detect there's any problems with this thing?"
Copilot, just look and see,
is there anything you notice about the scenario?
So Copilot is going to go in and use
the same pattern of
looking at running versus waiting issues.
Again, this is something that I would do today if
somebody asked me about a problem with my database.
It turns out there is a CPU problem here.
I'm like, "Okay, team,
nice try trying to stump me.
It's the same thing before.
It's probably missing an index or something like that."
I'm looking down here and it's looking
like, "Okay, no problem."
But it's telling me the query,
I know the query that's going on.
I'm like, "Okay, you know what I love?
I can click this button and see what it's doing."
Now I've got insights to what the Copilot is doing,
so I know behind the scenes based on my permissions
using the deep telemetry what is it
doing to try to go and solve these problems.
Any Copilot answer has the ability to
see the insights of what it's doing behind the scenes.
Okay, fine, try to go optimize my query for me.
I'll click this option like I did before,
and I'm like, "Okay, it's got to be probably
a missing index again."
There's something called an anti-pattern query,
that's probably what the problem is.
I'm looking down a couple and it's like, as it turns out,
there are no missing indexes in this query,
and I've detected and looked at your queries.
There are no anti-pattern problems here.
So I'm like, "Maybe they have stumped
me." You know what, though?
Maybe it's tuned,
but I'm hitting a resource limit in Azure.
So I'm going to go like, "Hey, Copilot,
do I have any resource limit issues going on?"
We'll ask this question, again,
in a very natural language way.
The telemetry is all there to tell us this.
Copilot is like, "You know what? I'll take
a look at your question and look to
see what your current CPU configuration is,
and what the CPU usage should look like based on that."
Sure enough, it's saying, "You know what?
You're tuned, you're just hitting a limit."
This is incredible. To be able to go in and do this
today takes an incredible amount of time.
I've shown you three scenarios
where something that takes hours or even a support
call to Microsoft can be solved in
minutes using the built-in rich telemetry of SQL.
We talked about natural language to SQL. You know what?
I've been writing SQL queries for a while,
but one thing that's difficult for some people writing
SQL is to do hierarchy navigation
of your relational database.
I went into Copilot,
in the query editor, and said,
"Can you give me a hierarchy of
product levels in my database?"
Now, here's what we do in natural language.
You pick your tables,
the schema that we're going to feed
the model. I'll pick all of them.
We know the names of the columns, the tables,
and even the key relationships to build joins for you.
Here, I'm going to accept what Copilot is doing.
Here, inside the editor,
it's given me comments to show
me what did it actually try to generate.
I can scroll down and look and see,
it's using what's called a
recursive common table expression.
Even some of the deepest SQL experts in the world
don't know how to write queries like that.
Now, with the Copilot,
I have generated a query that would be very
complex to write by hand, in an accurate and performant way.
When it's done, I can finish it and run it.
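The recursive common table expression Bob describes might look something like this hedged T-SQL sketch. The ProductCategory table and its columns are assumptions for illustration, not the demo's actual schema.

```sql
-- Sketch: walk a product category hierarchy with a recursive CTE.
WITH CategoryTree AS (
    -- Anchor member: top-level categories have no parent.
    SELECT CategoryId, Name, ParentCategoryId, 0 AS Level
    FROM ProductCategory
    WHERE ParentCategoryId IS NULL

    UNION ALL

    -- Recursive member: descend one level of the hierarchy per iteration.
    SELECT c.CategoryId, c.Name, c.ParentCategoryId, t.Level + 1
    FROM ProductCategory AS c
    JOIN CategoryTree AS t
        ON c.ParentCategoryId = t.CategoryId
)
SELECT CategoryId, Name, Level
FROM CategoryTree
ORDER BY Level, Name;
```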
What an amazing system.
Now, even people that don't know
complex SQL queries can use Copilot to do that.
Shireesh, what do you think, man?
[applause]
SHIREESH THOTA: It's super cool, Bob.
We didn't manage to stump you.
How many years have you been in SQL business?
BOB WARD: 30, almost 31 years.
SHIREESH THOTA: Thirty one years and you still
can't write recursive CTEs.
BOB WARD: That was rough.
SHIREESH THOTA: Well,
everybody needs Copilot.
Even Bob needs Copilot.
But thank you, Bob. I want to
call out a few other announcements here.
Obviously, you probably have seen
the Azure Cosmos DB Copilots; they've been in public preview.
Lots of great usages.
We're going to refresh them to
get a lot more other capabilities.
One thing that I wanted to just call
out about the Copilots, is that,
we build all our Copilots
with our stringent principles
of responsible AI at Microsoft.
We want to make sure that we are not
just giving you accurate information.
We want to ensure that they are coming in with
the right security guard rails with the privacy
and all the security assurances that Microsoft has.
Of course, the SQL Copilot that Bob showed,
the Cosmos DB one, and everything that we build
are all built with those principles in mind.
We want to make sure that with Azure Cosmos DB,
we can optimize the queries really, really well,
reduce your latencies, make sure
that you have the right configurations.
Then the natural language to query language,
Azure Cosmos DB also has one.
It's available inside the data explorer.
It's free, by the way. You should really use it.
It's super easy to intuitively
operate with natural language and get to NoSQL queries,
just like the TSQL queries that Bob
showed. These are all available.
We are looking to refresh them to
move into the Copilot in Azure,
so we are trying to streamline and
simplify the gazillion Copilots that we have today.
We're also announcing Copilot capabilities
in Azure Database for MySQL, and this is cool.
We basically have an ability for you
to interoperate with the public documentation.
It can be daunting sometimes to just look at
the documentation and troubleshoot issues.
This will enable your application developer to
understand the content all the way from
Microsoft's Learn content,
best practices, etc,
and troubleshoot issues easily.
This is all about the Copilots that we have here.
I want to just quickly
summarize everything that we are announcing here.
It's quite a lot here. We have Azure Database for
PostgreSQL's AI extension.
Bob, I need to finish.
BOB WARD: I forgot to show something.
We built
a Copilot for SQL Server.
SHIREESH THOTA: Are you trying to stump me?
BOB WARD: I'm trying to stump you.
We forgot to tell you we built a Copilot for SQL Server.
SHIREESH THOTA: Really? I know
that, but you're going to show it now.
BOB WARD: I'm going to show now.
Can I do it? I just want to show.
You were asking for this. So yeah, here's a first look.
SHIREESH THOTA: Well, I'm going to hang in and see it.
BOB WARD: Here's a first look at a Copilot for SQL Server.
Shireesh, I'm connected with Azure Arc for SQL.
I've got a bunch of SQL server on
premise instances connected with Azure Arc.
I want Copilot to help me get
rich information about my instances,
so I just ask it, what's
the inventory of instances you have?
Now, I don't want to just get the list of SQL servers.
I want to give me insights,
the promise of GenAI on actions to take.
Here's all my SQL servers, additions, versions,
etc. But what's interesting about them?
Well, number 1, one of these instances,
Shireesh, is running SQL 16,
but is missing a security update.
It already knows, based on how I'm
configured, that you need to take some action.
The second one's 2014,
it's running out of support.
Now I've got insights here.
Hey, you may actually do something about
what's happening with this instance. That's interesting.
What about resource utilization?
Again, this is running on premises.
But using the telemetry of Azure Arc,
I go to Copilot and say,
what does my resource utilization look
like for this SQL Server on premise here?
Copilot can go look at
the basic telemetry and say, "Well,
as it turns out, you don't really
have a lot of CPU utilization going on here."
Well, this doesn't make any sense to me.
I'm like, "This is a very active SQL Server.
I think it should have something happening in the system."
Remember, before I showed you about performance
about my database is slow.
I wonder if there's any waiting problems.
A lot of SQL experts, what they do,
is they go in and say, what does my wait stats look like?
I go to Copilot and say, "Hey,
do I have any slow wait stats going on in this instance?"
Again, this is a SQL Server on premises.
But I'm using Arc with Copilot to look at it.
It's like, it turns out, you do.
It's giving me a list of
waits that are happening in the system.
Any SQL person, when they see
LCK, knows that's a blocking problem.
Now I know, with telemetry in a very,
very quick amount of time how to go attack a problem.
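The wait-stats check Bob describes doing by hand goes against a real SQL Server DMV, sys.dm_os_wait_stats. The filtering below is a common hand-rolled sketch, not what Copilot runs.

```sql
-- Sketch: the top waits an expert would inspect manually.
-- LCK_* wait types near the top indicate blocking, as Bob notes.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT LIKE 'SLEEP%'   -- trim some benign idle waits
ORDER BY wait_time_ms DESC;
```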
The final scenario, again,
just a first look here, Shireesh, not a preview yet,
it's coming, is about HADR.
When's the last time I've had a log
backup for this database?
Kind of important, right, to my database.
You can go in and just quickly look at telemetry and say,
"Hey, there's a log backup that has been taken here."
That's a long time ago.
You better go do something about it.
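Checking the last log backup by hand uses SQL Server's backup history in msdb; this is the kind of query behind the information Copilot surfaces here, sketched rather than taken from the demo.

```sql
-- Sketch: last transaction log backup per database.
-- In msdb.dbo.backupset, type = 'L' marks log backups.
SELECT database_name,
       MAX(backup_finish_date) AS last_log_backup
FROM msdb.dbo.backupset
WHERE type = 'L'
GROUP BY database_name
ORDER BY last_log_backup;
```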
Finally, availability groups.
They're great technology, but often sometimes hard to
manage and detect the health for them.
I'm going to go to Copilot and say,
"Hey, what does my AG look like?"
I've got an AG for this system.
Again, notice I'm not asking about the health of the AG.
I'm just saying, what do I have as an AG?
Copilot's going to give me
extra insights to say, "It's not synchronized.
It's not healthy." So Shireesh, this is all about making sure we
can help administrators manage their SQL estate
and not just give them information,
but take insights on it.
Sorry to interrupt you, but now there it is.
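The AG health check Bob describes can be done by hand against SQL Server's HADR DMVs; this is a hedged sketch of that query, not Copilot's internals.

```sql
-- Sketch: availability group replica roles and synchronization health.
SELECT ag.name AS ag_name,
       ars.role_desc,
       ars.synchronization_health_desc
FROM sys.availability_groups AS ag
JOIN sys.dm_hadr_availability_replica_states AS ars
    ON ag.group_id = ars.group_id;
```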
SHIREESH THOTA: That's super cool.
One of the things that we see.
[applause]
BOB WARD: More coming.
SHIREESH THOTA: This is the most important question
that I get all the time from the customers.
Like all of you here. When is it coming to SQL Server?
We absolutely love and care about SQL Server.
We're making sure that with GenAI,
we're not leaving SQL Server behind.
This is a great example. Thank you.
BOB WARD: Thank you, sir. Sorry to interrupt you.
SHIREESH THOTA: I lost my thought here a little bit.
But, I just want to thank y'all here.
This is a great summary of everything that we've done
here in terms of Copilots, the vector indexing,
enabling you to go build RAG design patterns
across all our databases. It's a new era.
It's a super exciting time.
On behalf of Microsoft,
on behalf of all my team,
thank y'all for starting your day with us today.
I really hope you walk out as
excited as we are in terms of
bringing AI into Azure managed databases.
There's lots of pointers here;
get started in the skilling labs.
There's still some really good sessions left.
Scandinavian Airlines, for instance,
session 170, they're coming in today to talk about their use case.
I highly recommend this, and the PostgreSQL sessions.
There's a lot of other database sessions.
I hope you enjoy them all.
Thank you so much.
Enjoy the rest of the conference.
Thank you for attending today.
[applause]