Responsible AI practices and the Innovation Challenge Hackathon
Summary
TLDRこのビデオスクリプトでは、マイクロソフトの責任あるAIの基準について詳しく説明しています。AI技術の進歩に伴い生じる新たな課題に対処するために、責任あるAIの重要性が強調されています。スクリプトでは、透明性、公平性、信頼性、プライバシー、包括性、責任などの原則を紹介し、それらを適用することでAIアプリケーションを安全かつ信頼性の高いものにすることができました。さらに、ツールやフレームワークを用いてAIアプリケーションのリスクを緩和する方法も提案されています。最後に、問題が発生した場合の対処策や事前テストの重要性についても触れられています。
Takeaways
- 📢 マイクロソフトの責任あるAI標準についての重要性
- 👨💼 講演者ルチョの役職と彼の日常業務の説明
- 🔍 AI技術の進化に伴う新しいルールと課題
- 🚗 自動運転車の安全性と責任問題の例
- 📝 責任あるAIの原則についての詳細説明
- ⚖️ 公平性:AIが偏見なく公正であることの重要性
- 🔒 プライバシーとセキュリティ:データ保護の必要性
- 🌐 包括性:全ての人々が技術にアクセスできるようにすること
- 📜 透明性:AIの動作や結果を明示することの重要性
- 👥 責任:AIの使用における人間の関与の必要性
- 🛠️ AIモデルのテストとセキュリティ層の導入
- 📊 マイクロソフトのAIスタジオでのモデルテスト方法
- 📈 メタプロンプトの使用によるAIモデルの精度向上
- 🔧 アプリケーションユーザーの教育とUXの向上
- 📑 責任あるAIの実践のためのリソースとツール
Q & A
マイクロソフトが推進する責任あるAIとは何ですか?
-責任あるAIとは、技術革新を通じて生み出された新たなルールやパラダイムを適切に扱い、適切に処理されなければ生じる可能性のある課題を未然に防ぐための標準的なアプローチです。
なぜ責任あるAIが重要なのですか?
-責任あるAIは、技術によって生じるかもしれない偏りや安全性の問題を未然に防ぎ、倫理的な原則に基づいた信頼性の高いAIを構築するために重要です。
Luchoが所属するAzure AIとはどのようなサービスですか?
-Azure AIは、マイクロソフトが提供するクラウドベースのAIサービスで、売り手のサポートや顧客ニーズの理解を通じてAIをビジネスに融合させることを助けるプラットフォームです。
責任あるAIの原則には何が含まれますか?
-責任あるAIの原則には、公正性、信頼性と安全性、プライバシーとセキュリティ、包括性、透明性、アカウンタビリティが含まれます。
透明性とはどのような原則ですか?
-透明性は、AIが何を行うものかをユーザーや関係者に明確に伝え、そのプロセスと結果を理解しやすくする原則です。
AIによるコンテンツフィルタリングとはどのような機能ですか?
-AIによるコンテンツフィルタリングは、適切でないコンテンツや暴力、自己傷害、性的暴力などの疑わしいプロンプトを検出し、適切な応答を提供しない機能です。
メタプロンプトとは何で、どのような役割を果たしますか?
-メタプロンプトは、AIモデルに対してどのように反応や行動をすべきかを教育する一種のガイドラインで、システムの安全性と正確性を高めるのに役立ちます。
ユーザーエクスペリエンスの改善に役立つツールはどこで見つけることができますか?
-ユーザーエクスペリエンスの改善に役立つツールは、Microsoftのhack PlaybookサイトやAI Studioなどにあるほか、マイクロソフトが提供する様々なリソースから見つけることができます。
AIを用いたテストとはどのようなものですか?
-AIを用いたテストとは、Adversary AIと呼ばれる手法で、AI自身を使ってAIアプリケーションをテストし、潜在的な問題を特定し、改善するプロセスです。
何か問題が発生した際の対処法について教えてください。
-問題が発生した際は、事前に計画された対策を実行し、迅速に対応することが重要です。また、AI Studioの評価ツールを用いてアプリケーションをテストし、問題を事前に特定し改善することが推奨されます。
Outlines
🤖 AIの責任基準の重要性
Lucho Lucho氏は、AIの責任基準についてMicrosoftが長い間取り組んできたことを紹介し、その重要性を強調しました。彼は、AI技術の新しい規範や課題について説明し、開発者がアプリケーションを作成する際には、この強力な技術が社会に与える影響を考慮する必要があると述べています。また、透明性報告の重要性と、Microsoftが公開しているベストプラクティスについても言及しました。
📜 AIの原則について
Microsoftが2006年からAIに取り組んできた歴史を説明し、AIの原則として公平性、信頼性と安全性、プライバシーとセキュリティ、包括性、透明性、そして責任について詳しく説明しました。これらの原則がAIの開発においてどのように適用されるべきかを強調し、具体的な例を挙げて各原則の重要性を説明しました。
🔍 テストとツールの活用
AIモデルのセキュリティテストの重要性を強調し、Microsoftが提供するツールやフィルターを使用して、AIモデルが安全かつ責任を持って使用されるようにする方法を紹介しました。特に、Azure OpenAIのコンテンツフィルターや、モデルのテスト機能について詳しく説明し、どのようにしてモデルの選択や運用時のリスクを軽減できるかについて述べました。
🔒 モデルのセキュリティとフィルタリング
AIモデルのセキュリティ層について説明し、特に最新のGPT-4モデルに導入されたコンテンツセーフティフィルターについて詳しく述べました。また、Microsoftが提供するコンテンツフィルタリング機能を利用して、アプリケーションの安全性を確保する方法を説明しました。このフィルタリング機能により、ユーザーはモデルへの問い合わせが適切かどうかを確認し、必要に応じて人間による審査を行うことができます。
🛠️ ユーザーエクスペリエンスの強化
ユーザーに対する教育の重要性を強調し、具体的な例として、Bing Chatの使用方法を紹介しました。ユーザーにAIモデルの動作や情報源を明示することが重要であり、透明性を持たせることで信頼性を高めることができると述べました。また、Microsoftが提供するツールキットを活用して、ユーザーエクスペリエンスを向上させる方法についても説明しました。
📚 テストとトレーニングの重要性
責任あるAIのためのテストとトレーニングの重要性を強調しました。AIを使用してAIをテストする手法として、MicrosoftのAI Studioで提供されるアドバーサリアルAIについて説明しました。これにより、AIモデルがリリース前に安全で信頼性の高いものであることを確認できます。また、テスト結果に基づいてモデルの修正や改善を行うことで、アプリケーションの品質を向上させる方法について述べました。
💡 AIの責任ある活用
責任あるAIの実装の重要性と、そのための具体的な手法について説明しました。ユーザーが安心してAI技術を利用できるようにするためには、適切なガイドラインとツールを使用することが重要であると述べました。さらに、継続的な学習と改善を通じて、AIの安全性と信頼性を維持することの重要性を強調しました。最後に、提供されたリソースとリンクを活用して、さらに学習を深めることを推奨しました。
Mindmap
Keywords
💡責任あるAI
💡公平性
💡信頼性と安全性
💡プライバシーとセキュリティ
💡透明性
💡アカウンタビリティ
💡インクルーシブネス
💡マイクロソフト
💡AIモデル
💡トレーニング
Highlights
AIの責任基準について、Microsoftの長年の取り組みの紹介。
責任あるAIの基準がハックプロジェクトの評価基準に含まれている理由。
AIの新しい技術革新に伴う課題やルールの必要性について。
フェイクニュースや自動運転車のセキュリティ問題などの具体例の紹介。
AIモデルの透明性を確保するためのMicrosoftの報告書の紹介。
AIの責任を問うための新しい規制の必要性について。
AIモデルにおける著作権問題の議論。
責任あるAIの原則としての透明性の重要性。
Microsoftが公開しているAIのベストプラクティスと学習の共有。
責任あるAIの原則に基づくツールやアイデアの紹介。
責任あるAIの原則の詳細な説明とそれぞれの実装例。
フェアネス、信頼性、安全性、プライバシー、セキュリティ、インクルーシブネス、透明性、アカウンタビリティの重要性。
AIシステムのテストとレッドチームによる攻撃シミュレーションの重要性。
AIアプリケーションのユーザー体験を向上させるための教育とガイドライン。
AI Studioで利用可能なテストと評価ツールの紹介。
責任あるAIの実現に向けたモデル選択と評価の重要性。
Microsoftのコンテンツフィルターとその役割。
責任あるAIの実装に向けた具体的な提案と練習方法。
AIモデルのセキュリティ機能とそのテスト方法の紹介。
Transcripts
[Music]
[Music]
hello Innovation Challenge hackers I am
uh here with my colleague Lucho Lucho
tar who is is uh one of our subject
matter experts on AI and uh we're going
to talk about the responsible AI
standard uh what it is why we care about
it share some tools and ideas and again
it's important because well actually
that's our that's our first question
Lucho um aside from the fact that hack
projects are getting judged um un using
the standard what uh why why is the
responsible AI standard Microsoft has
been working on it for a while it's not
new um tell us about it totally first
let me introduce myself what I do in my
day job yes I'm Ms me on the AI it's
it's actually my passion I I really a
technology so I like to do to learn or
even my free space but what I do during
the day I am the goto
Market uh lead for the Americas on the
Azure Ai and what it means is two things
first I help our sellers to sell but
also I I I want to understand what is
the customer need so I spend most of the
time with our customers and with you
guys so if you guys have any question
please uh put in the chat we would like
to hear what are your challenges your
your your um um you know your feedbacks
and so because we always want to improve
this is a new things for everybody so
very very great to be to be here so uh
back to your question why responsible I
and why this is important I mean like
any new uh Innovations or technology
disruption uh there creation of new
rules new paradigm new new capabilities
there really
um might create some challenges if it's
not handled properly uh so you know for
example like fake news uh or
self-driving cars to security this is
something that you know it was
impossible before nobody thought about
this selfcare security about you know uh
who's responsible when you have an
accident like the machine is the The
Constructor is the user so we need to
start thinking differently as some of
the things is happening intentionally
but sometimes it's also unintentionally
so this is so important then when you
guys are building an application think
about uh uh the the consequences uh uh
that this uh this new powerful
technology can create uh or or you know
um Can impact uh the
society that's why important yeah yeah
we so we talked a little bit about this
the um the the transparency report and
you know talking about like who's
responsible should should I go ahead and
this is one of the links in the in the
discussion forum and click through on
this
what was what was this report yeah yeah
yeah yeah that's thank you for for
sharing that because I guess the uh the
other question that I know I hear a lot
from uh from our customers is okay who's
responsible for all this right is a new
space is that Microsoft responsible for
it is the government reality is as I
said this is a new thing and we don't
really have a uh High regulation in this
space like for example we have in
medicine medicine we have is high
regulated not so much in AI space uh so
there are some rules uh you know around
of course data privacy copyright
infringement they are still valid but
think about like copyright
infringements you can register an IP if
it's human created now if a machine
creat something can you register or not
is that considered copyright or not
right the reality is depends how much
human is is part of it so who's created
the model as as you understand it can be
very very
complicated uh and are still you know we
need to learn how how the best to do
that so uh I'm going to share in a
second something called responsible AI
principle so one of those is called
transparency so the very least we can do
at this point is everything we do with
AI we need to make sure we tell the
users or uh you know whoever is working
on this project what the I is meant to
be because we need to be transparent on
those so we can easily or we can do like
reverse engineer can find uh you know
what what's happening otherwise it's
going to be out of control as part of
that transparency principle we publish
uh the the report us just said you just
shared mat before it's a voluntary act
from
Microsoft uh if you go back on the top
just uh yeah this this link over there
exactly so this is available we
basically share our best practices very
openly with everybody there is no secret
here we are all learning and so I
strongly encourage you guys to to have a
look uh uh share our learning Implement
those learning and and really learn this
is a space where we we really want to
learn yeah there's a lot in here this is
this is this is good I'm not I I admit
this is the first time I click through
on the link I need to take some time to
read this oh yeah and we do every year
so we have a year to to next year we're
gonna make another one then we want to
just sort of drill down into the into
the principles a little more like
transparency is nice how do we we learn
more about transparency
yeah yeah yeah yeah yeah so uh again if
you go back uh uh can you go back second
on the six yeah I would like to walk you
through walk this team through this
principles right so as you said
Microsoft it's been a long time that we
have been working on AI I think since
2006 that doesn't look a lot long time
but in AI time it's a long time because
honestly at the beginning was more
experimentation so since then we start
to implement and refine uh this
uh principles that we apply in
everything we do with AI and we also
share this publicly you we're going to
share in a second because we want
everybody to implement because it's as I
mentioned before it's everybody
responsibility about making sure this AI
is going to be safe reliable uh for for
the stuff so let me quickly walk through
each of these principle I mean this is
public uh open to everybody so you can
definitely uh go down but because you
guys are developing an application I
want to make sure that before actually
writing any code using AI you start
thinking about uh this principle uh a
front as I said so the first one is
fairness um so we you want to make sure
that everything you do with your
application you're going to be fair uh
with the audience so you don't want to
have data biased so for example if you
create I don't know an application to uh
to decide if somebody can get a loan or
not you don't want to discriminate one
group versus the other because maybe you
don't have enough data or you have a
biased data so this fairness is
something super super important because
again more and more we see less and less
human as part of the process and so you
want to make sure the the system is
reliable otherwise again it's going to
be a biased project process sorry uh the
second one uh Rel reliability and safety
I mean this is it's easy to understand
again I mentioned the example of of the
autonomous
car so make sure when you know you
develop a system make sure it's reliable
and and and safe it's not just about
testing it's understanding the
consequences and also having a
remediation plan in place because you
know if you start getting cars and gos
whatever they want go fast don't stop at
the stop sign it's going to be a
disaster right so this is this is super
important and everything we do
everything uh we want to do we want to
make sure it's reliable and safety and
safe uh the third one is privacy and
security this is not in you I mean uh
privacy we have low regulations and so
on the thing is uh with AI you can
actually if you trick the model or you
don't you don't have a safe safety
around your model you can actually pull
information very very quickly you know
you can ask Chach PD anything it will
tell you everything he knows right so
you want to make
sure uh heyy I somebody from Venezuela
awesome very International crowd this
day today all right uh so uh privacy uh
it's is super super important so you
need to protect your data but also the
model because otherwise you will access
the data in a in a in a smart way uh the
the next one is inclusiveness this is
very close to our mission mat I know
this is a passion of yours uh is we want
to make sure I mean AI has some natural
capabilities to make technology uh a
available to everybody I think about
just simply like language translation I
mean if somebody doesn't speak English
or any other language you can actually
communicate that was not possible in the
past um but also you know making sure
that people has
like uh they blind or other uh other
other problems so not problem other um
disabilities we can actually argument
those with the technology so when you
build your application making sure you
are inclusive because this is a terrible
mistake we can do and we are not
inclusive and we create a technology
only for a piece of people and then you
know it will create problem later on so
uh inclusiveness is part of our
principle so everything we do we take
always this in
consideration the next one is
transparency as I mentioned before uh
you want to make sure that everything
the machine does you have a way as a
user or as a programmer to know what
what it did so for example if you're use
being chat they using gen models in the
back in the back end they every time it
reports result they always give you
citations they basically tell you oh
this is where it come from so you can
always double check and make and make
correction as needed right and the last
the last but not the least is
accountability so this is not an after
thought I mean none of this is an
afterthought right the responsibility
again I like the example of the
connected car m
uh as I mentioned before if there's an
accident who's responsible for it so you
want to make sure you create a system
and in and you have some human always
involved in that uh and making sure who
who does what and what the remediation
plan you don't want your machine to send
email on your behalf because he can do
terrible mistakes right so you got me
thinking about insurance adjusters um
yeah which is right that's a problem you
know it is that's why actually m this
concept of co-pilot but actually I mean
human is always at the center we we we
control the machine it will help us but
you always want to have human
interaction before you're actually doing
something that to your point it can be
potentially dangerous right yeah yeah
let's let's go ah and um so if we go
through this link I think this gets us
into some of the yes you've got the
standard I'm gonna scroll down real
quick you've got more about the
principles here and then a lot of
interesting information where you can go
even deeper on each principle and then
you get down to the bottom and there's
some good tools but I think we want to
focus on the one up at the top first
right yeah if I may that so this
principle it's it's a great narrative
but what doesn't really mean for you
right so yeah gotta write some code yeah
you at at the end of the day you're
writing code
right so as part of this responsible AI
standard we actually for each of this
principle we created some uh goals and
if you could click on the documents
actually you can see those
goals uh yeah for example you see for
each of the principle they will tell you
uh in
details um uh you know what yes they
will tell you know what what exactly you
want to mean and this can be customized
depending on your on your use
cases uh that's what uh um some of the
the thing you want to answer to your uh
on your
application um we also provide a
template uh I it's somewhere in the site
uh actually I've got let me pull that
one up real quick pull my notes off my
screen oh no worries this is live
so by the way folks if you guys have any
question do you think this is
interesting so let us know it's
like it's that one hold on let me see if
I can find this real quick I'll pull up
my other notes
yeah this is one I think we're looking
for exactly exactly this has the this
was the table I really like stakeholders
benefits harms yeah this is the one
actually you share with me I haven't
seen that before so I think it's kind of
new it was released uh quite kind of
recently so this is awesome again you go
here you do an exercise with your
application it will help you to to avoid
problem uh in the future yeah and the
last piece about this principle is about
tools there are a lot of tools I'm going
to talk in a second that can help you to
uh to really create a safe or
responsible
application do you want should we go
into the tools now do you want to should
I pull up your slides yes yes why not so
uh um let me share my screen oh you have
it already fantastic so um one of the
one of the challenge uh in uh uh in AI
of course is people are using AI uh
sorry people try to trick AI to gather
information right or maybe use AI in a
in a way that is not supposed to be and
so as as I can imagine there can be some
some issues so uh in Microsoft we
created this framework or I would say
mitigation layers that will help you uh
when you build your application to
mitigate those risk I mean you cannot
remove risks I mean like security there
will always be uh hackers and people try
to
um to bypass security but you know with
this framework we also improve and and
try to keep up with with that
so um let's start with uh um all the
options you do have uh on um uh to to
mitigate the risk so starting with the
with the middle uh at the at the at the
center uh at the very low level I would
say you have the model it can be chpt it
can be uh any Azure AI services you know
any anyi is part of the model so the
model itself has some security uh
features so if you try to ask some
question it will stop you right away
right it will not process that it's like
intrinsic so if you look about the
latest GPT 4 they actually introduced
some content safety filter within the
model right so but models are are are
different right not all the models are
equal some models are more advanced and
cuties and models are more advanced in
other part and so on so something that
uh uh we strongly recommend is before
you building your app before you go live
try to test the model so what you can
see here on the screen these are all the
uh all the model that are available uh I
think as the generative AI model we have
1667 as per today available so within
Aur I Studio you can actually do some
test directly directly there so you
don't need to buy the model you just
deploy there you can ask run some query
and see if the model behaves as you
expect uh testing is going to be
fundamental for uh for uh for making
sure application is secure so again
before choosing the model one of the
criteria to choose the model is about
security so please do test the model um
in aure AI studio uh do run the query
run the prompt directly there so you can
see what what what is capable to do do
it or
not the second layer uh uh is
um where is it here so uh this is the
it's outside the model uh it's Microsoft
created another layer uh called uh
content filter and this is super simple
uh concept so when you have your
application here you send your prompt
your request you go to Azure openi uh
there is this filter here uh
called aury ey safety that intercept
every prompt you make and basically what
it does is it will analyze your prompt
and categorize or uh score based on four
criterias like hate uh sexual violence
and self harm so if he if the model
thinks hey uh this is maybe uh something
to do with violence it will stop here so
it will basically uh return an error
that you know you can capture and and
and Define but it will not return the
results to the model more to the to the
application more importantly it does not
even go to the model it will not
actually ask the query to the model
because you don't want to use the model
unless unless you want to because using
the model has a cost so you want to
protect and be safe as you can see here
there is one line here so this is uh
when the model is not sure about
something perhaps I don't know it's
borderline it's like uh it's about
violence or not uh in that case it will
deny the request but it sends this query
this prompt to a to a human and then we
can analyze and perhaps improve the
model now this is the only situation
where Microsoft give put data outside
your tenant so if you have a
confidential information and you don't
want to do that you can disable this
feature right so because our promise is
our dat is your data sorry your data is
your
data your data is your data will not
touch it this is the only exception but
again you can disable so make sure that
when you build your application if
you're working with sensitive data you
don't want to be used for training
disable this feature it's super simple
uh and you can do that one important one
other important feature here
is you can Define the filter level here
because for example if you're riding uh
a game uh especially I don't know Call
of Duty I'm big fan of Call of
Duty uh so maybe violence is not uh uh
is not I mean it's it's part of the game
right so you want to lower the threshold
bi about violence or maybe if you're
writing an Hospital application self har
it might be one of the symptom or one of
the reason why people go into the
hospital so you don't want to filter
those queries right so in some
environment uh it might be uh okay to
reduce but you have full control on how
you want to uh move this uh you know
this trigger to make sure you your
application is
safe all right
uh yes so this is what I meant before
for each of the layer you can select how
strict you want to be uh uh this is
really fully conf configurable and you
you can decide uh what to do my our
recommendation is usually start with the
most safe one and then perhaps try to
lower uh because again uh you want to
start safe and then the more you learn
the more secure it will be all right so
the the next layer is about Mega prompt
and and granted so this is a layer still
in AI Studio that is between your
application and the model right so you
don't need to modify all the time the
application if you want to do some
granting because you can make some easy
changes there that may help you to make
your system more safe and I give you
some principles here uh that we
are uh we we strongly recommend to I
would say educate your
model um so it will it will act safely
so the first part is you want to Define
how the model you want to react or
behave so for example you want to have
what is the tone of the of the model you
want to be you know respectful you want
to be quiet you want to be uh Prof
professional so this is something
important then you you need to teach the
model as well as you need to ground the
model grounding the model means hey you
need to talk only about this product if
somebody asking any other question you
shouldn't answer so your your purpose of
this model is to answer question about
this
model uh the second part is about Define
the output now if somebody stck in the
model you can actually take out files
from your data right if you if you don't
have the proper uh
uh security so defining hey I only
create text I only create Json file or
whatever you want to it will help you to
secure some of the uh some of the attack
um the the next one the third one is
there are some case where are very
complicated some difficult query maybe
somebody tried to cheat the model you
need to tell the model how to react and
behave um uh in this situation right so
for example if if the request is uh you
know not not something you expected just
say hey I'm not responding the model is
not responding to this and so
on and and the last one you want to
create some mitigation so if somebody's
trying to attack to uh or try to access
one file you might decide I don't know
create another API you can disable that
file whatever is needed but definitely
it will give you all the levers uh to
Define um uh your uh uh your protection
here is an example on the chat bot again
depending on the use case these things
might be different I don't know if you
guys want to take a screenshot here this
is real example so you can actually use
this prompt or Mega prompt in azury I
studio in your model so it will make
your application or your chatbot safer I
mean it's just minutes because you just
need to do a cut and paste right if you
go into in Internet uh you definitely uh
find even more than that so uh this is a
good tip that I would give give it to
everybody all
right next one is about uh uh of course
testing is is is critical here right so
we did some analysis and um these four
lines uh it's how much we use meta
prompt so if you don't give any
instructions you see that the the DAT is
67% so the model is very inaccurate it's
less than 50% so but when you start
adding some of the uh information to the
model as you can see at the bottom one
actually the defected is less than 1% so
the model became very very accurate and
this is without the training the model
without doing anything else other than
inform the model how they should behave
and how to to to limit the stuff so it's
responsible ey but also it's about model
accuracy so this is something important
to take in
consideration all right so moving on to
the next layer uh uh this is about your
application user experience so because
this is something new you want to make
sure you
educate uh the user how to use so I mean
any any new things you do like even if
you drive a car they will tell you what
to do right so this is an example or our
uh chat being chat you probably have
been using so far and and there are some
example on you know how we do that so
for example you can ask give you
example uh inform the user about the
risks uh what is the tone they want to
you want the mod model to to respond
because has to do with accuracy uh as
well as in the results you want to tell
this is part of the transparency we
talked before so first of all what the
model is doing and the other one is for
me it's a killer feature is tell where
you get the information from because as
a human I want to control I don't trust
the machine as also mentioned in the
other
part um and and for for the ux for the
interface actually there is a a lot of
tool available um from Microsoft that
can help you to create a toolkit and I
know mat I think you share something you
are
also you created a um an example of
that yeah was the yeah it's one here
I'll go back to sharing my
screen um so this is the hack Playbook
um so this is the hack toolkit site here
go this is what you're just showing I'll
go ahead and put the link in
chat and then we go into the Playbook
this is kind of kind of nice it's sort
of a question and answer thing so let's
say okay I'm going to create a
recommendation system sort of gives me
scenarios for correct operation input
let's say
speech sort of starts to give you what
errors might be sort of things to look
out for will it have a c clear way of
knowing when it should
trigger no it's got to figure it out on
own that kind of presents some other
problems and you can kind of keep going
through and get recommendations and
guidance um you know along that line
which I found be pretty effective in
terms of again getting to like hey let's
write some code what's the functionality
have to be um I think that some of the
this tools that we've got available you
know in addition to checklist and
process and policy um are there there's
a lot of good stuff
coming and the I think the you know it's
at Microsoft it's interesting I think I
first had to take the um access the
responsible AI training the required
training it was probably
2019 was the so it's been you know
almost five years um that we've been
building on this and it's Microsoft
right so we keep we build tools we build
process we build policy um and it's very
practical um and it's been kind of
fascinating to you know a long time ago
um I was a philosophy major and and so
it's been like this is how Microsoft
does ethics wow um and it's uh we do
we're doing a pretty good job I have to
say yeah um I think one last thing we
wanted to touch on I know we're almost
out of time for you Lucha um have a have
a plan for when something goes wrong
right we had the Bing
examples um yeah totally totally so
first of all I want to comment on what
you just said right there are millions
of resources it might be overwhelming
and it is in a good way one thing it's I
think it's important is when you
understand the framework you know how
this thing what we discuss about the
four layers and so on then it's easier
for you to find information because you
know you you identify the principle you
already have some information and then
you identify I don't know which uh which
layer you want to work on and then
finding information or tools it's easier
but you need to have clear what is the
process you need to go through right so
yes let's close on the um and maybe if
there are any question I mean I like to
do this interactive because this is
super important nothing nothing has come
up in the chat yet but if something does
we'll answer it live I'll get back
everyone knows where to find me so all
right including the police right oh I
shouldn't say that okay
sorry all right so um well last thing is
about testing so testing it's U uh as I
said is um uh is crucial right like any
application you want to test it also for
responsible Ai and so on I mean you can
do in two ways so the one is of course
you uh you do test on your own you have
some prompt try to break the model try
to break your application you test the
different layers you test the the safety
the content safety uh you can hire a a
red team red team is somebody who try to
hack real hacker or behave like hacker
try to bre break your Mo your
application you want to do this test
learn Implement changes and so on this
is always valid and
and available however we have ai why
don't we use AI to break AI so Microsoft
actually in AI Studio there are some uh
uh models or um I don't know if that
really models but there are some
evaluation that you can run those on
your application and it will do the job
for you so it's called adversary AI so
it AI try to break AI yeah like
automated testing for regular code you
know ex oh perfect exactly testing so
try this
uh it will it will before you go live
you make uh you will save lot of lot of
uh uh you know uh hustle in for for your
application yeah and for everyone that
works with you exactly exactly because a
I can be you know you can be on the
newspaper quite soon so the last thing I
wanted to say and again if there are
questions more than happy uh is be
confident in I mean hopefully I scare
you enough but I didn't scare you enough
enough be responsible right right so
it's a scar it's a scar M uh subject but
at the same time if you do it right it's
super powerful so if you do it
responsibly I think uh uh you can get
all the benefit uh I think you can
confident
confidently innovate uh using our
platform using this guidance I gave it
to you but again don't think that is
safe by uh By Nature you need to put you
need to do your part so
uh and I think we also have a resource
slide or some links that I think mat is
going to share with you guys so you can
learn more all the links we share uh
will be available to you yeah I'll be
adding those to the discussion board
there's quite a bit to publish up there
so I'll be doing it throughout the day I
think all right L that's all I have Matt
yeah thank you so much for taking the
time um really you know honestly you and
I don't talk enough so this if I've got
a schedule for video it's the way I'll
do it no something we realize is
everybody's is is an expert of
responsible ey I mean even I mean
talking to mat is like we learn from
each other it's like don't think there's
somebody knows everything in this world
so I encourage you guys to do as well so
talk to your peers talk about this topic
it's it's going to
be a good experience for you co thanks
so much Lucha Take Care thank you all
thank you m
[Music]
n
تصفح المزيد من مقاطع الفيديو ذات الصلة
【B6】Copilot for Microsoft 365 で実現する未来の働き方とその準備のポイント
Bioshokさんシリーズ1〜AI脅威論がなぜ最近話題なのか?
3.顧客が品質の良い建築会社を選ぶチェックシート①
【生成AI後の資本主義】天才が経営する社員ゼロ企業が増える/スティーブ・ジョブズと英語を学ぶ/ビジネス芸人が廃れた理由/PIVOTが生成AIを活用するなら【Kaizen Platform 須藤】
Determinism in the AI Tech Stack (LLMs): Temperature, Seeds, and Tools
Fireside Chat: Blockchain x AI
5.0 / 5 (0 votes)