Responsible AI practices and the Innovation Challenge Hackathon

Microsoft DevRadio
30 May 202431:32

Summary

TLDRこのビデオスクリプトでは、マイクロソフトの責任あるAIの基準について詳しく説明しています。AI技術の進歩に伴い生じる新たな課題に対処するために、責任あるAIの重要性が強調されています。スクリプトでは、透明性、公平性、信頼性、プライバシー、包括性、責任などの原則を紹介し、それらを適用することでAIアプリケーションを安全かつ信頼性の高いものにすることができました。さらに、ツールやフレームワークを用いてAIアプリケーションのリスクを緩和する方法も提案されています。最後に、問題が発生した場合の対処策や事前テストの重要性についても触れられています。

Takeaways

  • 📢 マイクロソフトの責任あるAI標準についての重要性
  • 👨‍💼 講演者ルチョの役職と彼の日常業務の説明
  • 🔍 AI技術の進化に伴う新しいルールと課題
  • 🚗 自動運転車の安全性と責任問題の例
  • 📝 責任あるAIの原則についての詳細説明
  • ⚖️ 公平性:AIが偏見なく公正であることの重要性
  • 🔒 プライバシーとセキュリティ:データ保護の必要性
  • 🌐 包括性:全ての人々が技術にアクセスできるようにすること
  • 📜 透明性:AIの動作や結果を明示することの重要性
  • 👥 責任:AIの使用における人間の関与の必要性
  • 🛠️ AIモデルのテストとセキュリティ層の導入
  • 📊 マイクロソフトのAIスタジオでのモデルテスト方法
  • 📈 メタプロンプトの使用によるAIモデルの精度向上
  • 🔧 アプリケーションユーザーの教育とUXの向上
  • 📑 責任あるAIの実践のためのリソースとツール

Q & A

  • マイクロソフトが推進する責任あるAIとは何ですか?

    -責任あるAIとは、技術革新を通じて生み出された新たなルールやパラダイムを適切に扱い、適切に処理されなければ生じる可能性のある課題を未然に防ぐための標準的なアプローチです。

  • なぜ責任あるAIが重要なのですか?

    -責任あるAIは、技術によって生じるかもしれない偏りや安全性の問題を未然に防ぎ、倫理的な原則に基づいた信頼性の高いAIを構築するために重要です。

  • Luchoが所属するAzure AIとはどのようなサービスですか?

    -Azure AIは、マイクロソフトが提供するクラウドベースのAIサービスで、売り手のサポートや顧客ニーズの理解を通じてAIをビジネスに融合させることを助けるプラットフォームです。

  • 責任あるAIの原則には何が含まれますか?

    -責任あるAIの原則には、公正性、信頼性と安全性、プライバシーとセキュリティ、包括性、透明性、アカウンタビリティが含まれます。

  • 透明性とはどのような原則ですか?

    -透明性は、AIが何を行うものかをユーザーや関係者に明確に伝え、そのプロセスと結果を理解しやすくする原則です。

  • AIによるコンテンツフィルタリングとはどのような機能ですか?

    -AIによるコンテンツフィルタリングは、適切でないコンテンツや暴力、自己傷害、性的暴力などの疑わしいプロンプトを検出し、適切な応答を提供しない機能です。

  • メタプロンプトとは何で、どのような役割を果たしますか?

    -メタプロンプトは、AIモデルに対してどのように反応や行動をすべきかを教育する一種のガイドラインで、システムの安全性と正確性を高めるのに役立ちます。

  • ユーザーエクスペリエンスの改善に役立つツールはどこで見つけることができますか?

    -ユーザーエクスペリエンスの改善に役立つツールは、Microsoftのhack PlaybookサイトやAI Studioなどにあるほか、マイクロソフトが提供する様々なリソースから見つけることができます。

  • AIを用いたテストとはどのようなものですか?

    -AIを用いたテストとは、Adversary AIと呼ばれる手法で、AI自身を使ってAIアプリケーションをテストし、潜在的な問題を特定し、改善するプロセスです。

  • 何か問題が発生した際の対処法について教えてください。

    -問題が発生した際は、事前に計画された対策を実行し、迅速に対応することが重要です。また、AI Studioの評価ツールを用いてアプリケーションをテストし、問題を事前に特定し改善することが推奨されます。

Outlines

00:00

🤖 AIの責任基準の重要性

Lucho Lucho氏は、AIの責任基準についてMicrosoftが長い間取り組んできたことを紹介し、その重要性を強調しました。彼は、AI技術の新しい規範や課題について説明し、開発者がアプリケーションを作成する際には、この強力な技術が社会に与える影響を考慮する必要があると述べています。また、透明性報告の重要性と、Microsoftが公開しているベストプラクティスについても言及しました。

05:00

📜 AIの原則について

Microsoftが2006年からAIに取り組んできた歴史を説明し、AIの原則として公平性、信頼性と安全性、プライバシーとセキュリティ、包括性、透明性、そして責任について詳しく説明しました。これらの原則がAIの開発においてどのように適用されるべきかを強調し、具体的な例を挙げて各原則の重要性を説明しました。

10:00

🔍 テストとツールの活用

AIモデルのセキュリティテストの重要性を強調し、Microsoftが提供するツールやフィルターを使用して、AIモデルが安全かつ責任を持って使用されるようにする方法を紹介しました。特に、Azure OpenAIのコンテンツフィルターや、モデルのテスト機能について詳しく説明し、どのようにしてモデルの選択や運用時のリスクを軽減できるかについて述べました。

15:02

🔒 モデルのセキュリティとフィルタリング

AIモデルのセキュリティ層について説明し、特に最新のGPT-4モデルに導入されたコンテンツセーフティフィルターについて詳しく述べました。また、Microsoftが提供するコンテンツフィルタリング機能を利用して、アプリケーションの安全性を確保する方法を説明しました。このフィルタリング機能により、ユーザーはモデルへの問い合わせが適切かどうかを確認し、必要に応じて人間による審査を行うことができます。

20:04

🛠️ ユーザーエクスペリエンスの強化

ユーザーに対する教育の重要性を強調し、具体的な例として、Bing Chatの使用方法を紹介しました。ユーザーにAIモデルの動作や情報源を明示することが重要であり、透明性を持たせることで信頼性を高めることができると述べました。また、Microsoftが提供するツールキットを活用して、ユーザーエクスペリエンスを向上させる方法についても説明しました。

25:04

📚 テストとトレーニングの重要性

責任あるAIのためのテストとトレーニングの重要性を強調しました。AIを使用してAIをテストする手法として、MicrosoftのAI Studioで提供されるアドバーサリアルAIについて説明しました。これにより、AIモデルがリリース前に安全で信頼性の高いものであることを確認できます。また、テスト結果に基づいてモデルの修正や改善を行うことで、アプリケーションの品質を向上させる方法について述べました。

30:06

💡 AIの責任ある活用

責任あるAIの実装の重要性と、そのための具体的な手法について説明しました。ユーザーが安心してAI技術を利用できるようにするためには、適切なガイドラインとツールを使用することが重要であると述べました。さらに、継続的な学習と改善を通じて、AIの安全性と信頼性を維持することの重要性を強調しました。最後に、提供されたリソースとリンクを活用して、さらに学習を深めることを推奨しました。

Mindmap

Keywords

💡責任あるAI

責任あるAIとは、AI技術の開発と使用において倫理的かつ社会的に責任を持つことを意味します。このビデオでは、マイクロソフトが責任あるAI基準を導入し、ハッカソンプロジェクトがこの基準に従うことが求められています。

💡公平性

公平性とは、AIシステムが偏りなく全てのユーザーに対して公正に機能することを指します。ビデオでは、データの偏りを避けるための重要性が強調されており、特にローン申請などのアプリケーションにおいて差別を防ぐことが必要です。

💡信頼性と安全性

信頼性と安全性は、AIシステムが予測可能で安全に動作することを意味します。ビデオでは、自動運転車の例を挙げて、システムが適切に機能しない場合のリスクを説明しています。

💡プライバシーとセキュリティ

プライバシーとセキュリティは、AIシステムがユーザーデータを保護し、不正アクセスを防ぐことを指します。ビデオでは、データの安全性とモデルのセキュリティの重要性が強調されています。

💡透明性

透明性とは、AIシステムの動作や意思決定プロセスが明確であることを意味します。ビデオでは、ユーザーに対してAIの機能や意図を明示することの重要性が強調されています。

💡アカウンタビリティ

アカウンタビリティは、AIシステムの行動や結果に対する責任を持つことを指します。ビデオでは、自動運転車の事故などのシナリオで誰が責任を負うべきかについて議論されています。

💡インクルーシブネス

インクルーシブネスは、AIシステムがすべての人々に対して利用可能であり、特定のグループを排除しないことを意味します。ビデオでは、技術が言語の壁を超えたり、障害を持つ人々を支援する方法について説明されています。

💡マイクロソフト

マイクロソフトは、責任あるAI基準を開発し、その実践を推進している企業です。ビデオでは、マイクロソフトの取り組みとして、AIの透明性報告や責任あるAIの原則の公開が紹介されています。

💡AIモデル

AIモデルは、データをもとに学習し、予測や意思決定を行うアルゴリズムのことを指します。ビデオでは、様々なAIモデルがあり、それぞれのセキュリティと信頼性がテストされるべきだと強調されています。

💡トレーニング

トレーニングは、AIモデルがデータから学習するプロセスを指します。ビデオでは、マイクロソフトが従業員に対して責任あるAIのトレーニングを行っていることが述べられています。

Highlights

AIの責任基準について、Microsoftの長年の取り組みの紹介。

責任あるAIの基準がハックプロジェクトの評価基準に含まれている理由。

AIの新しい技術革新に伴う課題やルールの必要性について。

フェイクニュースや自動運転車のセキュリティ問題などの具体例の紹介。

AIモデルの透明性を確保するためのMicrosoftの報告書の紹介。

AIの責任を問うための新しい規制の必要性について。

AIモデルにおける著作権問題の議論。

責任あるAIの原則としての透明性の重要性。

Microsoftが公開しているAIのベストプラクティスと学習の共有。

責任あるAIの原則に基づくツールやアイデアの紹介。

責任あるAIの原則の詳細な説明とそれぞれの実装例。

フェアネス、信頼性、安全性、プライバシー、セキュリティ、インクルーシブネス、透明性、アカウンタビリティの重要性。

AIシステムのテストとレッドチームによる攻撃シミュレーションの重要性。

AIアプリケーションのユーザー体験を向上させるための教育とガイドライン。

AI Studioで利用可能なテストと評価ツールの紹介。

責任あるAIの実現に向けたモデル選択と評価の重要性。

Microsoftのコンテンツフィルターとその役割。

責任あるAIの実装に向けた具体的な提案と練習方法。

AIモデルのセキュリティ機能とそのテスト方法の紹介。

Transcripts

play00:00

[Music]

play00:08

[Music]

play00:23

hello Innovation Challenge hackers I am

play00:26

uh here with my colleague Lucho Lucho

play00:28

tar who is is uh one of our subject

play00:31

matter experts on AI and uh we're going

play00:35

to talk about the responsible AI

play00:36

standard uh what it is why we care about

play00:39

it share some tools and ideas and again

play00:42

it's important because well actually

play00:44

that's our that's our first question

play00:45

Lucho um aside from the fact that hack

play00:48

projects are getting judged um un using

play00:50

the standard what uh why why is the

play00:53

responsible AI standard Microsoft has

play00:54

been working on it for a while it's not

play00:57

new um tell us about it totally first

play01:00

let me introduce myself what I do in my

play01:02

day job yes I'm Ms me on the AI it's

play01:06

it's actually my passion I I really a

play01:08

technology so I like to do to learn or

play01:10

even my free space but what I do during

play01:12

the day I am the goto

play01:14

Market uh lead for the Americas on the

play01:16

Azure Ai and what it means is two things

play01:19

first I help our sellers to sell but

play01:23

also I I I want to understand what is

play01:25

the customer need so I spend most of the

play01:27

time with our customers and with you

play01:29

guys so if you guys have any question

play01:31

please uh put in the chat we would like

play01:33

to hear what are your challenges your

play01:35

your your um um you know your feedbacks

play01:38

and so because we always want to improve

play01:40

this is a new things for everybody so

play01:42

very very great to be to be here so uh

play01:45

back to your question why responsible I

play01:47

and why this is important I mean like

play01:49

any new uh Innovations or technology

play01:53

disruption uh there creation of new

play01:56

rules new paradigm new new capabilities

play01:59

there really

play02:00

um might create some challenges if it's

play02:02

not handled properly uh so you know for

play02:05

example like fake news uh or

play02:07

self-driving cars to security this is

play02:10

something that you know it was

play02:11

impossible before nobody thought about

play02:12

this selfcare security about you know uh

play02:15

who's responsible when you have an

play02:17

accident like the machine is the The

play02:20

Constructor is the user so we need to

play02:22

start thinking differently as some of

play02:24

the things is happening intentionally

play02:26

but sometimes it's also unintentionally

play02:28

so this is so important then when you

play02:29

guys are building an application think

play02:32

about uh uh the the consequences uh uh

play02:36

that this uh this new powerful

play02:39

technology can create uh or or you know

play02:43

um Can impact uh the

play02:46

society that's why important yeah yeah

play02:48

we so we talked a little bit about this

play02:49

the um the the transparency report and

play02:52

you know talking about like who's

play02:53

responsible should should I go ahead and

play02:55

this is one of the links in the in the

play02:57

discussion forum and click through on

play02:59

this

play03:00

what was what was this report yeah yeah

play03:02

yeah yeah that's thank you for for

play03:04

sharing that because I guess the uh the

play03:07

other question that I know I hear a lot

play03:09

from uh from our customers is okay who's

play03:12

responsible for all this right is a new

play03:14

space is that Microsoft responsible for

play03:16

it is the government reality is as I

play03:19

said this is a new thing and we don't

play03:21

really have a uh High regulation in this

play03:23

space like for example we have in

play03:25

medicine medicine we have is high

play03:27

regulated not so much in AI space uh so

play03:30

there are some rules uh you know around

play03:32

of course data privacy copyright

play03:34

infringement they are still valid but

play03:35

think about like copyright

play03:38

infringements you can register an IP if

play03:40

it's human created now if a machine

play03:43

creat something can you register or not

play03:46

is that considered copyright or not

play03:48

right the reality is depends how much

play03:50

human is is part of it so who's created

play03:53

the model as as you understand it can be

play03:55

very very

play03:56

complicated uh and are still you know we

play03:59

need to learn how how the best to do

play04:01

that so uh I'm going to share in a

play04:04

second something called responsible AI

play04:06

principle so one of those is called

play04:08

transparency so the very least we can do

play04:10

at this point is everything we do with

play04:12

AI we need to make sure we tell the

play04:15

users or uh you know whoever is working

play04:19

on this project what the I is meant to

play04:21

be because we need to be transparent on

play04:24

those so we can easily or we can do like

play04:27

reverse engineer can find uh you know

play04:29

what what's happening otherwise it's

play04:30

going to be out of control as part of

play04:32

that transparency principle we publish

play04:36

uh the the report us just said you just

play04:38

shared mat before it's a voluntary act

play04:41

from

play04:42

Microsoft uh if you go back on the top

play04:45

just uh yeah this this link over there

play04:48

exactly so this is available we

play04:50

basically share our best practices very

play04:52

openly with everybody there is no secret

play04:54

here we are all learning and so I

play04:57

strongly encourage you guys to to have a

play05:00

look uh uh share our learning Implement

play05:03

those learning and and really learn this

play05:05

is a space where we we really want to

play05:08

learn yeah there's a lot in here this is

play05:10

this is this is good I'm not I I admit

play05:13

this is the first time I click through

play05:14

on the link I need to take some time to

play05:15

read this oh yeah and we do every year

play05:19

so we have a year to to next year we're

play05:21

gonna make another one then we want to

play05:23

just sort of drill down into the into

play05:25

the principles a little more like

play05:26

transparency is nice how do we we learn

play05:28

more about transparency

play05:30

yeah yeah yeah yeah yeah so uh again if

play05:32

you go back uh uh can you go back second

play05:37

on the six yeah I would like to walk you

play05:39

through walk this team through this

play05:41

principles right so as you said

play05:43

Microsoft it's been a long time that we

play05:44

have been working on AI I think since

play05:47

2006 that doesn't look a lot long time

play05:50

but in AI time it's a long time because

play05:53

honestly at the beginning was more

play05:55

experimentation so since then we start

play05:57

to implement and refine uh this

play06:00

uh principles that we apply in

play06:02

everything we do with AI and we also

play06:04

share this publicly you we're going to

play06:06

share in a second because we want

play06:08

everybody to implement because it's as I

play06:10

mentioned before it's everybody

play06:11

responsibility about making sure this AI

play06:14

is going to be safe reliable uh for for

play06:17

the stuff so let me quickly walk through

play06:19

each of these principle I mean this is

play06:21

public uh open to everybody so you can

play06:23

definitely uh go down but because you

play06:26

guys are developing an application I

play06:29

want to make sure that before actually

play06:31

writing any code using AI you start

play06:34

thinking about uh this principle uh a

play06:38

front as I said so the first one is

play06:40

fairness um so we you want to make sure

play06:43

that everything you do with your

play06:45

application you're going to be fair uh

play06:48

with the audience so you don't want to

play06:50

have data biased so for example if you

play06:52

create I don't know an application to uh

play06:55

to decide if somebody can get a loan or

play06:58

not you don't want to discriminate one

play07:01

group versus the other because maybe you

play07:03

don't have enough data or you have a

play07:04

biased data so this fairness is

play07:07

something super super important because

play07:09

again more and more we see less and less

play07:11

human as part of the process and so you

play07:13

want to make sure the the system is

play07:16

reliable otherwise again it's going to

play07:17

be a biased project process sorry uh the

play07:21

second one uh Rel reliability and safety

play07:25

I mean this is it's easy to understand

play07:28

again I mentioned the example of of the

play07:29

autonomous

play07:31

car so make sure when you know you

play07:34

develop a system make sure it's reliable

play07:36

and and and safe it's not just about

play07:38

testing it's understanding the

play07:40

consequences and also having a

play07:41

remediation plan in place because you

play07:43

know if you start getting cars and gos

play07:45

whatever they want go fast don't stop at

play07:48

the stop sign it's going to be a

play07:50

disaster right so this is this is super

play07:54

important and everything we do

play07:55

everything uh we want to do we want to

play07:57

make sure it's reliable and safety and

play08:01

safe uh the third one is privacy and

play08:03

security this is not in you I mean uh

play08:06

privacy we have low regulations and so

play08:08

on the thing is uh with AI you can

play08:12

actually if you trick the model or you

play08:15

don't you don't have a safe safety

play08:17

around your model you can actually pull

play08:18

information very very quickly you know

play08:21

you can ask Chach PD anything it will

play08:23

tell you everything he knows right so

play08:25

you want to make

play08:27

sure uh heyy I somebody from Venezuela

play08:30

awesome very International crowd this

play08:32

day today all right uh so uh privacy uh

play08:37

it's is super super important so you

play08:39

need to protect your data but also the

play08:40

model because otherwise you will access

play08:42

the data in a in a in a smart way uh the

play08:46

the next one is inclusiveness this is

play08:48

very close to our mission mat I know

play08:50

this is a passion of yours uh is we want

play08:53

to make sure I mean AI has some natural

play08:56

capabilities to make technology uh a

play08:59

available to everybody I think about

play09:01

just simply like language translation I

play09:03

mean if somebody doesn't speak English

play09:05

or any other language you can actually

play09:07

communicate that was not possible in the

play09:09

past um but also you know making sure

play09:11

that people has

play09:13

like uh they blind or other uh other

play09:17

other problems so not problem other um

play09:22

disabilities we can actually argument

play09:24

those with the technology so when you

play09:26

build your application making sure you

play09:27

are inclusive because this is a terrible

play09:30

mistake we can do and we are not

play09:31

inclusive and we create a technology

play09:33

only for a piece of people and then you

play09:36

know it will create problem later on so

play09:40

uh inclusiveness is part of our

play09:41

principle so everything we do we take

play09:43

always this in

play09:45

consideration the next one is

play09:47

transparency as I mentioned before uh

play09:50

you want to make sure that everything

play09:52

the machine does you have a way as a

play09:54

user or as a programmer to know what

play09:57

what it did so for example if you're use

play10:00

being chat they using gen models in the

play10:03

back in the back end they every time it

play10:06

reports result they always give you

play10:09

citations they basically tell you oh

play10:11

this is where it come from so you can

play10:12

always double check and make and make

play10:15

correction as needed right and the last

play10:19

the last but not the least is

play10:21

accountability so this is not an after

play10:23

thought I mean none of this is an

play10:24

afterthought right the responsibility

play10:26

again I like the example of the

play10:28

connected car m

play10:30

uh as I mentioned before if there's an

play10:32

accident who's responsible for it so you

play10:35

want to make sure you create a system

play10:37

and in and you have some human always

play10:40

involved in that uh and making sure who

play10:43

who does what and what the remediation

play10:45

plan you don't want your machine to send

play10:46

email on your behalf because he can do

play10:48

terrible mistakes right so you got me

play10:51

thinking about insurance adjusters um

play10:53

yeah which is right that's a problem you

play10:57

know it is that's why actually m this

play10:59

concept of co-pilot but actually I mean

play11:02

human is always at the center we we we

play11:04

control the machine it will help us but

play11:07

you always want to have human

play11:09

interaction before you're actually doing

play11:11

something that to your point it can be

play11:13

potentially dangerous right yeah yeah

play11:16

let's let's go ah and um so if we go

play11:18

through this link I think this gets us

play11:20

into some of the yes you've got the

play11:22

standard I'm gonna scroll down real

play11:23

quick you've got more about the

play11:24

principles here and then a lot of

play11:27

interesting information where you can go

play11:28

even deeper on each principle and then

play11:31

you get down to the bottom and there's

play11:32

some good tools but I think we want to

play11:33

focus on the one up at the top first

play11:35

right yeah if I may that so this

play11:38

principle it's it's a great narrative

play11:40

but what doesn't really mean for you

play11:42

right so yeah gotta write some code yeah

play11:45

you at at the end of the day you're

play11:47

writing code

play11:48

right so as part of this responsible AI

play11:52

standard we actually for each of this

play11:54

principle we created some uh goals and

play11:57

if you could click on the documents

play11:58

actually you can see those

play12:01

goals uh yeah for example you see for

play12:03

each of the principle they will tell you

play12:06

uh in

play12:07

details um uh you know what yes they

play12:11

will tell you know what what exactly you

play12:13

want to mean and this can be customized

play12:14

depending on your on your use

play12:17

cases uh that's what uh um some of the

play12:22

the thing you want to answer to your uh

play12:24

on your

play12:25

application um we also provide a

play12:27

template uh I it's somewhere in the site

play12:31

uh actually I've got let me pull that

play12:33

one up real quick pull my notes off my

play12:35

screen oh no worries this is live

play12:39

so by the way folks if you guys have any

play12:42

question do you think this is

play12:44

interesting so let us know it's

play12:46

like it's that one hold on let me see if

play12:48

I can find this real quick I'll pull up

play12:49

my other notes

play12:57

yeah this is one I think we're looking

play12:59

for exactly exactly this has the this

play13:02

was the table I really like stakeholders

play13:05

benefits harms yeah this is the one

play13:08

actually you share with me I haven't

play13:10

seen that before so I think it's kind of

play13:11

new it was released uh quite kind of

play13:14

recently so this is awesome again you go

play13:17

here you do an exercise with your

play13:18

application it will help you to to avoid

play13:21

problem uh in the future yeah and the

play13:24

last piece about this principle is about

play13:26

tools there are a lot of tools I'm going

play13:27

to talk in a second that can help you to

play13:30

uh to really create a safe or

play13:33

responsible

play13:36

application do you want should we go

play13:38

into the tools now do you want to should

play13:39

I pull up your slides yes yes why not so

play13:43

uh um let me share my screen oh you have

play13:46

it already fantastic so um one of the

play13:51

one of the challenge uh in uh uh in AI

play13:55

of course is people are using AI uh

play14:00

sorry people try to trick AI to gather

play14:02

information right or maybe use AI in a

play14:06

in a way that is not supposed to be and

play14:08

so as as I can imagine there can be some

play14:10

some issues so uh in Microsoft we

play14:13

created this framework or I would say

play14:15

mitigation layers that will help you uh

play14:19

when you build your application to

play14:20

mitigate those risk I mean you cannot

play14:22

remove risks I mean like security there

play14:24

will always be uh hackers and people try

play14:28

to

play14:29

um to bypass security but you know with

play14:32

this framework we also improve and and

play14:34

try to keep up with with that

play14:37

so um let's start with uh um all the

play14:41

options you do have uh on um uh to to

play14:46

mitigate the risk so starting with the

play14:48

with the middle uh at the at the at the

play14:50

center uh at the very low level I would

play14:52

say you have the model it can be chpt it

play14:55

can be uh any Azure AI services you know

play14:59

any anyi is part of the model so the

play15:02

model itself has some security uh

play15:04

features so if you try to ask some

play15:07

question it will stop you right away

play15:09

right it will not process that it's like

play15:12

intrinsic so if you look about the

play15:13

latest GPT 4 they actually introduced

play15:17

some content safety filter within the

play15:20

model right so but models are are are

play15:24

different right not all the models are

play15:26

equal some models are more advanced and

play15:28

cuties and models are more advanced in

play15:30

other part and so on so something that

play15:33

uh uh we strongly recommend is before

play15:36

you building your app before you go live

play15:39

try to test the model so what you can

play15:42

see here on the screen these are all the

play15:45

uh all the model that are available uh I

play15:48

think as the generative AI model we have

play15:51

1667 as per today available so within

play15:54

Aur I Studio you can actually do some

play15:57

test directly directly there so you

play16:00

don't need to buy the model you just

play16:02

deploy there you can ask run some query

play16:04

and see if the model behaves as you

play16:06

expect uh testing is going to be

play16:09

fundamental for uh for uh for making

play16:12

sure application is secure so again

play16:14

before choosing the model one of the

play16:17

criteria to choose the model is about

play16:18

security so please do test the model um

play16:22

in aure AI studio uh do run the query

play16:25

run the prompt directly there so you can

play16:27

see what what what is capable to do do

play16:29

it or

play16:30

not the second layer uh uh is

play16:34

um where is it here so uh this is the

play16:39

it's outside the model uh it's Microsoft

play16:41

created another layer uh called uh

play16:44

content filter and this is super simple

play16:48

uh concept so when you have your

play16:50

application here you send your prompt

play16:53

your request you go to Azure openi uh

play16:56

there is this filter here uh

play17:00

called aury ey safety that intercept

play17:02

every prompt you make and basically what

play17:06

it does is it will analyze your prompt

play17:09

and categorize or uh score based on four

play17:13

criterias like hate uh sexual violence

play17:16

and self harm so if he if the model

play17:18

thinks hey uh this is maybe uh something

play17:22

to do with violence it will stop here so

play17:25

it will basically uh return an error

play17:28

that you know you can capture and and

play17:29

and Define but it will not return the

play17:31

results to the model more to the to the

play17:34

application more importantly it does not

play17:37

even go to the model it will not

play17:39

actually ask the query to the model

play17:40

because you don't want to use the model

play17:42

unless unless you want to because using

play17:45

the model has a cost so you want to

play17:47

protect and be safe as you can see here

play17:50

there is one line here so this is uh

play17:54

when the model is not sure about

play17:55

something perhaps I don't know it's

play17:57

borderline it's like uh it's about

play18:00

violence or not uh in that case it will

play18:03

deny the request but it sends this query

play18:06

this prompt to a to a human and then we

play18:09

can analyze and perhaps improve the

play18:11

model now this is the only situation

play18:15

where Microsoft give put data outside

play18:19

your tenant so if you have a

play18:21

confidential information and you don't

play18:23

want to do that you can disable this

play18:25

feature right so because our promise is

play18:28

our dat is your data sorry your data is

play18:31

your

play18:32

data your data is your data will not

play18:34

touch it this is the only exception but

play18:36

again you can disable so make sure that

play18:39

when you build your application if

play18:40

you're working with sensitive data you

play18:42

don't want to be used for training

play18:44

disable this feature it's super simple

play18:46

uh and you can do that one important one

play18:49

other important feature here

play18:51

is you can Define the filter level here

play18:54

because for example if you're riding uh

play18:57

a game uh especially I don't know Call

play18:59

of Duty I'm big fan of Call of

play19:01

Duty uh so maybe violence is not uh uh

play19:06

is not I mean it's it's part of the game

play19:07

right so you want to lower the threshold

play19:10

bi about violence or maybe if you're

play19:13

writing an Hospital application self har

play19:15

it might be one of the symptom or one of

play19:17

the reason why people go into the

play19:19

hospital so you don't want to filter

play19:21

those queries right so in some

play19:23

environment uh it might be uh okay to

play19:27

reduce but you have full control on how

play19:29

you want to uh move this uh you know

play19:32

this trigger to make sure you your

play19:34

application is

play19:37

safe all right

play19:40

uh yes so this is what I meant before

play19:43

for each of the layer you can select how

play19:45

strict you want to be uh uh this is

play19:48

really fully conf configurable and you

play19:52

you can decide uh what to do my our

play19:55

recommendation is usually start with the

play19:57

most safe one and then perhaps try to

play20:00

lower uh because again uh you want to

play20:03

start safe and then the more you learn

play20:05

the more secure it will be all right so

play20:09

the the next layer is about Mega prompt

play20:12

and and granted so this is a layer still

play20:14

in AI Studio that is between your

play20:17

application and the model right so you

play20:19

don't need to modify all the time the

play20:21

application if you want to do some

play20:23

granting because you can make some easy

play20:25

changes there that may help you to make

play20:28

your system more safe and I give you

play20:31

some principles here uh that we

play20:34

are uh we we strongly recommend to I

play20:37

would say educate your

play20:39

model um so it will it will act safely

play20:42

so the first part is you want to Define

play20:45

how the model you want to react or

play20:48

behave so for example you want to have

play20:51

what is the tone of the of the model you

play20:53

want to be you know respectful you want

play20:55

to be quiet you want to be uh Prof

play20:58

professional so this is something

play21:00

important then you you need to teach the

play21:01

model as well as you need to ground the

play21:04

model grounding the model means hey you

play21:07

need to talk only about this product if

play21:09

somebody asking any other question you

play21:11

shouldn't answer so your your purpose of

play21:13

this model is to answer question about

play21:16

this

play21:17

model uh the second part is about Define

play21:20

the output now if somebody stck in the

play21:23

model you can actually take out files

play21:25

from your data right if you if you don't

play21:27

have the proper uh

play21:29

uh security so defining hey I only

play21:31

create text I only create Json file or

play21:34

whatever you want to it will help you to

play21:36

secure some of the uh some of the attack

play21:39

um the the next one the third one is

play21:41

there are some case where are very

play21:43

complicated some difficult query maybe

play21:46

somebody tried to cheat the model you

play21:48

need to tell the model how to react and

play21:50

behave um uh in this situation right so

play21:53

for example if if the request is uh you

play21:57

know not not something you expected just

play22:00

say hey I'm not responding the model is

play22:02

not responding to this and so

play22:04

on and and the last one you want to

play22:07

create some mitigation so if somebody's

play22:08

trying to attack to uh or try to access

play22:11

one file you might decide I don't know

play22:14

create another API you can disable that

play22:16

file whatever is needed but definitely

play22:19

it will give you all the levers uh to

play22:22

Define um uh your uh uh your protection

play22:27

here is an example on the chat bot again

play22:29

depending on the use case these things

play22:31

might be different I don't know if you

play22:33

guys want to take a screenshot here this

play22:34

is real example so you can actually use

play22:37

this prompt or Mega prompt in azury I

play22:41

studio in your model so it will make

play22:43

your application or your chatbot safer I

play22:47

mean it's just minutes because you just

play22:48

need to do a cut and paste right if you

play22:50

go into in Internet uh you definitely uh

play22:54

find even more than that so uh this is a

play22:57

good tip that I would give give it to

play22:59

everybody all

play23:01

right next one is about uh uh of course

play23:06

testing is is is critical here right so

play23:08

we did some analysis and um these four

play23:12

lines uh it's how much we use meta

play23:15

prompt so if you don't give any

play23:17

instructions you see that the the DAT is

play23:20

67% so the model is very inaccurate it's

play23:24

less than 50% so but when you start

play23:27

adding some of the uh information to the

play23:29

model as you can see at the bottom one

play23:31

actually the defected is less than 1% so

play23:33

the model became very very accurate and

play23:35

this is without the training the model

play23:37

without doing anything else other than

play23:40

inform the model how they should behave

play23:42

and how to to to limit the stuff so it's

play23:46

responsible ey but also it's about model

play23:48

accuracy so this is something important

play23:50

to take in

play23:52

consideration all right so moving on to

play23:55

the next layer uh uh this is about your

play23:58

application user experience so because

play24:00

this is something new you want to make

play24:01

sure you

play24:03

educate uh the user how to use so I mean

play24:07

any any new things you do like even if

play24:09

you drive a car they will tell you what

play24:10

to do right so this is an example or our

play24:14

uh chat being chat you probably have

play24:16

been using so far and and there are some

play24:18

example on you know how we do that so

play24:21

for example you can ask give you

play24:24

example uh inform the user about the

play24:27

risks uh what is the tone they want to

play24:30

you want the mod model to to respond

play24:32

because has to do with accuracy uh as

play24:35

well as in the results you want to tell

play24:38

this is part of the transparency we

play24:40

talked before so first of all what the

play24:42

model is doing and the other one is for

play24:45

me it's a killer feature is tell where

play24:47

you get the information from because as

play24:49

a human I want to control I don't trust

play24:52

the machine as also mentioned in the

play24:54

other

play24:54

part um and and for for the ux for the

play24:58

interface actually there is a a lot of

play25:00

tool available um from Microsoft that

play25:03

can help you to create a toolkit and I

play25:06

know mat I think you share something you

play25:08

are

play25:09

also you created a um an example of

play25:15

that yeah was the yeah it's one here

play25:19

I'll go back to sharing my

play25:20

screen um so this is the hack Playbook

play25:24

um so this is the hack toolkit site here

play25:27

go this is what you're just showing I'll

play25:28

go ahead and put the link in

play25:31

chat and then we go into the Playbook

play25:35

this is kind of kind of nice it's sort

play25:38

of a question and answer thing so let's

play25:40

say okay I'm going to create a

play25:41

recommendation system sort of gives me

play25:44

scenarios for correct operation input

play25:46

let's say

play25:47

speech sort of starts to give you what

play25:50

errors might be sort of things to look

play25:51

out for will it have a c clear way of

play25:54

knowing when it should

play25:56

trigger no it's got to figure it out on

play25:58

own that kind of presents some other

play25:59

problems and you can kind of keep going

play26:01

through and get recommendations and

play26:03

guidance um you know along that line

play26:05

which I found be pretty effective in

play26:07

terms of again getting to like hey let's

play26:09

write some code what's the functionality

play26:11

have to be um I think that some of the

play26:14

this tools that we've got available you

play26:16

know in addition to checklist and

play26:18

process and policy um are there there's

play26:22

a lot of good stuff

play26:24

coming and the I think the you know it's

play26:30

at Microsoft it's interesting I think I

play26:31

first had to take the um access the

play26:33

responsible AI training the required

play26:36

training it was probably

play26:38

2019 was the so it's been you know

play26:40

almost five years um that we've been

play26:42

building on this and it's Microsoft

play26:44

right so we keep we build tools we build

play26:47

process we build policy um and it's very

play26:50

practical um and it's been kind of

play26:52

fascinating to you know a long time ago

play26:56

um I was a philosophy major and and so

play26:58

it's been like this is how Microsoft

play27:00

does ethics wow um and it's uh we do

play27:03

we're doing a pretty good job I have to

play27:05

say yeah um I think one last thing we

play27:07

wanted to touch on I know we're almost

play27:08

out of time for you Lucha um have a have

play27:12

a plan for when something goes wrong

play27:13

right we had the Bing

play27:15

examples um yeah totally totally so

play27:18

first of all I want to comment on what

play27:19

you just said right there are millions

play27:21

of resources it might be overwhelming

play27:23

and it is in a good way one thing it's I

play27:26

think it's important is when you

play27:29

understand the framework you know how

play27:31

this thing what we discuss about the

play27:33

four layers and so on then it's easier

play27:35

for you to find information because you

play27:37

know you you identify the principle you

play27:38

already have some information and then

play27:41

you identify I don't know which uh which

play27:44

layer you want to work on and then

play27:45

finding information or tools it's easier

play27:48

but you need to have clear what is the

play27:50

process you need to go through right so

play27:54

yes let's close on the um and maybe if

play27:57

there are any question I mean I like to

play27:59

do this interactive because this is

play28:01

super important nothing nothing has come

play28:03

up in the chat yet but if something does

play28:05

we'll answer it live I'll get back

play28:07

everyone knows where to find me so all

play28:11

right including the police right oh I

play28:13

shouldn't say that okay

play28:16

sorry all right so um well last thing is

play28:19

about testing so testing it's U uh as I

play28:22

said is um uh is crucial right like any

play28:26

application you want to test it also for

play28:28

responsible Ai and so on I mean you can

play28:30

do in two ways so the one is of course

play28:33

you uh you do test on your own you have

play28:35

some prompt try to break the model try

play28:37

to break your application you test the

play28:39

different layers you test the the safety

play28:41

the content safety uh you can hire a a

play28:44

red team red team is somebody who try to

play28:46

hack real hacker or behave like hacker

play28:50

try to bre break your Mo your

play28:51

application you want to do this test

play28:54

learn Implement changes and so on this

play28:55

is always valid and

play28:58

and available however we have ai why

play29:01

don't we use AI to break AI so Microsoft

play29:05

actually in AI Studio there are some uh

play29:07

uh models or um I don't know if that

play29:10

really models but there are some

play29:11

evaluation that you can run those on

play29:14

your application and it will do the job

play29:16

for you so it's called adversary AI so

play29:19

it AI try to break AI yeah like

play29:21

automated testing for regular code you

play29:23

know ex oh perfect exactly testing so

play29:27

try this

play29:28

uh it will it will before you go live

play29:30

you make uh you will save lot of lot of

play29:34

uh uh you know uh hustle in for for your

play29:38

application yeah and for everyone that

play29:40

works with you exactly exactly because a

play29:43

I can be you know you can be on the

play29:45

newspaper quite soon so the last thing I

play29:47

wanted to say and again if there are

play29:49

questions more than happy uh is be

play29:53

confident in I mean hopefully I scare

play29:55

you enough but I didn't scare you enough

play29:58

enough be responsible right right so

play30:01

it's a scar it's a scar M uh subject but

play30:05

at the same time if you do it right it's

play30:07

super powerful so if you do it

play30:09

responsibly I think uh uh you can get

play30:12

all the benefit uh I think you can

play30:15

confident

play30:16

confidently innovate uh using our

play30:19

platform using this guidance I gave it

play30:21

to you but again don't think that is

play30:23

safe by uh By Nature you need to put you

play30:26

need to do your part so

play30:28

uh and I think we also have a resource

play30:31

slide or some links that I think mat is

play30:34

going to share with you guys so you can

play30:36

learn more all the links we share uh

play30:39

will be available to you yeah I'll be

play30:41

adding those to the discussion board

play30:42

there's quite a bit to publish up there

play30:43

so I'll be doing it throughout the day I

play30:46

think all right L that's all I have Matt

play30:50

yeah thank you so much for taking the

play30:52

time um really you know honestly you and

play30:55

I don't talk enough so this if I've got

play30:57

a schedule for video it's the way I'll

play30:59

do it no something we realize is

play31:02

everybody's is is an expert of

play31:04

responsible ey I mean even I mean

play31:06

talking to mat is like we learn from

play31:09

each other it's like don't think there's

play31:11

somebody knows everything in this world

play31:13

so I encourage you guys to do as well so

play31:15

talk to your peers talk about this topic

play31:17

it's it's going to

play31:18

be a good experience for you co thanks

play31:22

so much Lucha Take Care thank you all

play31:24

thank you m

play31:27

[Music]

play31:30

n

Rate This

5.0 / 5 (0 votes)

Related Tags
AI責任倫理透明性プライバシー安全性技術革新顧客ニーズツール規制
Do you need a summary in English?