Legal implications of the Artificial Intelligence Act for cities, regions, and communities 20240619

LivinginEU
19 Jun 2024 · 49:38

Summary

TL;DR: The transcript of a conference session examines in detail the implications of the upcoming EU AI Act, explaining the legal obligations for AI providers and deployers together with the associated risks and controls. It highlights high-risk AI systems, the need for a fundamental rights impact assessment, and compliance with existing regulation such as the General Data Protection Regulation (GDPR). The speakers also discuss the challenges and strategies for responsible AI adoption within regional and local governments.

Takeaways

  • 📋 Social scoring of the kind practised in China is prohibited for the public sector and the private sector alike.
  • 🚦 Biometric categorisation and facial recognition are sensitive technologies; facial recognition by law enforcement is prohibited except in narrow circumstances, and even then it requires prior authorisation by a judge.
  • 🈲 Individual predictive policing and emotion recognition in the workplace are prohibited; predictive policing based on places or time periods remains possible, and emotion recognition is allowed only for medical or safety reasons.
  • 🔏 Untargeted scraping of facial images, whether from the internet or from CCTV, is prohibited; the medical-and-safety exception (such as a camera waking a drowsy driver) applies to emotion recognition, not to scraping.
  • ⚠️ High-risk AI systems are carefully defined and must meet minimum quality criteria to ensure safety and the protection of fundamental rights.
  • 🛠️ High-risk AI systems concern critical infrastructure and areas such as education, employment and essential services, and require a fundamental rights impact assessment.
  • 🏛️ Public authorities have specific obligations, notably the fundamental rights impact assessment, and must be aware of their duties under the AI Act.
  • 📈 The AI Act introduces a new regulatory regime for very large general-purpose AI systems with potential systemic risk, with a centralised AI Office for enforcement.
  • 🔄 The AI Act enters into force progressively, with different deadlines for the prohibitions, the high-risk systems and the physical (product-embedded) high-risk applications.
  • 🤝 The AI Pact was created to encourage companies to commit to applying the AI Act's rules before they formally enter into application, a proactive approach to AI regulation.

Q & A

  • What is social scoring and why is it prohibited?

    -Social scoring is a practice originating in China where individuals start the year with a certain number of points, which are deducted for infractions (such as crossing at a red light). When the score drops too low, certain freedoms, such as taking the train, are restricted. The practice is prohibited to avoid excessive surveillance and unjust restrictions on individual freedoms.

  • Under what conditions is facial recognition permitted for public authorities?

    -Facial recognition is prohibited except in specific circumstances, such as an imminent threat of a terrorist attack. In that case, prior authorisation by a judge is required.

  • Why are emotion recognition and biometric recognition prohibited in the workplace?

    -These practices are prohibited to protect workers' rights and to avoid intrusive surveillance. For example, checking an employee's smile while they interact with customers is not allowed.

  • What are deployers' obligations when using high-risk AI systems?

    -Deployers must follow the instructions for use supplied by the AI provider, guarantee adequate human oversight, monitor the AI systems for systemic risks and serious incidents, and inform workers and their representatives when AI is used in the workplace (an illustrative checklist sketch follows this Q & A list).

  • What is a 'high-risk use' of AI under the AI Act?

    -A high-risk use involves AI systems used in critical infrastructure, public safety, education, employment and access to essential services, where errors can have serious consequences for individuals.

  • How does the AI Act ensure the transparency and safety of AI systems?

    -The AI Act imposes increased transparency requirements on general-purpose AI systems and, for high-risk systems, requires conformity assessments, registration in a public database, and mandatory disclosure of the use of AI to the individuals concerned.

  • What is the AI Pact and how can companies take part?

    -The AI Pact is an initiative that allows companies to commit to applying the rules of the AI Act before they formally enter into application. Around 500 companies have already joined.

  • What are the main focus areas for the use of AI at regional level in Flanders?

    -The focus areas include defining a long-term vision, preparing people and organisations, ensuring trust in applications, stimulating innovation, and developing reusable AI components.

  • How is Flanders preparing for the introduction of the AI Act?

    -Flanders is preparing guidelines for the use of generative AI, evaluating the use of Microsoft 365 Copilot, and developing tools to help local authorities comply with the legal requirements and guarantee trust in government AI solutions.

  • What are the potential impacts of AI systems on how organisations are managed and on their competitiveness?

    -AI systems can improve efficiency and decision-making, but they require careful management to avoid serious risks and to guarantee respect for individuals' rights, which can affect an organisation's competitiveness and reputation.
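To make the deployer duties summarised above easier to track, here is a minimal illustrative sketch of a pre-deployment checklist. It is not an official compliance tool: the class and field names are our own assumptions, and the list of checks is deliberately simplified to the points raised in this Q & A.

```python
from dataclasses import dataclass, field

@dataclass
class DeployerChecklist:
    """Illustrative, non-official record of deployer-side checks for one high-risk AI system."""
    system_name: str
    follows_provider_instructions: bool = False         # instructions for use are applied
    human_oversight_in_place: bool = False              # trained, competent staff assigned
    monitors_for_serious_incidents: bool = False        # ongoing monitoring of the system
    workers_and_representatives_informed: bool = False  # when the system is used at work
    fria_completed_and_submitted: bool = False          # fundamental rights impact assessment
    open_items: list[str] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        # Every check must be confirmed before the system goes into use.
        return all([
            self.follows_provider_instructions,
            self.human_oversight_in_place,
            self.monitors_for_serious_incidents,
            self.workers_and_representatives_informed,
            self.fria_completed_and_submitted,
        ])

# Hypothetical usage:
checklist = DeployerChecklist(system_name="CV screening assistant")
checklist.open_items.append("FRIA template not yet published by the AI Office")
print(checklist.ready_to_deploy())  # False until every item is confirmed
```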

Outlines

00:00

📉 'Social scoring and prohibited AI practices'

This paragraph covers the social scoring system in China, where citizens start with a certain number of points and can lose them for infractions such as crossing at a red light. It also covers biometric categorisation, facial recognition and predictive policing, explaining the restrictions on and exceptions to these technologies. Facial recognition, for example, is prohibited except in specific circumstances, such as an imminent threat of attack. The text also highlights the new rules requiring prior authorisation by a judge before certain AI technologies may be used.

05:01

🔍 'Obligations and the enforcement system of the AI Act'

This paragraph deals with the obligations the AI Act places on Member States and national authorities, as well as on the new AI Office. It explains the AI Act's role in regulating high-risk AI systems and the introduction of regulatory sandboxes to make AI development easier. It also mentions the creation of a central body to regulate large-scale general-purpose AI systems with systemic risk, and the recent launch of that office.
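For reference on the 'systemic risk' threshold discussed in this part of the talk: later in the transcript the speaker quotes a compute threshold of 10^25 floating-point operations for general-purpose AI with systemic risk, against a figure of roughly 10^18 quoted for ChatGPT. Taken literally, the ratio between those two orders of magnitude is 10^25 / 10^18 = 10^7, i.e. a factor of about ten million (the speech rounds this to 'a million times bigger').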

10:04

📝 'Presentation on provider and deployer obligations under the AI Act'

The presentation focuses on the specific obligations of AI providers and deployers under the AI Act. It explains the differences between the two roles and the corresponding obligations, including conformity assessment, registration of high-risk AI systems, and disclosure duties. The paragraph also highlights the special obligations of deployers that modify a high-risk AI system or deploy it in particular settings, such as the workplace or education.

15:07

🏛️ 'Responsible deployment of AI in public authorities'

This paragraph describes the Flemish government's efforts to adopt AI in a responsible and trustworthy way. It presents the guiding principles for the use of AI, the trustworthiness criteria, and the initiatives under way to prepare employees and organisations to use AI. The text also mentions the creation of an AI expertise centre, the definition of guidelines for the use of generative AI, and the evaluation of Copilot for civil servants.

Keywords

💡Social scoring system

The social scoring system refers to a method of scoring individuals' behaviour, most often associated with China. In the talk it is cited as prohibited for both the public and the private sector, underscoring concerns about privacy and surveillance. The example given is that if a person crosses at a red light, points are deducted, which can lead to restrictions such as being barred from taking the train if the score falls too low.

💡Biometric categorisation

Biometric categorisation is a technique for identifying or classifying people on the basis of physical characteristics. The talk notes that some actors attempt to use it, for example to infer political leanings, even though it is of little relevance for most people, and it raises clear questions of ethics and privacy.

💡Facial recognition

Facial recognition is a technology used to identify people from their facial features. In the talk it is discussed as an important issue for public authorities, particularly in law enforcement; it is restricted and requires prior authorisation in certain circumstances, such as an imminent terrorist threat.

💡Predictive policing

Predictive policing is a data-analysis method for anticipating and preventing crime. The talk indicates that the practice is not allowed when it focuses on specific individuals, but remains possible when it is based on specific places or periods of time, for example the afternoon of 31 August.

💡High risk

High risk refers to AI systems that, while potentially beneficial, must meet minimum quality criteria because of their significant impact on people's lives. The talk explains that high-risk systems are permitted, and that the very largest general-purpose models must additionally undergo a state-of-the-art model evaluation to ensure their reliability.
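As a reading aid for the rest of this document, the 'traffic light' tiers described in the talk can be sketched as a simple enumeration. This is purely illustrative: the names are ours, not the legal terms of the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers as described in the talk's 'traffic light' picture (illustrative only)."""
    UNACCEPTABLE = "red"     # prohibited practices, e.g. social scoring
    HIGH_RISK = "orange"     # allowed, but subject to minimum quality criteria
    TRANSPARENCY = "yellow"  # allowed, with disclosure duties, e.g. deep fakes

# Print the tiers; uses that fall under none of them are not the focus of the talk.
for tier in RiskTier:
    print(tier.name, tier.value)
```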

💡Critical infrastructure systems

Critical infrastructure covers the elements essential to the functioning of society, such as water supply, road traffic and so on. The talk indicates that AI systems used in these areas are considered high risk and subject to specific obligations to guarantee their safety and reliability.

💡Training and evaluation of staff

The talk covers the deployer's duty to follow the instructions for use of AI systems, including the training and evaluation of the staff involved. This can include monitoring the AI to detect malfunctions or errors and providing adequate training so that employees can manage these systems.

💡Essential public services

Essential public services are mentioned in the talk as an area where AI can be used to provide access to services such as education and vocational training. The use of AI in this context is, however, subject to specific obligations because of its impact on individuals' fundamental rights.

💡Fundamental rights impact assessment

The fundamental rights impact assessment is a mandatory process for deployers of high-risk AI systems, in particular public authorities. The talk indicates that this assessment must be carried out before the AI system is deployed and sent to the national market surveillance authority.

💡Legal aid services

The talk mentions legal aid services as an example of an area where AI can be used but where specific obligations apply, in particular regarding the protection of privacy and compliance with the rules in force.

💡Compliance with regulation

Compliance with regulation is a central theme of the talk, which insists on the importance of ensuring that deployed AI systems meet their legal obligations, including those imposed by the General Data Protection Regulation (GDPR) and the AI Act. Deployers are encouraged to assess the risks and put measures in place to guarantee compliance.

Highlights

Social scoring of the kind practised in China is prohibited; the ban applied to the public sector from the start and has been extended to the private sector, with points deducted for actions such as crossing at a red light.

Biometric categorisation is covered even though it is not relevant for everyone, because some actors try to use it to infer likely political preferences.

Facial recognition is a crucial issue for public authorities; it is restricted and requires prior authorisation in certain circumstances.

Predictive policing targeting individuals is not allowed, but it remains possible when focused on specific places and times.

Emotion recognition in the workplace is prohibited for monitoring workers, although it remains allowed on the customer side.

Untargeted scraping is prohibited, preventing the creation of databases of facial images without a specific purpose.

High-risk AI systems are considered 'good AI', but minimum quality criteria are required to avoid negative impacts on people's lives.

AI embedded in safety-relevant components of critical products must be verified before use, as in a car's braking system.

Migration and asylum authorities are high-risk areas requiring particular attention because of the consequences of potential errors.

Evaluating candidates for university admission and detecting cheating are high-risk use cases in education and employment.

Essential services for the public, such as credit scoring and the pricing of life and health insurance, are high-risk use cases subject to specific obligations.

A new part of the AI Act concerns general-purpose AI systems, with additional transparency requirements.

The AI Office has been launched to oversee general-purpose AI systems, promote research and innovation, and handle safety aspects.

Regulatory sandboxes are being set up to help companies develop AI by making it easy to put regulatory questions to the authorities.

The AI Pact was created to let companies commit to applying the AI Act's rules before they formally enter into application.

Public authorities and market surveillance bodies at national level are responsible for enforcing the rules on high-risk use cases.

The obligations of providers and deployers under the AI Act are clearly distinguished, each with their own responsibilities and specific requirements.

Deployers, including public authorities, must carry out a fundamental rights impact assessment before deploying a high-risk AI system.

Transcripts

play00:00

a lot of uh discussion going on uh very

play00:02

quickly the social scoring uh obviously

play00:04

uh is something which is prohibited um

play00:07

and for everybody uh so it's public

play00:09

sector there was there from the

play00:10

beginning but it has also been extended

play00:11

to private sector now you know social

play00:13

scoring is the the idea of China you

play00:15

have thousand points at the beginning of

play00:16

the year if you cross the traffic light

play00:18

when it's red they take a point away and

play00:19

if you have 500 points you're not

play00:21

allowed to take the train anymore um we

play00:23

also have biometric categorization

play00:25

that's maybe uh not that relevant for

play00:28

many people but it's important to have

play00:29

that because there actually are people

play00:31

trying to do that you know try by the

play00:32

size of your your nose to identify

play00:34

whether you're going to vote left or

play00:35

right um um we have the face recognition

play00:39

that's very important for public

play00:40

authorities especially in in the law uh

play00:43

law

play00:44

enforcement um it is forbidden but it's

play00:47

allowed under certain circumstances for

play00:49

example if you have uh imminent threat

play00:51

of terrorist attack uh what has changed

play00:54

relative to the initial proposal is now

play00:56

that now you basically need a warrant I

play00:57

mean it's not called a warrant but you

play00:58

need a uh prior authorization by by a

play01:01

judge basically um so that it's possible

play01:04

under certain circumstances but you know

play01:06

it really has to be checked very

play01:08

much uh individual predictive policing

play01:10

uh that's also something obviously of

play01:12

interest for for public

play01:13

authorities is not allowed the focus

play01:16

being on the individual so uh no

play01:18

Minority Report you can still do

play01:21

predictive predictive policing uh on the

play01:23

base uh on the basis of place or time

play01:25

you know you can still try to get your

play01:27

AI to identify that the most dangerous

play01:29

time of the year is the 31st August in

play01:31

the afternoon and maybe the most

play01:33

dangerous road right now is the cross of

play01:34

rala and

play01:36

pluma uh so that's uh continues to be

play01:40

possible uh but it is it's a high risk

play01:43

um same as biometric everything which is

play01:45

forbid forbidden and if then there's an

play01:46

exception that exception becomes

play01:48

automatically high risk and I come back

play01:50

later to what that actually means also

play01:52

forbidden is um the is emotion

play01:55

recognition in the

play01:57

workplace um that figuratively speaking

play01:59

so uh it means for workers so if you are

play02:02

um you're not allowed to you know check

play02:05

the the smile on the face of your

play02:06

employee when they interact with

play02:08

customers but you are actually allowed

play02:10

to do that on the customer side um then

play02:13

is high risk of course um and that of

play02:15

course is something uh which applies to

play02:18

to public institutions as well

play02:19

especially also to education and so far

play02:21

as the public um and then uh there's

play02:24

exception for medical safety reasons so

play02:25

you can still have your camera wake up

play02:28

the lorry driver before you fall to

play02:29

sleep

play02:30

uh and then last but not least

play02:32

untargeted scraping um you're not

play02:33

allowed to build up facial image databases

play02:37

um just for fun you know it's that's the

play02:39

Clear View case uh but it's also true

play02:41

not just for the internet also true

play02:43

for CCTV so you can't just use your camera

play02:46

from um you know your CCTV in the train

play02:49

station or in this big shopping center

play02:51

or in in the in

play02:53

the L in the um in the big building of

play03:00

of the city uh just to build up a uh a

play03:03

database of of face

play03:05

images so that's roughly the provisions

play03:08

you see it's still a very small number

play03:09

it's like six it's a bit longer than it

play03:11

was before but it's roughly roughly

play03:13

stayed in a very small number especially

play03:16

because that's a limited list of course

play03:17

and the the allowed ones is an unlimited

play03:21

list um the highrisk systems haven't

play03:24

changed very much

play03:26

um the uh one important thing to

play03:29

remember here is that highrisk AI is

play03:32

good AI so um the dividing line between

play03:35

bad and good AI is really between

play03:37

forbidden and highrisk so the bad AI is

play03:39

forbidden you may not do it highrisk

play03:41

systems are potentially uh beneficial AI

play03:44

there's nothing wrong with them however

play03:47

because they can have a huge impact on

play03:48

people's lives if they're not done

play03:50

properly you really have to uh have

play03:52

minimum quality uh criteria and that's

play03:54

what the AI Act does so the

play03:57

high-risk use cases um they concern

play04:00

certain critical infrastructure there's

play04:01

there's a well actually three two

play04:03

elements which are not on list here

play04:05

first of all there's the kind of

play04:06

manufacturing the the physical anything

play04:09

which can kill or maim you moves fast or is

play04:10

made out of metal or has spikes motor

play04:13

saws cars or whatever if you put any uh

play04:15

safety relevant uh AI into a safety

play04:18

relevant component uh that AI has to be

play04:20

verified you know if you have a put AI

play04:24

into the braking system of your car uh

play04:25

you need to verify the AI before you can

play04:27

actually use it there uh if you put AI in

play04:29

the sound system you don't need to do it

play04:30

because it's not

play04:31

safety um and then there's the second list

play04:35

um which is not here uh because it's a

play04:37

bit long which is uh basically

play04:39

everything in the migration Asylum

play04:41

police enforcement area because there's

play04:42

quite a lot of cases in there uh when I

play04:45

say a lot it's still you know like 20 or

play04:47

25 cases so it's still a limited number

play04:49

but is quite a lot because not because

play04:51

the people in that area do a bad job but

play04:53

simply because if something goes wrong

play04:55

then you know that's a much bigger

play04:57

problem if you're talking about police

play04:58

you know if the system malfunctions

play05:00

and you were put in excuse me uh to

play05:03

interrupt you but there is a comment on

play05:04

the chat uh could you please speak a bit

play05:08

lower I think that we having a bit of

play05:11

because it's a very complex and Rich

play05:14

topic and it takes a bit of time also to

play05:16

process the information thank you very

play05:18

much and apologies for interrupting no

play05:21

no

play05:22

sorry um okay so um that's because in

play05:27

the in the uh law enforcement area

play05:29

there's just if something goes wrong the

play05:32

consequences are much worse and if you

play05:34

are put into jail for 3 days before the

play05:36

AI system realizes it has made the

play05:38

mistake that of course is a much bigger

play05:40

problem than you know if some some other

play05:42

AI system malfunctions and maybe you

play05:44

know something in the factory goes

play05:47

wrong uh but in addition to that um we

play05:50

also have the list here the you need you

play05:53

have certain critical

play05:54

infrastructures um such as uh Road

play05:58

Traffic Supply water Etc where of course

play06:01

it's very important that these function

play06:02

for society that's why they're high risk

play06:05

uh but then you also have things like

play06:06

educational and vocational

play06:09

training so uh especially things like

play06:13

providing access to University selecting

play06:15

candidates from

play06:16

universities evaluating um students but

play06:21

also software to monitor cheating

play06:23

because obviously that's these things uh

play06:25

getting access to University passing

play06:27

your exam uh are very important for for

play06:30

for um for well basically for your

play06:32

future career and uh therefore if by

play06:35

mistake uh they consider that you have

play06:37

been cheating well that's not true that

play06:38

is a a serious

play06:40

risk um very important uh for the public

play06:44

sector just as well as for the private

play06:46

is employment um that's first of all uh

play06:50

the recruitment area so uh job

play06:53

applications or evaluation of candidates

play06:55

so uh sorting of CVs uh which by the way is

play07:00

meant in a large sense so it's not just

play07:02

you have a job um and then you sort the

play07:05

CVs uh by uh by an artificial

play07:09

intelligence system is also um you have

play07:11

candidates the kind of offers you

play07:13

provide to these candidates that also

play07:15

included here uh but also um for example

play07:19

the evaluation of workers or the job

play07:22

termination uh all of this is high risk

play07:24

well contribution to job termination of

play07:26

course and then we have the access to

play07:28

essential private and public

play07:31

services um that was there from the

play07:33

beginning um you're all aware of of the

play07:36

big scandal they had in the Netherlands

play07:37

uh and it have been uh added a couple of

play07:40

private uh uh private Services

play07:42

especially in the financial area uh

play07:44

credit scoring uh pricing of life and

play07:46

health

play07:47

insurance um now all of these are

play07:50

high-risk cases and therefore they have

play07:51

to uh comply with a certain number of

play07:53

obligations and I'm I understand that

play07:56

Laura is going to explain to you the

play07:58

obligations so I'm going to skip that

play08:00

part um and in particular for public

play08:03

authorities there's the fundamental

play08:05

rights impact assessment which I'm going

play08:06

to skip as well um

play08:09

the new the big new element in the uh

play08:13

final version of the AI Act is general

play08:15

purpose

play08:16

AI um so here for the kind of big but

play08:21

well very big but not hugely very big

play08:25

systems you need you have additional

play08:27

transparency requirements um so that the

play08:30

people who might want to use a general

play08:32

purpose system for a specific purpose

play08:34

later on can actually fulfill the

play08:36

obligations and then the most important

play08:38

is the general purpose AI with systemic

play08:40

risk that's the um the uh the these are

play08:46

the general purpose systems which are

play08:48

incredibly big um so the threshold here

play08:51

has been set at 10 to the level

play08:53

of 25 flops floating operations I have

play08:56

given this speech a lot of times I've

play08:58

never met anybody who actually knows

play08:59

what a floating operation is uh but just

play09:01

to give you an idea uh ChatGPT 3 no

play09:05

sorry ChatGPT was developed with 10 at the

play09:07

level of 18

play09:08

flops uh now you might consider that you

play09:11

know 10 18 and 25 is pretty close uh but

play09:14

since we're talking about levels here

play09:17

that means it's a million times bigger

play09:19

than than ChatGPT so they're very very

play09:21

very big systems these are the systems

play09:24

uh which you know Elon Musk talks

play09:27

about it talks about AI taking over the

play09:29

world the end of the the end of the

play09:30

world as we know it um now if you have

play09:34

these systems then you have obviously

play09:36

the same obligations as from a lower and

play09:38

you have to do a state-of-the-art model

play09:40

evaluation which basically means you

play09:41

have to try to to turn this system away

play09:45

from what it's supposed to do and make

play09:47

it do things that it's not supposed to

play09:48

do and if you U succeed in doing that uh

play09:52

then obviously you have to redo the

play09:53

system until eventually it only does

play09:55

what it's supposed to be

play09:57

doing um now with the general purpose

play10:00

AI comes another innovation in the AI

play10:03

act compared to what we originally

play10:05

proposed which is we now have a uh an

play10:08

enforcement system which is basically

play10:10

cut into two uh the specific AI has

play10:14

always been uh the idea has always been

play10:17

to implement that at a national level so

play10:19

member states uh designate an authority

play10:22

which can vary from Member state to

play10:23

member State everybody has a choice so

play10:26

some member states might want to um use

play10:28

their data protection authority others

play10:31

the cyber security Authority Spain as

play10:33

far as I know has decided to create a

play10:35

totally new Authority so that's you know

play10:37

totally up to member states uh and they

play10:39

will actually enforce all of the uh

play10:41

rules on highrisk cases uh and these

play10:44

member states then come together in the

play10:45

European AI board which is really

play10:47

glorified member states committee and

play10:49

then you have two committees which

play10:50

support the AI board one is the

play10:52

scientific panel where you have the

play10:53

academic experts telling them what's

play10:54

possible and what's not possible and

play10:56

then youve the you've got the advisory

play10:57

form which is really where the

play10:59

stakeholders are so you have industry

play11:02

you have uh Academia Civil Society smmes

play11:07

you know any kind of uh political group

play11:09

which might be interested and uh would

play11:12

want to uh know Express their opinions

play11:15

uh and that's roughly the way it stays

play11:17

and the new thing which we have is the

play11:19

AI office uh I hasten to add that we did

play11:22

not ask for that that's not an idea of

play11:23

the European commission the European

play11:25

Parliament decided that it was and the

play11:28

council agreed that was necessary for

play11:30

the general purpose AI uh to have one

play11:33

centralized enforcement body in Europe

play11:35

so the split is really specific AI

play11:39

National level with coordination at

play11:41

European level uh general purpose AI is

play11:44

being regulated by the AI

play11:49

office now the AI office actually was

play11:51

launched uh two days ago on

play11:54

Monday

play11:55

um it is a bit bigger than just the

play11:58

regulation um because and there I come

play12:01

back to what I said at the beginning uh

play12:03

we are not only about regulating AI we

play12:05

don't think AI is very negative we also

play12:07

think it very positive and therefore it

play12:09

was felt it was felt and it was that

play12:11

it's important that the AI office is not

play12:13

only dealing with the negative Parts but

play12:15

also for the positive parts so the AI

play12:18

office is both responsible for the

play12:20

regulation and the safety of AI but it's

play12:23

also important for uh it's also uh

play12:26

responsible for promotion of AI for

play12:29

research of a in Ai and for innovation

play12:31

of AI and for the use of AI for uh

play12:34

positive

play12:35

purposes um now as I said it was

play12:39

launched two days ago but of course for

play12:40

the moment it's still very much on paper

play12:43

um simply because the AI Act is not yet

play12:45

in force it will only come in for in

play12:48

force on the 1st of August uh and

play12:50

therefore we still basically have the

play12:52

same people we had before uh we will

play12:55

however start well we actually have

play12:56

started hiring but the people are not

play12:58

there yet uh hopefully the AI office will

play13:01

be full strength by the end of the year

play13:02

and then it will actually be functioning

play13:05

as an office for the moment it's a bit of

play13:07

an empty shell but not totally empty

play13:08

because there are some people there but

play13:09

it's a shell which still needs to be uh

play13:13

filled uh an important element of the AI

play13:16

act uh and here again I come back to the

play13:19

fact that we are quite positive are the

play13:21

sandboxes uh the idea is that uh you

play13:24

have one uh in each member State um and

play13:28

basically basically the idea is that it

play13:31

allows companies to develop AI with a

play13:35

very easy access for regulatory

play13:37

questions to the authorities so if you

play13:40

if you develop your AI and you have a

play13:41

question uh and you're not sure whether

play13:44

you know that actually would be

play13:45

compatible or not with the AI act then if

play13:48

you're in the sandbox you actually

play13:49

basically just pick up the phone and ask

play13:51

uh and the reason one of the reasons we

play13:54

did that is because one of the lessons

play13:55

of the uh data protection regulation is

play13:58

that very often the problem is not

play14:00

really what's allowed and what's

play14:01

forbidden but the uncertainty that

play14:02

people don't really know what they may

play14:04

and what they may not do and the

play14:06

sandboxes here are really an attempt uh

play14:09

to uh address that uh yeah for companies

play14:12

then the advantages that they um you

play14:15

know get much faster access to to advice

play14:17

and they don't need that many lawyers to

play14:19

actually tell them what they can and

play14:20

cannot do um the AI act uh enters into

play14:25

Force progressively over the next couple

play14:27

of years so the first first to come into

play14:31

uh into Force are the prohibitions after

play14:34

6 months so if the AI act enters into

play14:36

force on the 1st of August they come

play14:39

into force on the 1st of February next

play14:41

year the prohibitions are first because

play14:43

they're easiest to do you just have to

play14:45

stop doing them you don't have to get

play14:46

any certification or any controlled

play14:48

Authority or anything you just have to

play14:50

stop doing them you still need a

play14:52

transition period because you might want

play14:54

to replace a prohibited system by

play14:55

another one which is not prohibited

play14:59

uh and of course we will have to um uh

play15:02

provide guidance on that uh in general

play15:06

the AI act uh was concluded as you may

play15:10

remember very much under time pressure

play15:12

and therefore there was quite a number

play15:13

of issues which were well not left open

play15:15

but maybe not as detailed as as it could

play15:19

have been and um the result of that is

play15:21

that the AI office will have to draft

play15:23

something like 60 um acts 60 60 texts

play15:27

over the next uh well basically 24

play15:30

months uh they can be codes of conduct

play15:33

uh codes of practice delegated act

play15:35

implementing acts there a whole variety

play15:37

of of things to do and of course they

play15:40

they have to follow the um entry into

play15:42

Force so the first things we have to do

play15:43

is the guidance on prohibited systems uh

play15:45

we're working very much on that and then

play15:47

the next one uh which comes uh is uh the

play15:50

rules on general purpose AI uh and we're

play15:53

also very much working on that right now

play15:55

keeping in mind uh and here once again I

play15:57

come back to at the beginning um you

play15:59

know when we're drafting this we're very

play16:01

much trying to ensure that the AI act

play16:03

works but also to promote Ai and to make

play16:05

sure that it doesn't get stifled and The

play16:06

Innovation can be developed in Europe um

play16:09

even within the framework of that um of

play16:12

uh of that act last but not least um it

play16:16

comes into into Force relatively quickly

play16:19

but there's still two years to go uh for

play16:21

the highrisk applications and three

play16:23

years for the for the physical high-risk

play16:25

applications that's why our commissioner

play16:27

has created the AI Pact so it's an act

play16:29

with a P in front of it a Pact where

play16:31

basically companies can come forward and

play16:34

promise to already apply the

play16:36

rules uh ahead of time so uh instead of

play16:40

applying them in 24 months they can

play16:42

apply them already today and we've got

play16:45

around 500 companies already

play16:47

joining and the reason why that is

play16:50

actually possible is because many of the

play16:52

obligations we are asking companies to

play16:54

do is really only state-of-the-art and

play16:57

therefore many of large companies which

play16:58

have state state-of-the-art AI they're

play17:01

fulfilling them anyway and for them it's

play17:03

an easy way uh to just sign up to the AI

play17:05

Pact they don't really have to do much

play17:06

of an effort just a bit uh to to make

play17:09

sure that all of this is

play17:10

compatible okay and with that I have

play17:12

exceeded my time by five minutes I'm

play17:14

sorry about that um but I hope you have

play17:18

mercy with

play17:19

me thank you very much um and did I

play17:23

didn't um say but yeah we can ask a few

play17:26

questions now and then uh move to the

play17:29

next presentation so we have two

play17:31

questions in in the in the chat uh which

play17:34

one of these I I agree with Gabriel had

play17:36

the same um the same doubt on the AI Pact

play17:40

um because uh I also was not aware of

play17:43

this um I just heard it a few days ago

play17:46

in another presentation um and uh online

play17:49

uh there is also an expression of

play17:51

interest to join the AI Pact um but it

play17:55

seems that it's mostly directed to

play17:57

Industries um

play17:59

and the question would be if it's also open

play18:01

for public authorities so local Regional

play18:05

National authorities can also enter into

play18:07

the AI Pact and contribute to it well

play18:11

fundamentally yes uh but is of course

play18:13

true and I did not mention the uh the

play18:17

actual obligations which in which come

play18:19

from the AI Act but it's the the

play18:21

obligations are mostly for the providers

play18:23

for the developers and therefore um I

play18:25

guess it's more for uh AI providers and

play18:29

developers and less for public

play18:30

authorities having said that you know if

play18:32

if public authorities develop their own

play18:34

systems or in so far as they apply

play18:36

systems like they're perfectly happy to

play18:37

join I mean we we're perfectly happy for

play18:39

them to

play18:40

join yes and I believe yes that that is

play18:44

true that most public authorities will

play18:45

fall in the category of uh deployers but

play18:49

uh then I think it could be interesting

play18:51

also for for for them so that's that's

play18:54

good to know and um thank you and then

play18:56

there is a a second question in the in

play18:59

the chat uh how do you foresee that the

play19:01

understanding of uh what establishes the

play19:05

different uh oh it's

play19:07

not very clear um maybe we put the

play19:12

question in the chat would you like to

play19:14

jump in a second I'm not sure if you

play19:17

ask about the

play19:19

risk uh yes I'm talking about the risk

play19:23

ah yes perfect so how it's um uh is

play19:27

defined what's I can what's not I'm not

play19:30

sure if that is a question well I mean

play19:33

the we have a different definition in in

play19:35

the a act what is a high risk and

play19:37

basically there a that's a list you know

play19:39

how how grave it can be uh for the

play19:41

people affected How likely people are

play19:43

affected what kind of particular groups

play19:45

Etc I mean it's an I don't remember

play19:47

article but it's a long list and um we

play19:49

will do a revision uh of the of the

play19:52

high-risk uh cases of the list of

play19:54

high-risk cases regularly and basically

play19:56

we'll take the exact uh um the exact

play19:59

criteria which are in art in in that

play20:01

article and we just apply them and see

play20:04

uh what else might be considered high

play20:06

risk or might might need to be

play20:08

considered high risk now the idea of

play20:10

course is that uh for things like that

play20:12

you have the European AI board because

play20:14

that's where you will have the national

play20:15

authorities uh and the national

play20:17

authorities will deal with these things

play20:18

every day and therefore they will

play20:20

realize if something comes up uh is

play20:23

developed comes to the market uh which

play20:26

is a highrisk case and which possibly

play20:27

should be addressed

play20:29

uh and then they would actually come to

play20:30

the AI board and tell the other National

play20:32

authorities that and then the other

play20:34

National authorities which may or may

play20:36

not have made the same experience I

play20:38

would then come to the conclusion that

play20:40

maybe we should be looking at that and

play20:41

then we will look at that according to

play20:42

the criteria which which have been which

play20:45

are set out in in article I think it's

play20:46

Article

play20:49

Five yeah thank you and there's another

play20:52

couple of questions and also another one

play20:55

from from my side but I I will give the

play20:56

floor to Laura so that she can present um

play21:00

and then I will probably there will be

play21:01

other few questions for you Martin um

play21:04

but thank you in the meantime um and uh

play21:08

Laura if you want to go ahead with your

play21:11

presentation I do just let me share my

play21:22

screen it might just take a second from

play21:24

my end because I'm needing to give um

play21:27

permission to

play21:29

uh the app so try and get it up in the

play21:34

meantime that would be very

play21:35

helpful

play21:37

yes no

play21:45

voice let me open your

play21:49

presentation I have the PDF

play21:54

I okay

play22:04

strange let me try on my

play22:07

end might just make

play22:11

[Music]

play22:13

it open but it doesn't

play22:21

show let me try again

play22:42

yes perfect it works can you see it yes

play22:46

fantastic all right let me

play22:52

just I'm sorry I don't know if you can

play22:54

see the full screen or just the slide

play22:57

just the slide

play22:59

uh full screen would be great but

play23:01

otherwise it's uh it's also fine zooming

play23:05

in would be also okay all right let me

play23:08

just put it inside sh up then so we can

play23:10

get there all right sorry technical

play23:13

delays um my name is Laura Lazaro

play23:15

Cabrera I'm a counsel and program

play23:17

director for equity and data at CDT

play23:19

Europe that's the center for democracy

play23:22

and Technology we're a Brussels-based

play23:24

nonprofit Civil Society organization

play23:26

that works towards the preservation of

play23:28

human rights in EU law and policy and

play23:32

today I'm going to be diving with you

play23:35

further into the obligations that the AI

play23:37

act creates for providers and deployers

play23:40

within the AI act taxonomy um just to

play23:43

situate my presentation within what's

play23:45

already been said uh you remember the

play23:47

traffic light system that was mentioned

play23:49

by Martin um essentially here we're

play23:51

going to be talking about the

play23:53

obligations imposed in relation to

play23:55

high-risk AI systems so you remember

play23:58

there or the unacceptable um types of AI

play24:01

so that's a red light the high-risk AI

play24:03

systems which are the orange light and

play24:05

then um I guess the yellow light would

play24:07

correspond to to those AI systems that

play24:10

present specifically a transparency risk

play24:12

but we're talking about the orange ones

play24:13

now so the ones are not prohibited but

play24:15

just

play24:17

below just moving to the next slide for

play24:20

today I really have three goals uh we

play24:22

don't have a lot of time so I'll take

play24:24

you through these briefly the first goal

play24:26

is to uh be in a position where we can

play24:28

differentiate what obligations

play24:30

correspond to Providers and what

play24:31

obligations correspond to employers the

play24:34

second one is to uh have a basic

play24:37

understanding um of the obligations of

play24:39

the employers and I say basic because we

play24:43

could spend hours talking about the

play24:44

detail of the obligations and also as

play24:47

Martin already mentioned there will be a

play24:49

lot of um outputs coming out from the

play24:52

Commission in the AI office specifically

play24:54

so we know uh from Martin's presentation

play24:56

we will be expecting further guidelines

play24:58

on high-risk a systems but there are

play25:00

also guidelines forthcoming on

play25:01

prohibitions for example so there is a

play25:04

lot of ink yet to run um on many aspects

play25:07

of the AI act so this is just a

play25:09

preliminary overview um of those

play25:11

obligations and lastly I'd like to take

play25:14

you through a few key considerations uh

play25:16

for you to take into account prior to

play25:18

deploying an AI system so here I'm going

play25:20

to be moving us away from strict legal

play25:22

compliance and a little bit more into

play25:25

the territory of best practice

play25:28

so without further Ado um what are we

play25:31

talking about when we talk about uh the

play25:33

role of public authorities in the AI act

play25:36

um the taxonomy is broader than just

play25:38

providers and deployers we're also

play25:40

talking about um Distributors and

play25:42

importers but for the purposes of of

play25:45

this presentation and this audience it

play25:47

makes sense to focus on these two

play25:48

concepts providers and

play25:51

deployers as was already mentioned

play25:53

really the bulk of the obligations rests

play25:56

with providers specifically so from a

play25:58

compliance perspective you probably

play26:01

don't want to be rushing to be a

play26:02

provider unless you have the

play26:03

infrastructure already in place to do so

play26:05

properly um and as you can see from the

play26:08

definition a provider can include a

play26:10

Public Authority uh so it will be for

play26:12

instance a Public Authority that

play26:13

develops an AI system or has one

play26:16

developed and placed on the

play26:18

market however I think the bulk of you

play26:21

here in the audience that represent

play26:22

local authorities will most likely um

play26:25

have your Authority fall within the

play26:26

category of the deployer if you choose

play26:28

to to deploy an AI system so that will

play26:32

be uh any entity including a Public

play26:35

Authority that uses an AI system under

play26:37

its Authority but and this is a key

play26:39

point a deployer can become a provider

play26:42

within the terms of the act under a few

play26:44

circumstances so there's three uh here

play26:47

on the side I've only put two because I

play26:49

think those are the most likely ones um

play26:51

the first situation is if the deployer

play26:53

makes a substantial modification to a

play26:56

high-risk AI system such that it remains a

play26:58

high-risk AI the second one is if the

play27:01

deployer modifies the intended purpose

play27:03

of the AI system including if it is a

play27:05

general purpose a model but we won't be

play27:07

getting too much into that and if now

play27:10

that the purpose is modified that brings

play27:13

the AI system into the high-risk

play27:14

category whereas before it was not in

play27:16

that category and there's a third

play27:18

instance where the deployer puts its

play27:20

name or trademark on the high-risk AI

play27:22

system uh which is I think a foreseeable

play27:25

instance where the deployer will become

play27:27

a provider so be warned of these

play27:29

distinctions because the moment you step

play27:31

into this territory you could then be

play27:33

holding yourself to the higher standard

play27:35

the more complex obligations imposed by

play27:38

the

play27:39

act so ahead of jumping into the

play27:42

deployer obligations um I'd like to

play27:44

quickly run you through the obligations

play27:46

applicable um to providers so there's a

play27:50

few technical ones um for instance uh

play27:52

ensuring there is a quality management

play27:54

system in place in relation to the high

play27:56

risk AI they deploy keeping technical

play27:58

documentation as well as logs um but

play28:01

more importantly and this is the bulk of

play28:03

the of the obligations it's to ensure

play28:06

that there is a Conformity assessment

play28:08

undertaken in relation to the specific

play28:10

high-risk AI um that is being placed on

play28:13

the markets and as many of you might

play28:16

already know this is a process that

play28:18

predates the AI act uh Conformity

play28:20

assessments have been around uh in

play28:22

product safety legislation for a long

play28:24

time and because the AI Act is at its

play28:26

heart product safety legislation as well

play28:28

as well even though it brings in other

play28:29

other considerations it's something

play28:31

that's been essentially adapted to the

play28:34

AI world but has been around for a

play28:36

while um other obligations that tie in

play28:40

with the ones that deployers will

play28:42

have include the registration of

play28:44

high-risk AI systems in the database um

play28:47

so the commission will have to develop a

play28:49

database recording all of the high-risk

play28:52

AI uses in the continent and this will

play28:55

obviously include an obligation on

play28:57

provider to register the AI systems on

play29:00

the database um similarly providers have

play29:03

an obligation to disclose an AI system

play29:05

in certain

play29:06

circumstances um though we will come to

play29:08

see that deployers have a similar

play29:10

obligation in place and lastly they have

play29:12

an obligation to ensure that they're

play29:14

providing proper instructions for use um

play29:17

this one is particularly important for

play29:19

the deployers because the deployers will

play29:21

have an obligation that will come to

play29:23

soon to actually um ensure that these

play29:27

disclosure these instructions are being

play29:29

followed uh so just putting that out

play29:31

there for you all to take into

play29:36

account now looking at what deployers uh

play29:39

must specifically do Under the AI act um

play29:42

so we already talk about um the

play29:44

registration of the high-risk AI system

play29:46

in the relevant database this is the

play29:49

case already for for providers to do but

play29:52

the deployers that are our public

play29:53

authorities have a complementary

play29:55

obligation to ensure that a specific

play29:58

section of the information that's to be

play30:00

put in the database is filled out uh so

play30:03

public authorities will specifically

play30:04

have to look into this and make sure

play30:06

that they have um the relevant

play30:08

information which will then be depending

play30:10

on the type of high-risk AI system be

play30:12

publicly available um for other people

play30:15

to

play30:16

consult there's also a few safety

play30:18

obligations for the deployers to take

play30:20

into account um and these come in in

play30:24

three different flavors so the first one

play30:27

is to follow instructions for use

play30:29

already developed by the provider as

play30:31

well as ensuring that there are

play30:33

sufficient Technical and organizational

play30:35

measures in place to be able to actually

play30:38

uh follow these

play30:39

instructions uh similarly the deployers

play30:42

have to ensure have a certain obligation

play30:44

at least to ensure that the AI system is

play30:47

working properly so they will have to

play30:50

have systems in place so that human

play30:52

oversight can happen and individuals in

play30:55

charge of this oversight will need to

play30:56

have the necessary competence training

play30:59

Authority and support to be able to do

play31:02

so um lastly they will have to monitor

play31:06

AI systems for um two things that are

play31:09

defined extensively by the AI act so

play31:11

systemic risk or alternatively the

play31:14

likelihood of the high-risk a system

play31:16

resulting in a serious incident again

play31:18

another concept um extensively covered

play31:20

by the by the AI act so it's not nothing

play31:23

um of course providers have to make sure

play31:25

that their uh high-risk AI systems uh

play31:28

function properly but the deployers will

play31:30

have a significant role in monitoring

play31:32

that this is the case even after the

play31:34

provider has made this

play31:36

assessment another thing that um was

play31:38

mentioned in passing in the previous

play31:40

presentation is the obligation to undertake

play31:42

a fundamental rights impact assessment

play31:44

now this is a key obligation and it's

play31:46

one that again applies specifically to

play31:49

the deployers that are public

play31:51

authorities or in the words of the act

play31:53

the deployers who are governed by public

play31:55

law now the fundamental rights impact

play31:58

assessment will be mandatory in relation

play32:00

to high-risk AI systems and uh the

play32:04

template for it will be developed by the

play32:06

AI office so there isn't one already in

play32:09

place but once there is then public

play32:12

authorities that are deployers of such

play32:14

systems will need to fill out that

play32:15

template and ensure that they send it on

play32:18

to the relevant Market surveillance

play32:19

Authority at National level and this is

play32:22

something that they need to have in

play32:23

place prior to the deployment of an AI

play32:25

system so this is really a key item

play32:28

um to put front and center in the in the

play32:30

compliance list there are a few special

play32:33

obligations that deployers have to take

play32:35

into account as well and that the

play32:38

special the use of the special uh word

play32:40

is entirely mine but I'm using it to

play32:43

Showcase that these obligations will be

play32:45

dependent on the context uh or the

play32:47

setting in which the deployers are

play32:50

seeking to deploy the AI system so

play32:52

firstly um in the situation where a

play32:55

deployer has control over the input data

play32:58

they will have to ensure that that data

play33:00

is sufficiently representative in view

play33:02

of the purpose of the AI

play33:04

system similarly uh if the deployer is

play33:08

choosing to deploy the high-risk AI

play33:10

system in their workplace then they will

play33:12

have an obligation to inform the

play33:14

affected workers as well as workers

play33:16

representatives and uh and this is one

play33:19

of the trickier uh types of high-risk AI

play33:21

systems to use if they're using um post

play33:24

remote biometric systems so that is

play33:26

systems that carry out biometric

play33:28

identification but not real time they

play33:31

will have an obligation to obtain

play33:33

authorization for this use within 48

play33:35

hours and they will have to submit an

play33:37

annual report to Market surveillance

play33:38

authorities and data protection agencies

play33:41

so there are a few uses of AI uh

play33:44

classified as high risk that still come

play33:46

with special obligations in view of the

play33:49

risk that they're likely to

play33:51

There are other obligations that are more user-facing. So for instance, deployers will have the obligation to inform individuals if AI is used to make decisions about them or to assist in making decisions about them. Many of you will remember, from lessons learned in the data protection context, that automated decision-making is already prohibited to a certain extent, but the AI Act takes this obligation, or this prohibition, a little bit further and goes as far as to state: if you're making a decision that's assisted by AI, then you have to tell individuals. Similarly, there's an obligation to provide a clear and meaningful explanation of any AI-assisted decision-making. Here this is not so much an obligation on deployers as it is a right for individuals; however, it effectively translates into an obligation. So the Act creates a right for individuals to seek this type of explanation, but in turn that will mean, by extension, that the deployers will have an obligation to provide it. So this is important as well: ensuring that there is an infrastructure in place to manage these requests and address them.
And lastly, disclosure obligations. We already covered that providers have a disclosure obligation; in some instances there will be a concurrent disclosure obligation for deployers as well. So for example, if a deployer is using an emotion recognition or biometric categorization AI system, they will have to disclose that, unless the use is for the prevention, detection or investigation of criminal offenses. Similarly, if a deployer is using text-generating AI with the specific purpose of informing the public on matters of public interest, again they will have to disclose, unless similarly the criminal investigation exception applies, or alternatively the AI-generated text has undergone a process of human review or editorial control and there is somebody who holds editorial responsibility over this content. And finally, when it comes to deep fakes, which some of you may have missed the AI Act explicitly addresses and also defines, again there will be an obligation to disclose, unless once more the use is authorized for the detection, prevention or investigation of crime.
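As a rough sketch of the deployer-side disclosure logic just described (a simplification for illustration, with made-up labels; not a legal test):

```python
def deployer_must_disclose(use_case: str,
                           law_enforcement_use: bool = False,
                           human_editorial_control: bool = False) -> bool:
    """Simplified paraphrase of the transparency duties discussed above; not legal advice.

    use_case is one of: "emotion_or_biometric_categorization",
    "public_interest_text", "deep_fake".
    """
    # Authorized detection, prevention or investigation of crime lifts the duty in all three cases.
    if law_enforcement_use:
        return False
    if use_case == "emotion_or_biometric_categorization":
        return True
    if use_case == "public_interest_text":
        # No duty where the text has undergone human review and someone holds editorial responsibility.
        return not human_editorial_control
    if use_case == "deep_fake":
        return True
    return False  # other uses: check the provider-side duties instead
```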


So, to really finish, I'd like to take us through a deployer checklist: things to take into account prior to deploying AI. As has already been mentioned, there's still time before the relevant provisions of the AI Act become applicable. It is foreseen that the Act will enter into force as a piece of legislation in July this year, but obviously the different sections of the Act are going to become applicable in a staggered manner. The first section to become applicable will be the one on prohibitions, and then further down the line we'll be looking at the sections on high-risk AI systems, which are the ones we're touching on here. So there's still time, but these are considerations to have in place nonetheless before then.
To start with: obligations outside of the AI Act. The AI Act states in several places that it is essentially without prejudice to pre-existing legislation in a series of areas, but the one I want to talk about specifically is the General Data Protection Regulation. To give you an example of how important data protection is in the context of the AI Act: within the summary that public authorities will have to provide in relation to high-risk AI systems, to include in the Commission database, they will also need to provide a summary of a data protection impact assessment if they're compelled by law to carry one out. So these two are intimately linked, and it's really relevant to consider to what extent the deployment of an AI comes with additional obligations; you can think of providers and deployers as controllers and processors respectively, following the terminology of the GDPR. So that's number one.
Number two: whether the AI being deployed is high risk. We covered the different instances, or the different use cases, that could fall under or be categorized as high risk within the Act. By way of a reminder, if any of you is keen to go back to the text of the Act after this presentation, you'll find those high-risk categories in part in Annex III, although, as was already mentioned, an AI system will also be high risk if the AI is used as a product safety component. But in essence, once a use is classified as high risk in Annex III, there will be an opportunity for deployers to assess whether they consider that the high-risk AI system is in fact low risk or minimal risk. So here any local authority wishing to deploy AI systems will have to consider whether indeed that particular use of AI is high risk, because that determination, first of all, will need to be recorded, and then that will indicate the bulk of the obligations likely to apply under the AI Act.
Another thing to consider is the fundamental rights impact assessment. I already mentioned that this will have to be undertaken prior to deployment. It will need to be included in the high-risk use-cases database prepared by the Commission, not in full but a part of it, or at least a summary; but it might also be best practice to make these fundamental rights impact assessments more public, to the extent that this is possible, to ensure that there is appropriate civil society or public oversight.
Another aspect to consider is the sufficiency of AI disclosure. Providers will have this obligation already, but deployers will need to consider if they have additional obligations on top of the obligation already held by providers, and if they're taking the necessary steps to make sure that this disclosure happens in the way it's intended to for the individuals who are affected by the AI system or are facing the AI system.
Then another aspect to consider would be the clarity and robustness of instructions for use. The AI Act already sets a baseline for how thorough these instructions must be; by way of reminder, these would be the instructions provided by the providers to the deployers, and they must be thorough enough to allow a deployer to be able to use the AI system safely. But in addition to that, providers are asked to go even further and to detail how the AI system would operate in reasonably foreseeable instances of misuse, and how it might result in different risks or harms depending on the different uses that a deployer might reasonably engage in. So it will be important for deployers to hold providers to that standard and make sure that the instructions are sufficiently thorough, so that the deployer can then follow them properly.
Another aspect to consider is infrastructure to enable appropriate human oversight. Here again we will recall the obligation of deployers to make sure that the AI is operating safely.
Yes, very sorry to interrupt, we are running very late and I wanted to also leave time to answer the questions; if you can close the presentation a bit, sorry for interrupting. No problem, I'll wrap up.
But another thing to consider is the availability to receive requests for detailed information, or even to receive complaints about fundamental rights, and we can come back to that in the questions. And there's my email address if you want to have any follow-up interactions or informal chats.
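Pulling the checklist together, here is a minimal sketch of how a local authority might track these points per AI system before deployment (the field names paraphrase the items above and are illustrative, not an official artefact):

```python
from dataclasses import dataclass

@dataclass
class DeployerChecklist:
    """One illustrative record per AI system a public authority intends to deploy."""
    system_name: str
    gdpr_obligations_mapped: bool = False           # controller/processor roles clear, DPIA done where required
    high_risk_determination_recorded: bool = False  # Annex III / safety-component check documented
    fria_completed_and_submitted: bool = False      # fundamental rights impact assessment done pre-deployment
    ai_use_disclosed: bool = False                  # affected individuals informed where the Act requires it
    instructions_for_use_adequate: bool = False     # provider instructions thorough enough for safe use
    human_oversight_in_place: bool = False          # trained, competent, empowered overseers
    request_and_complaint_channel: bool = False     # can receive explanation requests and complaints

    def outstanding_items(self) -> list[str]:
        flags = {name: value for name, value in vars(self).items() if isinstance(value, bool)}
        return [name for name, done in flags.items() if not done]

# Hypothetical example: a deployment that still has most items open.
checklist = DeployerChecklist(system_name="benefits-eligibility assistant",
                              gdpr_obligations_mapped=True)
print(checklist.outstanding_items())
```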


Thank you, thank you very much. There are a few questions also in the chat from the participants, and if it's fine with you, I would suggest to move on to Ans; if you want, you can reply to some of them in the chat in the meantime. They are mostly on some of the legal obligations that you mentioned before: on what is sufficient representation, also on the data sets, and I saw a few others, what is a relevant database, what is a substantial modification in the case of the deployer then becoming a provider, and so on. So yes, if you can reply also in the chat, then we can see. I can move to Ans for his presentation. Very sorry for running late, but I can see it's an interesting discussion, and we will take all the questions that have not been replied to and try to reply in the follow-up. But please... you're muted.
Can you see my presentation? Yes, and here... Okay, yes, I don't have much time left, so I'll go through these slides pretty quickly. We had two interesting presentations, one more about the philosophy behind the AI Act and one about the practical consequences for regional and local governments. I have to admit that at the Flemish level we're still struggling with comprehending the full extent of the AI Act, so I will probably only be discussing what we have been doing with respect to the introduction of AI in the Flemish government. So what have we been doing at the regional level? What we have set as a guideline is that we want to fully embrace the power of AI, but in a trustworthy manner, which is the typical European approach to how we want to use AI. To be able to do that, in the Flemish digital agency we have created an AI expertise center, a group of dedicated people who want to stimulate and support the use of AI both in regional government and in local governments.
What we see in the next couple of years is five areas of focus. We want to define our long-term vision and AI strategy; we want to prepare the people and the organization, so we want to provide sufficient training to our people and prepare our organizations for the use of AI systems; we want to be able to guarantee the trustworthiness of the applications that we either deploy or start building ourselves; we want, of course, just as the European Union does, to stimulate innovation and apply AI in new and innovative ways; and finally, since the Flemish AI office, the Flemish AI agency, provides support to local governments and regional government, we also want to develop a number of reusable AI building blocks, so that people can use things like large language models which have been developed specifically for use by government.
What have we done so far? As far as the long-term vision and AI strategy are concerned, and preparing our people and organization, we have defined a number of guiding principles. We have said that AI in the Flemish government has to be democratic, trustworthy, human-centered and sustainable, with the proper use and management of data, and applied with the necessary expertise. Especially on the aspect of trustworthiness, we have said we want all our AI use and applications to satisfy eight requirements. I will not go into them in detail; these are the typical requirements which are also defined at the European level and in a number of other national AI strategies. What we've also done to prepare our people and organization, and to guarantee that they use AI in a trustworthy manner, is that we've defined a number of guidelines for the use of generative AI. We have seen that our civil servants have already been starting to use the publicly available generative AI, like ChatGPT, and we said that in order to do that in a trustworthy manner, we have to define a number of guidelines. These guidelines are pretty straightforward and are now being used not only in the regional government but increasingly also in our local governments, and they're common-sense guidelines on what you have to do if you want to use generative AI in a safe manner.
What we're also doing in order to prepare our people and organization is that we've been looking at a number of AI copilots, and we're particularly evaluating the use of the Microsoft 365 Copilot. You probably have already seen demonstrations of the Microsoft 365 Copilot; we consider it to be a possible AI assistant for our civil servants. But before we introduce it into our organizations, we want to be able to make a definitive business case, because this Copilot does cost money and will have an impact on how we do things. So we're trying to identify scenarios where this Copilot can be used effectively and efficiently, we're trying to determine the profiles of the typical users who can use the Microsoft 365 Copilot in a meaningful way, and we're also looking at the privacy aspects of this Copilot use.
So to conclude, what are we doing for the local authorities? We're going to draft future guidelines regarding the appropriate use of AI copilots in all our office software. We're also going to draft guidelines on the appropriate use of generative AI in digital service delivery; we already have a number of Flemish organizations providing chatbots based on generative AI, and we want to be certain that they use that in the appropriate way. We're still assessing the need for AI training and support: at the Flemish level we have a knowledge center, Data and Society, which looks into the work and societal impact of the use of AI, and in cooperation with this knowledge center we are conducting a survey on the generative AI literacy which exists among local governments, so that we can determine what the need is for further training and education in this domain. And then, as it becomes clear what all the different obligations are that we have to fulfill as part of the AI Act, we will be developing tools to support compliance with the legal requirements, and also tools to verify whether a governmental AI solution complies with the guiding principles that we have defined, and in particular the trustworthiness requirements. That was basically what I wanted to say in five minutes.
Thank you very much. I think you passed the test; it was still very clear despite the limited time, and again apologies for that. But from my side it is very interesting and