Legal implications of the Artificial Intelligence Act for cities, regions, and communities (19 June 2024)
Summary
TL;DR: This conference transcript examines in detail the implications of the EU's forthcoming AI Act, explaining the legal obligations for AI providers and deployers, together with the associated risks and controls. It highlights high-risk AI systems, the need for a fundamental rights impact assessment, and compliance with existing regulation such as the General Data Protection Regulation (GDPR). The speakers also discuss the challenges and strategies for the responsible adoption of AI within regional and local governments.
Takeaways
- 📋 Social scoring along Chinese lines is prohibited for both the public and the private sector, although limited exceptions are permitted in certain circumstances.
- 🚦 Biometric categorisation and facial recognition are sensitive technologies: facial recognition by law enforcement is prohibited, and even where an exception applies it requires prior authorisation by a judge.
- 🈲 Individual-based predictive policing and emotion recognition in the workplace are prohibited, except in specific, controlled contexts.
- 🔏 Untargeted scraping of facial images is prohibited, and the exceptions for medical and safety cases are tightly regulated.
- ⚠️ High-risk AI systems are carefully defined and must meet minimum quality criteria to ensure safety and the protection of fundamental rights.
- 🛠️ High-risk AI systems concern critical infrastructure and areas such as education, employment, and essential services, and require a fundamental rights impact assessment.
- 🏛️ Public authorities have specific obligations regarding fundamental rights impact assessments and must be aware of their duties under the AI Act.
- 📈 The AI Act introduces a new regulatory regime for large-scale AI systems with potential systemic risk, with a centralised AI Office for enforcement.
- 🔄 The AI Act enters into force progressively, with different deadlines for the prohibitions, high-risk systems, and physical high-risk applications.
- 🤝 The AI Pact was created to encourage companies to commit to applying the AI Act's rules before they formally take effect, reflecting a proactive approach to AI regulation.
Q & A
What is social scoring and why is it prohibited?
-Social scoring is a practice originating in China in which individuals start the year with a certain number of points, which are deducted for infringements (such as crossing on a red light). When the score drops too low, certain freedoms, such as taking the train, are restricted. The practice is prohibited to prevent excessive surveillance and unjust restrictions on individual freedoms.
Under what conditions is facial recognition permitted for public authorities?
-Facial recognition is prohibited except in specific circumstances, such as an imminent threat of a terrorist attack. In that case, prior authorisation by a judge is required.
Why are emotion recognition and biometric categorisation prohibited in the workplace?
-These practices are prohibited to protect workers' rights and prevent intrusive surveillance. For example, checking whether an employee smiles when interacting with customers is not permitted.
What are deployers' obligations when using high-risk AI systems?
-Deployers must follow the instructions for use provided by the AI provider, ensure adequate human oversight, monitor the AI systems for systemic risks and serious incidents, and inform workers and their representatives when AI is used.
What counts as a 'high-risk use' of AI under the AI Act?
-A high-risk use involves AI systems deployed in critical infrastructure, public safety, education, employment, and access to essential services, where errors can have serious consequences for individuals.
How does the AI Act ensure the transparency and safety of AI systems?
-The AI Act imposes heightened transparency requirements for general-purpose AI systems, including conformity assessments, registration in a public database, and mandatory disclosure of AI use to the individuals concerned.
What is the AI Pact and how can companies take part?
-The AI Pact is an initiative that allows companies to commit to applying the AI Act's rules before they formally enter into force. Around 500 companies have already joined.
What are the main areas of focus for the use of AI at regional level in Flanders?
-The areas of focus include defining a long-term vision, preparing people and organisations, ensuring trust in applications, stimulating innovation, and developing reusable AI components.
How is Flanders preparing for the introduction of the AI Act?
-Flanders is preparing guidelines for the use of generative AI, evaluating the use of Microsoft 365 Copilot, and developing tools to help local authorities comply with legal requirements and guarantee trust in government AI solutions.
What are the potential impacts of AI systems on business management and competitiveness?
-AI systems can improve efficiency and decision-making, but they require careful management to avoid high risks and guarantee respect for individuals' rights, which can affect companies' competitiveness and reputation.
Outlines
📉 Social scoring and prohibited AI practices
This section covers China's social scoring system, in which citizens start with a certain number of points and can lose them for infringements such as crossing on a red light. It also discusses biometric categorisation, facial recognition, and predictive policing, explaining the restrictions and exceptions for these technologies. For example, facial recognition is prohibited except in specific circumstances, such as an imminent threat of attack. The text also highlights the new rules requiring prior authorisation by a judge before certain AI technologies may be used.
🔍 Obligations and the enforcement system of the AI Act
This section deals with the obligations the AI Act imposes on Member States and national authorities, as well as on the future AI Office. It explains the AI Act's role in regulating high-risk AI systems and the establishment of regulatory sandboxes to facilitate AI development. It also mentions the creation of a centralised body for regulating large-scale AI systems with systemic risk, and the recent launch of that body.
📝 Presentation on provider and deployer obligations under the AI Act
The presentation focuses on the specific obligations for AI providers and deployers under the AI Act. It explains the differences between the two roles and the corresponding obligations, including conformity assessment, registration of high-risk AI systems, and disclosure duties. It also highlights the special obligations for deployers who modify a high-risk AI system or deploy it in specific contexts, such as the workplace or education.
🏛️ Responsible deployment of AI in public authorities
This section describes the Flemish government's efforts to integrate AI responsibly and reliably. It presents the guiding principles for the use of AI, the trustworthiness criteria, and ongoing initiatives to prepare employees and organisations to use AI. It also mentions the creation of an AI expertise centre, the definition of guidelines for the use of generative AI, and the evaluation of Copilot for civil servants.
Mindmap
Keywords
💡Social scoring system
💡Biometric categorisation
💡Facial recognition
💡Predictive policing
💡High risk
💡Critical infrastructure systems
💡Worker training and evaluation
💡Essential public services
💡Fundamental rights impact assessment
💡Legal aid services
💡Regulatory compliance
Highlights
Social scoring, as practised in China, is prohibited; the ban originally covered the public sector and has now been extended to the private sector, with points deducted for actions such as crossing on a red light.
Biometric categorisation, while not relevant to everyone, matters because some try to use it to infer political preferences.
Facial recognition is a key issue for public authorities; it is prohibited but permitted in narrow circumstances subject to prior authorisation.
Individual-based predictive policing is not allowed, but predictive policing focused on specific places and times remains possible.
Emotion recognition in the workplace is prohibited, although its use on customers is permitted.
Untargeted scraping is prohibited, preventing the creation of facial image databases without a specific purpose.
High-risk AI systems are considered potentially beneficial AI, but minimum quality criteria are required to avoid negative impacts on people's lives.
Safety-critical components require verification of the AI before use, as in car braking systems.
Migration authorities and asylum services are high-risk areas requiring particular attention because of the consequences of potential errors.
Evaluating university applicants and detecting cheating are high-risk cases in education and training.
Essential services to the public, such as credit scoring and life insurance pricing, are high-risk cases subject to specific obligations.
The new strand of the AI Act concerns general-purpose AI systems, with additional transparency requirements.
The AI Office has been launched to oversee general-purpose AI systems, promote research and innovation, and handle safety aspects.
Regulatory sandboxes are being set up to help companies develop AI by giving them easy access to the authorities for regulatory questions.
The AI Pact was created to allow companies to commit to applying the AI Act's rules before they formally enter into force.
Public authorities and national market surveillance bodies are responsible for enforcing the rules on high-risk cases.
The obligations of providers and deployers under the AI Act are clearly set out, distinguishing their respective responsibilities and specific requirements.
Deployers, including public authorities, must carry out a fundamental rights impact assessment before deploying a high-risk AI system.
Transcripts
There's a lot of discussion going on. Very quickly: social scoring is obviously something which is prohibited, and for everybody. The public sector was there from the beginning, but it has now also been extended to the private sector. Social scoring is the idea from China: you have a thousand points at the beginning of the year, if you cross the traffic light when it's red they take a point away, and if you're down to 500 points you're not allowed to take the train any more.

We also have biometric categorisation. That's maybe not that relevant for many people, but it's important to have it, because there actually are people trying to do it, trying to identify by the size of your nose whether you're going to vote left or right.

Then we have facial recognition. That's very important for public authorities, especially in law enforcement. It is forbidden, but it's allowed under certain circumstances, for example if there is an imminent threat of a terrorist attack. What has changed relative to the initial proposal is that now you basically need a warrant; it's not called a warrant, but you need prior authorisation by a judge. So it's possible under certain circumstances, but it really has to be checked very carefully.

Individual predictive policing, which is also obviously of interest to public authorities, is not allowed where the focus is on the individual. So no Minority Report. You can still do predictive policing on the basis of place or time: you can still try to get your AI to identify that the most dangerous time of the year is the 31st of August in the afternoon, or that the most dangerous spot right now is a particular crossroads. So that continues to be possible, but it is high-risk. The same applies to biometrics: everything which is forbidden, where there is then an exception, that exception automatically becomes high-risk, and I'll come back later to what that actually means.

Also forbidden is emotion recognition in the workplace. That means for workers: you are not allowed to check the smile on your employees' faces when they interact with customers, but you are actually allowed to do that on the customer side, where it is high-risk. That of course applies to public institutions as well, especially to education, in so far as the public is concerned. And then there's an exception for medical and safety reasons, so you can still have your camera wake up the lorry driver before he falls asleep.
Last but not least, untargeted scraping: you're not allowed to build up databases of facial images just for the sake of it. That's the Clearview case, but it's true not just for the internet; it also holds for CCTV. So you can't just use the CCTV cameras in the train station, in a big shopping centre, or in the hall of a big city building to build up a database of facial images.

So those are roughly the prohibitions. You can see it's still a very small number, around six. The list is a bit longer than it was before, but it has stayed very small, especially because it is, of course, a limited list, whereas the list of allowed uses is unlimited. The high-risk systems haven't changed very much.
One important thing to remember here is that high-risk AI is good AI. The dividing line between bad and good AI really runs between forbidden and high-risk: bad AI is forbidden, you may not do it, while high-risk systems are potentially beneficial AI. There's nothing wrong with them; however, because they can have a huge impact on people's lives if they're not done properly, you really have to have minimum quality criteria, and that's what the AI Act does.

The high-risk use cases concern certain critical infrastructure. There are actually two elements which are not on the list here. First, there's the manufacturing side, the physical: anything which can kill or maim you, moves fast, is made of metal, or has spikes, motors, saws, cars, whatever. If you put any safety-relevant AI into a safety-relevant component, that AI has to be verified. If you put AI into the braking system of your car, you need to verify the AI before you can actually use it there; if you put it into the sound system, you don't, because it's not safety-relevant.

Then there's the second list, which is not here because it's a bit long, which is basically everything in the migration, asylum, and police enforcement area, because there are quite a lot of cases in there. When I say a lot, it's still only something like 20 or 25 cases, so it's still a limited number, but it's quite a lot. Not because the people in that area do a bad job, but simply because if something goes wrong, it's a much bigger problem. If you're talking about the police, if the system malfunctions and you are put in...

Excuse me for interrupting you, but there is a comment in the chat: could you please speak a bit more slowly? It's a very complex and rich topic and it takes a bit of time to process the information. Thank you very much, and apologies for interrupting.

No, no.
Sorry. Okay, so: in the law enforcement area, if something goes wrong the consequences are much worse. If you are put into jail for three days before the AI system realises it has made a mistake, that of course is a much bigger problem than if some other AI system malfunctions and, say, something in a factory goes wrong.

But in addition to that, we also have the list here. You have certain critical infrastructures, such as road traffic or water supply, where of course it's very important that they function for society; that's why they're high-risk. But then you also have things like education and vocational training, especially things like providing access to university, selecting candidates for universities, and evaluating students, but also software to monitor cheating. Obviously these things, getting access to university, passing your exams, are very important for your future career, and therefore if a system mistakenly concludes that you have been cheating when that's not true, that is a serious risk.

Very important for the public sector, just as for the private sector, is employment. That's first of all the recruitment area: job applications and the evaluation of candidates, so the sorting of CVs, which by the way is meant in a broad sense. It's not just that you have a vacancy and then sort the CVs with an artificial intelligence system; the kind of offers you make to candidates is also included here. But it also covers, for example, the evaluation of workers or job termination, or rather the contribution to job termination, of course; all of this is high-risk.

And then we have access to essential private and public services. That was there from the beginning; you're all aware of the big scandal they had in the Netherlands. And a couple of private services have been added, especially in the financial area: credit scoring and the pricing of life and health insurance.

Now, all of these are high-risk cases, and therefore they have to comply with a certain number of obligations. I understand that Laura is going to explain the obligations to you, so I'm going to skip that part, and in particular, for public authorities, there's the fundamental rights impact assessment, which I'm going to skip as well.
The big new element in the final version of the AI Act is general-purpose AI. Here, for the big, but not hugely big, systems you have additional transparency requirements, so that the people who might later want to use a general-purpose system for a specific purpose can actually fulfil their obligations. Then the most important part is general-purpose AI with systemic risk: these are the general-purpose systems which are incredibly big. The threshold here has been set at 10^25 FLOPs, floating-point operations. I have given this speech a lot of times and I've never met anybody who actually knows what a floating-point operation is, but just to give you an idea: GPT-3, no, sorry, ChatGPT was developed at the level of 10^18 FLOPs. Now, you might consider that 10^18 and 10^25 are pretty close, but since we're talking about orders of magnitude, it means ten million times bigger than ChatGPT. So these are very, very, very big systems; these are the systems Elon Musk talks about when he talks about AI taking over the world, the end of the world as we know it. Now, if you have these systems, then you obviously have the same obligations as the lower tier, and you have to do a state-of-the-art model evaluation, which basically means you have to try to turn the system away from what it's supposed to do and make it do things it's not supposed to do; and if you succeed in doing that, then obviously you have to redo the system until eventually it only does what it's supposed to do.
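(Editorial note, as a quick sanity check on the scale comparison, taking the quoted figures of 10^18 and 10^25 FLOPs at face value:

10^25 FLOPs / 10^18 FLOPs = 10^7 = 10,000,000

so the systemic-risk threshold is seven orders of magnitude, that is, ten million times, above the figure quoted here for ChatGPT.)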
Now, with general-purpose AI comes another innovation in the AI Act compared with what we originally proposed: we now have an enforcement system which is basically cut in two. For specific AI, the idea has always been to implement it at national level. Member States designate an authority, which can vary from Member State to Member State; everybody has a choice. Some Member States might want to use their data protection authority, others the cybersecurity authority; Spain, as far as I know, has decided to create a totally new authority. So that's entirely up to the Member States, and they will actually enforce all of the rules on high-risk cases. These Member States then come together in the European AI Board, which is really a glorified Member States committee, and there are two bodies which support the AI Board. One is the scientific panel, where the academic experts tell them what's possible and what's not, and then you've got the advisory forum, which is really where the stakeholders are: industry, academia, civil society, SMEs, any kind of interest group which might want to express its opinions. That's roughly how it stays.

The new thing we have is the AI Office. I hasten to add that we did not ask for that; it was not an idea of the European Commission. The European Parliament decided, and the Council agreed, that it was necessary for general-purpose AI to have one centralised enforcement body in Europe. So the split is really: specific AI at national level, with coordination at European level, while general-purpose AI is regulated by the AI Office. Now, the AI Office was actually launched two days ago, on Monday.
It is a bit bigger than just regulation, and here I come back to what I said at the beginning: we are not only about regulating AI. We don't think AI is very negative; we also think it's very positive, and therefore it was felt to be important that the AI Office does not only deal with the negative parts but also with the positive ones. So the AI Office is responsible for the regulation and safety of AI, but it is also responsible for the promotion of AI, for research in AI, for innovation in AI, and for the use of AI for positive purposes.

Now, as I said, it was launched two days ago, but for the moment it still exists very much on paper, simply because the AI Act is not yet in force; it will only enter into force on the 1st of August. Therefore we still basically have the same people we had before. We have actually started hiring, but the people are not there yet; hopefully the AI Office will be at full strength by the end of the year, and then it will actually be functioning as an office. For the moment it's a bit of an empty shell, not totally empty, because there are some people there, but a shell which still needs to be filled.

An important element of the AI Act, and here again I come back to the fact that we're quite positive, is the sandboxes. The idea is that you have one in each Member State, and basically it allows companies to develop AI with very easy access to the authorities for regulatory questions. If you're developing your AI and you have a question, and you're not sure whether it would actually be compatible with the AI Act, then if you're in the sandbox you basically just pick up the phone and ask. One of the reasons we did that is that one of the lessons of the data protection regulation is that very often the problem is not really what's allowed and what's forbidden, but the uncertainty: people don't really know what they may and may not do, and the sandboxes are really an attempt to address that. For companies, the advantage is that they get much faster access to advice, and they don't need as many lawyers to tell them what they can and cannot do.

The AI Act enters into force progressively over the next couple of years. The first provisions to take effect are the prohibitions, after six months: if the AI Act enters into force on the 1st of August, they apply from the 1st of February next year. The prohibitions come first because they're the easiest to comply with: you just have to stop doing them. You don't need any certification or any control authority; you just have to stop. You still need a transition period, because you might want to replace a prohibited system with another one which is not prohibited.
And of course we will have to provide guidance on that. In general, the AI Act was concluded, as you may remember, very much under time pressure, and therefore there were quite a number of issues which were, well, not left open, but perhaps not as detailed as they could have been. The result is that the AI Office will have to draft something like 60 texts over the next, basically, 24 months. They can be codes of conduct, codes of practice, delegated acts, implementing acts; there's a whole variety of things to do, and of course they have to follow the entry into force. So the first thing we have to do is the guidance on prohibited systems, and we're working very hard on that; the next one is the rules on general-purpose AI, which we're also very much working on right now. Keeping in mind, and here once again I come back to the beginning: when we're drafting this, we're very much trying to ensure that the AI Act works, but also to promote AI and make sure it doesn't get stifled, so that innovation can be developed in Europe even within the framework of the Act.

Last but not least, the Act comes into force relatively quickly, but there are still two years to go for the high-risk applications and three years for the physical high-risk applications. That's why our Commissioner has created the AI Pact, so it's an Act with a P in front of it, a Pact, where basically companies can come forward and promise to apply the rules ahead of time. Instead of applying them in 24 months, they can apply them already today, and we've got around 500 companies already joining. The reason that is actually possible is that many of the obligations we are asking companies to meet are really only state of the art, and therefore many of the large companies which have state-of-the-art AI are fulfilling them anyway. For them it's an easy way to just sign up to the AI Pact; they don't really have to make much of an effort, just a bit, to make sure that everything is compatible.

Okay, and with that I have exceeded my time by five minutes. I'm sorry about that, but I hope you'll have mercy on
me. Thank you very much. I didn't mention it earlier, but we can take a few questions now and then move on to the next presentation. We have two questions in the chat. On one of them, I agree with Gabriel; I had the same doubt about the AI Pact, because I wasn't aware of it either. I only heard about it a few days ago in another presentation, and online there is also an expression of interest to join the AI Pact, but it seems to be mostly directed at industry. So the question would be whether it's only for companies, or whether public authorities, local, regional, and national, can also join the AI Pact and contribute to it.

Well, fundamentally, yes. But it is of course true, and I did not mention the actual obligations which come from the AI Act, that the obligations are mostly for the providers, for the developers, and therefore I guess it's more for AI providers and developers and less for public authorities. Having said that, if public authorities develop their own systems, or in so far as they deploy such systems, we're perfectly happy for them to
join.

Yes, and I believe it is true that most public authorities will fall into the category of deployers, but I think it could still be interesting for them, so that's good to know. Thank you. Then there is a second question in the chat: how do you foresee that the understanding of what establishes the different... oh, it's not very clear. Maybe put the question in the chat; would you like to jump in for a second? I'm not sure if you're asking about the risk.

Yes, I'm asking about the risk.

Ah yes, perfect: so how is it defined what is high-risk and what is not? I'm not sure if that's the question, but we have a definition in the AI Act of what high-risk is, and basically there's a list of criteria: how grave the impact can be for the people affected, how likely people are to be affected, what particular groups are concerned, etc. I don't remember the article, but it's a long list. We will revise the list of high-risk cases regularly, and basically we'll take the exact criteria which are in that article, apply them, and see what else might be, or might need to be, considered high-risk. The idea, of course, is that for things like that you have the European AI Board, because that's where the national authorities are, and the national authorities deal with these things every day. They will notice if something comes up, is developed, or comes to the market which is a high-risk case and should possibly be addressed. They would then come to the AI Board and tell the other national authorities, and the other national authorities, which may or may not have had the same experience, would then perhaps conclude that we should be looking at it, and then we will look at it according to the criteria set out in, I think it's Article 5.

Yeah, thank you. And there's another
couple of questions and also another one
from from my side but I I will give the
floor to laa so that she can present um
and then I will probably there will be
other few questions for you Martin um
but thank you in the meantime um and uh
laa if you want to go ahead with your
presentation I do just let me share my
screen it might just take a second from
my end because I'm needing to give um
permission to
uh the app so try and get it up in the
meantime that would be very
helpful
Yes? No voice. Let me open your presentation — I have the PDF. Okay, strange. Let me try on my end; that might just make it open. But it doesn't show. Let me try again. Yes, perfect, it works — can you see it? Yes, fantastic. All right, let me just — I'm sorry, I don't know if you can see the full screen or just the slide. Just the slide. Full screen would be great, but otherwise it's also fine; zooming in would also be okay. All right, let me just put it in full screen then, so we can get there. All right — sorry for the technical delays.

My name is Laura Lazaro Cabrera. I'm a counsel and programme director for Equity and Data at CDT Europe — the Center for Democracy and Technology. We're a Brussels-based nonprofit civil society organisation that works towards the preservation of human rights in EU law and policy, and today I'm going to be diving with you further into the obligations that the AI Act creates for providers and deployers within the AI Act taxonomy. Just to situate my presentation within what has already been said: you remember the traffic-light system mentioned by Martin. Essentially, we're going to be talking about the obligations imposed in relation to high-risk AI systems. So you have the unacceptable types of AI — that's the red light; the high-risk AI systems — the orange light; and then I guess the yellow light would correspond to those AI systems that specifically present a transparency risk. We're talking about the orange ones now: the ones that are not prohibited, but just below.

Moving to the next slide: for
today, I really have three goals. We don't have a lot of time, so I'll take you through these briefly. The first goal is to be in a position where we can differentiate which obligations correspond to providers and which correspond to deployers. The second is to have a basic understanding of the obligations of deployers — and I say basic because we could spend hours talking about the detail of those obligations, and also, as Martin already mentioned, there will be a lot of output coming from the Commission and the AI Office specifically. We know from Martin's presentation that we can expect further guidelines on high-risk AI systems, but there are also guidelines forthcoming on prohibitions, for example, so there is a lot of ink yet to run on many aspects of the AI Act; this is just a preliminary overview of those obligations. And lastly, I'd like to take you through a few key considerations to take into account prior to deploying an AI system — here I'm going to be moving us away from strict legal compliance and a little bit more into the territory of best practice.
So without further ado: what are we talking about when we talk about the role of public authorities in the AI Act? The taxonomy is broader than just providers and deployers — it also covers distributors and importers — but for the purposes of this presentation and this audience it makes sense to focus on these two concepts: providers and deployers.

As was already mentioned, the bulk of the obligations rests with providers specifically, so from a compliance perspective you probably don't want to be rushing to be a provider unless you already have the infrastructure in place to do so properly. And as you can see from the definition, a provider can include a public authority — for instance, a public authority that develops an AI system, or has one developed, and places it on the market. However, I think most of you in the audience who represent local authorities will have your authority fall within the category of the deployer, if you choose to deploy an AI system: that is, any entity, including a public authority, that uses an AI system under its authority.

But — and this is a key point — a deployer can become a provider within the terms of the Act under a few circumstances. There are three; on the slide I've only put two, because I think those are the most likely ones. The first situation is if the deployer makes a substantial modification to a high-risk AI system, such that it remains a high-risk AI system. The second is if the deployer modifies the intended purpose of the AI system — including if it is a general-purpose AI model, but we won't get too much into that — and the modified purpose brings the AI system into the high-risk category when it was not in that category before. And there is a third instance, where the deployer puts its name or trademark on the high-risk AI system, which I think is a foreseeable way for a deployer to become a provider. So be warned of these distinctions, because the moment you step into this territory you could be holding yourself to the higher standard — the more complex obligations imposed by the Act. So, ahead of jumping into the
deployer obligations, I'd like to quickly run you through the obligations applicable to providers. There are a few technical ones — for instance, ensuring there is a quality management system in place in relation to the high-risk AI they provide, and keeping technical documentation as well as logs. But more importantly — and this is the bulk of the obligations — it is ensuring that a conformity assessment is undertaken in relation to the specific high-risk AI being placed on the market. As many of you may already know, this is a process that predates the AI Act: conformity assessments have been around in product safety legislation for a long time, and because the AI Act is, at its heart, product safety legislation as well — even though it brings in other considerations — it is something that has essentially been adapted to the AI world but has been around for a while.

Other provider obligations tie in with the obligations deployers will have. These include the registration of high-risk AI systems in the database: the Commission will have to develop a database recording all of the high-risk AI uses on the continent, and this obviously includes an obligation on providers to register their AI systems in it. Similarly, providers have an obligation to disclose an AI system in certain circumstances — though, as we will come to see, deployers have a similar obligation in place. And lastly, they have an obligation to ensure that they are providing proper instructions for use. This one is particularly important for deployers, because deployers will have an obligation — which we will come to soon — to actually ensure that these instructions are followed. So I'm just putting that out there for you all to take into
account. Now, looking at what deployers must specifically do under the AI Act: we already talked about the registration of the high-risk AI system in the relevant database. This is already for providers to do, but deployers that are public authorities have a complementary obligation to ensure that a specific section of the information to be put in the database is filled out. So public authorities will specifically have to look into this and make sure they provide the relevant information, which — depending on the type of high-risk AI system — will then be publicly available for other people to consult.

There are also a few safety obligations for deployers to take into account, and these come in three different flavours. The first is to follow the instructions for use already developed by the provider, as well as ensuring that there are sufficient technical and organisational measures in place to be able to actually follow those instructions. Similarly, deployers have a certain obligation, at least, to ensure that the AI system is working properly: they will have to have systems in place so that human oversight can happen, and the individuals in charge of this oversight will need to have the necessary competence, training, authority, and support to be able to do so. Lastly, they will have to monitor AI systems for two things that are defined extensively by the AI Act: systemic risk, or alternatively the likelihood of the high-risk AI system resulting in a serious incident — again, a concept covered extensively by the AI Act. So it's not nothing. Of course, providers have to make sure that their high-risk AI systems function properly, but deployers will have a significant role in monitoring that this remains the case even after the provider has made this
assessment. Another thing that was mentioned in passing in the previous presentation is the obligation to undertake a fundamental rights impact assessment. This is a key obligation, and it's one that again applies specifically to deployers that are public authorities — or, in the words of the Act, deployers governed by public law. The fundamental rights impact assessment will be mandatory in relation to high-risk AI systems, and the template for it will be developed by the AI Office. So there isn't one in place yet, but once there is, public authorities that are deployers of such systems will need to fill out that template and make sure they send it on to the relevant market surveillance authority at national level. This is something they need to have in place prior to the deployment of an AI system, so it is really a key item to put front and centre in the
compliance list. There are also a few special obligations for deployers to take into account — the use of the word "special" is entirely mine, but I'm using it to show that these obligations depend on the context or setting in which deployers seek to deploy the AI system. Firstly, in a situation where a deployer has control over the input data, they will have to ensure that the data is sufficiently representative in view of the purpose of the AI system. Similarly, if the deployer chooses to deploy the high-risk AI system in their workplace, they will have an obligation to inform the affected workers as well as the workers' representatives. And — this is one of the trickier types of high-risk AI system to use — if they are using post-remote biometric systems, that is, systems that carry out biometric identification but not in real time, they will have an obligation to obtain authorisation for this use within 48 hours, and they will have to submit an annual report to market surveillance authorities and data protection agencies. So there are a few uses of AI classified as high risk that come with special obligations in view of the risk they are likely to
present. There are other obligations that are more user-facing. For instance, deployers will have the obligation to inform individuals if AI is used to make decisions about them, or to assist in making decisions about them. Many of you will remember, from lessons learned in the data protection context, that automated decision-making is already prohibited to a certain extent; the AI Act takes this obligation — or this prohibition — a little bit further, and goes as far as to state that if you are making a decision that is assisted by AI, then you have to tell individuals. Similarly, there is an obligation to provide a clear and meaningful explanation of any AI-assisted decision-making. Here, this is not so much an obligation on deployers as it is a right for individuals; however, it effectively translates into an obligation. The Act creates a right for individuals to seek this type of explanation, which in turn — or by extension — means that deployers will have an obligation to provide it. So this is important as well: ensuring that there is an infrastructure in place to manage these requests and address them.
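The infrastructure to receive and answer explanation requests could start as small as a tracked intake record. A minimal sketch in Python — the field names and status values are my own invention, nothing here is prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExplanationRequest:
    # Minimal record for a request to explain an AI-assisted decision.
    # All field names and statuses are invented for illustration.
    requester_id: str
    decision_ref: str            # internal reference of the decision
    received: date
    status: str = "open"         # "open" -> "answered" (invented states)
    explanation: str = ""

    def answer(self, text: str) -> None:
        """Attach a clear, meaningful explanation and close the request."""
        self.explanation = text
        self.status = "answered"

req = ExplanationRequest("citizen-042", "benefit-decision-7", date(2025, 3, 1))
req.answer("The AI system scored the application; a caseworker "
           "reviewed and confirmed the outcome.")
print(req.status)  # answered
```

Even a structure this simple makes the two things the talk asks for auditable: that every request is logged on receipt, and that none is closed without an explanation attached.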
And lastly, disclosure obligations. We already covered that providers have a disclosure obligation in some instances; there will be a concurrent disclosure obligation for deployers as well. For example, if a deployer is using an emotion recognition or biometric categorisation AI system, they will have to disclose that, unless the use is for the prevention, detection, or investigation of criminal offences. Similarly, if a deployer is using text-generating AI with the specific purpose of informing the public on matters of public interest, again they will have to disclose, unless the criminal investigation exception applies, or alternatively the AI output has undergone a process of human review or editorial control and there is somebody who holds editorial responsibility over the content. And finally, when it comes to deep fakes — which, as some of you may have missed, the AI Act explicitly addresses and also defines — there will again be an obligation to disclose, unless once more the use is authorised for the detection, prevention, or investigation of crime.

So, to really finish, I'd like to
take us through a deployer checklist — things to take into account prior to deploying AI. As has already been mentioned, there is still time before the relevant provisions of the AI Act become applicable. It is foreseen that the Act will enter into force as a piece of legislation in July this year, but obviously the different sections of the Act will become applicable in a staggered manner. The first section to become applicable will be the article on prohibitions, and then, further down the line, we'll be looking at the sections on high-risk AI systems, which are the ones we're touching on here. So there is still time, but these are considerations to have in place nonetheless before then.
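That staggered timeline can be sketched by computing each milestone as an offset from entry into force. The offsets below (6 months for prohibitions, 12 for governance and general-purpose AI rules, 24 for most high-risk obligations) match the Act's general scheme, but treat the entry-into-force date and the exact milestones as assumptions to be checked against the final text:

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a number of calendar months."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])  # clamp to month end
    return date(year, month, day)

# Assumed entry-into-force date; offsets per the Act's staggered scheme.
entry_into_force = date(2024, 8, 1)
milestones = {
    "prohibitions apply": add_months(entry_into_force, 6),
    "GPAI / governance rules apply": add_months(entry_into_force, 12),
    "most high-risk obligations apply": add_months(entry_into_force, 24),
}
for name, when in milestones.items():
    print(f"{name}: {when.isoformat()}")
```

The clamping in `add_months` matters for dates like 31 January plus one month; for compliance planning you would, of course, replace the assumed dates with the ones in the Official Journal publication.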
To start with, obligations outside of the AI Act: the AI Act states in several places that it is essentially without prejudice to pre-existing legislation in a series of areas, but the one I want to talk about specifically is the General Data Protection Regulation. To give you an example of how important data protection is in the context of the AI Act: within the summary that public authorities will have to provide in relation to high-risk AI systems for inclusion in the Commission database, they will also need to provide a summary of a data protection impact assessment, if they are compelled by law to carry one out. So these two are intimately linked, and it's really relevant to consider to what extent the deployment of an AI system comes with additional obligations. You can think of providers and deployers as controllers and processors, respectively, following the terminology of the GDPR. So that's
number one. Number two: whether the AI being deployed is high risk. We covered the different instances, or use cases, that could be categorised as high risk within the Act. By way of a reminder, if any of you is keen to go back to the text of the Act after this presentation, you'll find those high-risk categories in Annex III — although, as was already mentioned, an AI system will also be high risk if the AI is used as a product safety component. In essence, once a use is classified as high risk in Annex III, there will be an opportunity for deployers to assess whether they consider that the high-risk AI system is in fact low-risk or minimal-risk. So any local authority wishing to deploy AI systems will have to consider whether that particular use of AI is indeed high risk, because that determination, first of all, will need to be recorded, and it will then invite the bulk of the obligations likely to apply under the AI Act. Another
thing to consider is the fundamental rights impact assessment. I already mentioned that this will have to be undertaken prior to deployment. First of all, it will need to be included — not in full, but a part of it, or at least a summary — in the high-risk use-case database prepared by the Commission. But it might be best practice to make these fundamental rights impact assessments more public, to the extent that this is possible, to ensure that there is appropriate civil society or public oversight. Another aspect to consider is the sufficiency of AI disclosure: providers will have this obligation already, but deployers will need to consider whether they have additional obligations on top of those already held by providers, and whether they are taking the necessary steps to make sure that disclosure happens the way it is intended to for individuals who are affected by the AI system or are facing the AI
system. Then another aspect to consider is the clarity and robustness of the instructions for use. The AI Act already sets a baseline for how thorough these instructions must be — by way of reminder, these are the instructions provided by the providers to the deployers, and they must be thorough enough to allow a deployer to use the AI system safely. But in addition to that, providers are asked to go even further and to detail how the AI system would operate in reasonably foreseeable instances of misuse, and how it might result in different risks or harms depending on the different uses a deployer might reasonably engage in. So it will be important for deployers to hold providers to that standard and make sure the instructions are sufficiently thorough, so that the deployer can then follow them properly.
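One way for a deployer to hold providers to that standard is a simple completeness check over the instructions for use. The required topics below are paraphrased from the talk (intended purpose, reasonably foreseeable misuse, human oversight measures) — an illustrative sketch with invented keys, not the Act's official checklist:

```python
# Illustrative completeness check on provider instructions for use.
# Topic names are paraphrased from the talk, not quoted from the Act.
REQUIRED_TOPICS = {
    "intended_purpose",
    "reasonably_foreseeable_misuse",
    "human_oversight_measures",
}

def missing_topics(instructions: dict) -> set:
    """Return required topics that are absent or left empty."""
    return {t for t in REQUIRED_TOPICS if not instructions.get(t)}

draft = {
    "intended_purpose": "Triage of incoming permit applications",
    "human_oversight_measures": "A caseworker reviews every rejection",
}
print(sorted(missing_topics(draft)))  # ['reasonably_foreseeable_misuse']
```

A deployer could run a check like this on receipt of the documentation and push incomplete instructions back to the provider before deployment, rather than discovering the gap in operation.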
Another aspect to consider is the infrastructure to enable appropriate human oversight — here again we recall the obligation of deployers to make sure that the AI is operating safely —

Yes, very sorry to interrupt: we are running very late, and I also wanted to leave some room for the questions — if you can wrap up the presentation a bit. Sorry for interrupting.

No problem, I'll wrap up. One more thing to consider is the availability to receive requests for detailed information, or even to receive complaints on fundamental rights — we can come back to that in the questions. And there is my email address if you want any follow-up interactions or informal chats. Thank you.

Thank you very
much. There are a few questions in the chat from the participants as well. If it's fine with you, I would suggest moving on to Hans — and if you want, you can reply to some of them in the chat in the meantime. They are mostly on some of the legal obligations you mentioned before: what counts as sufficient representation in the data sets; what the relevant database is; what a substantial modification is, in the case of the deployer becoming a provider; and so on. So yes, if you can reply in the chat, then we can see — and I can move on to Hans for his presentation. Very sorry for running late, but I can see it's an interesting discussion. We will take all the questions that have not been replied to and try to reply in the follow-up. But please — you're muted.
Can you see my presentation? Yes. And here — okay. Yes, I don't have much time left, so I'll go through these slides pretty quickly. We had two interesting presentations: one more about the philosophy behind the AI Act, and one about the practical consequences for regional and local governments. I have to admit that at the Flemish level we are still struggling to comprehend the full extent of the AI Act, so I will probably only be discussing what we have been doing with respect to the introduction of AI in the Flemish government.

So what have we been doing at the regional level? The guideline we have set is that we want to fully embrace the power of AI, but in a trustworthy manner — which is the typical European approach to how we want to use AI. To be able to do that, in the Flemish digital agency we have created an AI expertise centre: a group of dedicated people who want to stimulate and support the use of AI both in the regional government and in local governments. What we see in
the next couple of years is five areas of focus. We want to define our long-term vision and AI strategy. We want to prepare the people and the organisation — so we want to provide sufficient training to our people and prepare our organisations for the use of AI systems. We want to be able to guarantee the trustworthiness of the applications that we either deploy or start building ourselves. We want — like the European Union, which wants to stimulate innovation — to apply AI in new and innovative ways. And finally, since the Flemish AI agency provides support to local governments and the regional government, we also want to develop a number of reusable AI building blocks, so that people can use things like large language models which have been developed specifically for use by government.

What have we done so far? As far as the long-term vision and AI strategy is concerned, and preparing our people and organisation, we have defined a number of guiding principles. We have said that AI in the Flemish government has to be democratic, trustworthy, human-centred, and sustainable, with the proper use and management of data, and applied with the necessary
expertise. Especially on the aspect of trustworthiness, we have said we want all our AI use and applications to satisfy eight requirements. I will not go into them in detail; these are the typical requirements which are also defined at the European level and in a number of other national AI strategies.

What we have also done to prepare our people and organisation, and to guarantee that they use AI in a trustworthy manner, is define a number of guidelines for the use of generative AI. We have seen that our civil servants have already started to use publicly available generative AI tools like ChatGPT, and we said that in order to do that in a trustworthy manner, we have to define a number of guidelines. These guidelines are pretty straightforward; they are now being used not only in the regional government but increasingly also in our local governments, and they are common-sense guidelines on what you have to do if you want to use generative AI in a safe manner.

What we are also doing in order to prepare our people and organisation is looking at a number of AI copilots; in particular, we are evaluating the use of the Microsoft 365 Copilot. You probably already have
seen demonstrations of the Microsoft 365 Copilot. We consider it a possible AI assistant for our civil servants, but before we introduce it into our organisations we want to be able to make a definitive business case, because this copilot does cost money and will have an impact on how we do things. So we are trying to identify scenarios where this copilot can be used effectively and efficiently, we are trying to determine the profiles of the typical users who can use the Microsoft 365 Copilot in a meaningful way, and we are also looking at the privacy aspects of this copilot
use. So, to conclude: what are we doing for the local authorities? We are going to draft future guidelines regarding the appropriate use of AI copilots in all our office software. We are also going to draft guidelines on the appropriate use of generative AI in digital service delivery: we already have a number of Flemish organisations providing chatbots based on generative AI, and we want to be certain that they use it in the appropriate way. We are still assessing the need for AI training and support: at the Flemish level we have the Knowledge Centre Data & Society, which looks into the work-related and societal impact of the use of AI, and in cooperation with this knowledge centre we are conducting a survey on the generative-AI literacy that exists among local governments, so that we can determine the need for further training and education in this domain. And then, as it becomes clear what all the different obligations are that we have to fulfil as part of the AI Act, we will be developing tools to support compliance with the legal requirements, and also tools to verify whether a governmental AI solution complies with the guiding principles that we have defined, and particularly the trustworthiness requirements. That was basically what I wanted to say in five
minutes. Thank you very much — I think you passed the test; it was still very clear despite the limited time. Again, apologies for that. From my side it is very interesting, and