European Schoolnet Podcast Episode 3 - The role of AI in Gender-Based Violence.
Summary
TL;DR: The European Schoolnet podcast episode discusses the critical role of AI in addressing online gender-based violence. Postdoctoral researcher Sylvia Simine, an expert in tech-facilitated gender-based violence, highlights the risks of AI misuse, such as deep fakes and algorithmic bias, and their impact on women. She emphasizes the importance of ethical AI development that includes marginalized communities, and stresses the need for societal and cultural change alongside legal measures to combat this digital violence.
Takeaways
- The podcast episode discusses the role of AI in addressing and potentially facilitating gender-based violence online, highlighting the transformative potential and risks of AI, such as data privacy, ethical implications, and bias.
- AI systems can be misused to create and distribute deep fakes and manipulated content, which often targets women, leading to significant psychological and social impacts.
- The intersection of AI and gender-based violence is nuanced and ever-changing, with risks including online harassment, defamation campaigns, and the reinforcement of harmful gender stereotypes.
- The misuse of AI tools often stems from a desire to target and harm women, reflecting broader societal issues of misogyny and the need for more inclusive and ethical technology development.
- The growth of online misogyny and the accessibility of AI tools contribute to the rise in gender-based violence, with young people potentially using these tools out of curiosity or a lack of understanding of the harm they cause.
- The importance of sexual and digital education is emphasized to equip young people with the knowledge to navigate the digital world safely and understand the implications of technology on gender relations.
- The campaign for the criminalization of non-consensual image sharing in Italy demonstrates the power of collective action and political engagement in driving legal and societal change.
- The role of intersectional feminism in guiding technological development is highlighted, emphasizing the need for diverse and inclusive perspectives to prevent the perpetuation of harmful biases.
- The discussion underscores the non-neutrality of data and algorithms, and the necessity of transparency and ethical considerations in AI development to avoid reinforcing societal biases.
- The need for a cultural and ethical shift alongside technological advancements is stressed, rather than relying solely on technical solutions to combat gender-based violence.
- The episode concludes with a call to action for continued efforts in education, political will, and cultural change to create a safer digital environment and address gender-based violence effectively.
Q & A
What is the main focus of the European Schoolnet podcast series episode featuring Sylvia Simine?
-The episode focuses on the intersection of artificial intelligence and gender-based violence online, discussing the role of AI in facilitating or addressing such violence.
What are some examples of how AI can be misused to facilitate gender-based violence online?
-Examples include deep fake technology for creating manipulated explicit content, AI algorithms used for online harassment or doxing, and AI chatbots programmed with biases that reinforce gender stereotypes.
What percentage of deep fake content online is sexually explicit, and who is the majority of this content targeting?
-96% of deep fake content online is sexually explicit, and 99% of this content targets women.
How can generative AI algorithms contribute to gender-based violence?
-Generative AI algorithms can be used for defamation campaigns, automating online harassment, or escalating violence through the spread of hateful or threatening messages.
What is the potential issue with AI-driven chatbots and virtual assistants?
-They can be programmed with biases that contribute to reinforcing harmful gender stereotypes.
What is the term used to describe the immersive reality experiences where online violence can be more vividly experienced, and what is an example of such violence?
-The term is 'metaverse', and an example of violence in this context is 'Meta rape'.
What is the role of algorithmic bias in AI systems?
-Algorithmic bias can unintentionally perpetuate and amplify societal biases present in the training data, leading to discriminatory outcomes.
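This inheritance of bias from training data can be made concrete with a small sketch. Everything below is fabricated for illustration (the data, the "model", and the hiring scenario are all hypothetical): a toy model that only counts historical outcomes ends up giving identical candidates different odds, even though no explicitly discriminatory rule appears anywhere in the code.

```python
# Toy illustration (hypothetical data): a "hiring model" that learns
# nothing but label frequencies from historical decisions. The bias
# comes entirely from the skewed data, not from any explicit rule.
from collections import Counter

# Fabricated historical records: (gender, hired?)
history = ([("m", True)] * 80 + [("m", False)] * 20
           + [("f", True)] * 20 + [("f", False)] * 80)

def train(records):
    """Estimate P(hired | gender) by counting past outcomes."""
    counts = Counter(records)
    rates = {}
    for g in ["m", "f"]:
        hired = counts[(g, True)]
        total = hired + counts[(g, False)]
        rates[g] = hired / total
    return rates

model = train(history)
print(model)  # {'m': 0.8, 'f': 0.2} -- identical candidates, different odds
```

The point of the sketch is that "fixing the algorithm" is not enough: the counting logic is perfectly neutral, so the discriminatory outcome can only be addressed by examining the data it was trained on.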
What was the main finding from Sylvia Simine's research on non-consensual dissemination of intimate images on Telegram?
-The research found that these channels were used by men to construct masculinity by using women's bodies as trade currencies, normalizing non-consensual sharing of intimate images.
How do AI systems reinforce gender stereotypes, according to Sylvia Simine's discussion on social media platforms?
-AI systems on social media platforms can reinforce gender stereotypes through biased algorithms that decide what content is seen, often encoding discriminatory social norms into their operations.
What are some of the issues with AI systems in terms of data collection and privacy?
-AI systems often scrape data from the internet non-consensually, use it to train models, and can lead to ethical discussions regarding the use of personal data without consent.
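The scrape-and-train pipeline described in this answer can be sketched in a few lines. The example below is entirely fabricated for illustration: the structure of a naive pipeline is simply to concatenate whatever text it finds and fold it into the model's statistics, with no step that records or checks consent.

```python
# Hypothetical sketch of a scrape-and-train pipeline. Real systems
# crawl billions of pages, but the structure is the same: collected
# text goes straight into training statistics; consent is never checked.
from collections import Counter

# Stand-in for text scraped from public pages (fabricated examples)
scraped_pages = [
    "alice posted this personal essay online",
    "bob shared this photo caption online",
]

def build_corpus(pages):
    """Concatenate everything found into one token stream."""
    return " ".join(pages).split()

def train_unigram(tokens):
    """A trivial unigram 'model': relative token frequencies."""
    total = len(tokens)
    return {w: c / total for w, c in Counter(tokens).items()}

corpus = build_corpus(scraped_pages)
model = train_unigram(corpus)
# Personal details from the scraped text are now baked into the model
print("alice" in model, "bob" in model)  # True True
```

Note that nothing in the pipeline distinguishes a consensually published text from a leaked or personal one; once scraped, both are indistinguishable inside the model.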
What is the importance of including marginalized communities in the development of AI technology?
-Including marginalized communities ensures a more inclusive and diverse development of technology, leading to better results and innovations that avoid perpetuating existing biases and discrimination.
What was the outcome of Sylvia Simine's campaign against non-consensual image sharing in Italy?
-The campaign led to the criminalization of non-consensual image sharing in Italy, although it did not initially include digital platforms as part of the responsibility.
What is the role of education in addressing the issues discussed in the podcast?
-Education plays a crucial role in raising awareness about the risks and ethical implications of AI, promoting a better understanding of consent, privacy, and the impact of technology on society.
How can young people be empowered to navigate the digital world safely?
-By providing them with digital and sexual education, fostering an understanding of technology's impact, and promoting critical thinking to avoid falling into traps set by AI systems or malicious online behaviors.
Outlines
AI and Gender-Based Violence: The Transformative and Risky Potential
The podcast episode begins with a discussion on the role of artificial intelligence (AI) in addressing and potentially facilitating gender-based violence online. The host, Ena, introduces the topic by highlighting the transformative potential of AI and its associated risks, such as data privacy, ethical implications, and AI bias. The guest, Sylvia Simine, a researcher and international activist, shares her expertise on technology-facilitated gender-based violence. The conversation touches on the misuse of AI systems, including deep fakes and manipulated content that can perpetuate harmful gender stereotypes and facilitate online violence.
The Nuances of Online Gender-Based Violence and AI
Sylvia provides an in-depth look at the types of online gender-based violence risks that young people and children may encounter with the use of AI. She discusses the ever-evolving nature of these risks, starting with deep fake technology, which is predominantly used to create sexually explicit content targeting women. Sylvia also mentions other AI-related issues such as defamation campaigns, online harassment, doxing, and the reinforcement of gender stereotypes by AI-driven chatbots and virtual assistants. The conversation emphasizes the importance of understanding the intersection of AI and gender-based violence as a complex and dynamic phenomenon.
The Misuse of AI Tools and the Predators' Motivations
The discussion delves into the reasons and motivations behind the misuse of AI tools, particularly by predators and offenders. Sylvia explains that the creation of deep fakes and manipulated content can stem from various motivations, including targeting women to silence their voices online and reinforcing traditional masculine values. She also touches on the accessibility of deep fake tools and the potential for misuse by individuals who may not fully understand the harmful consequences of their actions. The conversation highlights the broader social issues related to misogyny and the need for comprehensive sexual education to address the underlying problems.
Research Insights on Non-Consensual Image Sharing and Masculinity
Sylvia shares her research findings on the non-consensual dissemination of intimate images, specifically within Telegram groups. Her study, conducted with colleague Lucha Botti, explored the social reasons behind the formation of these misogynistic channels and the role of masculinity in such behaviors. The research revealed a 'homo-socialization' dynamic where men use women's bodies as a currency to construct their masculinity. Sylvia discusses the shocking findings, including the existence of an archive referred to as 'the Bible,' and the implications of these behaviors on societal norms and values.
Algorithmic Bias and the Perpetuation of Gender Stereotypes
The conversation shifts to the role of AI algorithms in reinforcing gender stereotypes. Sylvia provides examples of how AI systems, including those on social media platforms, can perpetuate discriminatory norms and biases. She discusses the opaque nature of these algorithms, which often operate as 'black boxes,' making it difficult to identify and address the biases they propagate. Sylvia emphasizes the need for transparency and the inclusion of diverse perspectives in the development of AI to prevent the perpetuation of harmful stereotypes.
AI Data Collection and the Ethical Implications of Generative AI
Sylvia explains the technical aspects of how AI systems collect and use data, focusing on the ethical implications of generative AI. She discusses the non-consensual scraping of data from the internet to train AI models, which can lead to the propagation of existing biases. Sylvia stresses the importance of considering data neutrality and the need for an ethical approach to AI development, particularly with regards to intersectional issues such as gender, race, class, and sexual orientation.
The Impact of AI on Sexual Education and Youth
The discussion addresses the impact of AI on sexual education, particularly in the context of platforms like Pornhub. Sylvia shares her research findings on how these platforms track users and personalize content based on collected data, often reinforcing heterosexual male gaze. She argues for the importance of comprehensive sexual education to counteract the potential negative influences of AI-driven content on young people's understanding of sex and relationships.
The Role of Digital and Sexual Education in Navigating AI Risks
Sylvia discusses the importance of digital and sexual education in helping young people navigate the risks associated with AI, such as automated harassment and the manipulation of emotions by AI chatbots. She emphasizes the need for a collective understanding of privacy, intimacy, and consent, rather than placing the burden on individuals. Sylvia also highlights the role of political will in implementing ethical guidelines for AI development and the importance of cultural change alongside technological advancements.
Legal and Cultural Shifts in Addressing Non-Consensual Image Sharing
Sylvia shares her experience with the campaign that led to the criminalization of non-consensual image sharing in Italy. She describes the process from its online inception through a petition to the eventual legal changes. Sylvia emphasizes the importance of continued advocacy, both nationally and internationally, to address this transnational issue. She concludes by stressing the power of collective action and the necessity of legal and cultural shifts to achieve justice for survivors of online gender-based violence.
Keywords
European Schoolnet
Gender-based violence
Artificial Intelligence (AI)
Deep fakes
Algorithmic bias
Digital citizenship
Online harassment
Ethical implications
Sexual education
Data privacy
Feminist activism
Highlights
The podcast discusses the role of AI in addressing and potentially facilitating gender-based violence online.
AI systems can be misused to create deep fakes and reinforce harmful gender stereotypes.
96% of deep fake content online is sexually explicit, with 99% targeting women, indicating a gendered risk.
AI-driven chatbots and virtual assistants can be programmed with biases that perpetuate gender stereotypes.
Algorithmic bias in AI systems can unintentionally amplify societal biases present in training data.
The importance of addressing online misogyny and its impact on the creation of deep fake content.
The role of sexual education in understanding the violence behind using AI applications.
A study on the nonconsensual dissemination of intimate images on Telegram, revealing a social reason behind the sharing of such content.
The campaign for the criminalization of non-consensual image sharing in Italy and its impact.
The need for transparency in how platforms like Pornhub track users and the implications for privacy.
AI systems often lack inclusivity, with teams developing AI being predominantly composed of white men.
The potential for feminist activism and approaches to guide the development of more ethical AI technologies.
The importance of understanding AI as a cultural and social issue, not just a technical one.
The role of intersectional feminism in addressing biases in AI and promoting more inclusive technology.
The potential for AI to be used for good, as demonstrated by feminist hackers and activists.
The challenge of educating young people about the risks of AI chatbots and the importance of understanding consent and privacy.
The MANABO toolbox, an interactive game aimed at empowering digital gender dialogue and challenging stereotypes.
Transcripts
welcome to the third episode of the European
Schoolnet podcast series that today we
are running together with the manabo
project that aims to tackle online
gender-based violence and today we're
talking about the intersection the role
that AI plays in addressing
gender-based violence artificial
intelligence is one of the most
discussed topics nowadays mostly because
of its transformative potential but
also because of the risks that it poses
particularly concerning data privacy
ethical implications and AI bias for
instance AI systems can um be misused
and can cause uh and facilitate gender
based violence online from Deep fakes
and manipulated content to AI algorithms
that reinforce harmful gender
stereotypes my name is Ena and I'm
working in digital citizenship
Department of European school net where
we tackle the topics related to Online
safety digital rights digital skills and
media literacy and everything related to
digital environments and the use of
technology and um today I'm here with
amazing guest Sylvia simine who is um a
researcher postdoctoral researcher and
her uh expertise focuses on technology
facilitated gender-based violence online
and also uh she is an international
activist and thanks to her
contribution uh now uh the in the
non-consensual sharing of intimate
images is criminalized in Italy so I'm
very much looking forward Sylvia to speak
to you today I have a lot of questions
so thank you so much for joining us
thank you Ena for the invitation and to
manabo for inviting me I'm very happy
to be here and
chatting perfect so let's dive into it
um my first question would be um could
you give us like a general overview on
what uh what type of risks were I
talking about um when when young people
and children go go online uh what type
of online gender-based violence risk
they can face and encounter with the use
of
AI of course in so I would start from
the very beginning of the whole question
so as you have rightly mentioned you
know that AI right now is a buzzword so
we're seeing it everywhere in the past
year everyone is discussing generative
AIs and their harms and potential risks
also possibly what they could do for
good um but in the conversation around
gender-based violence particularly both
on women and young people this is very
interesting because AI is uh showing us
how gender-based violence online is a
very nuanced phenomenon and also it's
everchanging together with technology so
it's very difficult to give an overview
which is uh exhaustive right now because
we have to take into account that this
is always changing and evolving
so um what we can see right now is that
in the past year at least there was a
growth of risks uh embedded into
online gender-based violence and their
relationship with AI and the first
example that I'm thinking of and I'm
sure you must be also familiar with it
is uh what is called known as deep fake
technology deep fake technology is uh
basically the use of AI for creating
manipulated content manipulated pictures
video audios and 96% of this content
online is sexually explicit meaning that
most of this deep fake content is abuse
uh 99% of this content targets women so
uh this shows us the gendered side of
this kind of particular risk which can
have a big detrimental impact on the
reputation and on the on the psychologic
and social life of women and girls
uh but let's not just focus on deep fake
technology because this is just one of
the problems that AI can cause I'm going
to list these and then if you want we can
get a bit into more depth uh for example
generative AI algorithms can also be
used against women and girls for a
defamation campaign or for automating
what it's called online harassment or
doxing this is already happening and
this is also very important for example
in light of political elections or
crucial political moments where women
are uh of course the most targeted for
hateful or threatening messages which
are now facilitated by this uh
generative AI which can escalate this
kind of violence then I also want to
mention the AI-driven chatbots and
virtual assistants which can be
programmed with certain biases that can
contribute to reinforcing gender
stereotypes then we have uh virtual
reality augmented reality the metaverse which
are very immersive realities where online
violence can also be uh lived in a
more immersive way and we have what is
called The Meta rape for example already
experienced in the metaverse and finally
maybe this is the most common and also
something that has been uh around for a
very long time now uh which is the
algorithmic bias that can
unintentionally perpetuate and amplify
societal biases which are present in the
training data just an overview for you yeah
yeah very very comprehensive overview I
think we'll need to go into details for
each each uh issue in particular so um
am I right to say that it's um some
issues are related to the behavior of
users around AI tools and around AI
systems but something is of course based
on AI algorithms that are out there so
if we're looking at the kind of like the
issue of misusing AI tools how um maybe
you've done some uh some work some in
your research
that uh gives us an understanding of
this Behavior what type of reasons
predators and offenders uh follow what
type of incentives do they have to act
like this to create deep fakes and
manipulated content what triggers them
so it's very complicated there is not
just a simple question to a simple
answer to this question I think because
reasons can be very different as well
depending on what um which level you are
of creating deep fakes there are people
who are uh the very beginning of the
implementation of deep fakes so they can
code and they can decide openly to uh
create deep fakes uh against women for
example and then there are also users
who can do it from a a more naive way
somehow because these deep fakes Bots
and application are very easy to use and
to find they are free they are very
quick so I would say it highly depends
we have to take into account that right
now we are witnessing a moment of a big
growth of online misogyny so it means
that the so-called manosphere what we
always thought that it was uh just a
closed environment of the internet is
popping the bubble is popping so it's
going everywhere and we have a big
monoculture I would say on the internet
uh so part of the reason I would say is
often to Target women and to harm them
to make sure their voices can stay out
of the internet which has been
traditionally very masculine
masculinized so so uh this is uh
specifically happening to those who
decide openly to uh create this kind of
content but I think that in general deep
fake contents are also understood as
less malicious because of this idea of
them being fake so it means we
underestimate the actual harms that women
and girls can live when they experience
deep fake abuse uh so as I was saying 99%
of this abuse is targeting women these
AIs are trained on women's bodies so we have
a huge social problem here I think uh
especially for young men and boys who
are using this kind of tools out of I
think also sort of curiosity or thinking
that this is not harmful again they
don't have a sexual education that makes
them understand the violence that is
behind using these kind of applications
and platforms and uh I also have this
feeling that much can also do with the
so-called pornification of society so
meaning we are exposed to a lot of porn
images or sexualized images and we also
tend to look for more extreme content to
what is felt as more
authentic and more intriguing somehow so
deep fake technology can also provide
this idea to have more authentic content
you know so they want to address women
that they know for example feeling that
this is more exciting somehow and this
has a lot to do with masculinities as
well very interesting yeah I think it's
it's interesting to see how accessible
these platforms are to be honest I was
shocked when I discovered that uh and also
the fact that um yes the it's highly
customizable so you can definitely like
use your imagination and fantasy and
allow it to go the directions that you
you wanted to yeah which can pose some
risks um so my question to you actually
uh in regards to masculinity you've done
some um amazing work and a study on the
telegram case if I'm not mistaken and it
was on the nonconsensual dissemination
of intimate images could you probably
tell us more about it yes I think this
research is very very connected with
what what I was saying also because
imagine that already back to 2020 when
the research took place deep fake porn
was already there so meaning there were
already Bots for deep faking women and
girls so very briefly what I did
together with a colleague Lucha Botti
was accessing 50 groups and channels we
created a sample of this abusive and
misogynist channels on telegram which
are specifically created for sharing
non-consensual sexual
material of women and girls just women
and girls so they are populated mostly
by men and our research had a research
question at the very basis which was why
why is this happening why are these men
meeting to share this type of content
and what is happening what is wrong what
is the social reason behind this and
beyond finding material that was uh
really shocking because it was not just
of course the content and the sexually
explicit content that we found
non-consensually shared but we even
found an archive called the Bible where
all this material was um collected and
encrypted to make sure that if the
channel gets closed all the material will
be saved so they have this holy idea also
of this non-consensual material that's
why it's called the Bible and there
there were also material of women who
are already dead like Tiziana Cantone the
Italian surv well not survivor the
Italian victim uh that then also uh
opened the discussion about the
non-consensual dissemination of intimate
images in Italy and um to reply to our
question and go back to masculinity we
understood that the problem was mostly
about how these men and boys from every
age and from every kind of social class
were creating their social relations they
were peer relations so it's what we
called the homo-socialization of peers
the heteronormative homo-socialization
meaning that women's bodies are used as
trade currencies to construct
masculinity to make other men and other
boys telling you you are a good man you
are a true man you're a real man because
you desire and you like women and you
like sex and so in this sense the
consent is negotiated or even hidden
women are just objects you know and that
was I think the most shocking result and
also telling us so much about what we
are missing right now in society to
limit this kind of problem yeah yeah
yeah wow it's really shocking and a
little bit sad as well to hear that
stuff like this exists out there well
thank you so much for sharing Sylvia uh I
think I would like to now move a little
bit towards the algorithms and gender
stereotypes and how this really um uh
the intersection that it has could you
probably give us some examples on how AI
systems have reinforced perhaps the
gender
stereotypes of course there are many
many examples in this case as I said
some of them are already out there for a
very long while and thinking for example
of social media platforms you know
sometimes it makes me smile that now
we're discussing generative AI and
artificial intelligence at large
as if it was something new something
that uh has just been popping up in
society but social media platforms
have been using artificial
intelligence forever to for example uh
target ads or deciding which
content we can see whose voices should
be listened to or not so this means that
for example in the in this case of
social media platforms one of the most
um interesting examples and maybe the
most known is censorship right so how
these algorithms can decide which
bodies can be seen uh how women's bodies
for example should or should not be seen
so it means that they have um basically
encoded rules social norms that are very
discriminating into written norms online
so for example you know we cannot show
our nipples online uh but men can and
this is just arbitrary you know I don't
think we ever had a discussion on this
but then if we move to other more
complex system that are also already out
there and I'm thinking for example of um
systems algorithmic systems that decide
who to hire so this was an example in
Amazon um deciding who was uh the best
candidate to cover a very high position
in computer science inside Amazon and
not surprisingly this algorithm was
excluding women from um from the process
because it was fed again with
discriminatory data and you know we have
a problem of women in uh tech so the
algorithm was just reinforcing this
another example that comes to my mind is
an example of Spain uh where the
government decided to use and Implement
an algorithmic system that was used to
decide and classify the risk of violence
that a woman could have based on her uh
experience of stalking for example or
previous violence and these algorithms again
were fed with insufficient data or
biased data so the result uh was killed
women who were classified as low risk
you know in Spain what happens with many
of these systems it's something that I
want to highlight is that often times
they are very opaque and they are black
boxes where this data becomes very
difficult to check so if we don't open
these black boxes we cannot face these
biases that are implemented Within These
systems so the big problem right now
with artificial intelligence and bias is
that women are not included most times
marginalized communities are not
included often times these teams uh and
councils of AI inside platforms or
inside these systems are basically
populated by white men so imagine where
this bias can come from you know
sometimes we think that it's technology
that it's just a bit um uh flawed and
then we can fix it but I think this kind
of example can just show us that
actually it's a social problem it's a
cultural problem it's not
technical it's culturally embedded
already and then we just train AI kind
of to do to continue this pattern thank
you so much syia actually this this
brought me to the question how could you
explain to us how AI collects data on us I
think it's it's it's became pretty
evident that it's just following us on
across all platforms across all networks
and devices that we connect to but could
you perhaps explain us more in technical
details if possible what type of uh
technologies are used to collect this data
well I want to highlight and first of
all that I'm not a technology a
technologist so I'm a sociologist I
study digital technology so I know how
these systems work but if I say
something that is not technically
correct please forgive me um however how
this technology works is uh basically
by training the system with
millions and millions of data I'm talking
now about generative AI which often are
taken and scraped from the internet and
this often times happen non-consensually
actually we have a huge ethical
discussion regarding generative AI on
art on culture on journalism on
information because the data they are
using to create the beautiful images
that we see out there are created with
images of artists that are already out
in the internet and that are scraped and
put into this system and then uh trained
uh to give you an output that it's a
fabricated output right so the big
problem with how this data well with
these machines are trained is precisely
data because data are not neutral and so
on uh algorithms are not neutral and
outputs will never be neutral you know
so we have to discuss data we have to
think of data because if we don't these
algorithmic systems will only
regurgitate the same biases that humans
have trained them to adopt you know I
think this is the crucial issue we are
facing right now with artificial
intelligence and from an intersectional
point of view this is very relevant
because it's not just about gender but
it's also about race class disability
sexual orientation
religion and how all these
dimensions intersect among them and with
gender meaning that the people who will
be most discriminated by these systems
that now uh seem to be the revolution
and that we're thinking that they will
make our life better will be the ones
that will suffer the most from the
consequences of having these kinds of
technologies put into
society without a previous discussion on
their
ethics okay very interesting
thanks syvia um I think yeah I would
like to come back to the issue of gender
stereotypes and uh I would like to ask
you to perhaps elaborate a little bit
more on the study that you made also on
PornHub that uh was about
heteronormativity that uh was
prioritized on the platform could you
tell us more about this
study of course the study was also um
connected with this topic that I was
commenting right now about algorithms
and uh the so-called platform affordances
that are offered uh from the platform to
uh again decide uh which type of content
you will consume in the platform so what
we wanted to see on Pornhub since it's the
most used uh platform for consuming
pornographic content was understanding
how Pornhub was uh tracing users and then
personalizing their own page according
to uh the data they were collecting and
and tracing uh the first result which I
think is very interesting is that first
of all Pornhub at the time of the research
was not telling out openly that they
were tracking users so that was a
violation of GDPR and that opened also a
strategic litigation against Pornhub for
violating rules um so not telling users
that they will be traced and this is
particularly important in pornographic
platforms because you know that they are
uh getting to know your sexual
preferences your sexual orientation and
they can use it to send this to
companies or against you you know so we
have to be transparent about our data
again and how they are being used and
second thing which I think is the most
interesting result and it's connected
with what you were saying regarding uh
CET normativities and mail Gaze on
online platform is that although the
platform was tracing users and although
we we saw that data were actually
collected um pornh have decided to not
personalize the H page which is pretty
crazy if you think of how the rest of
social media platform work which are
highly personalized but in the case of
forna it was not interesting whether you
were a woman a heter woman a lesbian
woman or a transgender woman or man the
content you were proposed by Vi perap
was always the one for a head to remain
so this is exactly the result showing
how Al also how pornographic platforms
embed this uh heterosexual male Gaye and
since pornograph pornographic platforms
are still used from so many young people
to understand sex and inform them
themselves about sex I think this is
particularly crucial because it tells us
that without sexual education this
platforms will be the ones telling um
young boys and girls how they should
expect their sexual relations to be and
we know that these platforms also have
very extreme content a lot of
non-consensual content so I don't think
we should leave uh sexual education to
pornographic
platforms okay okay yeah fair enough uh
Actually, now I have a question on sexual education and technology. I know, Sylvia, that you also run seminars and workshops in schools, where you try to pass this knowledge on to young people. Could you give us a little insight into how that happens and what type of response you get from young people? It's quite a sensitive topic, so I can imagine it's very difficult to approach young people with it.

I mean, young people, I think, are very eager to learn. I have to say that in the past years I've done sexual education mostly through an NGO that I co-founded in Italy, Virgin and Martyr, which was born with the aim of doing sexual education and digital education. And I have to say that it was often the young people themselves calling us to their schools. They were like: hi, can you please come? We have this autonomous, independent day where we can decide whom to invite. Can you come? We would like to know more about, I don't know, gender identities, or even porn, bodies, or digital sex. For us it was always very intriguing to see how much they wanted to receive sexual education, but in a country like Italy they can't, because we have a political willingness to limit and censor discourses around sexualities, and this creates a problem.

So again, I think this is really about politics. It's really about deciding whether we want to give young people the tools to understand what is happening around them, in both the digital and the offline dimension, because otherwise we are leaving them alone, and I don't think that is fair. And I have to say that every time I went to a school I always came out with a very positive feeling, thinking: even if I convinced just one person today about positive or inclusive discourses, there was always somebody raising their hand and saying, thank you for saying this, I am non-binary, or, I had an experience of online sexual violence, so thank you for saying this. And I felt that this saved them, you know? So, what if we had this every day? That is what I keep thinking, because two hours per year is not enough, and we are still having this problem all around Europe. We should have pathways of sexual education and digital education to make sure young people can feel safe online and offline.

Yeah, I agree.
Actually, my next question is mostly about digital education, and it concerns AI chatbots, because I think this is a really worrying issue right now, especially since they can be trained for automated harassment, or they can mimic the language and behavior of a trusted adult. Sometimes it's also extremely scary to realize that they can leverage your emotions: they can read when you feel lonely or sad, play on that, and make you feel comfortable and secure, and so on. So my question is: how do we educate young people to avoid these types of traps, and to be aware of them when they converse, so to say, with AI chatbots? What can they do and know?
I mean, again, this is complex, because I don't think we should assume that adults, in this case, can educate young people. Often, if you go to people who are older than us, they are possibly even more fragile in front of these traps than young people, who are living inside this kind of technology. Of course we all need to be educated, and the way to do it at a societal level is, first of all, to understand the risks, to understand what is happening, because we are moving at a huge speed, you know. Having political responses in this sense is difficult, because political institutions are always slower than technology. But I think what we need, again, is to understand the concepts of privacy, of intimacy, and again of consent, all of them as collective issues, not just individual issues. Otherwise the response will always be put on the individual instead of on the collective, on society, which avoids providing political answers, in my opinion.

So of course we need to do education, as I said, but I also want to highlight once again that this is about political willingness to limit the implementation of certain malicious applications of AI. Because right now, when you discuss AI, especially in political institutions, it's all about innovation, it's all about letting go, because "we cannot stop innovation" and "we cannot limit technology", blah blah blah. And this is false. It is false because we have to put ethics at the core of innovation and technological development; otherwise it will always have bad consequences on society, and also on young people, who of course are more fragile. But what I want to say is that I will never tell young people not to use technology, because it's impossible and they will always use it. Yet this is what they are often told at school, by governments, by their parents, and this is not the right way to do it. It is more, again, also in this case, about understanding together with them what is going on and what we could do to change it.

Yeah, I agree. We cannot avoid technology: AI is there and it will stay there, so we need to learn, and we need to help young people become more aware of how we can use it and leverage it to its full potential, instead of just avoiding it, which is impossible.
Okay, that's a very interesting insight, Sylvia. So, we know the complexity of the risks related to AI. My question is this: why can we not train AI to detect predators and offenders and flag them, and to detect AI-generated content online? I know that some efforts have already been made. But can we trust those tools? Are they reliable enough? And if not, do you believe that in the near future we will be able to use AI to prevent these types of things from happening, instead of allowing and facilitating them?
So, again, I think my answer in this case will also be nuanced, in the sense that I think, first of all, that technology could and should be more inclusive and more feminist. We need to use feminism as the core concept for developing technology, meaning we need to include more women and more marginalized communities in the development of technology, to make sure we can have better results and better innovation. In this case, for example, feminist hacktivism, hacker feminists, shows us that there are implementations of technology, and even of AI, that can be used for good. But this, again, is something that requires a very big political understanding of the problems. When we understand the problems and we understand discrimination, we can think of alternatives. So again, I think intersectional feminism should guide the change in this sense, otherwise there won't be change at all. The problem is that most of these alternatives remain at the margins: they are not massified, they are not used by the broad general public. You also know that the internet is not a democratic place: there are bubbles that are bigger, there are platforms that are much more used than others, and these are not inclusive.

On the other hand, my answer also connects with the idea that, since we have discussed so far how much this is a social problem and a cultural problem, I don't think we necessarily need a technical response to it. I think that technology will change together with society and culture. We don't need more technology; I think we need more culture, we need more ethics, again, and we need a broader understanding of consent. We also need to recognize that gender-based violence exists and is causing so much harm that we cannot just ignore it. And this is so important, because we always think that these technologies are just technical, just mathematics, and we forget how much there is always a human choice in the systems we use. Even in platform moderation: we think that platform moderation is done just by algorithms, but it's not. There is always a human being deciding whether the content stays or goes. So again, I think we should work more on the human side of technology and less on the mathematics, and then possibly we will have better results in the end.

Yeah, that sounds good, and quite positive: we still have some power and we can make a change. And speaking about change, actually, could you tell us more about the campaign you contributed to that led to the criminalization of non-consensual image sharing in Italy? I think it's very impressive, and also an example of how we can change the situation we are in right now. Sylvia, please share it with us.

Thank you so much. Yes, of course, I will be happy to share it with you. I have to say it's a very long story, so I will try to summarize.
This started back in 2017, when I began to work on online gender-based violence and image-based sexual abuse. I have to say that, as for most of us women feminists who start to work on these kinds of topics, it is always due to a sort of abuse experience that we have had, and that was also my case, unfortunately. This is how we started. I like to say this mostly because sometimes we think that feminists become feminists because they had some illumination along the way and started to be passionate about this topic out of nothing, and it's not true: we experience violence, our lives are marked by violence. In my case this was also online, not just online; I had several experiences of that. At that moment I was also doing a PhD, so I wanted to give a name to what I had experienced, and I wanted to understand how many other women it was happening to. This is how everything started: I began to collect data, I started to work with NGOs like Amnesty International in Italy, and we started to create a framework for gender-based violence online, which was nonexistent back in 2017. This slowly took me into a political engagement that was completely unexpected at the very beginning, and it grew exponentially over the years, until, at the end of 2018 and the beginning of 2019, I launched this campaign, at first online through a petition. The hashtag was #IntimitàViolata, meaning violated intimacy, and I was asking Italian institutions to do something about the non-consensual dissemination of intimate images: to recognize it as a gendered problem, to recognize it as a violation of consent, and to recognize also the responsibility of all the digital platforms that were facilitating that content, not removing it, and not replying to survivors.

The campaign started online, but then it became a media campaign, and it also took form offline, through schools again and through events on the ground. We started to work on a bill, because I began to collaborate with a politician in Italy, until the law was presented and approved in July 2019. So it was a very quick campaign: in the end, within six months we had the law approved. Unfortunately it was not approved exactly as I wanted it, so the part on digital platforms was dropped, for example. Yet the law recognized the gendered aspect of the issue, because it was introduced within the Red Code bill, which is about gender-based violence.

The last things I want to say about this campaign are, first of all, that it shows we can always make a change if we want to, if we get angry for real, and if we start to collaborate with other people and try to make ourselves understood. We can really make a difference; I really believe it, and this is empowering, because sometimes we think that in front of technology and violence there is nothing we can do, but this is not true: this is what they want us to think. And then, on the other hand, I think that also in this case the legal response is necessary but not enough, so we need to do more, because this is also a transnational problem, not just a national one. We need to work on the European side of it, because platforms are not national, they are global. So the work is not finished, and there is much more that we have to do and that we are doing. I'm working with other survivors and activists around Europe and around the world, and hopefully this will go further and further until we receive justice for real, because right now that is not happening, so we will insist until it is done.
Yeah, that sounds very motivating and inspiring. Thank you, Sylvia. I think on this positive note we will draw this podcast to an end, although there are so many questions I would like to keep asking you, but probably next time, when we meet again, hopefully. So thank you so much, Sylvia, for making yourself available and for sharing your expertise today, and the amazing work that you do.

Thank you, it was an amazing conversation. I'm very happy. Thank you so much.

I would just like to take the opportunity to also share the outcomes of the MenABLE project that we are currently working on, tackling gender-based violence. Recently the MenABLE toolbox was launched: it's an interactive game that contains quizzes, dilemmas, and challenges that young people can play with, to empower dialogue on digital gender issues and gender stereotypes in the classroom or at home, together with their teachers, parents, and caregivers, but also by themselves. This, along with other MenABLE initiatives, can be found on the project website. So please go ahead and visit; you can also subscribe to the social media channels of the MenABLE project on TikTok and Instagram. And please subscribe to the European Schoolnet podcast channel as well, to follow the next episodes, which will deal with technology, innovation, and education topics. Thank you so much for listening to us today, and I wish you a beautiful day. Thank you.