GEF Madrid 2024: AI, Enhancing Digital Safety in Schools?
Summary
TL;DR: The panel discussion, featuring experts from diverse educational backgrounds, delves into the impact of AI in schools. Concerns are raised about students' over-reliance on AI, potentially undermining learning and human connection. The panel emphasizes the importance of establishing guidelines and guardrails for ethical AI use, considering teacher-student relationships and pedagogical soundness. The conversation highlights the need for benchmarks that assess AI tools' appropriateness for educational contexts and stresses the importance of human interaction and well-being in learning.
Takeaways
- Stuart Mitchell from Education Services Australia emphasizes the importance of the national task force for the safe and responsible use of AI in schools and the development of the Australian framework for ethical AI use in education.
- Joe Clement and Matt Miles, high school teachers and authors, express skepticism about the impact of AI and technology in classrooms, highlighting the potential for misuse and the importance of teachers' experiences in policy discussions.
- Jeff Bradley, former Director of the Commission on International Education, stresses the need for a shared understanding of the purpose of school before implementing AI tools, and the importance of guidelines and guardrails for AI use in education.
- Claude, President of the Federation for European Education, discusses the need to focus on the relationship between teachers and students, the importance of citizenship skills, and the potential impact of AI on the role of teachers.
- Maria Jose Oando, representing a UK-based education consultancy, talks about the need for benchmarks that focus on pedagogy and the appropriateness of AI-generated content for different age groups in education.
- The panel agrees on the challenge of ensuring AI applications are designed ethically, considering student well-being, transparency, and explainability, especially given the 'black box' nature of AI technology.
- There is consensus on the crucial role of teachers in the implementation of AI, with a focus on the socio-emotional and cultural aspects of education that AI cannot replace.
- The discussion points out the potential negative impacts of technology on students' well-being, with increased rates of depression, anxiety, and loneliness correlating with the rise of digital devices.
- The panel calls for caution and careful consideration of the risks associated with AI in education, advocating for policies and practices that prioritize student well-being and ethical use.
- The conversation highlights the global nature of the AI-in-education discussion, with perspectives from Australia, the United States, Europe, and the developing world.
- The panel suggests that while AI has the potential to automate tasks and improve educational processes, its implementation must be managed with clear policies and an understanding of its limitations in enhancing learning and teaching.
Q & A
What is the name of the organization Stuart Mitchell represents?
-Stuart Mitchell represents Education Services Australia, a not-for-profit company owned by the ministers for education in Australia.
What was the role of Stuart Mitchell in the national task force for the safe and responsible use of AI in schools?
-Stuart Mitchell had the privilege of being part of the national task force for the safe and responsible use of AI in schools, where he played a part in the development of the Australian framework for the safe and ethical use of AI in schools.
What is the perspective of Joe Clement and Matt Miles on educational technology?
-Joe Clement and Matt Miles, who are high school teachers and co-authors of the book 'Screen Schooled', consider themselves Tech Skeptics. They express concerns about the overuse of screens and technology in classrooms and their potential negative impact on student learning.
What is the concern expressed by the panelists about AI in education?
-The panelists express concerns about the potential misuse of AI, such as students relying on it to take shortcuts in learning, and the lack of understanding of AI's role in education. They also discuss the importance of having a shared understanding of the purpose of schooling before integrating AI tools.
What does Jeff Bradley emphasize as a fundamental question regarding AI in education?
-Jeff Bradley emphasizes the need for a shared understanding of what school is for, suggesting that the purpose of schooling should be agreed upon before discussing the integration of AI tools in the educational system.
What is the role of the AI for Education.org initiative as described by Maria Jose Oando?
-The AI for Education.org initiative, led by Maria Jose Oando, focuses on building AI tools and products for developing countries in sub-Saharan Africa and East Asia. The initiative ensures that these tools are pedagogically sound and relevant for the context of developing countries.
What challenges does Education Services Australia face in implementing guard rails for AI applications?
-Education Services Australia faces challenges in assessing whether an AI application is designed ethically, considering student well-being and rights, and dealing with the 'black box' nature of AI technology, which lacks transparency and explainability.
What is the importance of the teacher-student relationship according to the panelists?
-The panelists stress the importance of the teacher-student relationship for enhancing learning outcomes and digital safety. They argue that this relationship should be at the center of educational practices, especially in the context of AI integration.
What is the potential impact of AI on the role of teachers, as discussed by the panel?
-The panel discusses the potential for AI to change the role of teachers, with concerns that AI could replace teachers or reduce them to mere assistants. There is a call to strengthen the teacher-student relationship and protect teachers' jobs in the face of AI integration.
What does Claude compare the current situation of AI in education to, and what is the lesson to be learned?
-Claude compares the current situation of AI in education to the advent of social media, which started with great promise but has had unforeseen negative impacts. The lesson to be learned is the importance of implementing guard rails and guidelines to ensure the safe and positive use of AI in schools.
What is the purpose of the 'Responsible Artificial Intelligence for Learning' (RAIL) developed by the Middle States Association (MSA)?
-The RAIL developed by MSA is a set of protocols and standards that schools need to meet and adhere to in order to receive a credential indicating they are properly managing and using AI in education. It includes safeguards and requires schools to have policies on the treatment of AI technology.
Outlines
Introduction to the Panel
The video begins with an introduction by Stuart Mitchell from Education Services Australia, a not-for-profit organization involved in educational technology. He mentions his role in the national task force for AI in schools and the development of a framework for the safe and ethical use of AI. The introduction also includes a quick rundown of the diverse panelists, setting the stage for a dynamic discussion on AI in education.
Skepticism in EdTech Adoption
Joe Clement and Matt Miles, both high school teachers in the United States, express skepticism about the impact of technology in classrooms. They discuss the gap between the promises of transformative educational technology and the reality of its use by students, emphasizing the need for teachers' voices in policy discussions. They also highlight the potential negative effects of AI, such as students relying on AI to take shortcuts in learning.
International Perspectives on AI in Education
The panel includes international perspectives from Claude, the President of the Federation for European Education, and Jeff Bradley, former Director of the Commission on International Education. They discuss the importance of understanding the purpose of school before integrating AI tools and the need for shared understanding and guidelines for the use of AI in education. The conversation touches on the challenges of assessing AI applications ethically and the importance of considering student well-being.
The Role of Human Rights and Well-being in AI Integration
The discussion delves into the intersection of human rights, well-being, and AI, with emphasis on the importance of the teacher-student relationship. Concerns are raised about students becoming too dependent on AI and the potential devaluation of teachers' roles. The panelists consider the implications of AI on the service of teachers and the need to protect their jobs in the face of AI advancements.
The Importance of Pedagogical Benchmarks for AI Tools
Maria Jose Oando, representing a UK-based education consultancy, discusses the rapid adoption of AI tools and the challenges of selecting appropriate educational technology. She highlights the need for benchmarks that focus on pedagogy and content appropriateness for different age groups, emphasizing the difficulty of evaluating AI tools that perform multiple tasks in education.
The Impact of AI on Teacher-Student Relationships
The panelists explore the potential impact of AI on the crucial relationship between teachers and students. There are concerns that AI could replace the human connection in education, which is vital for students' socio-emotional well-being. The discussion also touches on the potential for AI to exacerbate educational inequities and the importance of maintaining human interaction in learning.
Implementing Guard Rails for AI in Education
The conversation concludes with a focus on managing AI in education responsibly. The panelists discuss the need for policies, protocols, and standards to ensure the safe and beneficial use of AI. They highlight the importance of involving teachers in policy development and educating all stakeholders about the responsible use of AI in schools.
Keywords
Educational Technology
Artificial Intelligence (AI)
Framework
Pedagogically Sound
Ethical Use
Human Rights
Accreditation
Personalized Learning
Social-Emotional Wellbeing
Guard Rails
Highlights
Stuart Mitchell introduces himself as part of the national task force for the safe and responsible use of AI in schools in Australia.
Joe Clement and Matt Miles, co-authors of 'Screen Schooled', express skepticism about the impact of technology on students' learning.
Claude, President of the Federation for European Education, emphasizes the importance of the teacher-student relationship in enhancing digital safety and quality.
Jeff Bradley discusses the role of AI standards in international education and the importance of shared understanding of the purpose of school.
Maria Jose Oando highlights the need for AI tools to be pedagogically sound and relevant for developing countries.
The panelists agree on the necessity of involving teachers in the conversation about AI integration in education.
Concerns are raised about students' dependency on AI and the potential negative impact on their learning process.
The discussion points out the gap between the potential of AI and its practical application in enhancing student well-being and human rights.
The need for guidelines and guard rails for the ethical use of AI in education is emphasized.
Stuart Mitchell talks about the challenges of translating the Australian framework for AI in schools into practical directives for software companies.
The panel discusses the difficulty in assessing whether AI applications are designed ethically, considering student well-being and rights.
Claude stresses the importance of focusing on teachers and the risk of AI replacing the role of educators.
Maria Jose Oando discusses the rush to market AI tools in education and the lack of benchmarks focusing on pedagogy.
The conversation highlights the potential of AI to automate grading and administrative tasks, freeing up teacher time.
Matt Miles expresses concern about the social and emotional impact of AI on students, emphasizing the importance of human connection.
The panel concludes with a call for careful consideration of AI's role in education, focusing on student well-being and ethical use.
Transcripts
Okay, welcome everyone. My name is Stuart Mitchell. I'm from Australia, from an organization called Education Services Australia. Can you all hear me okay? There we go, that's better. So yes, my name is Stuart Mitchell, I'm from Education Services Australia, which is a not-for-profit company owned by the ministers for education in Australia, and we're tasked with rolling out educational technology that's in line with the national agenda. I've also had the privilege of being part of the national task force for the safe and responsible use of AI in schools over the last year, an initiative set up by the ministers around April last year, and I played a small part in the development of a framework published in October, called the Australian Framework for the Safe and Ethical Use of AI in Schools. I'm going to ask your forgiveness on two fronts: it's late at night for me in Australia, so my body is running a little slow, and I was asked to jump in and facilitate this session at the last minute, so please bear with me. We're going to start by doing a quick run-around. We've got a really diverse group of participants on the panel today, with a range of experiences, and we'll probably drift around the conversation, because we have so much knowledge in a range of different capacities. But I might start down with yourself, Joe,
just a brief introduction, explaining who you are.

Sure. My name is Joe Clement and I'm a high school teacher in the United States, just outside of Washington, DC, in Virginia. I've been teaching for, well, this is my 30th year, and Matt Miles and I co-authored a book called Screen Schooled; it's available wherever books are sold. We'll get more into it later.

Well, he just took my introduction, but yes, I'm also a high school teacher with Joe, and just for context, we would call ourselves Tech Skeptics: the subtitle of our book Screen Schooled is Two Veteran Teachers Expose How Screen Overuse Is Making Our Kids Dumber.

Hello, I'm Claude, I'm French, and I am the President of the Federation for European Education and an elected member of the Bureau of the Steering Committee for Education within the Council of Europe.
Hello, my name is Jeff Bradley. I worked for seven years, until just a couple of months ago when I stepped down, as the Director of the Commission on International Education at the New England Association of Schools and Colleges, which accredits K-12 schools in 93 countries around the world and is responsible for overseeing a lot of the standards that schools are expected to follow, among which are AI standards. I also work closely with some of the other accreditation agencies in the international school world, and that's some of the perspective I want to bring into the conversation today.

Hello everybody, my name is Maria Jose Oando; everybody knows me as Mah. My colleague couldn't attend, so I think he got taken off the program, but I'm representing him. I lead a small boutique consultancy in education in the UK, in London, and for the last year and a half we've been leading the AI for Education.org initiative. This initiative is looking at how we can build AI tools and products for developing countries in sub-Saharan Africa as well as East Asia, so a lot of the work we've been doing is ensuring that these tools are pedagogically sound but also relevant for the context of developing countries. So that's me.
So, as you can see, we've got a really broad range of contexts and experiences, as I mentioned. But what I'm really interested in is, perhaps, Joe and Matt, I'll pass you this microphone as well so you can both talk. I'm really curious to hear a little bit about the skepticism that underpins your book, and perhaps some of your thinking around what you're seeing in terms of the difference between the theory and the practice that's emerging in schools in your context in the US, in the use of AI.
Thanks very much. The first thing I want to point out is that very often educational policy is made by people who are well-intentioned but often aren't experts in kids and in teaching, and a lot of the time teachers are left out of the conversation. It would be interesting if there were a session today, for instance, that was just a panel of classroom teachers: what are your experiences, what are you seeing? Any time a new initiative is going to be rolled out that's supposed to be transformative, that's supposed to reimagine how education works, where we're going to redesign the curriculum or whatever, teachers in my mind should be at the center of that discussion, because they're the ones doing it every day. As I mentioned, I've been teaching for 30 years, and I've heard the words reimagined, revolutionary, redesigned, and transformative I don't know how many times. All of those things had the potential to do all the good that we hear about; the potential was there, of course. The question is: what is the reality? And the reality is, as you might imagine: picture yourself being 16 and being handed a study guide in history class where you're supposed to go and look up a bunch of terms, and then somebody tells you, hey, you've got this tool where you can just upload your study guide and it'll do it for you. How do we think kids are going to use that? That's how they use it. And they're usually very honest about their use; they'll tell you that's how they use it, even though many of them have at least a rudimentary understanding of what is possible. We all look to take shortcuts, and we work with teenagers, but I think this goes all the way down to the youngest kids: they're looking for some way to do less work. I mean, we all are, right? And I think that's the problem we see: we keep hearing that this is going to be such a benefit, and we all want the same thing, we all want happier, healthier, smarter kids. We just don't see that happening very often when it comes to these sorts of technology.
Yeah, in the last presentation the speaker kind of glossed over the quote, 'kids don't learn from AI, they rely on AI.' I came to learn about ChatGPT from a suspicion: my kids suddenly started asking if I could upload all the assignments digitally. I give them a paper copy and they want it digitally. That's interesting; I've been doing this for 16 years, so why digitally? What I learned is that that way it was easier to copy and paste everything into ChatGPT. That's how I learned about ChatGPT: it was too hard for them to actually type it out. Now, to say that they could produce quality answers is one thing, but learning is very different from producing. We look at these technologies doing menial tasks well in the workforce, making production more efficient and lowering costs, and that's great; but what we're trying to do is teach, and that by definition has to be a difficult process. For the brain to retain something and be able to use it, students have to struggle with it. It's not about quickly producing an essay; we don't have quotas of essays to grade, with the kids helping us meet the quota. They have to think deeply about information, and you can't do that unless it's been internalized. So what we see is that that reliance on AI is detrimental, or at least it has been so far, to learning.

Jeff, can I ask for your perspective on what we've just heard?

Yeah, I would agree with what you're saying, in the sense that there's confusion out there about the purpose of the tools we now have at our disposal. I would say that the confusion, and maybe the resistance to terms like revolutionize and transform, reflects a lack of shared understanding of a more fundamental question, which is: what is school for? Until we have a shared understanding, whether it's the community of families that send their children to the school or the culture in which you're embedded, that schools are for this or that purpose, an understanding that everybody can look at and say, yes, I agree, it's hard to have conversations about new tools and revolutionizing and transforming. Are you transforming because the current state of things doesn't adequately answer the question of what school is for? So I think we're confused a little bit as a culture, and I'm speaking from the American perspective and as an accreditation agency. An accreditation agency, in the case of NEASC, which I'm most familiar with, would ask: what is your school's shared understanding of high-quality learning? We start there, with a fundamental question: do you have a definition of learning that people all agree is what this school is for? When you have that, I think it's a lot easier to say, okay, here's this wildly powerful new tool that, by the way, we didn't ask for; schools did not put in an order to have ChatGPT dropped into their midst. Now we're finding ways of incorporating it, dealing with it, and managing it, and I would just say two words about that: guard rails, and guidelines. Guidelines and guard rails are where safety comes in, and where proper usage comes in. It's an amazingly powerful tool, we know, and it's not a single thing either, because there are so many applications. But unless we have a common understanding of what this enterprise of schooling, the reason we send kids, pay for it, and have teachers show up, is actually for: is it to get a job, to promote democracy, to promote literacy, to feed children, as it literally is in many cases? It's a lot of things in many places. Let's have that conversation first, and then let's talk about how best to manage with AI.

Thanks, Jeff. I mean, what that
brings to my mind is that one of the challenges we're facing in Australia is trying to bridge some of those gaps you spoke about. The framework I mentioned earlier, which we've developed, was fundamentally designed to give guidance to teachers and schools about how to use AI safely, and as I said, we were doing this this time last year, when the world was really fresh and emerging around AI. The task force was a really interesting group of people, a mix of academics and policy makers, and one of the stumbling blocks we ran into in the second session was that we were starting to go down the rabbit hole of the technology, thinking about what it is going to do and how it is going to work. We had to stop ourselves and ask: is this really the best framing for us, or should we instead frame this framework in terms of the outcomes that we want, pedagogically, from a learning perspective, from a student well-being perspective, and from a human rights perspective? That was a really powerful way to do it, because what we've got now is a framework that is resilient to changes in the technology. However, one of the things the organization I'm with has been tasked with is now turning that into guidance, directives, and standards for software companies to implement those guard rails you're talking about. In some domains that's quite straightforward: security and privacy were relatively straightforward ones for us to tackle, and we weren't reinventing the wheel; we were learning from the rest of the world. But the area where we've really struggled, and I think this was picked up in one of the sessions this morning, is how to assess whether an AI application has really been designed ethically, and how it takes into consideration student well-being and student rights. Things like transparency and explainability have been real challenges, because this is black-box technology and we can't peer inside it. It's also a technology that doesn't really have a concept of understanding or meaning; as we've been hearing, it's a pattern-making machine. So this is somewhere I'd be interested, Claude, in your perspective. I understand you're from the tertiary space, but also human rights; your thoughts on, I guess, that intersection of human rights, well-being, and AI.
Yeah, thank you. I would react to those two interventions. The first is that, definitely, the new students, whether they are pupils or university students, not only rely on AI; they get quite addicted and dependent, and this is quite an issue regarding the topic of enhancing safety, and regarding quality measures too. Until now we have always put the student at the center, but I think this is old-fashioned: we have to put the relationship between the teacher and the student at the center, and that is very important to enhance quality, and therefore to enhance digital safety. From the European point of view, I would say that in the world we usually split competencies into two kinds, hard skills and soft skills, but in Europe we like to split them into three: hard skills, soft skills, and citizenship skills. Those citizenship skills are focused on human rights, critical thinking, and sustainability, and that's very important. I will conclude my intervention on this: I think we have to focus now, as you said, on the teachers, because teachers were supposed, and I speak for my part, to have the knowledge; but now, are they assisting the AI? Is the AI supposed to be the one having the knowledge, and what is going to be the role of the teacher? This is very important, because we are a partner in the social dialogue with the European Union, and we are speaking about the salaries of teachers: it is not the same to pay a teacher who has the knowledge as a teacher who is just an assistant to the knowledge given by the machine. And look at our interpreters: we are very lucky to still have human beings. But regarding interpretation and translation, ten years ago, when you had a text to translate, it cost around 14 to 20 cents per word to be done by a human; now it is 10 to 14 cents, because you can have a document translated for six cents, and when you want your document to the ISO standard it is still only 14 cents. That is to say, for those 14 cents you have two translators. So it is very important to also speak about the service of people.

Thank you. I want to come back
a little bit in a moment and maybe explore that question you raised around the relationship between teachers and students, and how AI is playing a role in that. But before we do, Maria, it would be great to hear from you, because you've got a very different perspective in terms of the markets you're working in and the communities you're engaged with. We'd love to hear about that.

Yeah, thank you very much. As everybody has said, we are seeing that AI is being adopted faster than we can imagine. In terms of AI development tools, we also see a rush to market, because developers are racing to put out AI tools, and a lot of the choice of which tools to adopt is left down to the teachers, who rely on teacher ratings and things like that. With AI technology changing so fast, evidence-based decision making about which tools are really good is difficult. So in our initiative we started working on the evidence piece that is needed so that these AI tools are really fit for purpose. I think generative AI has brought something new with this generation of text and content, and there are a lot of benchmarks, as we know: all the foundational models, OpenAI, Gemini, compete with each other based on whether they pass the MMLU benchmark, and all these benchmarks are defined around questions like, can the model perform at the test level of high school or university in chemistry, or in any subject you can imagine? But echoing the focus on supporting the teachers, we've seen a gap: there is no benchmark that focuses on pedagogy. That's the first gap. Is the content generated appropriate for my kids? There is no benchmark where you can say, I want to create content for my kids, they are age eight; is this content appropriate for that level? So what we're working on is trying to close that gap and develop a benchmark that looks at pedagogy and at all the different tasks a teacher does. Obviously the task is very difficult, and at the moment we don't have the solution, because it is not a single task: teachers assess, they give feedback, they plan classes, and for each of these a different benchmark could be created. So maybe a series of benchmarks is needed, but you also want benchmarks that can evaluate whether the content being produced is appropriate for, say, grade-two kids. Actually, two weeks ago we convened with an international community supporting educators to discuss the benchmarking, and we all agreed that the focus at the moment should be on the teacher and on guiding that; I think that's part of the benchmarking and the evidence piece. The other thing worth mentioning, when we think about edtech, and this is also something I've been saying, is that edtech has this connotation that it is for learning and teaching and that's all; but AI has expanded the capabilities of edtech. If a tool can automate grading, for example, it can free up time, and the tool should be evaluated on whether it automates grading and saves time; we shouldn't impose 'does it improve learning outcomes?' on a tool that is just designed to save the teacher's time. There is a lot of automation and process improvement that has expanded the use cases in education, so it is worth considering that edtech is no longer just about learning; it is also about processes around the education system: processes for head teachers, performance reviews at the school level, and streamlining administrative tasks that are time-consuming. So those are some thoughts.

Yeah, that's a really
interesting point. I mean, a significant problem in Australia, and from what I'm hearing, around the world, is the time that teachers have available for teaching, and the amount of time consumed by other, non-teaching tasks; I think that's an area that's possibly not being looked at as much amid the hype around AI and education. Joe, Matt, I'm really interested in your thoughts again, thinking about the role of edtech in general, historically, as it's been delivering a lot of promises in the classroom, and in some ways, I think, has put screens between students and teachers in some contexts, particularly going back to when I started, in the very early 2000s, with the emergence of VLEs and LMSs. This idea that every student was going to be sitting in class with a screen between them and the teacher was a little bit baffling to me, and I had a degree of cynicism. I'm wondering what you're seeing, coming back to Claude's point around the relationship between students and teachers: what roles can you see for AI there, both in a positive but also in a challenging sense?

To me, maybe the
scariest thing um about AI is not I
don't really worry about AI taking my
job or whatever it's that you hear that
uh about personalized learning and about
how every kid is going to be you know if
you if you have uh you need a tutor you
have an AI assistant
whatever there there is no no kid
that gets super excited about the
confetti congratulations you know that
you got the question right you know
whatever that your your AI tutor would
generate for you and what we know
particularly about kids who are
struggling is that the the thing that
matters more than anything else and this
is in educational research that goes
back Generations it's the relationship
between the teacher and student that
matters most of all so when I say scary
it's um there's an there's an equity
piece that I think is is at this point
unaddressed um because what you hear is
AI is going to go into some of the some
schools and and work with some
populations that where where they're
they're cash strapped they don't have
the resources so AI can can take the
role of an individual tutor for every
single kid and very often those are the
kids who who most desperately need the
connection with a human being um in fact
we've seen when um there was there were
schools in Brooklyn there there are um
there are these um automated learning
systems that have been chunked out in
the US and probably around the world to
uh to schools wholesale to schools where
where kids kids are essentially
warehoused in a gymnasium and they're in
a cube and they're you know supposed to
go through their lessons all day long
whatever um and there was an example in
Brooklyn where the students just walked
out um they we're not doing this anymore
and and and you hear these Rebellion
stories um all over the place and so I
think we get seduced by the word
personalized and the phrase personalized
learning individualized
instruction um I I haven't it doesn't
mean I guess it can't work I haven't
seen it work but what I know for a fact
is that students need the connection
with the teacher maybe more than
anything
else yeah and we we get back to the idea
We wrote the book primarily in response to that movement you were talking about and to what we were seeing in the classroom: not only were the screens distracting, but the cell phones were too. What we see is that by any measure, well-being has gone drastically down since the advent of the smartphone. Depression rates are at an all-time high; anxiety rates, suicidality, all of those. Our kids are very lonely and depressed, and they need connection. We were talking about the role of a teacher: it's more than just delivering content; there's the socio-emotional role, the social-cultural role. And we're dealing with kids who are so depressed. I've had at least half a dozen students hospitalized for suicidal ideation just this year so far. We see kids isolated and anxious; a lot of the 504 plans and IEPs for anxiety disorders say that kids can bring in their noise-cancelling headphones and stay distracted and disconnected, so they're even more disconnected. And what we know is that we haven't evolved since we invented Netscape and other technologies; we haven't evolved to not need human connection. That's the social-cultural piece. My fear with AI is that even if you do have a device that can answer your questions, that's a missed opportunity for a connection with another human being, be it the teacher or a peer. If we keep creating and developing these rabbit holes for our kids to go down in isolation, what impact is that going to have on their social-emotional well-being? That's a thought that worries me about AI.

You said a word that maybe in a few years we will say is a very important word: fear. AI doesn't know the word fear. We fear; AI doesn't. We know that we are going to die one day; AI has eternity in front of it. And I'm not as optimistic as you regarding teachers. You know, in Europe we have a shortage of two million teachers, and we are speaking, fortunately or unfortunately, from wealthy countries. Look at the non-wealthy countries: they are putting money not into training teachers but into having more AI, more content. So I think we really have to strengthen the relationship between teacher and student, also to protect the job of teachers. I'm not so optimistic. Thank you.
Following up on the idea of how we go about managing AI, because it's here and it's here to stay, and we know that: I would advise anybody who asks me now to find the schools that care deeply about the proper use of AI. Where are they turning, to whom are they looking, and on what are they depending? One of the scariest graphical representations this morning, I think it was Professor Fidel who showed it, was the graph of technology always outpacing individuals, with companies next in line behind the people, and policies last of all. So the technology is here; we have it, it's in our pockets, it's in our kids' pockets. How do we manage it with policies, and how do we use it, because maybe it can enhance the relationships between students and teachers? I don't know. Who's doing it? I would say look to where those schools are that care deeply about proper and safe use, and one of the places they're turning right now is a fellow accreditation association based in Philadelphia, Pennsylvania, called the Middle States Association (MSA). They've developed something called RAIL, Responsible Artificial Intelligence for Learning. It's basically a set of protocols and standards that schools need to meet and adhere to in order to receive a credential that says: we are living by these expectations. Those expectations are of course evolving, but they include lots of safeguards, and they include having a policy in your school that says: here is how we are going to treat this technology of artificial intelligence, here's how we're going to treat generative AI, here's how this affects our assessment policy. Combined with that, of course, there needs to be involvement of teachers in developing the policies, and then education of the teachers, the parents, and the students: here's what the policy says, and here is what we believe about the use of this tool in our school.

I want to say one other thing, and it was mentioned again earlier today as well: in this fast-changing environment we can take a lesson from social media and what happened there. When social media arrived, it seemed like an amazing promise to connect the whole world, to improve access, to promote democracy and the voices of people. Can it still do that, and does it? Sure. But in many ways this promise has turned into a perversion that is affecting relationships and affecting democracies in very negative ways we did not anticipate. So let's learn the lesson, and I again want to talk about guardrails and guidelines if we're going to use AI in schools.

Fantastic.
I think we're running towards the end of our session, so let me pull out some things that have emerged from this conversation, as well as from the sessions this morning, that are really interesting. Thinking back to the history of edtech that I've been involved in, going back, as I said, to the start of this century, the light at the end of the tunnel for me is the fact that we're having these conversations at all. At other points in time we've seen immense hype about the promises of technology in the classroom, a rush to get it in there and a rush to believe it's going to solve our problems. Now, and I think it did start to a degree with that open letter from the AI engineers last year that really stopped the world in its tracks, I think it's a positive thing that we're stopping and thinking: about the risks, about the human dimension, about the relationships, about the important aspects of what it is to be human, as we have in the framing for this conference, and, most importantly, about the well-being of students. We're carefully considering the role of AI, implementing some of these standards, guardrails, and protocols, and really trying to say to the industry: hey, slow down a bit; we want to make sure that what we're doing here is safe and having a positive impact. So I want to thank each of my panelists. We all came together pretty quickly at short notice, but it's been a wonderful discussion, so thank you all, and thanks for coming and listening.

[Applause]