Human AI Collaboration Enables More Empathic Conversations in Mental Health Support
Summary
TL;DR: This presentation explores the intersection of AI and empathy, particularly in the context of mental health support. It discusses the potential of AI to enhance access to and quality of mental health care, focusing on peer support platforms like TalkLife. The speaker introduces a project that uses AI to provide real-time feedback to improve the empathy of peer supporters' responses. Emphasizing ethical considerations, the talk concludes with a randomized controlled trial demonstrating the effectiveness of AI-assisted empathy in peer support.
Takeaways
- 🤖 The talk focuses on the intersection of AI and empathy, particularly in the context of mental health support.
- 🔍 The speaker discusses the potential of AI to enhance access and quality of mental health support, while also addressing the associated risks and ethical considerations.
- ⚠️ A content warning is given for anonymized examples involving mental illness, self-harm, and suicidal ideation, used to illustrate real-world challenges.
- 🤝 The project is a collaboration involving the eScience Institute and the Garvey Institute at UW, AI2, and a peer support platform called TalkLife.
- 🧠 The importance of empathy in mental health support is highlighted, with AI being explored as a tool to improve empathy in peer support interactions.
- 📊 Empathy is measured using adapted psychological scales, focusing on emotional reactions, interpretations, and explorations within text-based communication.
- 💬 AI techniques are used to analyze and enhance the expression of empathy in responses to individuals seeking support on platforms like TalkLife.
- 💡 The system 'PARTNER' is introduced, which uses reinforcement learning to rewrite responses to increase their empathic content.
- 📈 A randomized controlled trial demonstrates that AI-assisted feedback can significantly improve the level of empathy expressed by peer supporters.
- 🛡️ Ethical considerations are paramount, with a focus on co-design, informed consent, safety reviews, and ensuring the AI system does not disrupt the human connection in peer support.
Q & A
What is the main focus of the research discussed in the transcript?
-The main focus of the research is the intersection of AI and empathy, specifically exploring how AI can help improve access and quality of mental health support through peer support platforms.
Why is empathy important in the context of mental health support?
-Empathy is crucial in mental health support because it is associated with symptom improvement and the formation of positive, successful relationships. It allows for better understanding and feeling of the emotions and experiences of others, which is key in providing effective support.
What is the role of peer support platforms in mental health?
-Peer support platforms serve as a place where individuals can find others who share similar experiences and challenges. They can complement traditional forms of care like therapy and provide a space for people to access support, especially when they might not have access to professional mental health services.
How does the AI system provide feedback to enhance empathy in peer supporters?
-The AI system provides feedback by analyzing responses and suggesting edits that can increase the level of empathy expressed. It uses reinforcement learning to transform lower empathy posts into responses with higher empathy levels, focusing on emotional reactions, interpretations, and explorations.
What are the potential risks associated with AI technology in mental health, as mentioned in the transcript?
-Potential risks include the disruption of meaningful human connections, the possibility of invalidating or harming individuals in crisis, and the ethical considerations of introducing AI into sensitive areas like mental health support.
How was the AI system in the study designed to minimize risks?
-The AI system was designed to minimize risks by co-designing with stakeholders, focusing on a narrow scope of empathy, ensuring human supervision, and providing minimal and context-specific feedback only when needed. It also allowed peer supporters to flag and control the feedback process.
What was the outcome of the randomized controlled trial mentioned in the transcript?
-The randomized controlled trial showed that the human-AI collaboration approach was effective in expressing more empathy. Participants who had access to AI feedback expressed substantially more empathy in their responses than the control group.
What ethical framework was referenced in the transcript for guiding the development of AI in mental health?
-The ethical framework referenced is by Camille Nebeker and others at ReCODE Health, which helped inform the concrete actions taken to ensure the safety and welfare of participants in the study.
How does the transcript suggest the future of AI in mental health support?
-The transcript suggests that AI-based feedback could help improve access and quality of mental health care, not only for peer supporters but also for professionals. It highlights the potential for AI to assist in moment-to-moment interventions and to enhance training programs to address the shortage in the mental health workforce.
What are some of the challenges in developing AI for mental health support as discussed in the transcript?
-Challenges include understanding and navigating the ethical considerations, ensuring the safety and welfare of participants, and developing AI systems that can effectively and appropriately provide feedback to enhance empathy without disrupting human connections.
Outlines
🤖 Introduction to AI and Empathy in Mental Health Research
The speaker begins by expressing gratitude and excitement for the opportunity to discuss research at the intersection of AI and empathy, particularly in the context of mental health. They provide a content warning for sensitive topics such as mental illness, self-harm, and suicidal ideation, which will be discussed using anonymized examples. The research is a collaborative effort involving institutes and industry partners, aiming to improve mental health support through AI. The speaker emphasizes the importance of human-AI collaboration over replacing human roles and shares insights into the potential of AI to enhance access to and quality of mental health support, focusing on peer support platforms like TalkLife.
🧠 Measuring and Enhancing Empathy in Peer Support
The speaker delves into the hypothesis that peer supporters could express higher levels of empathy with the help of automated, actionable feedback. They discuss the importance of measuring empathy using established psychological scales adapted for text-based communication. The empathy measurement framework includes emotional reactions, interpretations, and explorations, with examples provided to illustrate different levels of empathetic responses. The speaker then explores the downstream implications of empathy, such as increased positive engagement and relationship formation on peer support platforms. They highlight the need for empathy training and feedback, given that even with challenging posts, the average empathy score is low, indicating a need for improvement.
💬 Empathic Rewriting: AI's Role in Enhancing Empathy
The speaker introduces 'empathic rewriting', a task where AI takes a low-empathy response and transforms it into a higher-empathy one. They provide an example of how AI can edit a response to be more empathetic by adding emotional reactions while maintaining the original message's context. The system, named 'PARTNER', uses reinforcement learning to perform this task, trained to increase empathy while ensuring responses remain fluent and context-specific. The speaker emphasizes that this system outperforms previous natural language processing methods in empathic expression and sets the stage for demonstrating how AI can collaborate with humans to enhance empathy in peer support.
🔧 Human-AI Collaboration for Empathy Expression
The speaker presents a system that allows human peer supporters to collaborate with AI to express empathy more effectively. They contrast a control group, where supporters write responses without feedback, with a treatment group that receives AI-generated feedback. The AI provides suggestions for improving responses, which supporters can choose to accept or ignore. The speaker discusses the results of a randomized controlled trial involving peer supporters from TalkLife, which showed that the treatment group expressed substantially more empathy. The feedback system was well-received, with participants feeling more confident in their ability to support others empathetically. The speaker also touches on the ethical considerations and safety measures taken in developing and deploying the AI system.
🛡 Ethical Considerations and Future Directions in AI and Mental Health
The speaker concludes by emphasizing the importance of ethical considerations in AI technology for mental health. They discuss the challenges of ensuring safety and welfare, highlighting the use of an ethical framework to guide the development of the AI system. The speaker mentions the co-design process with stakeholders, the focused scope on empathy, and the 'AI in the loop' approach that respects the human connection on peer support platforms. They also discuss the use of a sandbox environment for safety testing and the importance of informed consent. Looking ahead, the speaker is optimistic about AI's potential to improve mental health care, not just in empathy but in various other aspects such as cognitive reframing and psycho-education. They invite further questions and thank the audience, signaling the end of the presentation.
Keywords
💡AI
💡Empathy
💡Mental Health Support
💡Peer Support
💡Anonymized Examples
💡eScience Institute
💡Clinical Psychologist
💡Reinforcement Learning
💡Randomized Controlled Trial (RCT)
💡Ethics in AI
Highlights
The talk discusses the intersection of AI and empathy, focusing on mental health support.
Content warning is given for anonymized examples of mental illness, self-harm, and suicidal ideation.
The project was enabled by collaborations with the eScience Institute, the Garvey Institute at UW, and other local institutes like AI2.
AI's potential to improve access and quality of mental health support is explored.
The importance of empathy in mental health support and its association with symptom improvement is highlighted.
The lack of mental health professionals and the role of peer support platforms like TalkLife are discussed.
Empathy is defined as the ability to understand or feel the emotions and experiences of others.
A framework for measuring empathy in text-based communication is presented.
Empathy is found to be associated with positive engagement and relationship formation on peer support platforms.
The need for empathy training and feedback is identified, given the low average empathy scores in peer responses.
AI techniques are used to generate empathetic text, called empathic rewriting.
A system called PARTNER is introduced, which uses reinforcement learning to improve empathy in text responses.
A randomized controlled trial shows that AI feedback can effectively increase empathy in peer support responses.
The talk emphasizes the importance of human-AI collaboration over replacing human support.
Ethical considerations and safety measures in developing AI for mental health are discussed.
The potential of AI-based feedback to improve mental health care access and quality is summarized.
The talk concludes with a look ahead at future applications of AI in mental health, including training programs and addressing workforce shortages.
Transcripts
This is very impressive, doing this live. It's a great honor to speak in between Sean, Ali, and Nin. Many people have invested so much in the infrastructure and the environment that we have here to do great research, for many people in this room, and I'm excited to share some of that research, specifically research at the intersection of AI and empathy.

I'll give a brief content warning before we start: I'll share a few anonymized examples around mental illness, self-harm, and suicidal ideation. I only do that to illustrate the very real domain challenges as well as the solutions, and of course these are properly anonymized.

Nobody asked me to do this, but it will be very easy to tie in, I think, precisely because of the environment that we have here: this is a project that started at the eScience Institute and was uniquely enabled by the Garvey Institute at UW, by other local institutes like AI2, and by local industry. And of course, everything I'm going to show is teamwork, including my PhD students Ashish Sharma and Inna Lin, as well as UW and Stanford clinical psychologists Dave Atkins and Adam Miner, and folks at a peer support platform, TalkLife.
Over the past few weeks, you might have come across one or multiple headlines in the media; there's a lot of talk about the latest and greatest in AI, ChatGPT this, ChatGPT that. Many of these articles have focused on the opportunities, but also the risks, of AI technology in mental health. In this talk I first want to demonstrate some of that potential, specifically how AI could help improve access to and quality of mental health support. But I also want to focus on the potential risks, and on the ethics that came up earlier in the questions, and what can be done to mitigate those risks. And I'll emphasize that we focus here on human-AI collaboration, not on replacing anybody; in fact, in the exact research I'm about to show you, we found that human-AI collaboration turns out to be superior to human-only and also AI-only approaches.
So why do we desperately need to improve access to mental health support? I myself, and maybe many people in this room, are painfully aware that the need for mental health care outweighs the access to it. One in five adults have a mental illness; most do not receive treatment, and often that is because of a pervasive lack of mental health professionals. This reality really emphasizes the role of peer support, and this talk will be about how AI can help improve the quality of this peer support and make it more effective.

These peer support platforms are used by millions and millions of people today. They're a place where you can find people who might share your experience, who might share your suffering, who might understand what you're going through. I'll talk about a particular one today called TalkLife; you see a screenshot on the slide. It looks a lot like other social media platforms that you might have seen and used, but it's specifically for mental health support, and the mental health challenges you see people express there really run the whole gamut of what you might imagine.
In principle, these platforms could be amazingly useful. They could complement traditional forms of care such as therapy; unfortunately, for many people they're really all they have access to, whether that's for insurance reasons, social isolation, or stigma. These platforms could make interventions available to people who otherwise wouldn't have access to them.

Now, the success of these platforms relies on quality support, and that involves a range of skills, but a central enabling factor is empathy. Empathy has been shown to be associated with symptom improvement and with the formation of positive, successful relationships that we call alliance or rapport. And by empathy I mean the ability to understand or feel the emotions and experiences of others.
Now let's say somebody, whom we call the support seeker, shares: "And unfortunately so, my whole family hates me." The peer supporter might respond: "Try talking to your friends." Now, that's by far not the worst message any of us have seen on the interwebs; it just doesn't actively communicate empathy. It's pretty common for peers to jump, maybe a little quickly, to telling others what actions they should take. Instead, you could express more empathy, for instance by saying, "I could imagine that makes you feel really isolated," interpreting what that person is going through. And underlying all the work I'm about to show you is the central hypothesis that peer supporters might express higher levels of empathy if they had access to automated, actionable, just-in-time feedback.

Over the next few minutes, I'll first share how we can measure the empathy that's expressed. Then we'll learn how we can use AI techniques to see, when empathy is expressed in the real world, what downstream implications that has. We'll talk about how AI can, quote-unquote, "generate" empathy, in the sense of generating text that expresses empathy. And then we'll put all of that together and see how humans and AI can collaborate on empathy expression.
Now, the first important part about measuring empathy is not to let computer scientists define it. So in this work we simply adapted existing scales of empathy that have been studied in psychology for decades. We had to adapt them to this exclusively text-based and asynchronous communication setting, where I can't interrupt you and I can't vary my pitch, and in doing so we combined multiple scales to capture emotional and cognitive aspects. I'll explain the framework we ended up with.

There are three mechanisms through which empathy can be communicated. There are emotional reactions: communicating the emotions experienced after reading a post. There are interpretations: communicating an understanding of the inferred feelings and experiences of another person. And lastly, there are explorations: do you do anything to improve your understanding by exploring the feelings and experiences of the other person?

For each of these, we differentiate between not communicating it at all, doing so to a weak, relatively generic degree, and doing so to a stronger, more specific degree. I'll share a few examples. Again, somebody might share: "And unfortunately so, my whole family hates me. I don't see any point in living." Somebody might respond, "Let me know if you want to talk"; not the worst response, but it doesn't actively express empathy. "I understand how you feel" would be a weak interpretation; "Everything will be fine," a weak emotional reaction; "What happened?" a generic exploration. And you can be more specific, for instance by saying, "If that happened to me, I would feel really isolated. I really hope things improve," or "I wonder if this makes you feel isolated?"
We then turned these into computational prediction tasks, two of them. One: given a post and a response, what is the level of empathy being expressed? These are three classification tasks, one per mechanism. And then a second task of what we call rationale extraction: recognizing which exact words communicate empathy; here those are the regions highlighted in red and blue. I won't go into a lot of detail, but we developed state-of-the-art methods for performing these tasks, for recognizing how much empathy is being expressed in text.
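To make the measurement framework concrete, here is a minimal sketch in Python, assuming hypothetical names and stub predictors (this is not the authors' published code): each mechanism is scored 0 (absent), 1 (weak/generic), or 2 (strong/specific), and the three scores sum to the 0-6 scale used later in the talk.

```python
# Hedged sketch of the empathy measurement framework (hypothetical names,
# stub predictors -- a real system would call trained classifiers here).
from dataclasses import dataclass

MECHANISMS = ("emotional_reaction", "interpretation", "exploration")

@dataclass
class EmpathyLabel:
    levels: dict      # mechanism -> 0 (absent), 1 (weak), or 2 (strong)
    rationales: dict  # mechanism -> list of (start, end) character spans

    def total(self) -> int:
        # Summing the three mechanisms gives the 0-6 scale from the talk.
        return sum(self.levels[m] for m in MECHANISMS)

def score_response(seeker_post: str, response: str) -> EmpathyLabel:
    """Two prediction tasks: per-mechanism level classification, plus
    rationale extraction (which exact words communicate empathy)."""
    levels = {m: 0 for m in MECHANISMS}       # placeholder predictions
    rationales = {m: [] for m in MECHANISMS}  # placeholder spans
    return EmpathyLabel(levels, rationales)

label = score_response("My whole family hates me.", "I understand how you feel.")
print(label.total())  # overall 0-6 empathy score (0 here: stub predictors)
```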
We were then curious what the downstream effects of empathy are. Now, we tried really hard to capture the clinical psychology construct of empathy, and it's not a given that people on a social-media-style peer support platform would resonate with that exact construct. But it turns out that they do, and whenever empathy is being expressed, that's associated with lots of positive outcomes. Here on the x-axis we have how much empathy is being expressed, and we find that when more empathy is expressed, people like those responses more and reply more often, so we see positive engagement coming from it. We also saw that whenever empathy was expressed, there was the social media equivalent of relationship formation: if you respond to me with empathy, I would be 80% more likely to follow you right after, starting a potentially longer relationship on this platform.
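The analysis behind these findings is observational: group responses by the level of empathy expressed and compare engagement outcomes across groups. A toy sketch with entirely made-up data, just to show the shape of the computation:

```python
# Toy sketch of the observational analysis described above.
# The data and column names are invented purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "empathy":  [0, 0, 1, 2, 3, 3, 4, 5],   # 0-6 expressed-empathy score
    "liked":    [0, 0, 1, 1, 1, 0, 1, 1],   # seeker liked the response
    "followed": [0, 0, 0, 1, 0, 1, 1, 1],   # seeker followed the supporter
})
by_level = df.groupby("empathy")[["liked", "followed"]].mean()
print(by_level)  # higher empathy levels show higher like/follow rates
```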
But we also saw a lot of need for empathy training and feedback. If you take our three mechanisms and add them up, you have a scale from zero to six, and even among posts where people share their challenges, responses scored on average only a one out of six, which isn't great. It turns out it also doesn't get better over time: people don't just learn to express empathy better by being on this platform longer; if anything, it's the opposite. And I've learned from our psychology colleagues that this is really true for therapists as well: without deliberate practice and specific feedback, even professionals can diminish in their skills over time. So that set us on a path of figuring out what we might do to give people feedback and help train people in expressing empathy.
We first looked at this simply as a task for machines, which we called empathic rewriting: we want a machine to take a lower-empathy response and transform it into a similar response with higher levels of empathy. Here's an example of what that could look like. Let's say somebody shares: "I can't deal with this part of my bipolar. I really need help." Somebody might respond: "Don't worry! Try to relax. Is there anyone you can talk to?" Again, not the worst response by far, but it has this "don't worry" piece in it, which I've learned is really a red flag: clearly the person is already worrying, so this can easily come across as invalidating. So what if a machine could help adjust and edit this response, replacing "don't worry" with "Being manic is no fun; it's actually really scary," adding an emotional reaction, "I'm sorry to hear that this is troubling you," and keeping as much as possible of the original response?
That's what empathic rewriting is, and we built a system based on reinforcement learning, a system called PARTNER, to do that exact task. I'll share at a high level how this works. The system looks at a conversation between a support seeker and a peer supporter, and that informs the state of the system. The system then decides which rewriting action it wants to take; that could be replacing a sentence or adding a new sentence. Then the system gets rewards, and that's how it is trained. Specifically, we rewarded this system for increased empathy, of course, since that's what we want here, but also for keeping fluent English, having a coherent, self-consistent response, and being context-specific, to avoid the generic responses that we know people hate, like "I'm so sorry to hear that," which you could use in absolutely any situation. I won't share more details about the system in favor of showing you a little demo in a second, but this is a system that can perform this task better than the prior state of the art in natural language processing.
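At a high level, the training signal combines several scores. A minimal sketch of that reward shaping, assuming hypothetical helper scorers and illustrative weights (not the published PARTNER implementation):

```python
# Hedged sketch of the reward shaping described for PARTNER.
# The helper scorers are placeholders (the real system uses trained
# models), and the weights are illustrative, not the published values.

def empathy_score(post: str, response: str) -> float:
    return 0.0  # placeholder: trained empathy classifier (0-6 scale)

def fluency(text: str) -> float:
    return 0.0  # placeholder: e.g., a language-model fluency score

def coherence(text: str) -> float:
    return 0.0  # placeholder: is the rewritten response self-consistent?

def specificity(post: str, response: str) -> float:
    return 0.0  # placeholder: penalize generic, fits-anywhere replies

def reward(post: str, original: str, rewritten: str) -> float:
    """Reward empathy gains while keeping the rewrite fluent, coherent,
    and specific to the seeker's post, as described in the talk."""
    empathy_gain = empathy_score(post, rewritten) - empathy_score(post, original)
    return (1.0 * empathy_gain
            + 0.5 * fluency(rewritten)
            + 0.5 * coherence(rewritten)
            + 0.5 * specificity(post, rewritten))

# The agent's actions are sentence-level edits: replace a sentence or
# insert a new one, preserving as much of the original as possible.
ACTIONS = ("replace_sentence", "insert_sentence")
```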
So now let's put all of that together and see how human peer supporters and AI systems can collaborate on expressing empathy most effectively. I'll show you the system that we developed, and I'll start with what the control group, the status quo, looks like. There's a post you can respond to, and you're greeted with an empty text box, even though it's a tough task, and you can write out your response. So I'll write out my response here; again, this is the control group that doesn't have access to feedback. Let's say, for illustration purposes, I use our red flag of "don't worry," and I write, "I'm there for you."
Now, here's the treatment group that we'll contrast with that. In that condition, peer supporters did have access to feedback. They didn't need to use it, and we didn't incentivize them in any way, but they could access it, and I'll show an example here. I'll first write out the exact same response. You might note the red flagging button at the bottom; I'll come back to that in a second. I request feedback, which takes about a second, and then we can display some potential feedback, presented similarly to spell-checking or grammar feedback. The AI system catches that maybe the "don't worry" isn't a great idea: you could consider replacing it with "it must be a real struggle." It also recognizes that there's no exploration of the other person's feelings and experiences happening here yet, so it suggests adding, "Have you tried talking to your boss?"

So, blowing this up: what could a peer supporter do? They could click on "replace" and "insert." Maybe they don't like the feedback, or they like it so much they want more, so they can reload the feedback as well. They can ignore the feedback, they can edit before and after, and so on.
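The suggestions come in two flavors, replacements and insertions, and the supporter always has the final say. An illustrative sketch of how such feedback might be represented and applied (hypothetical field names, not the study's actual interface code):

```python
# Illustrative sketch of just-in-time feedback suggestions; the field
# names and apply() logic are assumptions, not the study's actual code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    kind: str              # "replace" or "insert"
    target: Optional[str]  # text to replace (None for inserts)
    text: str              # suggested wording
    mechanism: str         # which empathy mechanism it addresses

def apply(draft: str, s: Suggestion) -> str:
    # Supporters stay in control: they can accept, reload, edit, or ignore.
    if s.kind == "replace" and s.target and s.target in draft:
        return draft.replace(s.target, s.text)
    return draft + " " + s.text  # append an insert suggestion

draft = "Don't worry, I'm there for you."
suggestions = [
    Suggestion("replace", "Don't worry", "It must be a real struggle",
               "interpretation"),
    Suggestion("insert", None, "Have you tried talking to your boss?",
               "exploration"),
]
for s in suggestions:
    draft = apply(draft, s)
print(draft)
# -> "It must be a real struggle, I'm there for you. Have you tried talking to your boss?"
```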
We then performed a randomized controlled trial to figure out whether this is actually helpful. We worked with actual peer supporters from this TalkLife platform, 300 of them, in a fully remote RCT, and we randomly divided them into these two conditions: the status quo, where you write on your own and don't get any feedback, or the treatment group that had access to feedback. Of course, the primary outcome is the level of empathy being expressed. One thing that's really important here is that we set ourselves up in a really conservative way. We didn't just want to measure whether using the AI system was better than nothing; instead, we trained everybody just before the study on traditional means of empathy training. We would show people a screen with definitions of empathy, the mechanisms of empathy, and different examples: the static way you would do that today. So if on the next slide we see any differences between treatment and control groups, what that would mean is that this just-in-time, very context-specific feedback is actually helpful beyond that generic training, even if we literally trained you, I don't know, three minutes ago.

Of course, I wouldn't set it up this way if the result weren't positive. Whether you ask human peer supporters, on the left, or machines, on the right-hand side, both agree that in the treatment condition substantially more empathy is being expressed, about 20% more, and it was about double that for people who actually find it challenging to support others empathically.
Of the participants, 77% wanted the system deployed, and it's still far from perfect: 60% found the feedback actionable and helpful. One thing that was really meaningful to us was that 69% said, "I feel more confident now, after this feedback, in supporting others." Empowering peer supporters with these systems is of course what we're after here, and it suggests that there could be training effects of using these systems in the future as well.
Now, I mentioned we didn't incentivize people at all to use this feedback. That's likely how systems like this would be used in the future, so they have to be good enough for people to actually want to use them. People used them in a variety of different ways: some chose never to use them, and some used them a lot. Here on the x-axis there are different clusters of how people used the system, from never using it, even though they could, on the left, to using it basically all the time on the right. One thing that's really interesting is that the more people used the system, the more empathy they expressed as well, and these are quite massive effect sizes. Now, of course, there are selection effects here: people on the right might be particularly motivated and interested, and people on the left might have been unfortunate and just not gotten the best feedback that we could generate. But it was very interesting to see this striking dose-response-type relationship nevertheless.
So, over the past few minutes I hope I was able to convince you of at least some of the potential and the opportunity for AI technology in mental health. But of course, AI technology in mental health creates lots of risks, and we had a question about that earlier as well. In whatever we do here, the safety and welfare of participants has to come first, and any potential harms need to be considered and navigated. The challenge is that in a very new and emerging field, it can be hard to understand how exactly you do that. It's a lot easier for me to say here that I care about all these things; it's a lot more challenging to figure out how exactly this needs to happen. I'll give a shout-out here to a great ethical framework by Camille Nebeker and others at ReCODE Health that helped inform the concrete actions we took, which I want to share next.
First, I believe it was critical to the success here to co-design the system with the various stakeholders: to understand the risks, to minimize the risks, and to create any benefit. So we worked over many years with the designers of this platform, with the peer supporters and users of this platform, and with clinicians (personally, I'm a computer scientist). You might have noticed that we're intervening on the peer supporter, trying to help the helpers; we're not intervening on the person acutely in crisis. That's a deliberate choice for how to do this with less risk. You might also have noticed that there's a very focused scope here on empathy, and we understand empathy and have studied empathy for decades. This isn't a general chatbot where we don't know what will happen; it's very specifically designed for empathy feedback.
We also understood from the beginning of this project that there's a really meaningful human-to-human connection happening on this platform, and there's absolutely a risk that if you introduce an AI system in the middle, you would disrupt that meaningful connection; that's of course not what we want to do here. In my world of computer science and machine learning, we sometimes call these things "human in the loop." In the course of this project, we preferred calling this "AI in the loop," on a back seat, with human supervision, and we felt more comfortable with that perspective. Part of what the system is doing here is that it was designed to give minimal feedback, and only if needed: to only pop up if there's an opportunity for empathy, and not to do anything if empathy is, for instance, already expressed.
We also made a deliberate choice to answer fewer research questions with a lot more work. Specifically, we designed a realistic-looking sandbox environment instead of actually showing the responses to somebody on the TalkLife platform yet. We felt the right way to progress in this line of work was to study the safety implications first, and for instance we did that through the flagging button that I showed you earlier. With the flagging, we're also using the same idea of human-AI collaboration: there are lots of algorithms behind the scenes that filter potentially inappropriate or unsafe content, but the peer supporter retains full control. It allows them to flag anything that might not be caught by these algorithms, and in doing that, to also improve these systems over time.
Participants had access to a crisis hotline and could quit at any point, and we did safety reviews of these systems at every stage, over multiple years. Some of you might find it interesting that in the exact AI objective functions, you can encode some of these ethical considerations: for instance, to make the minimal changes needed when giving feedback, rather than letting you write out a response and then crossing it all out, which seems like an amazingly effective way of invalidating people and making them less confident that they're good at supporting others. And of course, there was also informed consent and IRB approval. I think a lot of that is what we needed to learn over the last couple of years, and it's pretty clear, similar to Ali's response earlier, that a lot more work is needed in this area, on ethical principles in digital health and also on the algorithmic approaches to ensure safety.
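As one concrete illustration of encoding such a consideration in an objective, a penalty on the size of the edit pushes the system toward minimal changes. This is a hedged sketch; the penalty form and weight are assumptions, not the study's actual objective:

```python
# Hedged sketch: penalize large edits so feedback makes the minimal
# changes needed instead of crossing out the whole response.
# The penalty form and weight are assumptions for illustration.
import difflib

def minimal_change_penalty(original: str, rewritten: str) -> float:
    # 0.0 when identical, approaching 1.0 as the rewrite diverges.
    similarity = difflib.SequenceMatcher(None, original, rewritten).ratio()
    return 1.0 - similarity

def constrained_reward(base_reward: float, original: str, rewritten: str,
                       weight: float = 0.5) -> float:
    return base_reward - weight * minimal_change_penalty(original, rewritten)
```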
I'll summarize and briefly look ahead, and then I very much look forward to the Q&A. I told you about empathy: that empathy is crucial, but a little more rare than we would like. The work I showed here introduces new computational tasks, datasets, and tools that can be used for facilitating empathic conversations, based on state-of-the-art natural language processing techniques. We saw an RCT that shows that this human-AI collaboration approach can be effective, and these and other tools we've developed are already in use by several mental health organizations serving millions of people.

Looking ahead, I believe that AI-based feedback could really help improve access to and quality of mental health care, whether that's for peers or professionals. I think it could help improve in-the-moment interventions, like the one I showed you, but also help improve training programs, to help us address the massive shortage in the mental health workforce. And of course, we could focus on many more things beyond empathy, like cognitive reframing, complex reflections, building effective patient-provider relationships, psycho-education, challenging health misinformation, and so on. If anybody's interested, at this URL you can find all the papers, models, and datasets associated with this work. A big shout-out to the amazing team that made this possible, and I look forward to your questions. Thank you.