Human-AI Collaboration Enables More Empathic Conversations in Mental Health Support

UW Video
5 May 2023 · 23:34

Summary

TL;DR: This presentation explores the intersection of AI and empathy, particularly in the context of mental health support. It discusses the potential of AI to enhance access and quality of mental health care, focusing on peer support platforms like TalkLife. The speaker introduces a project that uses AI to provide real-time feedback to improve the empathy of peer supporters' responses. Emphasizing ethical considerations, the talk concludes with a randomized control trial demonstrating the effectiveness of AI-assisted empathy in peer support.

Takeaways

  • 🤖 The talk focuses on the intersection of AI and empathy, particularly in the context of mental health support.
  • 🔍 The speaker discusses the potential of AI to enhance access and quality of mental health support, while also addressing the associated risks and ethical considerations.
  • ⚠️ A content warning is given for anonymized examples involving mental illness, self-harm, and suicidal ideation, used to illustrate real-world challenges.
  • 🤝 The project is a collaboration involving the eScience Institute, the Garvey Institute at UW, AI2, and a peer support platform called TalkLife.
  • 🧠 The importance of empathy in mental health support is highlighted, with AI being explored as a tool to improve empathy in peer support interactions.
  • 📊 Empathy is measured using adapted psychological scales, focusing on emotional reactions, interpretations, and explorations within text-based communication.
  • 💬 AI techniques are used to analyze and enhance the expression of empathy in responses to individuals seeking support on platforms like TalkLife.
  • 💡 The system 'PARTNER' is introduced, which uses reinforcement learning to rewrite responses to increase their empathic content.
  • 📈 A randomized control trial demonstrates that AI-assisted feedback can significantly improve the level of empathy expressed by peer supporters.
  • 🛡️ Ethical considerations are paramount, with a focus on co-design, informed consent, safety reviews, and ensuring the AI system does not disrupt the human connection in peer support.

Q & A

  • What is the main focus of the research discussed in the transcript?

    -The main focus of the research is the intersection of AI and empathy, specifically exploring how AI can help improve access and quality of mental health support through peer support platforms.

  • Why is empathy important in the context of mental health support?

    -Empathy is crucial in mental health support because it is associated with symptom improvement and the formation of positive, successful relationships. It allows for better understanding and feeling of the emotions and experiences of others, which is key in providing effective support.

  • What is the role of peer support platforms in mental health?

    -Peer support platforms serve as a place where individuals can find others who share similar experiences and challenges. They can complement traditional forms of care like therapy and provide a space for people to access support, especially when they might not have access to professional mental health services.

  • How does the AI system provide feedback to enhance empathy in peer supporters?

    -The AI system provides feedback by analyzing a drafted response and suggesting edits that can increase the level of empathy expressed, focusing on emotional reactions, interpretations, and explorations. Separately, the PARTNER model uses reinforcement learning to transform lower-empathy responses into higher-empathy ones. (A schematic sketch of the feedback loop appears after this Q&A list.)

  • What are the potential risks associated with AI technology in mental health, as mentioned in the transcript?

    -Potential risks include the disruption of meaningful human connections, the possibility of invalidating or harming individuals in crisis, and the ethical considerations of introducing AI into sensitive areas like mental health support.

  • How was the AI system in the study designed to minimize risks?

    -The AI system was designed to minimize risks by co-designing with stakeholders, focusing on a narrow scope of empathy, ensuring human supervision, and providing minimal and context-specific feedback only when needed. It also allowed peer supporters to flag and control the feedback process.

  • What was the outcome of the randomized control trial mentioned in the transcript?

    -The randomized control trial showed that the human-AI collaboration approach was effective in expressing more empathy. Participants who had access to AI feedback expressed substantially more empathy in their responses compared to the control group.

  • What ethical framework was referenced in the transcript for guiding the development of AI in mental health?

    -The ethical framework referenced is by Camille Nebeker and colleagues at ReCODE Health, which helped inform the concrete actions taken to ensure the safety and welfare of participants in the study.

  • How does the transcript suggest the future of AI in mental health support?

    -The transcript suggests that AI-based feedback could help improve access and quality of mental health care, not only for peer supporters but also for professionals. It highlights the potential for AI to assist in moment-to-moment interventions and to enhance training programs to address the shortage in the mental health workforce.

  • What are some of the challenges in developing AI for mental health support as discussed in the transcript?

    -Challenges include understanding and navigating the ethical considerations, ensuring the safety and welfare of participants, and developing AI systems that can effectively and appropriately provide feedback to enhance empathy without disrupting human connections.
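
To make the feedback loop described above concrete, here is a minimal, hypothetical sketch in Python. Everything in it is illustrative: the real system relies on trained NLP models, whereas the red-flag list, the keyword rules, and the suggested wordings below are stand-ins.

    # Hypothetical sketch of the feedback loop: spot a red-flag phrase and a
    # missing empathy mechanism, then suggest edits. The real system uses
    # trained models; these keyword rules are stand-ins for illustration.

    RED_FLAGS = {"don't worry": "It must be a real struggle."}

    def missing_exploration(response: str) -> bool:
        # Toy proxy: treat the absence of a question as a missing exploration.
        return "?" not in response

    def suggest_feedback(response: str) -> list[str]:
        suggestions = []
        lowered = response.lower()
        for phrase, replacement in RED_FLAGS.items():
            if phrase in lowered:
                suggestions.append(
                    f"Consider replacing '{phrase}' with '{replacement}'")
        if missing_exploration(response):
            suggestions.append(
                "Consider adding an exploration, e.g. a gentle question "
                "about the seeker's feelings.")
        return suggestions

    if __name__ == "__main__":
        draft = "Don't worry! I'm there for you."
        for s in suggest_feedback(draft):
            print(s)

As in the talk's demo, the supporter stays in control: suggestions like these can be accepted, reloaded, edited, or ignored.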

Outlines

00:00

🤖 Introduction to AI and Empathy in Mental Health Research

The speaker begins by expressing gratitude and excitement for the opportunity to discuss research at the intersection of AI and empathy, particularly in the context of mental health. They provide a content warning for sensitive topics such as mental illness, self-harm, and suicidal ideation, which will be discussed using anonymized examples. The research is a collaborative effort involving institutes and industry partners, aiming to improve mental health support through AI. The speaker emphasizes the importance of human-AI collaboration over replacing human roles and shares insights into the potential of AI to enhance access and quality of mental health support, focusing on peer support platforms like TalkLife.

05:01

🧠 Measuring and Enhancing Empathy in Peer Support

The speaker delves into the hypothesis that peer supporters could express higher levels of empathy with the help of automated, actionable feedback. They discuss the importance of measuring empathy using established psychological scales adapted for text-based communication. The empathy measurement framework includes emotional reactions, interpretations, and explorations, with examples provided to illustrate different levels of empathetic responses. The speaker then explores the downstream implications of empathy, such as increased positive engagement and relationship formation on peer support platforms. They highlight the need for empathy training and feedback, given that even with challenging posts, the average empathy score is low, indicating a need for improvement.
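
As a concrete companion to the framework above: each mechanism is scored 0 (not communicated), 1 (weak or generic), or 2 (strong and specific), so a response's total empathy score runs from 0 to 6. The small Python sketch below only illustrates this bookkeeping; the dataclass and the hand-assigned example scores are ours, and in the actual work the levels are predicted by trained classifiers.

    from dataclasses import dataclass

    # Each mechanism is scored 0 (absent), 1 (weak/generic), or 2 (strong/
    # specific), so the total empathy score ranges from 0 to 6.
    @dataclass
    class EmpathyScore:
        emotional_reaction: int
        interpretation: int
        exploration: int

        @property
        def total(self) -> int:
            return self.emotional_reaction + self.interpretation + self.exploration

    # Illustrative, hand-assigned labels for a response that pairs a strong
    # interpretation with a specific exploration and no emotional reaction.
    example = EmpathyScore(emotional_reaction=0, interpretation=2, exploration=2)
    print(example.total)  # 4 out of 6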

10:02

💬 Empathic Rewriting: AI's Role in Enhancing Empathy

The speaker introduces 'empathic rewriting', a task where AI takes a low-empathy response and transforms it into a higher-empathy one. They provide an example of how AI can edit a response to be more empathetic by adding emotional reactions and maintaining the original message's context. The system, named PARTNER, uses reinforcement learning to perform this task, being trained on increasing empathy while ensuring responses remain fluent and context-specific. The speaker emphasizes that this system outperforms previous natural language processing methods in empathic expression and sets the stage for demonstrating how AI can collaborate with humans to enhance empathy in peer support.
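
The rewriting described here operates through sentence-level edit actions. Below is a hypothetical sketch of that action space only (replace a sentence, insert a sentence); in PARTNER a learned policy chooses the actions, rather than their being applied by hand as in this toy.

    from dataclasses import dataclass

    # Hypothetical sketch of sentence-level edit actions for empathic
    # rewriting. In PARTNER a learned policy picks the actions; here we
    # apply them manually to reproduce the talk's example.

    @dataclass
    class Replace:
        index: int   # which sentence to replace
        text: str

    @dataclass
    class Insert:
        index: int   # insert before this sentence position
        text: str

    def apply(sentences: list[str], action) -> list[str]:
        out = list(sentences)
        if isinstance(action, Replace):
            out[action.index] = action.text
        elif isinstance(action, Insert):
            out.insert(action.index, action.text)
        return out

    response = ["Don't worry!", "Is there anyone you can talk to?"]
    response = apply(response, Replace(0, "Being manic is no fun, it's actually really scary."))
    response = apply(response, Insert(1, "I'm sorry to hear that this is troubling you."))
    print(" ".join(response))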

15:03

🔧 Human-AI Collaboration for Empathy Expression

The speaker presents a system that allows human peer supporters to collaborate with AI to express empathy more effectively. They contrast a control group, where supporters write responses without feedback, with a treatment group that receives AI-generated feedback. The AI provides suggestions for improving responses, which supporters can choose to accept or ignore. The speaker discusses the results of a randomized control trial involving peer supporters from TalkLife, which showed that the treatment group expressed substantially more empathy. The feedback system was well-received, with participants feeling more confident in their ability to support others empathetically. The speaker also touches on the ethical considerations and safety measures taken in developing and deploying the AI system.
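
For intuition about how such a trial is analyzed, here is a toy two-sample comparison with made-up numbers. The study's actual data, outcome measures, and statistical analysis live in the underlying paper, so treat this only as the shape of the computation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Made-up 0-6 empathy scores for illustration only (not study data).
    control = rng.normal(loc=1.0, scale=0.8, size=150).clip(0, 6)
    treatment = rng.normal(loc=1.2, scale=0.8, size=150).clip(0, 6)

    t, p = stats.ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
    lift = treatment.mean() / control.mean() - 1
    print(f"relative lift: {lift:.1%}, t = {t:.2f}, p = {p:.3f}")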

20:03

🛡 Ethical Considerations and Future Directions in AI and Mental Health

The speaker concludes by emphasizing the importance of ethical considerations in AI technology for mental health. They discuss the challenges of ensuring safety and welfare, highlighting the use of an ethical framework to guide the development of the AI system. The speaker mentions the co-design process with stakeholders, the focused scope on empathy, and the 'AI in the loop' approach that respects the human connection on peer support platforms. They also discuss the use of a sandbox environment for safety testing and the importance of informed consent. Looking ahead, the speaker is optimistic about AI's potential to improve mental health care, not just in empathy but in various other aspects such as cognitive reframing and psycho-education. They invite further questions and thank the audience, signaling the end of the presentation.


Keywords

💡AI

AI, or Artificial Intelligence, refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is central to the discussion on how it can be leveraged to improve mental health support. The speaker discusses AI's potential to enhance empathy in peer support platforms, demonstrating how AI can be programmed to recognize and generate empathetic responses.

💡Empathy

Empathy is the ability to understand and share the feelings of another. It is a critical aspect of human interaction, especially in providing support. The video emphasizes the importance of empathy in mental health support, suggesting that AI can be trained to recognize and generate responses that express higher levels of empathy, thus improving the quality of peer support on platforms like TalkLife.

💡Mental Health Support

Mental health support refers to the assistance provided to individuals experiencing mental health issues. The video discusses the shortage of mental health professionals and how AI could help bridge this gap by improving the quality of peer support, which is a form of mental health support that can be accessed by millions through platforms like TalkLife.

💡Peer Support

Peer support involves individuals receiving help from others who have similar experiences or challenges. In the context of the video, peer support is provided through online platforms where users can connect and share their mental health struggles. The speaker highlights the potential of AI to enhance the empathy and effectiveness of peer supporters on these platforms.

💡Anonymized Examples

Anonymized examples are real instances or data that have been modified to ensure the identities of the individuals involved are protected. The video mentions the use of anonymized examples of mental illness, self-harm, and suicidal ideation to illustrate the challenges and potential solutions in the domain of mental health support.

💡eScience Institute

The eScience Institute is mentioned as an organization that has contributed to the project discussed in the video. It is an example of the collaboration between different institutes and industries that have come together to leverage AI for mental health support, indicating the interdisciplinary nature of the work.

💡Clinical Psychologist

A clinical psychologist is a mental health professional who diagnoses and treats mental, emotional, and behavioral disorders. In the video, a clinical psychologist is part of the team working on the AI project, emphasizing the importance of incorporating professional psychological knowledge into the development of AI systems for mental health.

💡Reinforcement Learning

Reinforcement learning is a type of machine learning where an agent learns to make decisions by taking actions in an environment to maximize some notion of cumulative reward. The video describes the use of reinforcement learning in developing an AI system called PARTNER that can rewrite responses to express higher levels of empathy.

💡Randomized Control Trial (RCT)

A randomized control trial is a scientific experiment that involves random assignment of subjects into treatment groups to test the efficacy of a new intervention. In the video, an RCT was conducted with peer supporters from the TalkLife platform to evaluate whether AI feedback could effectively improve the expression of empathy in their responses.

💡Ethics in AI

Ethics in AI pertains to the moral principles that guide the development and use of AI technologies. The video discusses the ethical considerations in using AI for mental health, such as ensuring safety, obtaining informed consent, and conducting reviews to minimize risks, highlighting the importance of ethical frameworks in AI research.

Highlights

The talk discusses the intersection of AI and empathy, focusing on mental health support.

Content warning is given for anonymized examples of mental illness, self-harm, and suicidal ideation.

The project was enabled by collaborations with the eScience Institute, the Garvey Institute at UW, and other local institutes.

AI's potential to improve access and quality of mental health support is explored.

The importance of empathy in mental health support and its association with symptom improvement is highlighted.

The lack of mental health professionals and the role of peer support platforms like TalkLife are discussed.

Empathy is defined as the ability to understand or feel the emotions and experiences of others.

A framework for measuring empathy in text-based communication is presented.

Empathy is found to be associated with positive engagement and relationship formation on peer support platforms.

The need for empathy training and feedback is identified, given the low average empathy scores in peer responses.

AI techniques are used to generate empathic text, a task called empathic rewriting.

A system called PARTNER is introduced, which uses reinforcement learning to improve empathy in text responses.

A randomized control trial shows that AI feedback can effectively increase empathy in peer support responses.

The talk emphasizes the importance of human-AI collaboration over replacing human support.

Ethical considerations and safety measures in developing AI for mental health are discussed.

The potential of AI-based feedback to improve mental health care access and quality is summarized.

The talk concludes with a look ahead at future applications of AI in mental health, including training programs and addressing workforce shortages.

Transcripts

00:00

[Music] This is very impressive, doing this live. It's a great honor to speak in between Sean, Ali, and Nin, and among the many people who have invested so much in the infrastructure and the environment that we have here to do great research. I'm excited to share some of that research, and specifically it will be about the intersection of AI and empathy. I'll give a brief content warning before we start: I'll share a few anonymized examples around mental illness, self-harm, and suicidal ideation. I only do that to illustrate the very real domain challenges as well as the solutions, and of course these are properly anonymized.

00:49

Nobody asked me to do this, but this will be very easy to tie in, I think, precisely because of the environment that we have here. This is a project that started at the eScience Institute and was uniquely enabled by the Garvey Institute at UW, other local institutes like AI2, and local industry. And of course everything I'm going to show is teamwork, including my PhD students Ashish Sharma and Inna Lin, UW and Stanford clinical psychologists Dave Atkins and Adam Miner, and folks at a peer support platform, TalkLife.

01:26

Over the past few weeks you might have come across one or multiple headlines in the media. There's a lot of talk about the latest and greatest of AI, ChatGPT this, ChatGPT that, and many of these articles have focused on the opportunities but also the risks of AI technology in mental health. In this talk I want to demonstrate, first, some of that potential, specifically how AI could help improve access and quality of mental health support. But I also want to focus on the potential risks, and on the ethics that came up earlier in a question, and what can be done to mitigate those risks. And I'll emphasize that we will focus here on human-AI collaboration, not on replacing anybody. In fact, in the exact research I'm about to show you, we found that the human-AI collaboration turns out to be superior to human-only or AI-only approaches.

02:20

So why do we desperately need to improve access to mental health support? Myself, and maybe many people in this room, are painfully aware that the need for mental health care outweighs the access to it. One in five adults have a mental illness; most do not receive treatment, and often that is because of a pervasive lack of mental health professionals. This reality really emphasizes the role of peer support, and this talk will be about how AI can help improve the quality of this peer support and make it more effective.

02:57

These peer support platforms are used by millions and millions of people today. They're a place where you can find people that might share your experience, that might share your suffering, that might understand what you're going through. I'll talk about a particular one today called TalkLife. You see a screenshot here on the slide; it looks a lot like other social media platforms that you might have seen and used, but it's specifically for mental health support, and the mental health challenges that you see people express there really run the whole gamut of what you might imagine.

03:32

In principle these platforms could be amazingly useful. They could complement traditional forms of care such as therapy. Unfortunately, for many people it's really all that they have access to, whether that's for insurance reasons, social isolation, or stigma. And these platforms could make interventions available to people that otherwise wouldn't have access to them.

03:58

Now, the success of these platforms relies on quality support, and that involves a range of skills, but a central enabling factor is empathy. Empathy has been shown to be associated with symptom improvement and with the formation of positive, successful relationships that we call alliance or rapport. By empathy I mean the ability to understand or feel the emotions and experiences of others.

04:27

Let's say somebody that we call the support seeker shares, "And unfortunately so, my whole family hates me." The peer supporter might respond, "Try talking to your friends." Now, that's by far not the worst message any of us have seen on the interwebs; it just doesn't communicate empathy actively. It's pretty common for peers to jump, maybe a little too quickly, to telling others what actions they should take. Instead, you could express more empathy, for instance by saying, "I could imagine that that makes you feel really isolated," so interpreting what that person is going through. And underlying all the work I'm about to show you is the central hypothesis that peer supporters might express higher levels of empathy if they had access to automated, actionable, just-in-time feedback.

05:15

Over the next few minutes I'll first share how we can measure the empathy that's expressed. Then we'll learn how we can use AI techniques to see, when empathy is expressed in the real world, what downstream implications that has. We'll talk about how AI can, quote-unquote, "generate" empathy, in the sense of generating text that expresses empathy. And then we'll put all of that together and see how humans and AI can collaborate on empathy expression.

05:48

Now, the first important part about measuring empathy is not to let computer scientists define it. So in this work we simply adapted existing scales for empathy that have been studied in psychology for decades. We had to adapt them to the exclusively text-based and asynchronous communication setting, where I can't interrupt you and I can't vary my pitch, and we combined multiple scales to capture emotional and cognitive aspects. In the framework that we ended up with, there are three mechanisms by which empathy can be communicated. There are emotional reactions: communicating the emotions experienced after reading a post. There are interpretations: communicating an understanding of the inferred feelings and experiences of another person. And lastly there are explorations: do you do anything to improve your understanding by exploring the feelings and experiences of another person? For each of these, we differentiate between not communicating it at all, doing it to a weak, relatively generic degree, or doing it to a stronger, more specific degree. I'll share a few examples.

07:04

Again, somebody might share, "And unfortunately so, my whole family hates me. I don't see any point in living." Somebody might respond, "Let me know if you want to talk." Not the worst response, but it doesn't actively express empathy. "I understand how you feel" would be a weak interpretation; "everything will be fine," a weak emotional reaction; "what happened?" a generic exploration. And you can be more specific, for instance by saying, "If that happened to me I would feel really isolated; I really hope things improve," or "I wonder if this makes you feel isolated?"

07:37

We then turn these into computational prediction tasks, two of them. One: given a post and a response, what is the level of empathy being expressed? These are three classification tasks. And then there is a second task of what we call rationale extraction: recognizing which exact words are communicating empathy (here, the regions highlighted in red and blue). I won't go into a lot of detail, but we developed state-of-the-art methods for performing these tasks, for recognizing how much empathy is being expressed in text.
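
(To make the rationale-extraction task concrete: given a response, the model marks which spans of text carry the empathy. Below is a hypothetical keyword-based stand-in for what is, in the actual work, a trained sequence-labeling model.)

    # Illustrative sketch of rationale extraction: mark which spans of a
    # response express empathy. A trained sequence-labeling model does this
    # in the real work; the phrase list here is a stand-in.
    EMPATHIC_PHRASES = ["sorry to hear", "must be a real struggle", "feel isolated"]

    def extract_rationales(response: str) -> list[tuple[int, int]]:
        """Return (start, end) character spans of empathic phrases."""
        spans = []
        lowered = response.lower()
        for phrase in EMPATHIC_PHRASES:
            i = lowered.find(phrase)
            if i != -1:
                spans.append((i, i + len(phrase)))
        return spans

    resp = "I'm sorry to hear that. It must be a real struggle."
    print(extract_rationales(resp))  # [(4, 17), (27, 50)]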

08:15

We were then curious what the downstream effects of empathy are. Now, we tried really hard to capture the clinical psychology construct of empathy, and it's not a given that people on a social media peer support platform would resonate with that exact construct. But it turns out that they do, and whenever empathy is expressed, that's associated with lots of positive outcomes. Here on the x-axis we have how much empathy is being expressed, and we find that when more empathy is expressed, people like those responses more and reply more often, so we see positive engagement coming from that. We also saw that whenever empathy was expressed there was the social media equivalent of relationship formation: if you respond to me with empathy, I would be 80% more likely to follow you right after, starting a potentially longer relationship on this platform.

09:17

But we also saw a lot of need for empathy training and feedback. If you take our three mechanisms and add them up, you have a scale from zero to six, and even among posts where people share their challenges, responses scored on average only a one out of six, which isn't great. It turns out it also doesn't get better over time: people don't just learn to express empathy better by being on this platform longer; if anything, it's the opposite. And I've learned from our psychology colleagues that this is really true for therapists as well: without deliberate practice and specific feedback, professionals can also diminish in their skills over time.

09:58

So that set us on a path of thinking about what we might do to give people feedback and help train people in expressing empathy. We first looked at that just as a task for machines, which we called empathic rewriting: we want a machine to take a lower-empathy response and transform it into a similar response with higher levels of empathy. Here's an example of what that could look like. Let's say somebody shares, "I can't deal with this part of my bipolar. I really need help." Somebody might respond, "Don't worry, try to relax. Is there anyone you can talk to?" Again, not the worst response by far, but it has this "don't worry" piece in it, which I've learned really is a red flag: clearly the person is already worried, so this can easily come across as invalidating. So what if a machine could help adjust and edit this post, replacing "don't worry" with "Being manic is no fun, it's actually really scary," adding an emotional reaction, "I'm sorry to hear that this is troubling you," and keeping as much as possible of the original post?

11:07

So that's what empathic rewriting is, and we built a system based on reinforcement learning, a system called PARTNER, to do that exact task. I'll share at a high level how this works. The system looks at a conversation between a support seeker and a peer supporter; that informs the state of the system. The system then decides which rewriting action it wants to take; that could be replacing a sentence or adding a new sentence. Then the system gets rewards, and that's how the system is trained. Specifically, we rewarded this system for increased empathy, of course, since that's what we want here, but also for keeping fluent English, having a coherent, self-consistent response, and being context-specific, to avoid the generic responses that we know people hate, like "I'm so sorry to hear that," which you can use in absolutely any situation. I won't share more details about the system in favor of showing you a little demo in a second, but this is a system that can perform these tasks better than the prior state of the art in natural language processing.
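
(A schematic of the multi-part reward just described, in Python. The scorer functions are placeholders for PARTNER's learned models, and the weights are invented; this sketches the shape of the objective, not the actual implementation.)

    from typing import Callable

    # Schematic of a PARTNER-style reward. Each scorer is a placeholder for
    # a learned model (empathy classifier, language-model fluency,
    # coherence, context specificity); the weights are invented.
    def combined_reward(
        seeker_post: str,
        original: str,
        rewrite: str,
        empathy: Callable[[str, str], float],
        fluency: Callable[[str], float],
        coherence: Callable[[str, str], float],
        specificity: Callable[[str, str], float],
    ) -> float:
        # Reward the *gain* in empathy from rewriting, plus quality terms.
        gain = empathy(seeker_post, rewrite) - empathy(seeker_post, original)
        return (gain
                + 0.5 * fluency(rewrite)
                + 0.5 * coherence(original, rewrite)
                + 0.5 * specificity(seeker_post, rewrite))

    # Toy stand-in scorers, just so the function runs end to end.
    empathy = lambda post, resp: float("sorry" in resp.lower())
    fluency = lambda resp: 1.0
    coherence = lambda orig, resp: float(orig.split()[-1] in resp)
    specificity = lambda post, resp: float(any(w in resp.lower() for w in post.lower().split()))

    post = "I can't deal with this part of my bipolar. I really need help."
    before = "Don't worry, try to relax. Is there anyone you can talk to?"
    after = "Being manic is really scary. I'm sorry to hear that this is troubling you. Is there anyone you can talk to?"
    print(combined_reward(post, before, after, empathy, fluency, coherence, specificity))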

12:21

So now let's put all of that together and see how human peer supporters and AI systems can collaborate on expressing empathy most effectively. I'll show you the system that we developed, starting with what the control group, the status quo, looks like. There's a post you can respond to; you're greeted with an empty text box, even though it's a tough task, and you can write out your response. So I'll write out my response here. Again, this is the control group that doesn't have access to feedback. Let's say, for illustration purposes, I use our red flag of "don't worry," and I'll write "I'm there for you."

13:06

Now here's the treatment group that we'll contrast with that. In that condition, peer supporters did have access to feedback. They didn't need to use it, and we didn't incentivize them in any way, but they could access it. I'll show an example here; I'll first write out the exact same response. You might note the red flagging button here at the bottom; I'll come back to that in a second. I request feedback, which takes about a second, and then we can display some potential feedback, presented much like spell-checking or grammar feedback. The AI system catches that maybe the "don't worry" isn't a great idea; you could consider replacing it with "It must be a real struggle." It also recognizes that there's no exploration of the other person's feelings and experiences happening here yet, so it also suggests adding "Have you tried talking to your boss?" So, blowing this up: what could a peer supporter do? They could click on "replace" and "insert." Maybe they don't like the feedback, or they like it so much they want more, so they can reload the feedback as well. They can ignore the feedback, they can edit before and after, and so on.

14:18

We then performed a randomized controlled trial to figure out whether this is actually helpful. We worked with actual peer supporters from this TalkLife platform, 300 of them, in a fully remote RCT, randomly divided into these two conditions: the status quo, where you write on your own and don't get any feedback, or the treatment group that had access to feedback. Of course, the primary outcome is the level of empathy being expressed. One thing that's really important here is that we actually set ourselves up in a really conservative way. We didn't want to measure merely whether the AI system is at all useful compared to nothing; instead we trained everybody, just before the study, on traditional means of empathy training. We would show people a screen with definitions of empathy, the mechanisms of empathy, and different examples: a static version of how you would do that today. So if on the next slide we see any differences between treatment and control, what that would mean is that this just-in-time, very context-specific feedback is actually helpful beyond this generic training, even if we literally trained you, you know, three minutes ago.

15:35

Of course, I wouldn't set this up if the result wasn't positive. Whether you ask human peer supporters, on the left, or machines, on the right-hand side, both agree that in the treatment condition substantially more empathy is being expressed, about 20% more. It was about double that for people who actually find it challenging to support others empathically.

15:58

Among the participants, 77% wanted the system deployed, and it's still far from perfect: 60% found the feedback actionable and helpful. One thing that was really meaningful to us was that 69% said, "I feel more confident now, after this feedback, in supporting others." Empowering peer supporters with these systems is of course what we're after here, and it suggests that there could be training effects of using these systems in the future as well.

16:31

Now, I mentioned we didn't incentivize people at all to use this feedback; that's likely how systems like this would be used in the future, so they have to be good enough for people to actually want to use them. And people used them in a variety of different ways: some people chose never to use them, some people used them a lot. Here on the x-axis there are different clusters of how people used the system, from never using it, even though they could, on the left, to using it basically all the time on the right. One thing that's really interesting is that the more people used the system, the more empathy they expressed as well, and these are quite massive effect sizes. Now, of course, there are selection effects here. People on the right might be particularly motivated and interested; people on the left might have been unfortunate and just not gotten the best feedback that we could generate. But it was very interesting to see this striking dose-response-type relationship nevertheless.

17:34

So over the past few minutes, I hope that I was able to convince you of at least some of the potential and opportunity for AI technology in mental health. But of course, AI technology in mental health creates lots of risks, and we had a question about that earlier as well. In whatever we do here, the safety and welfare of participants has to come first, and any potential harms need to be considered and navigated.

17:58

The challenge is that in a very new and emerging field, it can be hard to understand how exactly you do that. It's a lot easier for me to say here that I care about all these things; it's a lot more challenging to figure out how exactly this needs to happen. And I'll give a shout-out here to a great ethical framework by Camille Nebeker and others at ReCODE Health, which helped inform the concrete actions that we took here, which I want to share next.

18:30

First, I believe it was critical to the success here to co-design the system with the various stakeholders, to understand the risks, to minimize the risks, and to create any benefit. So we worked over many years with the designers of this platform, with the peer supporters and users of this platform, and with clinicians; personally, I'm a computer scientist. You might have noticed that we're intervening on the peer supporter: we're trying to help the helpers; we're not intervening on the person acutely in crisis. That's a deliberate choice for how to do this with less risk.

19:02

You might also have noticed that there's a very focused scope here on empathy, which we understand and have studied for decades. This isn't a general chatbot where we don't know what will happen; it's very specifically designed for empathy feedback.

19:19

We also understood from the beginning of this project that there's a really meaningful human-human connection happening on this platform, and there's absolutely a risk that if you introduce an AI system in the middle, you would disrupt that meaningful connection. That's of course not what we want to do here. In my world of computer science and machine learning, we sometimes call these things "human in the loop." In the course of this project, we preferred calling this "AI in the loop," on a back seat, with human supervision, and felt more comfortable with that perspective. Part of what the system is doing here is that it was designed to give minimal feedback, and only if needed: to only pop up if there's an opportunity for empathy, and to do nothing if empathy is, for instance, already expressed.

20:05

We also made a deliberate choice to answer fewer research questions with a lot more work. Specifically, we designed a realistic-looking sandbox environment instead of actually showing the responses to somebody on the TalkLife platform yet. We felt the right way to progress in this line of work was to study the safety implications first, and for instance we did that through the flagging button that I showed you earlier.

20:34

With the flagging, we're also using the same idea of human-AI collaboration: there are lots of algorithms behind the scenes that filter potentially inappropriate or unsafe content, but it also allows the peer supporter full control. It allows them to flag anything that might not be caught by these algorithms, and in doing that, to also improve these systems over time.

21:02

Participants had access to a crisis hotline and could quit at any point, and we did safety reviews of these systems at every stage over multiple years. Some of you might find it interesting that in the exact AI objective functions you can encode some of these ethical considerations, for instance to make the minimal changes needed in giving the feedback, rather than letting you write out a response and then crossing it all out, which seems like an amazingly effective way of invalidating people and making them less confident that they're good at supporting others. And of course there was also informed consent and IRB approval. I think a lot of that is what we needed to learn over the last couple of years, and it's pretty clear, similar to Ali's response earlier, that a lot more work is needed in this area, on the ethical principles in digital health and also on the algorithmic approaches to ensure safety.
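
(One hypothetical way to encode the "minimal changes" consideration in an objective, sketched with Python's standard library; the system's real objective is specified in the papers, and the stand-in scorer below is ours.)

    import difflib

    # Hypothetical sketch: penalize rewrites that change too much of the
    # supporter's own draft, encoding "make minimal changes" in the
    # objective. `empathy_score` stands in for a trained empathy classifier.

    def similarity(draft: str, rewrite: str) -> float:
        """Ratio in [0, 1]; 1.0 means the rewrite left the draft unchanged."""
        return difflib.SequenceMatcher(None, draft, rewrite).ratio()

    def objective(draft: str, rewrite: str, empathy_score, weight: float = 0.5) -> float:
        # Reward empathy, minus a penalty for straying far from the draft.
        return empathy_score(rewrite) - weight * (1.0 - similarity(draft, rewrite))

    # Toy usage with a stand-in scorer that just detects an empathic phrase.
    toy_scorer = lambda text: float("sorry to hear" in text.lower())
    print(objective("Don't worry! I'm there for you.",
                    "I'm sorry to hear that. I'm there for you.", toy_scorer))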

21:59

I'll summarize and briefly look ahead, and then I very much look forward to the Q&A. I told you about empathy: that empathy is crucial but a little more rare than we would like. The work I showed here introduces new computational tasks, datasets, and tools that can be used for facilitating empathic conversations, based on state-of-the-art natural language processing techniques. We saw an RCT that shows that this human-AI collaboration approach can be effective, and these and other tools we've developed are actually already in use by several mental health organizations serving millions of people.

22:37

Looking ahead, I believe that AI-based feedback could really help improve access to and quality of mental health care, whether that's for peers or professionals. I think it could help improve in-the-moment interventions like the ones I showed you, but also help improve training programs, to help us address the massive shortage in the mental health workforce. And of course we could focus on many more things beyond empathy, like cognitive reframing, complex reflections, building effective patient-provider relationships, psychoeducation, challenging health misinformation, and so on.

23:14

If anybody's interested, at this URL you can find all the papers, models, and datasets associated with this work. Big shout-outs to the amazing team that made this possible, and I look forward to your questions. Thank you. [Music] Thank you.
