Increasing AI Tool Adoption by Front-Line Workers

MIT Sloan Management Review
8 Sept 2022 · 57:53

Summary

TL;DR: The webinar 'Increasing AI Tool Adoption by Front-Line Workers' examines why AI implementations in organizations stall when end users resist them. Experts Kate Kellogg, Mark Sendak, and Suresh Balu share research findings and tactics for successful AI integration, emphasizing the need to address user concerns about workflow and autonomy. They highlight the predictive, laborious, and prescriptive nature of AI and provide examples from various industries, including healthcare, to illustrate how these tactics can be applied effectively.

Takeaways

  • 🎯 AI implementations often fall short because of end-user resistance: users perceive few benefits for themselves and an increased workload.
  • 🔍 Research by Kate Kellogg, Mark Sendak, and Suresh Balu found that addressing end users' concerns about workflow and autonomy can lead to AI tool adoption.
  • 👥 Involving end users from the beginning is crucial for successful AI implementation, as they have unique insights and needs.
  • 💡 AI solutions should first make regular work faster and easier (the sundae) and only then offer new capabilities (the cherry on top).
  • 📈 To increase end user benefits, AI solutions must predict outcomes valuable to the user and reduce the labor involved in their tasks.
  • 🛠️ Developers need to influence top managers to change the reward system so that it aligns with AI outcomes and user adoption.
  • 📊 Because AI is laborious, developers must reduce the burden on end users by automating data input and pre-staging interfaces to meet their needs.
  • 🔗 Integrating AI tools into existing workflows requires careful planning to ensure minimal disruption and seamless use.
  • 🌟 Protecting end users' autonomy is key when implementing prescriptive AI: avoid interfering with core tasks they value.
  • 📏 Measuring the impact of AI tactics involves assessing clinical outcomes, process efficiency, cost-effectiveness, and health disparities.

Q & A

  • What is the main challenge faced by organizations in implementing AI tools?

    The main challenge is that end users often resist adopting AI tools, especially those guiding decision-making, as they see few benefits for themselves and the new tools may add to their workload and reduce their autonomy.

  • How can developers ensure that AI tools are more readily adopted by end users?

    Developers can ensure adoption by addressing end users' concerns about workflow and autonomy, reconciling conflicting stakeholder interests, and increasing the benefits for the end users.

  • What are the three ways AI is different from other technologies?

    AI is different because it is predictive, laborious, and prescriptive, which can result in few benefits, additional labor, and decreased autonomy for employees.

  • What are the six tactics identified for successfully implementing AI solutions?

    The webinar organizes the six tactics around three properties of AI: because AI is predictive, increase benefits for end users (address their pain points and reward the outcomes the solution improves); because it is laborious, reduce their labor (automate data input and use end users' effort judiciously); and because it is prescriptive, protect their autonomy (avoid infringing on the core tasks they value).

  • How can organizations measure the success of AI implementations?

    Success can be measured by increased revenue, reduced costs, improved product and service quality, and, most importantly, by the adoption rate and the positive impact on the end users' workflow and autonomy.

  • What role do front-line workers play in the adoption of AI solutions?

    Front-line workers play a crucial role as they are the end users who interact directly with the AI tools. Their acceptance and proper use of these tools determine the success of AI implementations in achieving organizational goals.

  • How can AI solutions result in increased revenue and reduced costs for organizations?

    By improving product and service quality, AI solutions can lead to higher customer satisfaction and loyalty, resulting in increased revenue. Additionally, they can help optimize processes, reduce inefficiencies, and minimize waste, leading to reduced costs.

  • What is the significance of the 'ice cream sundae' analogy mentioned in the transcript?

    The 'ice cream sundae' analogy is used to illustrate the two-fold approach to AI solutions: the base (the sundae) represents the core functionality that makes regular work faster and easier, while the cherry on top represents the additional, innovative capabilities that AI can bring to a job.

  • What are some examples of industries where the six tactics for successful AI implementation can be applied?

    The webinar draws examples from HR recruitment, sales, fashion buying, and healthcare. The tactics are generalizable across industries, suggesting they could also be applied in finance, marketing, and more.

  • How can developers address the issue of end users' resistance to AI tools?

    Developers can address resistance by involving end users from the beginning, understanding their needs and concerns, and designing AI tools that align with their workflow, provide clear benefits, and do not infringe on their autonomy.

  • What is the role of rewards and incentives in the adoption of AI tools?

    Rewards and incentives play a significant role as they can motivate end users to adopt AI tools. By tying rewards to outcomes that AI tools can improve, organizations can encourage end users to engage with and utilize the AI solutions effectively.

Outlines

00:00

πŸ“ Introduction to AI Tool Adoption Webinar

The webinar, titled 'Increasing AI Tool Adoption by Front-Line Workers,' is moderated by Elizabeth Heichler from MIT Sloan Management Review. It addresses the common issue of AI implementations not meeting their goals due to end-user resistance. The discussion focuses on research by Kate Kellogg, Mark Sendak, and Suresh Balu, which identifies strategies for successful AI adoption. The webinar introduces the concept that employees are resistant to solutions that do not improve their work experience, highlighting the predictive, laborious, and prescriptive nature of AI and its impact on employees.

05:01

🚀 Six Tactics for Successful AI Implementation

Kate Kellogg presents six tactics for successful AI implementation based on research conducted at Duke over five years. These tactics are applicable across industries and are illustrated with examples from different sectors. The tactics include increasing benefits for end users, addressing their concerns about workflow and autonomy, and reconciling conflicting stakeholder interests. The importance of involving end users from the beginning and changing the reward system to align with the new AI tools is emphasized.

10:02

🧐 Case Study: AI in HR Recruitment

A detailed case study is presented about an AI solution for HR recruiters, specifically sourcers, who identify and attract candidates with technical skills. The AI tool aims to predict which candidates are likely to accept job offers. However, the true end users are the sourcers, who have different needs from the interviewers. The developers address the sourcers' concerns by introducing a pre-interview assessment and changing the reward system to include successful candidate acceptance of offers.

15:05

🤔 Challenges in AI Adoption

The discussion highlights the challenges in AI adoption, such as the need for end users to bring their own requirements, which complicates development, and the difficulty developers face in influencing the reward system. It is stressed that involving end users from the start and understanding their needs is crucial for successful AI implementation. The webinar also touches on the labor-intensive nature of AI and the need to reduce the workload for end users.

20:05

πŸ› οΈ Example: AI for Salespeople

An example of an AI solution for salespeople is provided, illustrating how the AI tool was developed to reduce labor for end users. The organization involved sold manufacturers' products to wholesalers and retail chains. The AI tool was designed to help salespeople with white space analysis, identifying opportunities for sales based on clients' existing purchases and potential needs. The developers automated data inputting and pre-staged the interface to meet the salespeople's needs.
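The white space analysis described above reduces to a simple gap computation: compare what a client already buys against the full product line relevant to its industry. A minimal sketch follows; the product names, the industry mapping, and the `white_space` function are invented for illustration and are not the organization's actual tool:

```python
# Hypothetical "white space" analysis: for a given client, compare what
# they already buy against the full product line relevant to their
# industry; the gap is the set of candidate sales opportunities.
# All product names and the industry mapping below are invented.

FULL_LINE_BY_INDUSTRY = {
    "grocery": {"shelving", "refrigeration", "checkout", "signage"},
    "apparel": {"shelving", "mannequins", "signage", "fitting_rooms"},
}

def white_space(client_industry, current_purchases):
    """Return products the client could be buying but currently is not."""
    full_line = FULL_LINE_BY_INDUSTRY.get(client_industry, set())
    return sorted(full_line - set(current_purchases))

print(white_space("grocery", {"shelving", "checkout"}))
# → ['refrigeration', 'signage']
```

In the webinar's example, the developers pre-staged this kind of view in the interface, so the gap was ready for the salespeople rather than assembled by hand.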

25:09

🌟 Protecting Autonomy with AI

The importance of protecting end users' autonomy when implementing AI solutions is discussed. An example from the fashion industry is used to illustrate how AI can assist fashion buyers without infringing on their core creative tasks. The AI developers avoided interfering with the buyers' decision-making process and instead focused on helping with less enjoyable tasks. The example emphasizes the need to add the six tactics to the toolkit for successful AI adoption.

30:11

💡 Real-World AI Application: Healthcare

Mark Sendak shares examples of AI application within the healthcare industry, specifically at Duke Health. He discusses the process of identifying projects for innovation and the importance of aligning senior strategic priorities with the needs of front-line workers. The discussion includes the challenges faced by primary care physicians in managing chronic diseases and the potential of AI to assist in preventing disease progression.

35:16

🩺 Healthcare AI: Primary Care Physicians

The webinar delves into the specifics of how AI can support primary care physicians in managing kidney disease. It highlights the burden on physicians due to the progression of chronic diseases and the feedback they receive from specialists. An AI solution is presented that allows kidney specialists to send recommendations directly to primary care doctors, helping them manage kidney disease more effectively.

40:19

πŸ₯ Integration of AI in Healthcare

The discussion continues with strategies for integrating AI in healthcare settings, focusing on reducing labor for end users and ensuring the AI tools are effectively used. The importance of aligning incentives for primary care doctors and involving downstream specialists in the development and validation of AI systems is emphasized. The webinar also touches on the challenges faced by emergency department doctors in managing various conditions.

45:22

🧠 Understanding AI Decision Support

The webinar addresses the importance of front-line users understanding the nature of AI decision support. It emphasizes that AI provides predictions with a certain level of confidence, not definitive answers. The need for explainability in AI outputs is discussed, with the level of understanding required varying depending on the domain. In healthcare, a higher level of explainability is necessary due to the critical nature of the decisions being made.
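One way to make "a prediction with a certain level of confidence" concrete is to route model outputs by probability rather than acting on every prediction. This is a generic sketch, not the panelists' system; the `triage` function and its thresholds are invented:

```python
# Illustrative triage of a decision-support prediction: act only on
# high-confidence outputs and route uncertain ones to a human reviewer.
# The thresholds are invented for illustration, not from the webinar.

def triage(probability, act_threshold=0.9, review_threshold=0.6):
    """Map a model probability to an action for the front-line user."""
    if probability >= act_threshold:
        return "recommend"        # high confidence: surface the recommendation
    if probability >= review_threshold:
        return "flag_for_review"  # moderate confidence: ask a human to verify
    return "no_action"            # low confidence: stay silent

print(triage(0.95))  # → recommend
print(triage(0.72))  # → flag_for_review
print(triage(0.30))  # → no_action
```

In a high-stakes domain such as healthcare, the review band would be set conservatively, which is consistent with the webinar's point that clinicians need more explainability, not less.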

50:25

📈 Measuring the Impact of AI

The panelists discuss how to measure the impact of AI technologies, emphasizing the need for adoption and continuous use for AI tools to generate value. They share their experiences from Duke Health, where they measure the effectiveness of AI tools by looking at clinical outcomes, process and adoption measures, cost and economics, and addressing health disparities. The importance of reducing the burden on clinicians to ensure adoption is highlighted.

55:26

🔄 Monitoring Model Drift and Feedback

The importance of monitoring model drift and incorporating front-line user feedback is discussed. The panelists explain that model drift can occur due to changes in technology infrastructure or process and workflow shifts. They share examples from Duke Health, where they proactively monitor changes and update their models accordingly. The need for end users to understand the AI tools and provide feedback for continuous improvement is emphasized.
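The webinar does not name a specific drift metric, but one common choice is the Population Stability Index (PSI), which compares the model's score distribution at deployment against the current one. A minimal sketch, with invented example bins and the conventional 0.2 rule-of-thumb alert threshold:

```python
import math

# Population Stability Index (PSI) between two binned score
# distributions (each list of bin fractions should sum to 1).
# A PSI above roughly 0.2 is a common rule-of-thumb drift alert.

def psi(baseline_fracs, current_fracs, eps=1e-6):
    """Sum of (current - baseline) * ln(current / baseline) over bins."""
    total = 0.0
    for b, c in zip(baseline_fracs, current_fracs):
        b, c = max(b, eps), max(c, eps)  # guard against log(0)
        total += (c - b) * math.log(c / b)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # score bins at go-live
current = [0.10, 0.20, 0.30, 0.40]   # score bins this month
drift = psi(baseline, current)
print(f"PSI = {drift:.3f}, drifted = {drift > 0.2}")
# → PSI = 0.228, drifted = True
```

A check like this can run on a schedule, so that the kind of infrastructure or workflow shifts the panelists describe surface as an alert instead of a silent degradation.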

🎓 Training and Educating Users on AI

The webinar addresses the importance of training and educating users on AI, especially in understanding that AI provides predictions with confidence levels rather than definitive answers. The panelists discuss the different levels of understanding required by different stakeholders and the need for AI explainability. They share experiences from Duke Health, where they ensure clinicians understand the algorithms used in AI tools and provide extensive documentation for different types of stakeholders.

🌐 Final Thoughts on AI Adoption

Kate Kellogg concludes the webinar with final thoughts on AI adoption. She emphasizes the importance of the work being done in AI and the need for continuous learning, iteration, and support from developers, C-suite leaders, researchers, and funders. She encourages the audience to remember the front-line workers who will bring AI solutions into the real world and to focus on their needs and experiences.

Keywords

💡AI Tool Adoption

AI Tool Adoption refers to the process by which end users, particularly front-line workers, begin to use and integrate artificial intelligence tools into their daily tasks and decision-making processes. In the context of the video, it is emphasized that successful adoption is not just about the technology but also about addressing user concerns, workflow integration, and aligning incentives. The video provides examples and tactics to increase the likelihood of AI tools being adopted and effectively used by end users.

💡Predictive AI

Predictive AI refers to the ability of artificial intelligence systems to analyze data and make predictions about future outcomes or trends. In the video, it is mentioned that because AI is predictive, it can foresee certain results, which can be beneficial for organizations but may also lead to additional labor or decreased autonomy for employees if not properly managed. The predictive nature of AI requires developers to consider the end users' perspective and needs when designing these systems.

💡Labor Intensive

Labor Intensive describes tasks or processes that require a significant amount of human effort and time. AI tools that are themselves labor intensive can be counterproductive, deterring end users from adopting them. The video emphasizes the need to reduce labor for end users by streamlining processes and automating data input where possible, ensuring that AI solutions do not add unnecessary workload.

💡Prescriptive AI

Prescriptive AI involves AI systems not only analyzing data and making predictions but also suggesting specific actions or decisions based on those predictions. This type of AI can be particularly challenging in fields like healthcare where professionals value their autonomy in decision-making. The video discusses the importance of balancing the prescriptive nature of AI with the need to protect the end users' autonomy and integrate AI recommendations in a way that supports, rather than replaces, human judgment.

💡End User Concerns

End User Concerns refer to the worries or issues that the individuals who directly use a product or service may have. In the context of AI tool adoption, addressing these concerns is crucial for successful implementation. The video highlights that end users may resist AI tools if they perceive no direct benefits, or if the tools add to their workload or reduce their autonomy. Developers must understand and address these concerns to facilitate adoption.

💡Workflow Integration

Workflow Integration is the process of incorporating new tools or systems into the existing set of procedures that an organization or individual follows. In the context of AI, it is important for AI tools to be seamlessly integrated into the workflow to avoid disrupting the end users' normal operations. The video emphasizes that successful AI implementation requires careful consideration of how AI tools fit into the workflow and how they can improve or streamline existing processes without causing undue burden.

💡Autonomy

Autonomy in the context of the video refers to the independence and freedom that end users have in making decisions and carrying out their tasks. The implementation of AI tools can sometimes be perceived as a threat to this autonomy, especially if the tools are prescriptive and dictate specific actions. The video stresses the importance of protecting and respecting end users' autonomy when developing and implementing AI solutions to ensure their acceptance and effective use.

💡Incentives Alignment

Incentives Alignment means aligning the goals and rewards of different stakeholders in a way that encourages cooperation and the achievement of common objectives. In the context of AI implementation, aligning incentives can involve ensuring that the benefits of AI, such as cost savings or improved outcomes, are shared with the end users. This can motivate end users to adopt and effectively use AI tools, as they see a direct benefit to themselves or their organization.

💡Data Science

Data Science is an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data. In the video's sundae analogy, data science is the 'cherry on top': it provides additional value once the solution has first made regular work faster and easier. The developers in the video use data science techniques to create predictive models and actionable insights that assist end users in their tasks.

💡End User Benefits

End User Benefits refer to the advantages or positive outcomes that the individuals who use a product or service directly experience. In the context of AI tools, ensuring that end users see clear benefits is crucial for adoption and effective use. The video emphasizes the importance of designing AI solutions that not only improve organizational outcomes but also enhance the end users' experience and productivity.

Highlights

AI implementations are often falling short due to resistance from end users who see few benefits for themselves and increased workload.

Research by Kate Kellogg, Mark Sendak, and Suresh Balu found that addressing end users' concerns about workflow and autonomy can lead to successful AI adoption.

AI solutions can result in increased revenue, reduced cost, and improved quality for organizations, but may also lead to decreased autonomy for employees.

Six tactics have been identified for successfully implementing AI solutions, which are generalizable across industries.

AI is different from other technologies as it is predictive, laborious, and prescriptive.

To increase benefits for end users, developers should focus on improving outcomes and changing the reward system.

Reducing labor for end users involves automating data input and pre-staging the interface to meet their needs.

Protecting end users' autonomy involves avoiding infringement on their core tasks and seeking their input in solution design.

AI solutions should be designed to make regular work faster and easier, with the ability to do cool things as a 'cherry on top'.

End users' involvement from day one is crucial for the success of AI implementation.

Developers need to upwardly influence top managers to change the reward system for end users.

AI solutions should be integrated into clinical workflows to generate value and ensure continuous use.

Measuring the impact of AI solutions involves assessing clinical outcomes, process and adoption measures, cost and economics, and addressing health disparities.

Model drift must be monitored; it can result from changes in technology infrastructure or from shifts in process and workflow.

Training and educating front-line users on the nature of AI decision support output is crucial for effective tool adoption.

Transcripts

play00:02

- Hello, and welcome to our webinar,

play00:04

"Increasing AI Tool Adoption by Front-Line Workers."

play00:07

I'm Elizabeth Heichler, editorial director

play00:10

at MIT Sloan Management Review,

play00:12

and I will be your moderator.

play00:14

Now to today's topic.

play00:16

Many AI implementations are falling short of their goals

play00:20

to help organizations improve product and service quality,

play00:23

reduce costs, and increase revenues.

play00:25

The reason is that end users often resist adopting AI tools,

play00:29

especially those that guide decision-making

play00:31

because they see few benefits for themselves,

play00:34

and the new tools often add to their workload

play00:36

while taking away some of their autonomy.

play00:39

However, research by Kate Kellogg,

play00:41

Mark Sendak, and Suresh Balu

play00:43

has found that when developers ensure that end users'

play00:46

concerns about workflow and autonomy are addressed,

play00:49

conflicting stakeholder interests can be reconciled

play00:52

and AI tools more readily adopted.

play00:55

Kate is the David J. McGrath Jr.

play00:58

Professor of Management and Innovation

play01:00

here at the MIT Sloan School.

play01:02

Mark and Suresh are at the Duke Institute

play01:04

for Health Innovation, where Mark

play01:06

is the population health and data science lead,

play01:09

and Suresh is program director,

play01:11

along with his role as associate dean

play01:13

for innovation and partnership

play01:15

for the Duke University School of Medicine.

play01:17

Welcome Kate, Mark, and Suresh.

play01:22

- Thanks, Elizabeth.

play01:24

All right, everyone.

play01:25

Let's get this webinar started.

play01:27

Here is our punchline.

play01:29

Employees don't like solutions that make their lives worse.

play01:33

For organizations, AI solutions can result

play01:36

in increased revenue, reduced cost, and improved quality,

play01:40

but there's three ways that AI is different

play01:43

from other technologies, and we'll talk about these.

play01:46

It's predictive, it's laborious, and it's prescriptive.

play01:49

So that means that for employees,

play01:51

it can result in few benefits, additional labor,

play01:54

and decreased autonomy.

play01:57

We've been doing research at Duke over the last five years,

play02:00

and we've identified six tactics

play02:02

for successfully implementing solutions.

play02:05

We've compared implementations that are successful

play02:08

versus those that are failed.

play02:10

And these six tactics are generalizable across industries.

play02:14

So I'm gonna start by presenting three examples

play02:18

of how you would use these tactics

play02:20

in these other industries.

play02:22

And then I'm gonna hand it off to Mark

play02:24

who's gonna present examples from Duke Health.

play02:27

We'll finish with Q&A.

play02:30

So here are the six tactics, and let's take one at a time.

play02:34

First, because AI is predictive,

play02:36

you need to increase benefits for end users.

play02:40

I'm gonna use an example of an AI solution

play02:43

for HR recruiters, and in particular for sourcers.

play02:47

So sourcers for this organization located

play02:50

and attracted candidates.

play02:52

They identified candidates

play02:53

with difficult-to-find technical skills,

play02:56

engaged with them to speak with them about the company,

play02:58

and then handed them off to HR interviewers

play03:01

to be interviewed.

play03:03

And the actions taken upstream by these sourcers

play03:06

had large consequences for the downstream interviewers

play03:10

because the interviewers then need to screen them

play03:13

for mutual fit.

play03:14

And then the HR closers were the ones that offered the jobs.

play03:19

So the interviewers really wanted the sourcers

play03:23

to find candidates who are more likely to accept offers

play03:27

once they were offered.

play03:29

And so they came to the AI developers and they said,

play03:32

give us a solution that will help the sourcers predict

play03:35

which candidates will be accepting the offers.

play03:39

The problem for that was that the interviewers

play03:43

were not the true end users for the tool.

play03:46

It was actually the sourcers who were gonna need

play03:48

to use the tool, and they had different needs.

play03:52

They wanted to hit their quotas by finding all candidates

play03:55

with the desired skills, regardless of how likely

play03:58

they were to accept the offer.

play04:00

So the developers were smart and they said,

play04:03

all right, so let's see what we can do to address

play04:06

the pain points of the sourcers.

play04:08

And what they found out was for the sourcers,

play04:11

a big problem was that they would hand off these candidates,

play04:15

and they would just sit there in the system for two weeks

play04:18

before the interviewers could interview them

play04:20

because the interviewers didn't have enough bandwidth.

play04:23

So the developers said, okay,

play04:25

why don't we do something different?

play04:27

We're gonna introduce

play04:29

a pre-interview assessment part of this,

play04:32

where now, candidates are gonna get handed off,

play04:34

and they're gonna have to do an assessment

play04:36

of their coding skills

play04:38

before they move to the interview stage.

play04:41

And so, you may be sitting there thinking,

play04:44

well, wait a minute, that's not data science.

play04:48

Exactly.

play04:49

And I had a developer just give me a great analogy for this.

play04:53

And that is that for any AI solution,

play04:56

it's like an ice cream sundae where the data science

play04:59

is a cherry on top.

play05:01

So if you think about your own life and all the things

play05:03

you're trying to get done,

play05:05

what you really wanna do is be able to get your regular work

play05:09

done faster and more easily.

play05:11

And that's the sundae.

play05:13

And then that's great if you can also then do

play05:17

these really cool things and do your job differently,

play05:20

that's the cherry on top.

play05:21

But you need to first give them the sundae.

play05:24

They also were smart because they rewarded the outcomes

play05:28

the solution was designed to improve.

play05:30

And so historically, the sourcers had been rewarded

play05:34

just for the number of candidates they passed on

play05:38

who fit the desired skills.

play05:40

And they changed the reward system.

play05:42

So now, they would also be rewarded

play05:44

for the number of candidates who accepted offers.

play05:48

So this sounds so easy as you guys sit back there

play05:51

in your offices and you think, of course I know how

play05:55

to think about the needs of end users.

play05:58

But what we found is that it's easier said than done

play06:01

for two reasons.

play06:03

The first is that these end users always

play06:05

bring their own requirements,

play06:07

and that complicates development.

play06:09

And interestingly, it's not usually the developers

play06:12

that have trouble with that.

play06:13

It's usually the people who come to the developers

play06:16

asking for the solutions.

play06:17

So the HR interviewers in this case,

play06:20

who say, "Oh yeah, we know the sourcers

play06:22

are the true end users, but let's not get them involved yet.

play06:26

Let's get a little further along before we bother them

play06:30

and get them involved."

play06:31

But you need to get them involved from day one.

play06:35

And then the second reason it's hard is that developers

play06:38

don't control the reward system.

play06:40

So they need to upwardly influence top managers

play06:43

in order to get the reward system changed,

play06:46

and that happened in this case also,

play06:49

but for AI implementation to be successful,

play06:52

you are gonna need to increase end user benefits

play06:55

in these two ways.

play06:57

The next thing about AI is it's very laborious,

play07:01

and you need to reduce the labor for the end users.

play07:04

I'm gonna use an example of an AI solution

play07:07

for sales people.

play07:08

The sales people in this organization

play07:10

sold manufacturers product in large quantities

play07:13

to wholesalers and retail store chains.

play07:17

And they spent a lot of their time

play07:19

building personal relationships with customers

play07:22

in order to identify new opportunities.

play07:25

So they were very busy, the salespeople,

play07:27

generating leads, making calls, responding to emails

play07:30

in order to hit their sales target.

play07:32

So they're busy people anyway,

play07:35

and now, you're asking them to do all this labor.

play07:38

It's not completely new that you've asked for feedback

play07:42

for end users.

play07:42

So for an example, in this organization,

play07:45

they had a CRM system for sales.

play07:47

And whenever they did a new iteration of development,

play07:50

they would ask for feedback from the end users.

play07:53

But what's different about AI is it requires feedback

play07:57

in all of these different steps,

play07:59

and all of you out there who are implementing AI,

play08:02

know that it takes a long time to do.

play08:04

And it's very laborious to do all of these steps.

play08:08

And so you need to figure out how to reduce the labor

play08:12

for the end users that are involved in doing these steps.

play08:15

So in this organization,

play08:17

they enlisted the head of process improvement,

play08:19

who was the one who wanted the tool in the first place,

play08:21

by the way.

play08:22

The salespeople in the beginning,

play08:23

really not very interested in this tool.

play08:26

And they got the head of process improvement to perform

play08:29

a lot of this data work originally.

play08:31

And they also initially built the model

play08:34

so it required a smaller amount of training data in order

play08:38

to show proof of concept to the salespeople

play08:40

and show them what the promise of this tool was.

play08:44

They also reduced integration work for the end users

play08:48

and they did this by automating the data inputting required

play08:52

to use the tool as much as possible.

play08:54

And they also pre-staged the interface

play08:56

to meet the salespeople's needs.

play08:58

So just to give you an example of that,

play09:01

the sales people in this organization

play09:03

did what's called white space analysis

play09:06

where for any given client,

play09:07

they would look at what are they already buying

play09:10

versus what's the full product line

play09:13

that they could be buying based on their industry

play09:15

and based on particular demographics.

play09:18

So the AI solution developers made it easier for them

play09:21

to do that white space analysis, again,

play09:23

giving them that whole ice cream sundae.

play09:27

Again, sorry, easier said than done for two reasons.

play09:31

First of all, end users are great at the data work

play09:34

because they're the ones that know the ins and outs

play09:37

of the data.

play09:37

They know the idiosyncrasies, the best outcome measures.

play09:41

So of course the developers would love

play09:43

to have their input as much as possible.

play09:46

And then when it comes to integration work,

play09:49

they're also the best.

play09:50

They know the daily workflow.

play09:51

They know where this solution should best fit in.

play09:54

And the developers know that if they enlist their help,

play09:58

then these end users will also serve as super users

play10:02

to persuade their peers to use the solution.

play10:05

So the answer is not to completely avoid

play10:08

asking end users for help.

play10:10

It's just to use the labor

play10:12

of the end users very judiciously.

play10:16

Finally, because AI is prescriptive, you need to protect the autonomy of end users. And here, I'm going to use an example of a solution for fashion buyers, from my colleague Melissa Valentine at Stanford. And let me just start by saying, it's pretty funny, but I'm gonna give you guys a lesson on fashion. If I have any friends on this webinar, they're getting a really big kick out of this right now, but it's a great example. So let's see how I do.

Fashion buyers selected and ordered which clothing would be sold to gain maximum profit. So they traveled the world, looking at the fashion runways: what are the best pieces? Then they combined their knowledge of fashion with their business sense to decide what mix of brands to buy. And what you would then see in the spring season, online or when you're in stores, is what those fashion buyers decided to buy.

So top managers had always used a detailed planning process to help control the fashion buyers' decision-making, but AI was a whole different level here, where now it was gonna use historical data that incorporated the intuition the fashion buyers had been using for years, to provide highly specific recommendations and allow managers to track whether the buyers were following them.

And so perhaps not surprisingly, the buyers said, "Wait a minute. We don't love the idea of introducing this solution that's gonna allow people outside of fashion buying, who know nothing about fashion, to be shaping and critiquing our decision making."

So the AI developers were smart. They avoided infringing on the end users' core tasks. In fashion buying, what the fashion buyers loved to do was the creative task of knowing fashion and knowing what that assortment could be. That's why they went into this job in the first place. What they didn't like was the scutwork: figuring out which vendors are gonna be the ones to supply those different fashion items. And so what the developers did was they developed a solution that allowed the buyers to dictate the assortment, but helped them allocate sizes across vendors and do the scutwork piece that they didn't like to do.

They also asked the end users to help evaluate the solution. So the end users initially said, "Thank you very much for your fancy AI solution, but I like my current planning process. I understand my technology. I'm not gonna look dumb in front of people. I really know the ins and outs. I'm not interested." And so the developers said, "Okay, that's fine. Do you mind helping us design an A/B test to evaluate the performance of your current system versus this new system?" And they did that. So then the fashion buyers would see, wow, this new system is actually pretty good. So great. Let's protect autonomy.
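An A/B test like the one the developers proposed can be evaluated with a standard two-proportion z-test, for example comparing the sell-through rate of assortments planned the old way versus with the AI tool. A sketch, with made-up numbers purely for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic comparing two proportions (e.g. sell-through rates)."""
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (success_b / n_b - success_a / n_a) / se

# Old planning process: 540 of 900 items sold through (60%).
# AI-assisted process:  630 of 900 items sold through (70%).
z = two_proportion_z(540, 900, 630, 900)
print(round(z, 2))  # well above the 1.96 threshold for p < 0.05
```

Letting the skeptical buyers co-design the test (which metric, which product categories, which season) is what made the result persuasive, even though, as Kate notes, they tend to pick the hardest tests.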

Easier said than done. Here's why. First of all, intervening around core tasks promises to yield greater gain. So if you think about it, fashion buying is high up in the value chain. If you intervene there with an AI solution, you are gonna affect everything downstream in terms of revenues and gross margins. If you intervene just around the vendor piece, it's a lot less impact. But what we found is that intervening around a circumscribed set of tasks with a solution that actually gets used is a lot more effective than building this perfect AI solution that intervenes exactly where you want to, but then doesn't get used.

The second thing that's hard is that when you do involve these end users, especially if they're not so interested in this solution to begin with, they are gonna select the most challenging tests for you to do to prove your solution. So that's not so easy, but again, we have found that it's necessary to protect autonomy if you want this to be successful.

So we suggest that you add these six tactics to your toolkit. Why? Because success doesn't arise from big data and sparkling technologies. Instead, it depends on these end users on the ground. So here's the stark reality: if you wanna be successful, you need to increase value for those who are working on the front lines in order for AI to function in the real world. So with that, I'm going to go ahead and hand this off to my colleague, Mark Sendak, who's gonna take you through the Duke examples.

- Thank you, Kate, for walking us through the six tactics. Hi, everybody. My name is Mark Sendak, and I'm speaking to you from the Duke Institute for Health Innovation, where I work as a physician intrapreneur. And I'm gonna take the six tactics that Kate just walked us through, and I'm gonna ground them in examples within healthcare, a very professional services-oriented industry.

So at Duke Health, we have two structured ways of identifying projects for innovation. On the left-hand side is our annual request for applications, where we align senior strategic priorities from our leadership with the needs of front-line workers. We've done about 100 projects through that process. And then on the right-hand side, we have more of an innovation "Shark Tank" competition, where we help commercialize IP and inventions built at the university.

So I'm gonna go through the same six tactics in the same buckets. This will likely sound familiar. You just heard about the great use cases that Kate brought up from non-healthcare industries. And I'm gonna tell you what this looks like within a health system.

So to kick things off, I wanna help orient things around patient experiences and provider experiences. So we're gonna kick things off with a patient. This could be a family member. This could be yourself, if you have experience with a chronic illness. Typically, there's some trajectory that starts on the left-hand side with a healthy adult. They may develop an early-stage chronic disease. This could be diabetes, hypertension, kidney disease. Gradually over time, that condition can worsen. The kidney function can worsen. Liver function can worsen. And ultimately, for some patients, the organ function gets so bad that we need to start looking at a way to replace that organ.

And so what that looks like at a place like Duke Health is that the individual patient is gonna end up interacting with our system differently. They may start by seeing a primary care physician, but as the kidney disease, liver disease, whatever organ it is, deteriorates, they need to see additional specialists. And ultimately, on the right-hand side, you see that organ transplantation is a very high-cost, multidisciplinary effort, where you have very specialized surgeons and immunotherapies. So it really goes from upstream to downstream, and gets more complex in the number of people involved.

So a lot of the problems that we work on, and that we build AI or ML solutions for, are about preventing some downstream progression and preventing the bad outcomes from happening. But like Kate mentioned with the interviewers and the sourcers in the HR process, if we're trying to prevent downstream progression, that often requires that there's somebody earlier in the process, which is really often the primary care doctor, who is intervening earlier. And so this may mean that the primary care doc has to do more to manage the chronic diseases, to really make sure that these conditions don't worsen.

So we all here may have heard how burdened healthcare workers are. So I'm gonna start with the example of a primary care physician. Primary care docs really do have to manage everything. So we'll start off by talking about kidney disease, where the kidney disease may progress. A patient may have to go see a kidney specialist, and the kidney specialist may tell the primary care doc, "You could have done more to prevent the kidney disease progression."

Same for patients with diabetes. There's gonna be endocrinologists who see really late-stage diabetes. Maybe they see kidney disease. You can lose your vision, retinal complications from diabetes. So you're gonna see endocrinologists who tell the primary care docs, "You could have done more earlier to manage this condition. You could have done more, and earlier, to manage heart disease, to manage liver disease, to manage lung disease." The list goes on.

So at the end of the day, you get these upstream workers who are really burdened by all of these requests from the downstream specialists, telling them, "Hey, we could have done more earlier and prevented these downstream complications."

So when we're building AI, this really does change the tactics. And so this first tactic is identifying who your end user is, and addressing their problems. So let's go back to our primary care doc. We're gonna use the example of kidney disease, where there's nephrologists, who are specialists in caring for kidney disease patients, and they're telling the primary care docs, "You could have done more earlier to manage this kidney disease." But we have to take a step back and think about, okay, what are the primary care doctors' pain points? First off, they have limited time to address all of the conditions. So what we did with the AI is we helped a specialist, a kidney specialist, send recommendations directly to the PCP to help them manage that kidney disease, to help relieve some of the burden of having to think about that specific condition.

Another problem for PCPs is that they go from one 15-minute visit to the next 15-minute visit. And that's day after day after day. I'm sure folks in the audience have experienced this. When you walk into a doctor's appointment, you often have to reiterate the same thing multiple times, because there's not a lot of opportunity to prepare for these visits. And so what we did with the AI is that the specialist sends the recommendation immediately before the visit. That way, the information is teed up for the PCP to be able to act on it.

And then lastly, there's the fact that, like I mentioned before, PCPs go in and out of visits all day, and they may not have the opportunity to really follow up with patients. For that patient with kidney disease, where they recommended a change in the medication dose, or recommended that they see a dietician or a kidney specialist, they may not actually know whether that happens. So what we did with the AI is we also complemented the tool with care managers who would actually follow up with patients.

The second tactic, building off of Kate's examples, is to align the outcomes and share in the reward with your true end users, which in our case is the primary care doctor. The reality is, if you are able to prevent progression of kidney disease, you actually end up saving a health insurer or a payer a lot of money. So to put this in dollar estimates, a dialysis crash start that happens in the hospital can cost upward of $80,000. And you've gotta imagine, if you have these kidney specialists telling the PCPs, "This could have been avoided if we acted sooner," ultimately, we want to align the incentives so the PCP can also receive some of the shared savings from avoiding these bad outcomes for kidney disease. So we actually did work with our health system leadership to help align incentives for the downstream and upstream users.
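The incentive arithmetic here is simple to sketch: if each avoided dialysis crash start saves on the order of $80,000, even a modest shared-savings split gives the upstream PCPs a concrete stake. The prevention count and split below are hypothetical illustrations, not Duke's actual contract terms:

```python
CRASH_START_COST = 80_000   # approximate hospital cost of one dialysis crash start
PREVENTED_PER_YEAR = 12     # hypothetical: crash starts avoided via earlier management
PCP_SHARE = 0.25            # hypothetical shared-savings split going to upstream PCPs

total_savings = CRASH_START_COST * PREVENTED_PER_YEAR
pcp_reward = total_savings * PCP_SHARE
print(f"payer saves ${total_savings:,}; PCPs share ${pcp_reward:,.0f}")
```

The design point is that the reward flows to the people doing the extra upstream work, not only to the payer or the downstream specialty that sees the benefit.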

The next bucket of tactics relates to the labor involved in building and validating AI systems. So let's go back to our primary care doc example. We have all of these downstream specialists telling the primary care doc, "Hey, you could have managed this chronic disease earlier and prevented these bad outcomes." Unfortunately, in healthcare, those primary care docs aren't the only generalists who get that kind of feedback all the time. Another setting where this happens is emergency departments, probably another setting that many folks in this audience are familiar with. You show up, and the ED doc has to manage whatever comes their way.

So for example, you have a certain type of heart attack, an NSTEMI, that is lower risk, and patients don't have to go to the ICU when they have this type of heart attack. They have a better chance of doing well in the hospital and returning home. But what happens is that the ED docs, in the time pressure that they face going from one patient to the next, are just trying to make sure that patients get the care they need. So they send a lot of these patients to the intensive care unit. So you have intensive care unit docs going down to the ED and saying, "Hey, if you identified this disease sooner and recognized that this was a low-risk condition, you actually don't need to send these patients to the ICU. That's a really high-cost resource, and we can use that bed for somebody else."

Something similar happens for sepsis. You have the hospital specialists coming down to the ED saying, "Hey, if you had identified and started treating sepsis sooner, we could have avoided the bad outcomes for our patients in the hospital." The ED docs are even going upstream from their own setting, trying to say, "Okay, can we change EMS routing?" Because with EMS, if you're able to route patients to another hospital, you can make things more efficient. When there's patients backing up in the waiting room, and you're waiting hours to see folks, people are going to the ED docs saying, "Hey, can you improve telemedicine within your setting? That way, we can see people in the waiting room more quickly." And then the last example that we'll build off of relates to blood clots that go to your lungs, called pulmonary embolisms. And once again, this is specialists trying to tell the ED docs, "Hey, these low-risk patients, they don't need to come to the hospital. You could actually send them back home."

So these ED docs, similar to the primary care docs, really feel like they carry the weight of the world. And I know what this looks like personally; my wife is a pediatrician and a primary care doc. And literally, day after day, you're only able to manage a very small portion of the things that you encounter. So when we're building technologies to try to improve how we manage these conditions, we have to be thinking about how we deploy the Mack Trucks to remove some of that labor that's falling on these upstream users, who are inundated with requests to do their jobs better.

So tactic three is: how do we reduce the labor on those upstream end users to build out the datasets for AI model development? What we did in this case is we actually asked the downstream specialists who care for pulmonary embolism patients to adjudicate the cases, to make sure that we were defining our outcome accurately. Outcome definition is probably one of the most important things that you do when building an AI model. These downstream specialists also did all the QA, quality assurance and quality control, for the inputs of the model, making sure that the data that was being fed to the model to learn from was all valid.
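Part of the input QA/QC that the specialists took on can be expressed as automated plausibility checks on each model feature: range checks, missingness, unit sanity. A minimal sketch; the field names and ranges below are hypothetical, not Duke's actual pipeline:

```python
# Plausibility ranges for a few model inputs (hypothetical values).
VALID_RANGES = {
    "heart_rate": (20, 250),    # beats per minute
    "creatinine": (0.1, 20.0),  # mg/dL
    "age": (0, 120),            # years
}

def flag_invalid(record):
    """Return the fields whose values are missing or outside plausible ranges."""
    bad = []
    for field, (lo, hi) in VALID_RANGES.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            bad.append(field)
    return bad

bad_fields = flag_invalid({"heart_rate": 72, "creatinine": 48.0, "age": 61})
print(bad_fields)  # creatinine 48.0 is implausible -> ['creatinine']
```

Automating the obvious checks narrows what actually needs specialist eyes, which is exactly the point of shifting this labor off the front-line users.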

And then the third thing is that we often run our models in what we call a silent mode. This is when we run the model prospectively without actually exposing the end user to the algorithm, just to make sure that it works in a production setting. And our beta testers during the silent mode were the specialists. They took on that labor. So really, we do everything we can to minimize anything that we ask these inundated upstream workers to take on as we develop the AI solutions.
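Silent mode amounts to scoring patients in production and logging the outputs without surfacing them to clinicians, so the beta-testing specialists can audit performance first. A sketch of the pattern; the toy model, record fields, and in-memory log are stand-ins for real infrastructure:

```python
import datetime

audit_log = []  # in practice, a database reviewed by the specialist beta testers

def score_silently(model, patient):
    """Run the model prospectively but only log the result -- never alert."""
    risk = model(patient)
    audit_log.append({
        "patient_id": patient["id"],
        "risk": risk,
        "scored_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    # Intentionally no notification: end users are not exposed to the score.
    return None

toy_model = lambda p: 0.9 if p["creatinine"] > 4.0 else 0.1
score_silently(toy_model, {"id": "A1", "creatinine": 5.2})
print(audit_log[0]["risk"])  # logged for review, not alerted
```

Only after the logged predictions hold up against real prospective outcomes does the alerting path get switched on for front-line users.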

The next tactic is, when you actually go to integrate, how do you minimize the burden of using the AI tool? So going back to our pulmonary embolism use case, we have the ED docs who carry the weight of the world, and they're being told, "Hey, these low-risk patients with pulmonary embolisms, you can actually send them home." The challenge here is that the ED docs are reluctant to send patients home unless they know that the patient is gonna get the care they need outside of the hospital.

So what did we do? We built the AI solution in a way where, for patients who are identified as having this low-risk pulmonary embolism, there's a notification sent to a care manager, who schedules an in-clinic appointment three to five days after that emergency department encounter. So literally, the notification is sent to schedule the appointment, and then the ED doc is told, "Hey, this is a low-risk patient. We've already coordinated things to make sure that the patient can be safely seen at home, and this patient can be safely discharged."
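That workflow, classify risk, auto-notify a care manager to book a three-to-five-day follow-up, then tell the ED doc the discharge is already coordinated, can be sketched as a simple dispatch function. The threshold and messaging plumbing below are hypothetical, not the deployed system:

```python
LOW_RISK_THRESHOLD = 0.2  # hypothetical cutoff from the PE risk model

def route_pe_patient(patient_id, risk_score, notify):
    """Dispatch a pulmonary embolism patient based on model risk score."""
    if risk_score < LOW_RISK_THRESHOLD:
        # Care manager books an in-clinic visit 3-5 days after the ED encounter.
        notify("care_manager", f"Schedule {patient_id} for clinic in 3-5 days")
        # The ED doc hears about it only after follow-up is already arranged.
        notify("ed_doc", f"{patient_id} is low risk; follow-up is coordinated, "
                         "safe to discharge home")
        return "discharge_home"
    notify("ed_doc", f"{patient_id} is not low risk; admit per usual pathway")
    return "admit"

sent = []
decision = route_pe_patient("P42", 0.08, lambda who, msg: sent.append((who, msg)))
print(decision, len(sent))  # discharge_home 2
```

The ordering is the tactic: the tool does the coordination first, so the burden on the ED doc at decision time is close to zero.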

So for the first two use cases, I talked about healthcare examples, which is a big part of my life, working at the Duke Institute for Health Innovation. The last two tactics, I'm gonna ground in experience from a different part of my life, as the father of two young girls. So these are characters I've gotten to know really well over the last five, six years. You can probably imagine, with two daughters, how important "Frozen" is to our family.

So on the right-hand side, you have Elsa. Elsa is really one of the main drivers of the "Frozen" series, and hence the name "Frozen": it comes from the fact that Elsa has magical powers to freeze anything. She can turn anything to ice. And so she has these magical powers. She is seen as an outcast in her community because of these magical powers. But at her core, she identifies with these powers and with what they give her the ability to do. So she has this tension of having magical powers, not being really warmly accepted by her community, and she's a princess. She's the oldest daughter in a royal family. And she comes to terms with the fact that she really is not eager to take on a leadership role within her community and within her kingdom.

Thankfully, she has this amazing younger sister, Anna. Anna has grown up in Elsa's shadow in some ways, and looks up to her older sister. She is fascinated with the magical powers of her older sister, but she understands that her older sister needs protection and that she's seen as an outcast. And the things that her older sister identifies with are really seen as scary or intimidating by others within the kingdom. And Anna is eager to take on a leadership role in her kingdom, and she emerges in the series as the heir to the throne.

So we have Elsas and Annas throughout our own organizations. And I'll try to map these to the org structure that Kate went through, where there are many examples of front-line workers with magical powers that we are trying to further equip with tools that we're significantly investing in. These tools can be intimidating. They can make folks feel like they have the ability to do things that they wouldn't otherwise be able to do as leaders in the organization. And those front-line workers need protection from managers and organizational leaders who more fit the character of Anna.

And so this is one of the most tense scenes in the first "Frozen" movie, where Elsa ends up leaving the kingdom. She seeks solace in a large ice castle that she builds herself. And many of the villagers go out to try to encourage her to come back and stop using her power. And for Elsa, this is really scary. She has to fight back against the villagers to keep her core tasks and her ability to continue using the powers that she has. And her sister ends up intervening, and helping mediate the need for Elsa to be able to really embrace and use her special powers.

The last tactic is modeled off of the sequel in the series, "Frozen II," where Elsa and Anna develop a common framework and model, defining under which conditions the magical powers can most advance the interests of the kingdom. How should the magical powers be used versus not used? Where should Elsa focus her efforts, and what should she be prioritizing? What are the parameters by which she should be using her powers? And the second movie really brings the two sisters together, where you have this highly equipped, talented front line out there in the community using magical powers, and a leader in the kingdom who is protecting and making sure that her sister's powers are respected within the community.

So I hope that this helps map these different types of roles and these different types of tactics to all of our organizations. And we're gonna review the six tactics. We bucket them into predictive AI, laborious AI, and prescriptive AI. And I hope the examples that we've given you are helpful: examples from non-healthcare industries like fashion and hiring, examples grounded in healthcare, as well as other types of characters that folks on the call may be familiar with. So I'm gonna stop sharing my screen, and we're gonna be going to a Q&A.

- Great, thank you so much. Great presentation. Glad to have you all back with us. We'd like to remind our audience that we welcome your questions. We've had some come in during the presentation, but we would love to have more. So you can submit those using the questions module on the GoToWebinar control panel.

So just to kick things off, and I'll address this to whoever wants to grab it: are there important differences in how you apply the tactics for AI solutions that are built internally versus those that are purchased from an external vendor? I dunno if you wanna grab that, Mark, maybe?

- Yeah, so I'll kick off, and then Suresh, if there's anything you wanna add. So I'll talk through one of our examples. We talked about kidney disease, and that was an example where we used an AI tool that was built externally. And so even within the examples we've talked about, there's both: some of the algorithms, we curate our own data for and then build internally; other times, there are well-validated external studies or products on the market that we'll then integrate.

I would say one piece that is different when you build the tool internally is that there's a lot more of the labor required to validate the inputs and validate the outputs. So we have to really bring together the clinical specialists, agree on what the relevant information is, what's the right way to define the outcome, and adjudicate those outcomes. Oftentimes, we have multiple comparisons, so there's like three variations of how we could define it. Whereas when we bring in an externally developed solution, a lot of that expert consensus is already built. And typically, it's a specialist or an expert within our own setting who's handing us a publication, saying, "Hey, this is well-accepted in our profession as the gold standard for what other sites are doing in terms of risk prediction. Can we use this?" So that's the major difference: the amount of end user labor going into the validation of the inputs and outputs. But the other tactics are relatively consistent in how we do the workflow integration and everything else.

- Well, just to follow up on that question: is there a difference in the extent to which users along the process want a better understanding of how the tool is working? You probably have a bit more visibility into your own. How does that play out with externally purchased tools?

- So you wanna go, Suresh?

- Yeah, I think that when we start with an external tool, there is that information asymmetry. So we need to certainly address that right from the beginning, in terms of: is it the right set of outcomes? What are the limitations? I think those things have to be captured in a structured form, so that this is incorporated into the education around that specific tool that's being developed and validated, and then presented both to the end user and to all the other users and stakeholders who will touch that tool. Otherwise, it becomes difficult to engage and drive adoption and see the returns. Mark, anything else?

- Yeah, one thing to build off Suresh's comment: one of the ways that we address the information asymmetry is by doing the silent trials. So whether it's internal or external, every clinician in our setting, before they use a tool prospectively for their own patient care, wants to know: how does this work for my patients? And so we anticipate that, and it doesn't matter whether we built the tool or the tool was built elsewhere. Before we actually go to folks and try to get them ready for a rollout, we try to give them the information, saying, "Hey, by the way, we've already done the analysis. This is how it works in this setting."

- Right. Okay, great. Thank you, Mark. Now, here's a very general advice question, which I'm gonna kick to Kate, since you've worked with a lot of practitioners. This audience member is a business manager working with a team of data scientists and engineers. What advice do you have for a non-technical person collaborating with AI/ML experts so that we are all speaking the same language? And that's a very broad question, but I think you probably understand the sentiment that it's coming from, and any thoughts you have.

- And so, first of all, this is what's known as a cold call, guys, in business school, and now I'm on the receiving end. So I can see how this feels from the other side. So is the questioner asking as the person with the non-technical background?

- Yeah, they are a business manager working with that team and looking for-

- Okay, so I think the easiest thing I would say is that this is where boundary spanners and brokers are really important. And in organizations, product managers often play this role between, on the one hand, top managers and domain experts, and on the other hand, the AI developers. So I guess I would say: of course, as a manager, get yourself up to speed as much as you can. Learn as much about the technical side as you can. But the reality is that you're not gonna be able to learn everything you need to know to be really effective in the detailed development. And that's where people like product managers are helpful.

- Right. Okay, great. Thank you. And then another question, which I'll toss out to whoever wants to grab it: how do you measure the impact of these tactics? How do you measure adoption and contribution of AI technology?

- So this is interesting, because I was recently asked by someone who's at a research site, "Tell me, what is a good adoption rate?" And that just sparked me reaching out to my networks and asking. What I found interesting is that sometimes companies can really measure this very closely at the end-user level. They can tell: is the end user hovering over the solution? How much are they engaging? They have all these very detailed measurements. And then at a place like Duke, they may be measuring this by looking at the impact more than the very detailed use. So maybe Suresh or Mark, you guys could give a quick answer: how do you measure impact in your setting?

- Yeah.

- I think of- - I'll go. Go ahead, Mark.

- I think of the phrase "competing for eyeballs." And I would say that that is the more typical lens when you're trying to present something to somebody and you're competing for the real estate where they're looking. So there are a lot of implementations that fire popups or notifications. We generally try to avoid that type of implementation. Instead, we try to structure our products so that somebody is responsible for using the tool and acting on what they are presented, with very little distraction from their time spent using that tool. So that's why we typically go to the next step and ask: how do we measure the effectiveness of the tool? Because it's built in that there's labor involved in using the tool, and that labor is typically dedicated to it.
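The end-user-level measurement Kate describes (views, hovers, actions taken on the tool's output) can start from simple event logs. A hedged sketch; the event names and the adoption-rate definition are assumptions, not any specific vendor's telemetry:

```python
# Illustrative engagement/adoption metrics from raw event logs.
# Event vocabulary ("viewed", "hovered", "acted") is invented here.
from collections import Counter

def adoption_metrics(events):
    """events: list of (user, action) tuples, e.g. ('u1', 'acted')."""
    users = {u for u, _ in events}
    acted = {u for u, a in events if a == "acted"}
    by_action = Counter(a for _, a in events)
    return {
        "users_seen": len(users),
        "users_acted": len(acted),
        # share of exposed users who acted on the tool's output
        "adoption_rate": len(acted) / len(users) if users else 0.0,
        "events_by_action": dict(by_action),
    }

events = [
    ("u1", "viewed"), ("u1", "hovered"), ("u1", "acted"),
    ("u2", "viewed"), ("u2", "hovered"),
    ("u3", "viewed"), ("u3", "acted"),
]
m = adoption_metrics(events)
```

In a setting like the one Mark describes, where tool use is a dedicated responsibility, these engagement counts would be a starting point; effectiveness would be measured downstream.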

- Right. Okay. Well, keeping on this theme of impact, and maybe rolling up to organizational politics a bit: Suresh, given your role there, I think this might be a good one for you. Given the enormous investments that your organization is making, how do you demonstrate value and impact to organizational leaders to keep these projects, this work, going?

- I will share what we've been doing here and how we've been demonstrating that, in a very practical way. We are an innovation team, so we have very clear guiding principles by which we take on projects. For every single project, idea, or concept that we touch, certainly the very first aspect is: a solution is built to show value. It's built to integrate, in the sense that most of these AI tools do not generate value if they do not get integrated into the clinical workflow. So adoption and continuous use is an important aspect of it. That's why the six items we went through today, on how to really engage and drive adoption, are such an important part of what we discussed. And then certainly we look at the ability to scale as well, doing all of this in a responsible fashion. So when we take on a project, right at the outset we clearly define the outcomes that we are going after, along with the right set of measures, specific milestones, and the risks and risk mitigation steps. Our evaluation pieces typically fall into four different categories. One is the clinical outcomes that we are looking for. The second is the process and adoption measures: efficiency measures, those types of attributes, on a project-specific basis. Third, the cost and economic aspects certainly figure in. And the fourth falls into equity: how do we really address health disparities as well? These are the four categories that we look at. But certainly, right through the whole AI adoption piece, we also ask: can we reduce the burden, the workload, on the clinicians who are using those tools? Otherwise, adoption just goes away. So these are the four basic categories that we look at, and we demonstrate value on a project-by-project basis. Thank you.
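Suresh's practice of defining outcomes across the four categories right at the outset could be captured in a small guard like this. The category names follow his list; the project name and individual measures are hypothetical:

```python
# Sketch: require a measure for each of the four evaluation categories
# (clinical outcomes, process/adoption, cost/economics, equity) before
# a project starts. Measures and project name below are invented.

EVALUATION_CATEGORIES = (
    "clinical_outcomes",
    "process_and_adoption",
    "cost_and_economics",
    "equity",
)

def define_evaluation_plan(project, measures):
    """Reject a project plan that leaves any category unmeasured."""
    missing = [c for c in EVALUATION_CATEGORIES if c not in measures]
    if missing:
        raise ValueError(f"{project}: no measures for {missing}")
    return {"project": project, "measures": measures}

plan = define_evaluation_plan(
    "deterioration-alert tool",  # hypothetical project
    {
        "clinical_outcomes": ["time to treatment"],
        "process_and_adoption": ["weekly active clinician users"],
        "cost_and_economics": ["cost per case reviewed"],
        "equity": ["performance gap across patient subgroups"],
    },
)
```

The point of the guard is the discipline, not the data structure: a project that cannot name an equity measure, say, is not ready to start.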

- Okay, great. Thank you. Now I have another question that I think we can look at both generally, and I might ask Kate to give us her thoughts, and then also a little bit of what Duke is doing in this area. And that is: how important is it to monitor model drift in these kinds of implementations? I think we'd be interested to hear from Duke whether you have any procedures by which front-line users give feedback to the team. But also, Kate, I'm interested in your overall sense of how important that is to do, and whether you think front-line users have a role to play.

- So I guess what I'll say is that we have absolutely seen that a solution gets put out there by developers, and you really have no idea who is gonna grab hold of that solution and wanna use it, and they may not understand what they're using it for. So when I think of model drift, one thing is: is it even the people that you designed this for who end up using it? And if not, you need to find out who is using it, what they are using it for, and whether they really understand the tool well enough to be using it accurately. And then the second thing on model drift, when it is being used by the intended users, is: what can you do to continue to iterate as you learn, as you see what the end users are actually doing? How can you feed that back into the solution? I think there's a big myth in AI that it's always a learning model. And I've just seen this again and again: it's not always a learning model. In fact, in many cases right now, what you see out there in the wild is not automatically a learning model. What it really is is people using it, the development team seeing what they're doing, and then improving the tool as a result. So that's what I've seen. I don't know, Mark and Suresh, if you wanna say how you work with model drift at Duke?

- So I wanna build off of Kate. You can imagine drift on two sides: on one hand, you have drift in the technology infrastructure that the AI is plugged into; on the other hand, you can have drift in the process and people and workflows that are putting the tool into practice. When it comes to the technology, we see changes all the time. There are new ways to measure labs. There are new medications that are given. There are new monitors that are purchased. Our small innovation team doesn't control the supply chain of Duke Health. So we have to be very proactive about monitoring changes in data inputs and representation, and updating, typically on a biannual basis, the way that we map our model to the data input sources. On the other hand, to give some examples of process and workflow drift: one example is where you're seeing successful adoption in one setting, and, going to Kate's point, somebody starts trying to use the tool for an adjacent use case where it may or may not be appropriate. Another example: for a lot of our tools, one of the follow-up actions is some communication between different types of specialists. And there are instances where we launch a tool with one expectation around communication, but that deteriorates over time, and instead of calling people, people are just sending asynchronous texts. So we have to monitor: are the downstream actions still being conducted with the same rigor? And how do we continue to train people, continue to communicate why the structure is important? All of those things.
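The input-drift monitoring Mark describes, watching data inputs change as labs, medications, and monitors turn over, might be sketched as a baseline-versus-recent comparison. The feature names, the relative-shift test, and the tolerance below are illustrative assumptions, not Duke's actual monitoring logic:

```python
# Sketch: flag model inputs whose recent summary statistics have moved
# away from the training-time baseline, or which have disappeared
# entirely (e.g., a lab renamed, a monitor replaced).

def drift_flags(baseline, recent, tol=0.25):
    """baseline/recent: {feature: mean value}; flag relative shifts > tol."""
    flags = {}
    for feat, base in baseline.items():
        cur = recent.get(feat)
        if cur is None:                      # input vanished upstream
            flags[feat] = "missing"
        elif base and abs(cur - base) / abs(base) > tol:
            flags[feat] = "shifted"
    return flags

baseline = {"lactate_mean": 2.0, "hr_mean": 88.0, "old_monitor_spo2": 96.0}
recent = {"lactate_mean": 2.1, "hr_mean": 120.0}  # new monitors, new casemix
flags = drift_flags(baseline, recent)
```

A check like this, run on a schedule, is one way to trigger the periodic re-mapping of model inputs that Mark mentions; the process-and-workflow side of drift has no such automated shortcut.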

- Got it. Thank you. And let's see, I'm not sure if this next question is relevant to you guys or not, but I will toss it out there; you may have opinions, if not experience. "Can you comment on the pros and cons of using synthetic data for prototyping and accelerating model development?"

- Okay, guys, so I'm a qualitative researcher. No way am I touching this one. I'd go right to my Duke tech people.

- Do you wanna start, Suresh, or do you want me to?

- Please.

- Okay, so I would say this about where we use synthetic data or de-identified data: there's a spectrum of how proprietary or confidential data is. Whenever you're getting to the point of needing to validate something for operational use, you have to be using the identified, proprietary, confidential data. But you can gradually progress across that spectrum. Environments in which we are very comfortable using synthetic or completely de-identified data include training. So when we're bringing new people into the organization and trying to teach them our processes, our workflows, how to work with data, synthetic data can be perfectly fine. The other is just testing of the technology. When we take tools that we've built internally and we're trying to validate them in new environments, we can install them in a new environment and then run them on synthetic or test data. It's kind of like a canned problem set, where we know the inputs and we know the outputs, and we're just making sure that it functions. But I would say that the closer you get to needing to validate the clinical utility, the more you need to start getting into confidential, proprietary datasets.
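Mark's "canned problem set" for technology testing is essentially a smoke test on synthetic records with known expected outputs. A sketch under that assumption; the scoring rule here is a stand-in, not the real model:

```python
# Sketch: verify a freshly installed tool functions by running it on
# synthetic records with known expected outputs. No real patient data
# is involved; the scoring function is a hypothetical stand-in.

def score(record):
    """Stand-in for the deployed model's scoring function."""
    return 1 if record["lactate"] > 4.0 else 0

CANNED_CASES = [  # synthetic inputs paired with their known outputs
    ({"lactate": 6.2}, 1),
    ({"lactate": 1.1}, 0),
]

def smoke_test(score_fn, cases):
    """True only if every canned case reproduces its known output."""
    return all(score_fn(inp) == expected for inp, expected in cases)

ok = smoke_test(score, CANNED_CASES)
```

Passing a test like this says nothing about clinical utility; per Mark's point, that validation still requires the real, confidential data.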

- Got it. Okay. Thank you.

- (indistinct) Another use for synthetic data is workforce development, in terms of training and educating. We have found significant value add there, because that's an important aspect of this work.

- Great, and that was a perfect transition, because the next question I had teed up was in fact about training and educating users. So here it is. We've heard that it's important for front-line users to understand the nature of AI decision-support output: that it's a prediction with a certain level of confidence, not a so-called right answer, which people who maybe don't know a lot about AI tend to think it's spitting out. How important do you think that kind of understanding is for the sort of decision-support tools that we have been talking about in this webinar? Maybe Kate can take that broader question first.

- Yeah, why don't I start, and then I'll hand it off to you guys for specific examples of what you do at Duke around training. So I think this falls into the bigger category of AI explainability, and what it is that the end user really needs to understand. Oftentimes, what happens with AI outputs is that there's a mismatch between what the AI is saying and what the user knows from their own experience. In fact, that's why we're building these solutions in the first place, but it can make it very difficult for the end user to trust the AI recommendation. So one thing I would say is that it depends on what domain you're in. Healthcare is different than, for example, a project I have with some colleagues from Harvard on fashion allocation. There, we're not so worried if the end user doesn't understand; they can just go with it, fine; that's not gonna affect anyone's life chances. In healthcare, it's a different situation, so you need a different level of explainability. One thing that I've seen that's really interesting, in some work at MIT, is giving the end users a simulation during training so they can see where the AI solution is very accurate and where it is less accurate. That way, as end users, they can get a feel for where they should be overruling the tool.
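The simulation idea Kate mentions could be as simple as reporting the tool's accuracy broken out by case type during training, so users learn where to trust it and where to overrule it. A sketch with invented subgroups, not the MIT study's actual method:

```python
# Sketch: per-subgroup accuracy, shown to end users during training so
# they see where the tool is strong and where to overrule it.
# Subgroup names and the toy cases below are invented.

def accuracy_by_subgroup(cases):
    """cases: list of (subgroup, prediction, truth) triples."""
    totals, correct = {}, {}
    for group, pred, truth in cases:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

cases = [
    ("typical_presentation", 1, 1), ("typical_presentation", 0, 0),
    ("typical_presentation", 1, 1), ("typical_presentation", 1, 1),
    ("atypical_presentation", 1, 0), ("atypical_presentation", 0, 1),
    ("atypical_presentation", 1, 1),
]
acc = accuracy_by_subgroup(cases)
```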

But with that, Mark and Suresh, maybe you could talk about what you guys do at Duke around this?

- Yeah, please.

- So I can start.

On a similar spectrum: Kate, I'm from Northern California and have a lot of friends who work in big tech, and there are industries where most algorithm development is done on embeddings of the raw data, where nobody on the engineering team can tell you what any one individual feature even is, because it's some mapping of many different transformations of many different raw data elements. That was completely foreign to me. So even the explainability or interpretability of a model input, I would say, is almost non-negotiable for us: we have to be able to tell clinicians what is used in the algorithm. What are the discrete data elements? Here's a list of them; here's what their distributions look like in the population that the model was trained on. So that's one piece. The other piece, and we're actually gonna be working with Kate on this in the upcoming year, is doing extensive interviews and prototyping of documentation for the different types of stakeholders involved in the adoption process. An end user may want to be able to, per se, double-click and see additional information about the algorithm: what the indicated use is, what the population demographics are for validation of the tool. Whereas a business unit leader who's making the decision, "Do I move forward with adoption or not?", may need much more extensive validation studies. And the other thing, too, is that professionally, there are norms in healthcare for how to validate and disseminate literature related to new innovations. So I think some of this has to map to the norms within your own industry for how you build credibility in a tool that's going to be adopted by an organization.

- Got it. Thank you. Well, we have just a few minutes left, so I wanted to give you a chance for some parting thoughts. Kate, do you wanna put a cap on this for us?

- Yeah. (laughs) Okay, I'll try and tie this up. I think the reason we're so excited about doing this research is that AI is such an important space, and it's so important that you're all working in this area, because it has a huge opportunity to impact the world in a positive way. The reality is that, right now, the performance is not matching where we want it to be. So we all need to be learning from one another, taking on new ideas, iterating. Developers: I'm sure it feels like every day you solve something, and the next day you wake up, get hit by a ton of bricks, and have to start over. So try, fail, dust yourselves off, iterate. C-suite leaders: do what you can to support the development teams and remove any roadblocks you can. Researchers: share your findings. Funders: keep funding. But as we do all that, I think what's really important is not to forget about the people on the front lines, because they're the ones who are going to bring these AI solutions into the real world. Thank you.

- Absolutely. Thank you, Kate. Well, that was great. Thank you, Kate, Mark, and Suresh, for sharing your insights. This has been a wonderful and really informative hour for us. I also wanted to thank Five9, our sponsor today. For our audience, thank you very much for joining us, and we hope you'll join us again for another MIT SMR webinar.


Related Tags: AI Adoption, Frontline Workers, Healthcare AI, Workflow Improvement, Autonomy in AI, User Benefits, Data Science, MIT Research, Duke Health