Adopting AI: Ensuring Business Readiness

MIT Sloan Management Review
24 Sept 2021 · 58:32

Summary

TL;DR: The webinar "Adopting AI: Ensuring Business Readiness" discusses the importance of artificial intelligence to organizational futures. It explores selecting the right problems to solve with AI, launching AI initiatives, and the need for oversight and understanding of AI risks. Industry experts from diverse fields, including healthcare, share insights on AI use cases, challenges, and strategies for successful AI integration, emphasizing the need for robust data infrastructure, a skilled workforce, and compliance with evolving regulations.

Takeaways

  • 🤖 AI algorithms are abstract and probabilistic, making them complex and imprecise by nature.
  • 🔍 'Explainable AI' techniques can shed light on AI algorithms' predictions but cannot determine fairness or justice.
  • 🧠 AI systems are adaptive and learn from data, but lack creativity and may revert to known patterns in unforeseen circumstances.
  • 📈 AI systems are typically deployed at scale, which can magnify small errors and require robust feedback loops.
  • 👀 AI algorithms are impressionable and know only what they've been exposed to during training and production.
  • 🚫 AI algorithms can inadvertently pick up and reinforce biases present in the training data.
  • 🛠️ It's crucial to define operating conditions for AI systems and engineer safety controls into their processes.
  • 💡 AI solutions must be adopted and scaled effectively, with a focus on value creation and strategic alignment.
  • 🌐 AI and machine learning in healthcare hold great promise for predictive analytics, personalized care, and efficiency improvements.
  • 🧬 Healthcare data is vast and complex, requiring advanced NLP technologies to unlock valuable insights.
  • 🔄 The success of AI in healthcare relies on a collaborative effort between data scientists, clinicians, and end-users.

Q & A

  • What are the six characteristics of AI algorithms that require additional business due diligence?

    -The six characteristics are: 1) AI algorithms are abstract with complex inner workings; 2) They are probabilistic systems with imprecise outputs; 3) They are adaptive and respond to changes in data input; 4) They are not creative and revert to known patterns in unforeseen circumstances; 5) They are typically deployed at scale, which can magnify small errors; 6) They are impressionable and learn from the data they are exposed to during training and production.

  • How can organizations ensure their AI solutions deliver the intended outcomes?

    -Organizations can ensure intended outcomes by establishing robust feedback loops, clearly defining operating conditions, engineering safety controls, monitoring actual usage alongside intended use, and educating users on how the application is meant to be used.
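Monitoring actual usage alongside intended use can be partly automated by comparing production inputs against the training distribution. As a hedged illustration (my sketch, not a method described by the panel), a population stability index (PSI) check on a single feature, with synthetic data and the common rule-of-thumb thresholds as assumptions:

```python
# Minimal sketch: flag input drift by comparing a production feature's
# distribution against the training data. Data and thresholds are illustrative.
import numpy as np

def psi(train, prod, bins=10):
    """Population Stability Index between two samples of one feature."""
    edges = np.histogram_bin_edges(train, bins=bins)
    t, _ = np.histogram(train, bins=edges)
    p, _ = np.histogram(prod, bins=edges)
    t = np.clip(t / t.sum(), 1e-6, None)   # avoid log(0) in empty bins
    p = np.clip(p / p.sum(), 1e-6, None)
    return float(np.sum((p - t) * np.log(p / t)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 5000)
stable = rng.normal(0.0, 1.0, 5000)        # usage matches training conditions
shifted = rng.normal(1.5, 1.0, 5000)       # user behavior has changed

assert psi(train, stable) < 0.1            # rule of thumb: no meaningful drift
assert psi(train, shifted) > 0.25          # rule of thumb: investigate
```

A check like this is one concrete form of the "robust feedback loop" the answer describes: it tells the operators, not the model, when the operating conditions have moved away from what the system was trained on.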

  • What are some challenges in adopting AI solutions in healthcare?

    -Challenges include technical issues such as interpretability, bias, and drift, as well as non-technical issues like clinical validation, workflow integration, privacy concerns, data governance, and deciding whether to build or buy AI solutions.

  • What is the importance of explainable AI in business applications?

    -Explainable AI is crucial as it provides insights into which factors most influence an algorithm's predictions. This helps in making informed decisions about whether those factors are fair, just, and aligned with the business's strategic goals.
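One common way to surface which factors most influence a model's predictions is permutation importance. A minimal sketch with scikit-learn, where the synthetic data and feature names ("tenure", "income", "age") are invented for illustration and not taken from the webinar:

```python
# Minimal sketch: rank which input factors most influence a model's predictions.
# Synthetic data; feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # columns: tenure, income, age
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)    # outcome driven mostly by "tenure"

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["tenure", "income", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

As the answer notes, this only reveals *which* factors drive predictions; whether relying on those factors is fair or aligned with strategy remains a human judgment.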

  • How can businesses mitigate the inherent risks of AI?

    -Businesses can mitigate risks by implementing robust safety controls, investing in explainable AI technologies, ensuring proper oversight, and maintaining a feedback loop to correct errors and improve the system over time.

  • What are some best practices for adopting AI solutions at scale?

    -Best practices include starting with a clear understanding of business strategy, defining and finding value in AI applications, ensuring the basics are done brilliantly, focusing on people capability and culture, and proceeding responsibly while building trust.

  • What is the role of AI in enhancing healthcare delivery systems?

    -AI can enhance healthcare delivery by providing predictive analytics for disease progression and readmission risks, personalized preventative care, natural language processing for unstructured data, and image recognition for various medical imaging needs.

  • How can companies balance the need for speed in AI adoption with the slower process of cultural change?

    -Companies can balance speed with cultural change by prioritizing education and awareness about AI across the organization, leveraging existing talent, and fostering partnerships between industry and academia to develop a skilled workforce.

  • What are some strategies for finding the right AI talent?

    -Strategies include looking within the organization for potential talent, partnering with academic institutions, investing in internal training programs, and considering a mix of STEM and SHAPE (sociology, humanities, ethics, sustainability) skills.

  • How can organizations prepare for and comply with international, regional, and national regulations regarding AI?

    -Organizations should have a dedicated team or individual to understand and monitor regulatory changes, engage with regulators, anticipate future compliance needs, and integrate legal and ethical considerations into AI development and deployment.

  • What are some indicators of data-science maturity within an organization?

    -Indicators of data-science maturity include the ability to build and implement AI solutions that improve quality, service, cost, and provider sustainability, as well as the capacity to communicate effectively with end-users and stakeholders.

Outlines

00:00

🌟 Introduction and Setting the Stage for AI Readiness

The webinar begins with Abbie Lundberg introducing the topic of AI adoption in businesses and its growing importance. Abbie outlines the webinar's agenda, which includes discussions on selecting the right AI problems to solve, ensuring business readiness for AI initiatives, understanding AI risks, and sharing expertise across industries with a focus on healthcare. The panel includes Kimberly Nevala from SAS, Fernando Lucini from Accenture Applied Intelligence, and Dr. Tad Funahashi from Kaiser Permanente, each bringing their unique perspectives on AI's strategic value and practical implementation.

05:00

🤖 AI Algorithm Characteristics and Business Diligence

Kimberly Nevala starts the discussion by highlighting six key characteristics of AI algorithms that require additional business diligence. These include the abstract nature of AI logic, the probabilistic nature of AI predictions, the adaptability of AI systems to data inputs, the magnification of small errors at scale, the impressionability of AI systems to data, and the incautious nature of AI algorithms. She emphasizes the need for explainable AI, robust safety controls, and the importance of understanding and mitigating AI's inherent risks.

10:02

📈 Business Practices for AI Readiness and Scaling

Fernando Lucini presents insights from Accenture's research on AI and business readiness, highlighting the belief among executives that scaling AI is crucial for growth. He discusses the challenges in scaling AI and the risks of not leveraging AI effectively. Fernando categorizes companies into three groups based on their AI maturity: proof-of-concept factories, strategic scalers, and industrial growers. He emphasizes the importance of affordability and accessibility of AI, defining and finding value in AI initiatives, and the need for a clear strategy and multidisciplinary teams to ensure successful AI adoption and scaling.

15:03

🏥 Applying AI in Healthcare: Challenges and Opportunities

Dr. Tad Funahashi shares the practitioner's view on AI in healthcare, discussing the use of AI and machine learning in various clinical applications. He talks about the potential of AI in predictive analytics, preventative care, and unlocking valuable information from unstructured data. Tad also addresses the technical challenges in AI, such as interpretability, bias, outlier cases, and the drift of medical practices over time. He stresses the importance of clinical validation, workflow integration, privacy, data governance, and the decision-making process in implementing AI solutions in healthcare.

20:05

🛠️ Implementation and Execution of AI in Healthcare

Tad continues the discussion on the hard parts of implementing AI in healthcare, focusing on clinical validation, workflow integration, privacy, data governance, and the build vs. buy dilemma. He emphasizes the need for a collaborative approach between data scientists and medical professionals to ensure the AI models are accurate, timely, and actionable. Tad also discusses the importance of protecting sensitive healthcare information and the challenges of integrating AI models into existing healthcare systems while complying with regulatory requirements.

25:07

🌐 Finding and Developing AI Talent

The panelists discuss the challenges of finding AI talent and suggest looking within the organization and partnering with academia to cultivate AI skills. They emphasize the importance of education and creating career paths within the company to develop a skilled workforce in AI. Fernando suggests creating unicorns within the company rather than finding them and stresses the need for a diverse set of skills, including those that understand the business context and can communicate effectively with end-users.

30:09

📜 Navigating AI Regulations and Compliance

The panelists address the need for organizations to understand and comply with international, regional, and national regulations regarding AI. They suggest looking inward to find individuals passionate about regulatory compliance and ethics, and outward to regulatory bodies for guidance. Kimberly and Fernando highlight the importance of being proactive and planning for future regulations, such as those requiring audits and assessments of AI systems for potential harms.

35:12

📊 Assessing and Monitoring Data Science Maturity

Tad shares his approach to assessing and monitoring the maturity of data science within an organization, emphasizing the importance of measurable outcomes and improvements in quality, patient service, and cost. He discusses the need for data scientists to understand the needs of end-users and the importance of communication between engineers and clinicians. Tad suggests that while there are many metrics to consider, focusing on the impact on healthcare providers and patients is key.


Keywords

💡AI algorithms

AI algorithms refer to the set of rules and computations that artificial intelligence systems use to make sense of data, make predictions, and solve problems. In the context of the video, these algorithms are highlighted as being abstract and complex, requiring additional business diligence due to their probabilistic nature and the inherent risks they pose.

💡Business readiness

Business readiness refers to a company's preparedness to adopt and integrate new technologies, such as AI, into its operations. It involves ensuring that the organization has the necessary infrastructure, oversight, and cultural adaptability to effectively use AI to achieve its goals.

💡Data governance

Data governance is the set of processes, policies, and standards that organizations implement to manage and protect the quality and availability of their data. It is crucial in AI initiatives to ensure that the data used by AI algorithms is accurate, reliable, and compliant with regulations.

💡Health care delivery system

The health care delivery system refers to the organization and management of health care services, including the way health care is provided, accessed, and paid for. In the context of AI, it involves leveraging technology to improve patient care, streamline operations, and enhance the overall quality and efficiency of health services.

💡Explainable AI

Explainable AI is a subfield of AI focused on creating systems whose actions can be easily understood by humans. It aims to make the decision-making process of AI algorithms transparent, allowing users to comprehend why specific predictions or actions were taken.

💡Adaptive systems

Adaptive systems are those that can change their behavior based on the data they receive or the environment they operate in. In the context of AI, it means the system can learn and improve over time by adapting to new data inputs.

💡Artificial intelligence and digital transformation

Artificial intelligence and digital transformation refer to the use of AI and other digital technologies to fundamentally change the way organizations operate and deliver value to their customers. It involves leveraging data and automation to enhance processes, innovate, and improve decision-making.

💡Machine learning engineering

Machine learning engineering is the process of designing, building, and maintaining systems that can learn from and make predictions or decisions based on data. It involves the application of statistical models and algorithms to enable computers to 'learn' from data without being explicitly programmed for every decision.

💡Bias in AI

Bias in AI refers to the tendency of AI systems to produce prejudiced or unfair outcomes due to imbalanced training data or flawed algorithms. This can lead to discrimination against certain groups or the reinforcement of existing inequalities.
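One coarse but common way to quantify such bias is to compare positive-outcome rates across groups (a "demographic parity" gap). A hedged sketch with invented records and an invented 0.2 alert threshold, not a standard taken from the video:

```python
# Minimal sketch: demographic-parity gap between two groups' approval rates.
# The records and the 0.2 threshold are illustrative assumptions.
records = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

def approval_rate(rows, group):
    hits = [r["approved"] for r in rows if r["group"] == group]
    return sum(hits) / len(hits)

gap = abs(approval_rate(records, "A") - approval_rate(records, "B"))
print(f"parity gap: {gap:.2f}")        # 0.75 vs. 0.25 -> gap of 0.50
if gap > 0.2:                          # the threshold is a policy choice
    print("warning: outcome rates differ substantially across groups")
```

Note that a zero gap does not prove fairness, and a nonzero gap does not prove discrimination; it is a signal to investigate the training data and features, as the definition above suggests.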

💡Strategic value

Strategic value refers to the long-term benefits and advantages that an organization can gain from a particular action or initiative, aligning with its overall mission and goals. In the context of AI, it involves understanding how AI can contribute to the strategic objectives of the business.

Highlights

AI will be crucial for most organizations' futures, and understanding how to pick the right problems to solve with AI is essential.

Launching an AI initiative requires business readiness, including having a basic understanding of AI among users and appropriate oversight.

AI algorithms are abstract and probabilistic, with outputs that are predictions and inherently imprecise.

Explainable AI techniques can provide information on influential factors in algorithms' predictions but cannot determine fairness or justice.

AI systems are adaptive and respond to changes in data, but they are not creative and may revert to previously known patterns in unanticipated changes.

AI systems are typically deployed at scale, meaning small errors can magnify and become self-reinforcing without proper feedback loops.

AI algorithms are impressionable and their worldview is based solely on the data they are exposed to during training and production.

AI algorithms can make mistakes and are incautious; they require clearly defined operating conditions and robust safety controls.

Users' understanding of AI applications is critical for AI solutions to deliver intended outcomes, and monitoring their actual use is essential.

AI solutions are becoming more affordable and accessible, with tools like OpenAI's GPT-3 available as APIs for natural language processing.

Defining and finding value in AI initiatives is crucial, with a focus on identifying problems that can bring 10 times the value or savings.

Scaling value in AI requires doing the basics brilliantly, such as having strong data infrastructure and methodology.

AI is everyone's problem in a firm, and successful organizations ensure the right talent mix and alignment between strategy setters and workers.

Proceeding responsibly and building trust in AI involves creating a responsibility framework and ensuring transparency in AI's use.

Health care has a great deal of enthusiasm for AI and machine learning, with vast amounts of electronic health records offering potential for improved predictions and personalized care.

Technical challenges in AI for health care include interpretability, bias, outlier detection, and keeping up with changes in medical practice.

Clinical validation, workflow integration, privacy, data governance, and deciding whether to build or buy AI solutions are critical for successful implementation.

AI and machine learning in health care are part of a larger effort to increase operational efficiency and quality, requiring timely data access and clinical interventions.

Organizations must ensure their AI solutions comply with international, regional, and national regulations, and staying proactive in understanding and adhering to these regulations is key.

Assessing and monitoring data-science maturity in an organization involves measuring real-world outcomes, the capability of data scientists, and their understanding of end users' needs.

Transcripts

play00:01

- [Abbie] Hello, and welcome to our webinar.

play00:03

"Adopting AI: Ensuring Business Readiness."

play00:07

I'm Abbie Lundberg. I'm a business technology researcher

play00:10

and writer and president of Lundberg Media.

play00:13

I'll be moderating today's discussion.

play00:15

One way or another, artificial intelligence

play00:17

will be important to most organizations' futures.

play00:21

In Part One of this series,

play00:22

we explored how to pick the right problems to solve with AI.

play00:27

In the second installment, we'll examine

play00:29

what it takes to launch an AI initiative

play00:31

from a business readiness standpoint.

play00:34

This includes making sure critical

play00:36

business enables are in place,

play00:38

including a basic level of understanding

play00:40

of AI among users and appropriate oversight

play00:43

of AI initiatives and business processes.

play00:47

It also requires understanding

play00:49

and mitigating the inherent risks of AI.

play00:52

Our speakers today will discuss these issues and more.

play00:56

They'll help you determine

play00:57

if your organization is ready for AI,

play01:00

sharing their expertise across a range of industries

play01:03

with a special deep dive into AI use cases and

play01:05

challenges in the sector that affects us all: health care.

play01:10

Kimberly Nevala will start things off.

play01:13

Kimberly is a strategic advisor at SAS

play01:15

and an expert in the areas of advanced analytics,

play01:18

information governance, and data-driven culture.

play01:22

She helps clients understand

play01:23

both the strategic value and the practical realities

play01:26

of artificial intelligence and digital transformation.

play01:31

Kimberly will be followed by Fernando Lucini,

play01:33

managing director and global lead for data science

play01:36

and machine learning engineering

play01:38

at Accenture Applied Intelligence.

play01:41

Fernando has spent more than 20 years

play01:43

creating technologies to automate

play01:45

and understand text, speech, and video data

play01:48

and integrating these technologies

play01:49

into business solutions for Fortune 100 companies

play01:53

across a wide range of industries.

play01:56

Dr. Tad Funahashi will provide the practitioner's view.

play02:00

Tad is a practicing orthopedic surgeon

play02:02

and the chief innovation officer

play02:04

for Kaiser Permanente, Southern California.

play02:07

He leads a team of physicians, consultants,

play02:10

designers, data scientists,

play02:12

and engineers who work together

play02:13

across Kaiser Permanente to envision

play02:16

and build the health-care delivery system of the future.

play02:20

Welcome to you all.

play02:21

And Kimberly, I'll turn it over to you.

play02:30

- [Kimberly] Thank you, Abbie. All right.

play02:33

So I'm going to kick things off today

play02:35

by quickly reviewing six characteristics

play02:37

of AI algorithms that require us

play02:40

to apply additional business due diligence

play02:43

as we design, deploy, and maintain

play02:46

these systems in the world today.

play02:49

So the first -- and you're probably

play02:50

well aware of this -- is

play02:52

that AI algorithms are abstract.

play02:55

Unlike rule-based systems, the logic --

play02:59

in which it's fairly easy to follow the logic

play03:02

to get from A to Z, the inner workings

play03:04

of AI algorithms can be almost in comprehensively complex.

play03:09

And these are probabilistic systems.

play03:12

So their outputs are predictions

play03:13

which are imprecise by nature.

play03:17

Now, techniques to be able to shine a light

play03:19

on the inner workings of AI algorithms, known

play03:22

as "explainable AI," are rapidly evolving.

play03:25

but it's important to note that

play03:27

while these techniques can give us information

play03:29

about which factors most influence

play03:32

in algorithms' predictions,

play03:34

they cannot make the decision for us

play03:36

whether those factors are, in fact,

play03:38

right, fair, or just. In addition --

play03:43

and this is important to know --

play03:44

because AI systems are adaptive,

play03:47

they respond to changes

play03:49

in the data that, the data input they receive.

play03:53

In other words: They've learned --

play03:54

and this is both the good news

play03:56

and the bad news. The solutions are very smart,

play03:59

but they are not creative.

play04:02

So if the behaviors or the environment

play04:05

change in unanticipated ways,

play04:08

the solution is not going to come up

play04:10

with a novel offering or response.

play04:14

It's going to revert to the best-fitting

play04:17

previously known pattern.

play04:19

And this is why we saw so many analytics

play04:21

and AI models initially fail

play04:24

when COVID came on the scene.

play04:28

It's also important to note

play04:30

that AI systems are typically deployed at scale.

play04:34

And what this means is

play04:35

that small errors can quickly become magnified,

play04:39

and in fact, become self-reinforcing over time.

play04:43

If we don't have really good feedback loops

play04:46

that tell the algorithm when it's making the right choices

play04:49

and the wrong or a suboptimal choice,

play04:52

it is going to assume that the choices

play04:54

that it makes are correct.

play04:56

And that will then inform its future choices

play04:59

and so on and so forth.

play05:00

And you see where this leads;

play05:02

you can see this in your day-to-day life

play05:03

and your social-media feed --

play05:04

for instance, when they seem like

play05:06

they very quickly become hyper-focused

play05:09

on a single theme or topic.

play05:13

So it's important, then, as we think

play05:15

about that, to know that these systems

play05:19

are also highly impressionable.

play05:22

And by that, what I mean by that,

play05:24

is that they only know what they see.

play05:28

So the data that an algorithm is exposed to

play05:30

while it's in training,

play05:31

and while it's in production, is its entire worldview.

play05:36

It has no insight. It's completely blind to data

play05:39

or factors not reflected in that information.

play05:43

And it is really, really good at picking out,

play05:48

sometimes just strange correlations

play05:52

or really thin correlations in that data,

play05:55

even if those correlations are spurious

play05:58

or they don't reflect our desired future state.

play06:01

And this is where we see things

play06:02

like hiring algorithms that become biased against women

play06:06

even though gender isn't actually an explicit data.

play06:10

Now, in addition to being impressionable,

play06:12

AI algorithms are a little like teenagers --

play06:16

and I don't want to anthropomorphize here --

play06:18

but they're incautious.

play06:20

They're going to make mistakes,

play06:22

but because they're not self-aware,

play06:24

again, they're not going to know unless we tell them.

play06:27

They also don't apply any level

play06:30

of independent discretion or tact.

play06:33

So it's critically important

play06:35

that we clearly define the operating conditions

play06:38

for these systems and then engineer

play06:41

robust safety controls and resiliency into these processes.

play06:47

And particularly, this is particularly important

play06:52

because AI is increasingly just interwoven

play06:55

into the fabric of our core business processes.

play06:59

And this means that it becomes increasingly difficult

play07:02

for us to always fully understand the downstream impacts

play07:05

and implications of these systems.

play07:09

It can also become easy...

play07:12

It is also...I'm sorry, it's also easy

play07:13

to become over-reliant or overly trusting

play07:16

in the information being provided.

play07:19

So if your self-parking car has never made a mistake,

play07:23

you let your guard down,

play07:25

even though it may...make a mistake in the future.

play07:27

And as soon as you let your guard down,

play07:29

the engagement model that that system

play07:31

was designed to operate in has changed.

play07:34

So ensuring that our users not only understand

play07:38

how an application is intended to be used,

play07:41

but also watching how they actually use it

play07:44

is mission-critical to making sure

play07:46

that AI solutions deliver their intended outcomes.

play07:50

So there you go: Six characteristics of AI systems

play07:53

that require increased business due diligence.

play07:56

Now, with that said, I want to turn things

play07:58

over to Fernando to talk about six,

play08:00

I believe, best practices

play08:02

for ensuring your AI solutions are adopted

play08:05

and are to be adopted at scale, Fernando.

play08:10

- [Fernando] Lovely, thank you very much for that.

play08:13

I wanted to give you a perspective

play08:14

as I'm part of Accenture,

play08:16

which, of course, is a service company.

play08:17

So we're great observers --

play08:19

as we are having to live with these things,

play08:20

we're also great observers.

play08:21

So let me give you my six lessons

play08:23

of what we see are good practices people have

play08:26

when they do it, when they have that

play08:27

business readiness, right?

play08:29

But first, if somebody can move me

play08:30

to the first slide -- we're going to "top

play08:33

and tail" with this particular slide,

play08:35

what I want you to tell you here.

play08:36

So you understand that when we think about AI,

play08:39

we have to think about everything that goes with it.

play08:40

From the beginning to the end of the journey,

play08:42

it never is just about the model.

play08:44

It never is just about, you know,

play08:46

the data, it's always about that journey

play08:48

all the way from idea

play08:49

to how you make this work in production.

play08:50

So keep that in mind; we'll come back to this

play08:52

but I just want you to --

play08:53

and you'll have this slide later

play08:54

so you can do your mental modeling --

play08:56

but it's important to think of it as a complete problem.

play08:58

Otherwise, business readiness becomes incomplete

play09:02

and [inaudible].

play09:03

So for the first -- first, I wanted to give you a couple

play09:05

of bits of data from our latest research.

play09:07

We've done research for around 1,500 companies,

play09:09

C-level executives, asking them

play09:11

about AI and their business-readiness.

play09:13

And a couple of things stood out for me.

play09:14

First of all, you see there's some interesting things:

play09:16

that 84% of the executives that we interviewed

play09:19

believed that they wouldn't achieve their growth objectives

play09:21

unless they scale AI.

play09:23

Okay, this was not a leading question.

play09:25

So it's interesting that it comes like that. Second fact:

play09:28

that 76% thought they will struggle to scale.

play09:31

Okay, so "I need to scale to be successful,

play09:33

but I'm going to struggle to scale" -- that's interesting.

play09:35

And the last one, which is that 75%

play09:37

believed they would be risking going out

play09:40

of business if they didn't, if they didn't get the value.

play09:43

By the way, to me, this tells me a bunch of things

play09:45

as a practitioner and as a chief data scientist,

play09:47

which is that there's a high level

play09:49

of where we have to do in educating our executives

play09:51

because it's, I mean, this is --

play09:53

the numbers are almost like scare=level numbers.

play09:56

The other [Thing] is, there's a lot of work to do

play09:58

across the organization,

play09:59

so they understand, you know,

play10:01

how do we get to these, to this value?

play10:03

But I thought it was interesting for you to see this.

play10:05

And, Kimberly, you can move me to the next,

play10:07

the second interesting piece that came out

play10:09

of this research, which I hope

play10:10

helps your thinking, is this:

play10:12

We can really clearly see these customers

play10:15

in three categories. The category of, the,

play10:17

where most customers were,

play10:18

is this first proof-of-concept factory category.

play10:22

And I've given you the characteristics of each there --

play10:23

and this is per the research,

play10:25

as opposed to just my point of view --

play10:28

where 80 to 85% were effectively getting,

play10:30

not really seeing the value of the work in AI,

play10:33

because it was stuck in the proof of concept,

play10:35

never achieving production.

play10:37

And there were a bunch of reasons why --

play10:38

that misalignment with the CEO; not really,

play10:41

you know, having a lot of labs,

play10:42

but not a lot of abilities

play10:44

to do the route to [inaudible] and so on and so forth.

play10:47

Interestingly, the second category

play10:48

was this, "strategic scalers." Whereas only 10

play10:50

to 15% where they had figured out,

play10:53

they had great connection to the strategy,

play10:54

to the CEO, to work for the people

play10:56

who are actually delivering on the ground.

play10:59

They had a clear strategy

play11:00

of what was important -- part

play11:01

of the first session in this series;

play11:02

we talked about that. Without the multidisciplinary teams,

play11:06

all the seats, you see there.

play11:07

And, finally, the third category,

play11:09

which was the "industrial growers," a very small amount.

play11:12

And so those folks, these guys were beyond readiness.

play11:15

These were now real, you know, fast mode, right?

play11:18

I always say it's a little bit,

play11:19

AI is a little bit like building a car.

play11:21

They don't, you've got to get the wheels on quick

play11:22

so you can do miles, right?

play11:23

So these guys had, you know,

play11:25

the best car, they were rolling out.

play11:26

They had a lot of miles behind them.

play11:28

So I think the challenge for us

play11:29

in business readiness is that line in between --

play11:31

between the 85%, that Category One and that Category Two.

play11:35

So let's talk about these five lessons as they relate to this. The first lesson is that AI is affordable and easily accessible. By the way, in case you haven't read it, I quote Dr. Kai-Fu Lee's book about AI in China. It's an incredible book; please read it. In the first few chapters he describes very clearly that China is doing great in terms of using AI -- not necessarily because they're doing fundamental research in AI, but because they are great users of the technology. We'll talk about the moral and responsible positions on this later. But as it relates to this -- and Kimberly, if you can move to the next one -- an example is this little fun thing from OpenAI and their product called GPT-3, which, in case you haven't played around with it, is effectively a great breakthrough in natural language processing. The point is that it's something you can use as an API. It's readily available. It can interpret and understand text in ways that let it even do simple arithmetic. The examples on the screen are real: I can ask it to continue a sequence -- one, three, five, seven -- and it will finish it, or a Fibonacci sequence -- one, one, two, three -- and it will finish that too. And it does this on the basis of its understanding of text, pure text. So in a world where we have all of these tools available, like GPT-3, like the APIs for [inaudible], there is no single data scientist in the world who builds or writes, line by line, a model for a regression anymore. They take it down from the internet and it's done. So if I'm teaching business readiness, it's very important that we have the muscle of use; accessibility is important.

Second one -- thank you, Kimberly -- is the definition and finding of value. The people who are doing very well are very good at defining and finding value. Let me give you a couple of examples. Kimberly, if you can move me to the next slide: The research tells us something about the people doing great at defining. By "defining" and "finding," by the way, I mean: "What is it that we need to do? What's important, what's meaningful? What's going to change the strategy? What's going to be material to me in executing the strategy of the company -- a strategy that happens to have AI in it, that AI accelerates?" We found that the people who were doing it very well tended to have a great return. And the rule of thumb I want you to think of -- if you move to the next one, Kimberly -- is this idea of 10 times. It's critical as you approach problems in AI. Think about the average: a bank may have three, four, 500 initiatives they want to do that have AI in them. Telecom companies might have a thousand in that long, long list of strategic things they want to do where AI is a key part. Let's think 10 times. What is the thing that's going to bring us 10 times the value, 10 times the saving? Not a marginal benefit or a marginal saving. Why? Because very rarely do we see people approach problems little by little and actually get there. Given that there's so much technology available, it's almost better to take a problem that is more material and can be a real part of your strategy. If it washes its own face, as we say in the UK -- if it has a good business case -- it will bring all the value behind it. So 10 times, not 10%. Keep that.
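The "10 times, not 10%" filter can be sketched as a simple ranking exercise over a backlog of candidate initiatives. The names, costs, and value estimates below are invented for illustration:

```python
# Keep only initiatives whose expected value is at least 10x their cost,
# then rank the survivors by that multiple -- material bets first.
initiatives = [
    {"name": "churn model",        "cost": 1.0, "expected_value": 15.0},
    {"name": "invoice OCR",        "cost": 2.0, "expected_value": 2.4},
    {"name": "demand forecasting", "cost": 3.0, "expected_value": 60.0},
]

tenx = sorted(
    (i for i in initiatives if i["expected_value"] / i["cost"] >= 10),
    key=lambda i: i["expected_value"] / i["cost"],
    reverse=True,
)
print([i["name"] for i in tenx])  # the "invoice OCR" marginal win is dropped
```

In practice the hard part is estimating `expected_value` honestly, not the ranking itself.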

Thank you, Kimberly. Yes, the next slide. So the next lesson is the scaling of value. If we look at the first one, we know AI is affordable and easily accessible, so let's get using. No. 2 is where we exercise the muscle of defining what's valuable and making sure we're doing that. The third one is: just do more of it. How do we do more of this? And this is a complicated one. If you move on to the next slide, Kimberly, thank you. There are many lessons here; I'm going to give you two or three so you have context. No. 1 is this idea of "do the basics brilliantly." It's very, very rare that you can build a racing car, or any kind of advanced technology, without having the basics -- the wheels turning correctly, the brakes working, all these things that need to be true. So for AI to work very well and scale very well, you have to set yourself up with great data infrastructure and a great data science methodology -- one that is geared toward getting you from beginning to end, a route that allows you to take a project and make it into production -- and a great focus on the kind of technology you need to scale certain things. So think of the basics. And within this same sphere, without moving from this slide, you also have to think about things that sound really simple but are quite complicated: governance at scale and other things that sit outside the normal realm of doing this. So do the basics brilliantly -- buy, build, borrow strategies -- all the kinds of things that allow you to have a great base. And, Kimberly, if you can move me to the next, please.

So the next lesson is this idea of people, capability, and culture, which sounds like the most obvious thing, right? But the truth is -- if you move on to the next one -- the truth is, we sometimes, we many times, get this wrong. So what do we mean by this? We have to bring people along for the journey. AI is everybody's problem in a firm. The firms that do very well tend to have the following characteristics. They tend to have the right talent mix. I always joke that data scientists don't build products: software engineers, behavioral engineers, an entire family of people build products. So let's have those families of people working on our objectives, right? So look at how you are doing this in your organization: make sure that the distance between the C-suite -- the people setting up the strategy for the company -- and the people doing the work is a small distance, that it's very well aligned. We see that as well. And then my favorite, the one we used: buy before you build. Don't try to build everything yourself. And if you look at some of the stats, which were interesting, I'll give you one that I like: the idea of employees who fully understand AI at scale. There's quite a large distance between the POC group and the scalers, right? And you can observe that the more we educate the firm about its role in the AI lifecycle, the more we become a strategic scaler, which is the place you want to be, because that's where we're actually getting things done and scaling them up. So people can do better.

And the last one, this idea of proceeding responsibly and building trust -- this is an incredibly important topic, and I'm sure other people in this room will agree. If you move me to the next one, Kimberly, just to introduce a few concepts: I think several things are important. We have to create a responsibility framework that adapts to our company, our citizens, our society, and we have to be very open and transparent about what it is. This is Category #1: let's make sure we're clear about what it is we're going to do with this wonderful technology. Secondly, we have to put the technological aspects in place that allow us to have the guardrails to act safely. There are going to be a lot of people working on AI in a single company or a single government, so to think that one individual, or many working together, are always going to get it right is not really fair. So we really have to think about the professionalization of AI and how that professionalization leads to doing these things responsibly. And by the way, [inaudible] -- I mean well-understood guardrails, methods, all of these technical implementations, which connect with some of the numbers you see there. I specifically like the 60% of respondents who reported that they still wanted to have a manual override. Of course you would. And again, we're not trying to make that number change; what we're trying to do is build responsibility and understand the role of everybody, so we can all be comfortable that, whatever the number is, it's actually doing the right thing for our company, our government, our citizens, and the world at large.

Which -- Kimberly, if you can move me to the last one -- funnily enough, as you can see, brings us back to the roadmap. So it becomes clear that as you go through this roadmap, you go from ideation to questions like: "Do I have the value and the strategy? Do I have the right people? Am I setting myself up correctly in governance to do this? And am I actually then getting the value, realizing the value, throughout this?" These five lessons will bear you out: Get using. Make sure you're doing the valuable things, the things that matter. Make sure you're setting yourself up to do more of those things. Make sure your people are coming with you on the journey. And make sure you're watching whether it's within the boundaries you've set yourself, morally and ethically, to continue that work. And within that, before I hand off to Tad, I'll give you the challenge of taking your own company or your own government through this framework. Start at the beginning and see if you actually have, today, the ability to go uninterrupted to the end of that journey, and see where the journey is interrupted. Think about why it's interrupted and whether you can make it continuous to give you that scale. Kimberly, I think at this point I'm handing over to my friend, Tad.

- [Tad] All right, Kimberly. A really great framework for how to apply AI in the business sense. What I thought we would do today is take some of those learnings as we've applied them to health care and share with you some of the lessons we've learned. So the next slide, please. We'll touch a little bit on the use of AI and machine learning in the area of health care, give you some clinical examples where we've applied it and the challenges we found, and then some guidance -- which I think Kimberly and Fernando have done a really good job of already -- on addressing the challenges as they apply to health care.

Next slide. So there's no doubt that there's a great deal of enthusiasm about the use of AI and machine learning in health care. For example, in our institution we've collected electronic records for almost two decades, amounting to practically 44 petabytes of health care data. Just to give you a relative sense of scale, that is roughly a physician who has been in practice for 170,000 to 180,000 years with perfect recollection and near-perfect pattern recognition. If you could have such a doctor serving you or taking care of you, that would be a great advantage, right? So if we could really unlock that data, we would be able to [inaudible], make better predictions, treat illnesses more precisely, and deliver a much more personalized type of care. And we're also beginning to look at data and the uses of machine learning and AI in wearables. Next slide.

So the main areas we're currently looking at for applications of AI in health care in our institution are these. One is predictive analytics: with all of this experience of what has happened thus far, we can tap into that and predict what might happen in the future -- looking at things like who's most likely to progress in a disease, who is most likely to be readmitted after a hospitalization, or, for that matter, who is most likely to be admitted even if they haven't been admitted before. Because I think very few of you out there think to yourselves, "I have a week off next week; I think I'll go to the hospital," right? So if we could prevent you from having to go to the hospital, we believe that is a much more desirable outcome for all of our members, and risk stratification is a key part of our use of data. And then, evidently, it's well known for preventive care. Thus far, you could say we've used crude instruments such as gender, age, ethnicity, and whatnot to try to identify people at higher risk for certain diseases. But if you could leverage that data and really tailor the prevention to an even tighter group, you could focus on those groups and have much better health outcomes.
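The risk-stratification idea described here can be sketched as a toy scoring model. Everything below -- the features, weights, bias, and members -- is invented for illustration and is not a clinical model:

```python
import math

# Toy logistic score for readmission risk: weight a few member features,
# squash through a sigmoid, and flag high-probability members for outreach.
WEIGHTS = {"prior_admissions": 0.9, "chronic_conditions": 0.6, "age_over_65": 0.4}
BIAS = -2.5  # baseline log-odds for a member with no risk factors

def readmission_risk(member):
    z = BIAS + sum(WEIGHTS[f] * member[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # probability between 0 and 1

low = readmission_risk({"prior_admissions": 0, "chronic_conditions": 0, "age_over_65": 0})
high = readmission_risk({"prior_admissions": 3, "chronic_conditions": 2, "age_over_65": 1})
print(high > 0.5 > low)  # only the high-risk member crosses the outreach threshold
```

A production system would learn the weights from historical records and, as discussed later in the session, be audited for bias and drift before deployment.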

Now, a lot of our health records are, as you might imagine, locked up in our unstructured data -- in the notes of doctors, the notes of nurses, and whatnot. There's always been this joke about doctors' [hand]writing, but it used to not be electronic, and nobody could read that writing, right? Very hard to unlock. Well, even in the computer, if it's unstructured, we need natural language processing and whatnot to really extract that information. And we're beginning to develop some advanced NLP technologies to unlock the value in the critical knowledge that's locked in the unstructured portions of our electronic records.
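As a deliberately simple sketch of that "unstructured to structured" step: real clinical NLP uses trained models, not a regex, and the note and drug list below are invented, but the shape of the task -- free text in, structured fields out -- is the same:

```python
import re

# Pull (drug, dose) pairs out of a free-text clinical note, keeping only
# drugs from a known vocabulary. A toy stand-in for clinical NLP.
KNOWN_DRUGS = {"metformin", "lisinopril", "atorvastatin"}
DOSE = re.compile(r"\b(\w+)\s+(\d+\s*mg)\b", re.IGNORECASE)

note = "Continue Metformin 500 mg twice daily; start Lisinopril 10 mg."

meds = [(drug.lower(), dose) for drug, dose in DOSE.findall(note)
        if drug.lower() in KNOWN_DRUGS]
print(meds)
```

Handling misspellings, abbreviations, and negation ("discontinue Metformin") is what makes the real problem hard and is why trained models are needed.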

And, finally, I think all of you have heard about the use of image recognition in health care -- retinal imaging, dermatologic imaging, pathology imaging, and radiology imaging -- and we are also looking at all of those areas.

Next slide. So first we need to talk a little bit about the technical challenges, so to speak, of doing modeling for AI and machine learning in health care. I think some of these were covered very well by our previous speakers: interpretability or explainability, bias, outlier or edge cases, and drift. Let's look a little bit more carefully at each one of these. Next slide.

So in terms of interpretability: doctors and health care workers are famous for not believing anything that's sent to them, right? A new study comes out -- a definitive, randomized, double-blinded study -- and they go, "Ha! Let me look at that to see if it's really right." And, in fact, that's one of the reasons that any kind of new discovery in health care takes about 17 years before it becomes the norm. Well, imagine if you gave somebody a blinding insight about what's about to happen, but it came out of a black box -- doctors wouldn't accept it. So we have to figure out a way to make the output a little bit more interpretable. And sometimes, interestingly, rather than creating tremendously complex models, we want to simplify the model so that it's a little bit more explainable and interpretable to the physicians.

Now, bias is something that's innate in health care, right? Most bias can be good, but some bias can be bad. That is, we know that certain populations are higher-risk; that's innate bias in the statistics. But if the bias is nefarious -- meaning it doesn't improve the outcome for those people -- then it's a negative type of bias. And it could be anything from ethnic groups to socioeconomic groups and whatnot. We need to carefully analyze the data to make sure that historic practices of the past, with innate biases we didn't detect, aren't replicated in the future.

Outlier cases, in fact, are the ones we stay up at night about. The reason you want to come see a doctor, instead of Doctor Google, is that most of the time you're going to be fine with whatever is common. However, there are certain cases that are atypical -- the red herrings. And when somebody is in my office and I'm examining them, I'm looking not only at what they most likely have, but at what they could possibly have. And, unfortunately, right now AI is not that good at detecting those. It's more about the statistical norm, rather than looking for outliers.
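One small piece of the bias analysis described here can be sketched as comparing a model's flag rates across groups. The records and the 0.2 threshold below are invented; real fairness audits use richer metrics, outcome data, and statistical tests:

```python
# Compare how often a model flags members of two groups; a large gap,
# absent a clinical reason, is a signal to investigate before deployment.
predictions = [
    {"group": "A", "flagged": True},  {"group": "A", "flagged": True},
    {"group": "A", "flagged": False}, {"group": "A", "flagged": True},
    {"group": "B", "flagged": True},  {"group": "B", "flagged": False},
    {"group": "B", "flagged": False}, {"group": "B", "flagged": False},
]

def flag_rate(group):
    rows = [p["flagged"] for p in predictions if p["group"] == group]
    return sum(rows) / len(rows)

gap = abs(flag_rate("A") - flag_rate("B"))
print(gap > 0.2)  # flags the disparity for human review
```

Whether a gap is "nefarious," in Tad's terms, depends on whether it tracks genuine clinical need -- the check only surfaces the question.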

And as you know, health care practice changes over the course of time. We don't practice the way we did a hundred years ago, 50 years ago, or, for that matter, even five or 10 years ago. But remember: AI looks at the empirical data available from our past medical records and replicates that -- I think Kimberly talked about this earlier. And so as practice begins to drift, we need to make sure that the recommendations coming out of the AI systems are also updated and continuously improved.
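The "updated and continuously improved" requirement usually starts with monitoring. A hedged sketch: compare the model's recent accuracy against the accuracy measured at validation time and alert when it decays past a tolerance (all numbers below are illustrative):

```python
# Drift check: alert when live accuracy falls too far below the accuracy
# measured when the model was validated.
BASELINE_ACCURACY = 0.91   # accuracy at validation time (illustrative)
TOLERANCE = 0.05           # decay we accept before forcing a revalidation

def accuracy(predictions, outcomes):
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

recent_preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # latest model outputs
recent_truth = [1, 1, 1, 0, 0, 1, 0, 1, 0, 1]  # observed outcomes

drifted = BASELINE_ACCURACY - accuracy(recent_preds, recent_truth) > TOLERANCE
print(drifted)  # True means practice (or the data) has moved; revalidate
```

Production monitoring would also watch the input distributions themselves, since outcome labels in health care often arrive with a long delay.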

Next slide. Now, my data scientists and engineers tell me that all the problems we just talked about are the easy part. The hard part, in actuality, is in the implementation and execution: clinical validation, workflow integration, privacy, data governance and IP, and the decision whether to build or buy. Let's take it a little bit deeper and look at these, starting with clinical validation.

We need to make sure that the data scientists and the doctors work hand in hand, closely, so that the predictions the machine is making, No. 1, in fact make sense and, No. 2, are accurate -- so that it's not making wrong predictions. And then, ultimately, through that process the physicians begin to develop trust in the data that's coming out. Ultimately, AI -- artificial intelligence -- in health care will most likely become augmented intelligence: a pinging of things we should be thinking about, or a broadening of our diagnosis, in clinical arenas where AI can help us.

Perhaps one of the most important is workflow integration. If the data isn't given to us in a timely way, if the data isn't relevant, if it's not actionable or interpretable, if we don't trust the data, and if it's not part of the decision-making process we typically follow in taking care of our patients, then all the fancy arithmetic and predictions are not that helpful. We need to make sure that the predictions of the model are presented to the physicians in a usable manner -- timely and meaningful.

It goes without saying that health care information is incredibly sensitive, so privacy issues are critical for us. If we ever do share data with external companies -- and this gets into "build or buy" -- the protection of that data, what happens to the data after it leaves, so to speak, our walls, the governance of that data, and ultimately even the intellectual property associated with it can really be challenges in terms of how best to utilize that data, because sometimes there are great engineering resources outside, so to speak, the walls of Kaiser Permanente. And yet, how do we leverage that expertise without putting any of the privacy or ownership issues at risk? In terms of build or buy: there are models built for a variety of hospital systems and health care systems, and yet geography, financial incentives, and a variety of other things can have a slightly different impact on the data. And so the models built on that data are not necessarily directly transferable; we need to pay attention to that. And if you say, "We want to make sure we build it ourselves, customized for our database," that requires tremendous CPU, GPU, and engineers' time; it can be extremely expensive to build, and obviously to continue to keep updated. So that's another area we need to keep attention to. Next slide.

And so if we really begin to think about the application of AI -- or augmented intelligence -- and machine learning for health care, it really is only one part of creating the increased operational efficiency and quality we look for in health care. It's a tool that can be potentially helpful. But even an amazingly accurate tool, one that's 99% accurate, isn't really that helpful if you don't have timely access to the data, or if there are no clinical interventions that can make any difference. If you know when it's going to rain but you don't have an umbrella, that doesn't help you, right? It's the same thing in health care: if we know something's going to happen but there's nothing we can do about it, that can be frustrating. So we need to make sure that when we do get the data insights, there are supporting processes and workflows that allow the intervention to happen, that there are trained resources to make that happen, and that there's appropriate infrastructure not only for scaling it but also for making sure the model is interoperable across our entire enterprise. And as we do that, we want to make sure we have metrics that look at its performance: that there's no bias, that the quality is there, that it in fact has a good impact on our costs, and ultimately that it's serving our members in a much more efficient manner.

The next slide. So if we were to put this all together, we take a kind of step-wise approach. As we look at applying AI, or any large models like this: First, we want to make sure we're not doing it just because it's fun, but because it's actually solving a specific clinical problem that we typically couldn't solve as well -- or, like Fernando said, because it could begin to create a really different type of model for health care. And once we do that, we want to make sure that the engineers and the physicians work side by side all along, that the physicians have faith in what the engineers and the models are outputting, and that the engineers truly understand what they're solving for. Of course, since we get so much data, we need to make sure that the data quality is good: that we adjudicate data that may be coming from slightly different sources, and that the quality of the data is at the highest level it can be. Steps one, two, and three are all about building the model, making sure the outputs are objective and that unwanted bias is not programmed into the model. And then the lower steps -- four, five, and six -- are really about execution. We need to make sure that once we do put it in place, there is value in terms of quality outcomes, service outcomes, cost, whatever it might be -- that it is in fact something of value to the organization. And as we roll these things out, we need to make sure that its performance, even though it may have been good to begin with, isn't beginning to decay over the course of time. And I think the last goes without saying: certainly in health care, regulatory requirements and oversight of all of these models are absolutely essential to make sure that we are compliant. So with that, let me now turn it back over to Kimberly and Abbie, I believe, who are going to jump us into the Q&A section. Thank you, everybody, for your attention, and I look forward to our discussion.

- [Abbie] Great. Thank you for those great presentations -- really useful information. And we do have some great questions coming in from the audience; we will continue to take your questions for the remainder of the hour, so you can keep submitting those. To kick things off, we've gotten questions around a couple of different areas, and I wanted to start with the state of AI right now. The question was: How mature are AI solutions now? Can you find one for any application, and how much customization, or how many tailored solutions, are required? In other words, can you go out and just buy something for a problem you're trying to solve, or how much do you really need to customize and do your own work on it -- or hire somebody to do that work for you? And I guess that's a question for all of you. Kimberly, do you want to start?

- [Kimberly] Yeah, absolutely, and I think Fernando probably has some good experience in this area. Well, I think that in a lot of ways AI is very ubiquitous, in that a lot of the solutions you buy off the shelf today may, in a lot of cases, have these advanced analytics capabilities built into them. But as Tad said, I think in a lot of cases what you're looking to do is buy some of the core capabilities -- whether that's computer vision or natural language processing -- and then customize those to the specific use case. So today, on build versus buy, there's probably still a lot more customization that happens even when we're buying a solution that's packaged with artificial intelligence. It's really important in those cases that you ensure the operating environment you're deploying into -- whether that's the way it interacts with your customers, your consumers, your users -- actually matches the environment it was trained in, i.e., that it reflects the data it was trained on. And there's really no fast track through that part of the process, I don't believe. But in a lot of cases, I think most organizations find that, especially around core business processes -- whether that's sales or customer experience -- there are packaged solutions you can begin with, validate, and customize to your needs. That being said, there are also a lot of off-the-shelf data science packages around now. So you can get the math, if you will: you can find algorithms that come out of the box, that you can apply to your data, and that come packaged with interpretability and explainability and some of these components. The trick, again, is really matching those solutions -- finding the right hammer for the right problem. And that's probably the bigger issue right now: not so much finding an available tool, but making sure the features in the tools that are available actually match the problem you're trying to solve. Fernando, your thoughts?

- [Fernando] Yeah, I'll start from a different perspective, which is that, if you think about it, today, if you're a startup and you don't do some AI, you're probably not going to get any money. So there are tools for anything you can think of; there's AI in, or AI for, most things you can do. I do a little test that may make you smile, which is to think about the time from the moment you wake up to the moment you go to sleep and write down where you feel AI has been part of your life. If you know a little bit about AI -- it's my world -- the list is enormous. It's enormous. You give the same exercise to my mom and you get nothing, obviously. So it's about our perception of this. What I will tell you is that there are tools for everything. There are things you can use really easily, but doing what Tad has described is mega-difficult. So there's a lot available, but that doesn't mean you're actually going to get what you need. It's almost an excess, an excess of stuff, but the people that do well still tend to be super-advanced at putting it together -- which is what you're saying, Kimberly -- putting it together and making sure it serves a purpose in the context of a company and in the context of all these other things. And that's why I have that first lesson, which is availability. I mean, it's there; that doesn't mean it's easy, right? But it's absolutely, totally present.

- [Tad] Yeah, I would agree. The number of tools available in the area of AI -- I'd compare it to going to an auto shop or a Home Depot, right? There are tons of tools out there, and you can build just about anything with all the tools that are out there. And, in fact, we do leverage many of those tools. At the end of the day, though, you have to use those tools to build something you need that's unique to your particular business, right? And in order to do that, just having a collection of tools doesn't do it. You have to put it all together. And not only do you have to put it all together, you have to have somebody who's willing to drive it, so to speak, right? The end user. So not only are the tools important; the user interface and the method by which the output of those tools is given to the end user are really critical, right? If you think about driving a car: if we didn't have those simple controls that allow you to steer and accelerate and brake, a car would be incredibly hard to drive. And if it's incredibly hard to drive, people aren't going to use it, right? And yet, if you think about it, all of the components of that car are incredibly complex; in fact, many of them leverage AI now. And I think it's not that dissimilar in most industries, and certainly not dissimilar in health care.

- [Abbie] Great, thanks. You know, we've got a lot of questions coming in that -- very appropriately, since we were talking about business readiness -- are around the people part of doing AI. So the first one is: Who should be on the AI team? Who do you need on your AI team? And Tad, if you want to start on that one, that would be great.

- [Tad] Yeah, fair. So, first of all, the basics, right? You need the people who understand the computing -- the engineers, the data scientists -- and those are just table stakes. Without that, there is no data science, right? Having said that, though, just them in a vacuum doesn't do it for us. We actually need the end users, for a variety of reasons. No. 1, we need to know and understand exactly what their problem is and what type of solution would best address it. Right? So we need the end users. Sometimes, though, the engineers' interpretation of that falls short, so to speak. So human-centered design people are really critical, not only to translate what the end users are saying they need, but also to create the interface back from the engineers to those users, so it becomes very understandable, right? And interestingly, we also keep on our team, of course, consultants who can manage the project, but also classic epidemiologists, and statisticians who can begin to bridge that gap, for lack of a better word, between the traditional statistics we are very comfortable with and machine learning -- so that physicians begin to understand what's really happening. And even with that, the interpretability, or the explainability, can be difficult. But without it, that trust in what's happening can be lacking. So certainly in our field, that's another critical component.

- [Abbie] Fernando, you talked about the importance of -- I think your slide was people, capabilities, and culture. When you think about that culture piece, I was really thinking about the fact that, you know, changing culture takes time, right? And you also talked about the importance of moving quickly when you're moving forward with AI. So how do you balance the importance of moving quickly with the fact that culture does take time to change? Where do you put your focus to get past that dynamic?

- [Fernando] Yeah, the simple answer for me is education. And by the way, COVID has had an interesting effect on this, which is that before COVID, many CEOs were thinking, "Oh, my God! Where's all this money that I spent on AI that doesn't seem to have produced any value?" And suddenly COVID hits, and it's been an acceleration through survival for many that needed it. And I'm sure, Tad, you've seen a lot of this in the health care industry. So AI has become an accelerator to that culture point because it was survival. I don't know if it's going to continue to burn this bright -- obviously, we want this pandemic to go away, thank you -- but apart from that, is it going to have a tail that continues to help us with this culture?

Abbie, we have the same problem with data, in that we talk about data-driven culture, and the truth is data-driven culture is a culture of self-service: "I can serve myself, I can do things. I am a creator of data and a user of data." AI is a bit more complicated, because you don't wake up in the morning, decide you're going to create a neural net, and figure out the mathematics of that. But if you follow Tad's great way of putting it, you can learn what your role is in it. So I think education across the widest possible base of companies and governments on the possibilities of AI -- not just the use cases, but a real notional understanding of what it is; not the mathematics, but the notions -- will help remove the humanizing view that we have of this technology, of these machines, and help us say, "Ah! I understand more or less how these things kind of work, so I can understand why this is going to behave a particular way. I've got this notional view of it." And that really helps accelerate these things. And, about education: if you look at the winners, the people that do really well, it's all education; it's a hundred percent education.

And I can give you a fun way to think about this, about a set of people we don't talk about a lot. If you put five data scientists in a room for three hours and you come back, they've done five models on five different machines in five different languages that will never work together. If you put five software engineers in a room and come back three hours later, there'll be no code written, but they'll have worked out how to work together. They will have worked out how to share; everything will be sorted. There's a set of people called ML engineers, who do the performance and scaling engineering of models -- and, to throw my hat out to that career, they are probably the people we need most today to make that translation. They're the people who are going to take those two rooms and remove the partition, so they can be together. And we need a lot of those.

- [Abbie] It sounds like those last people you talked about are people that you need to have on your team, inside the company. But there are other roles that maybe you've got to go out and get, that have particular expertise around the data science and the real technical parts of doing AI. So this question is from someone in the audience; the question is: Where do you find the people to create AI solutions? And they're thinking in terms of, you know, companies, countries, talent, marketplaces. Where should people go to look for that AI talent?

Kimberly, you're on mute.

- [Kimberly] So let me start again -- I'll start, and then I'll go to Fernando and Tad. One of the things I think is becoming increasingly interesting is partnerships between industry and academia. That could mean working with folks who are in college programs, and even PhD and advanced programs, to come in and work on your projects as part of their education. At the end of last year, I hosted a panel with the head of, essentially, machine learning and AI deployment for Uber. One of the things they did that was really interesting: they essentially have a process by which folks who are actively pursuing the research side, which is really important in academia, are also working part-time for Uber and looking at the operational application of these components. By balancing that, you're getting this intersection of industry expertise and business acumen that actually works in the real world, without sucking away all the talent from the R&D side and the sort of basic science and research that's also required. And I think more and more, we're going to start to see those kinds of interactions and partnerships -- maybe not exactly in that way -- between academia, some of the research organizations, and commercial organizations as well. So I think that's one area to look at.

Also, don't underestimate the people within your organization today who are interested and willing to be trained up and to come into these areas to work. I spoke to Michael Kanaan, who was chair of AI for the Air Force, and they actually did an internal survey and realized that they had a lot of folks who were engineers -- not necessarily computer engineers, but who had a lot of the skills, talents, and interest in this area. And making internal training available really paid dividends for them in terms of being able to build a skilled workforce over time. So those are some, maybe not traditional, ways of thinking about how we develop and build, or buy, what can be very scarce and expensive resources.

- [Tad] I would just jump in -- really good comments, Kimberly -- and, to help the questioner look for that talent: don't underestimate the talent you actually have in your own organization. Data is a very interesting resource, right? And you actually need a data maven in your organization who truly understands the whereabouts, the context, and the availability of the data. You can bring the most advanced software and data science in the world into your organization -- and we've certainly done that, right? We'll say, "Okay, go ahead and hop into our 44 petabytes of data." But they don't know the context of the data, the availability of the data, the adjudication of the data, the quality of the data, all of that. So you need somebody in your company who really understands the data in order to leverage it well. And then on the other side, you also need people who understand the power of the data and its applicability in your organization. Those two roles will come from within your organization. And then there are, so to speak, the toolmakers, right? The people who really understand the building of the models. You can either hire them in-house or -- as Kimberly said, and I'm sure Fernando will say -- you can outsource that. But they have to be people who can communicate well with your data mavens and with the end users. Otherwise, you have an aggregated team that does not create anything particularly effective for your organization.

- [Fernando] Can I quickly come in on that, Abbie? Is that all right?

- [Abbie] Yeah, before you do --

- [Fernando] Because that's from a different --

- [Abbie] Before you do, I just want to ask Tad if you could mute your microphone when you're not speaking; we're getting a little bit of feedback. Go ahead, Fernando.

- [Fernando] Thanks, Abbie. This is interesting for me. At Accenture we talk about making unicorns, not finding unicorns. The reason we do is because we've done a study and a thought process on the future of data science: Where is data science going to be in three years? Not for everybody -- for us, as it relates to us, very inward-looking. And the truth is, what came out of that was that if you think three years into the future, it's very unlikely that the world is going to be full of generic data scientists. Remember, data scientist is a profession. It's a profession the way Doctor Tad, right, is a doctor. This is his profession. You don't wake up in the morning and decide you're a data scientist. So the best thing we can do is create those careers in companies, so we can be proud of having a career track that creates these wonderful people who understand your industry problems. And, you know, to be honest, Tad, teaching hospitals are like the best example in the world of this. I mean, there's no better example. You almost need your own teaching hospital of AI within firms to create careers for people. And that's what's going to set you up, because if in three years we think we're just going to be able to pick up these unicorns who understand my industry and understand all this, we're going to be in big trouble. So if I were to give anybody advice -- and I did this for a bunch of CEOs in the last few weeks -- it is: create a career, create careers. If you create careers, you're winning, and then you don't need anybody. You don't need me, you don't need anybody. You just need to create that career.

- [Kimberly] And I'll extend that a little further. One of the things that we talk about -- and I think this applies whether you're building the internal curriculum and career paths or borrowing and leveraging external ones -- we talk a lot about STEM: science, technology, engineering, math. Absolutely critical and core to what we're talking about here. But you're also now starting to hear people talk about -- I think the new acronym I've seen is SHAPE -- which is about, you know, sociology, humanities, ethics, sustainability. That's the somewhat softer side of this -- which is actually the harder piece in terms of adoption -- really thinking more critically about how people want to interact with the solutions and what the impact of these solutions is: not just on users, in terms of how they work and how it impacts our jobs, but also on the populations they are being applied to. I think, Tad, you spoke to some examples of this in health care, where we really have to make sure that it's not just the users who understand the solution and are bought into how it's being applied, but also the populations we're using it on -- and hopefully for, and not against. And so as you look to develop these curricula and skills, make sure that you're not just building bastions of STEM that fail to also incorporate and develop the skills associated with SHAPE, or however you want to call that out.

- [Abbie] We have time for one or two more quick questions. This one is a little bit different. When you're thinking about business readiness, what do organizations have to do to understand and comply with the international, regional, and national regulations and regulatory bodies regarding the use of AI?

- [Fernando] I can have a go, if you want.

- [Kimberly] Yeah, please.

- [Fernando] I'm sure, Kim, it's an interesting one, because every company I know has a body that actually reads and understands the regulations for it, and every single one of those departments will tell you that nobody seems to really pay attention to those in depth. GDPR is a wonderful example. If you think about protecting your data in Europe, even further because of AI, you think, "Oh, my God! Have you read GDPR? There's no chance that we're going to misuse data -- no chance, no chance, not possible." So my suggestion is: look inside your company, and you will find somebody who cares deeply about how to use technology like this. Otherwise, the regulators around the world are doing a good job of trying to explain what their positions are, even if those positions are very young, still early in their thinking, and still moving. Go and look; it's very easy to find, even on Google, the regulators' pages on how they believe AI should be used. So either look inward and you'll get great advice, or look to the regulators -- they'll have great advice as well.

- [Kimberly] Yeah, and some of this already exists. Fernando referenced GDPR, which has provisions that also apply to AI, or what it calls automated decision systems and automated systems. And there are a lot of emerging regulations. So there are a lot of principles today; there are a lot of fundamental, foundational statements now about what we want to do. There aren't yet many requirements specific to AI outside of what have been traditional legal and regulatory concerns around things like data privacy and security, but they are coming. So the other thing to be aware of here is: don't assume. You have to start looking forward a little bit and try to project and predict where this is going to go, and build these things in advance. Some of the new EU regulations, for instance, will require companies that are using AI-driven, AI-enabled systems to actually audit those systems, perform some assessment and due diligence of potential harms, and inventory them and make that inventory available. None of these are set in stone today, but having folks, as Fernando said, who are passionate about it working in that space today -- whether it's compliance or ethics -- will be valuable. But also starting to look forward and plan proactively will be very important. I understand that a lot of the time, legal and regulatory compliance tends to come after harm has been done or, you know, ill has occurred. And so hopefully we can learn some lessons here and start to get, as organizations, in front of that, with or without the mandate of regulation -- which I think is required and is coming.

- [Abbie] Tad, I'm going to throw this one to you. What is a good practice to assess and monitor data-science maturity in your organization? I assume that's something that keeps changing. And do you have a process in place for making sure that, for the things you're trying to do, you have the maturity to be able to accomplish them?

- [Tad] Well, first let me take myself off mute. Wow! That is a really complicated question, right? Not easily answered. But, you know, working backwards: I think we said in our presentation that you want to make sure the solutions being built have pragmatic, real-world, measurable outcomes and improvements, right? Otherwise, it's all an academic exercise. So we do make sure that any application has actually had a positive impact on quality, on patient service, on cost, on provider sustainability. Provider sustainability is a significant issue for us right now, especially coming out of that COVID surge, which really put a great strain on our providers. So we look at those as key metrics for not only any type of AI, but almost any type of [inaudible] process that we put into place. Of course, the capability of the data scientists to build the models we're looking for is also important. But assuming we hired the right people, with the right qualifications, the other part that's really key is for those data scientists, in the health care environment, to truly understand the needs of the end users -- the physicians. We put them in front of physicians and physician groups, as you saw earlier. And if they can't communicate -- even the most brilliant data scientist or engineer who can build amazing things for you, but doesn't understand the needs or the language of the clinicians they're building these tools for, is not going to be as effective. So, in brief, those are a couple of the key things we look at. Obviously, there's a multitude of other metrics we need to look at, but I would say those are some of the key things.

- [Abbie] Great. And I wish we had time to get that answer from everybody, but we are just about out of time. So thank you, Kimberly, Fernando, and Tad, for this really interesting and useful talk, and thank you to the audience for your great questions. I wish we had time for more of them. And a final thank you to SAS for sponsoring this webinar. We hope you'll all join us for the next in the series, on April 22nd, where we'll cover how to ensure the technical readiness of your organization for AI. Thank you so much.
