Adopting AI: Ensuring Business Readiness
Summary
TLDR: The webinar 'Adopting AI: Ensuring Business Readiness' discusses the importance of artificial intelligence in organizational futures. It explores selecting the right problems to solve with AI, launching AI initiatives, and the need for oversight and understanding of AI risks. Industry experts from diverse fields, including healthcare, share insights on AI use cases, challenges, and strategies for successful AI integration, emphasizing the need for robust data infrastructure, a skilled workforce, and compliance with evolving regulations.
Takeaways
- 🤖 AI algorithms are abstract and probabilistic, making them complex and imprecise by nature.
- 🔍 'Explainable AI' techniques can shed light on AI algorithms' predictions but cannot determine fairness or justice.
- 🧠 AI systems are adaptive and learn from data, but lack creativity and may revert to known patterns in unforeseen circumstances.
- 📈 AI systems are typically deployed at scale, which can magnify small errors and require robust feedback loops.
- 👀 AI algorithms are impressionable and know only what they've been exposed to during training and production.
- 🚫 AI algorithms can inadvertently pick up and reinforce biases present in the training data.
- 🛠️ It's crucial to define operating conditions for AI systems and engineer safety controls into their processes.
- 💡 AI solutions must be adopted and scaled effectively, with a focus on value creation and strategic alignment.
- 🌐 AI and machine learning in healthcare hold great promise for predictive analytics, personalized care, and efficiency improvements.
- 🧬 Healthcare data is vast and complex, requiring advanced NLP technologies to unlock valuable insights.
- 🔄 The success of AI in healthcare relies on a collaborative effort between data scientists, clinicians, and end-users.
Q & A
What are the six characteristics of AI algorithms that require additional business due diligence?
-The six characteristics are: 1) AI algorithms are abstract with complex inner workings; 2) They are probabilistic systems with imprecise outputs; 3) They are adaptive and respond to changes in data input; 4) They are not creative and revert to known patterns in unforeseen circumstances; 5) They are typically deployed at scale, which can magnify small errors; 6) They are impressionable and learn from the data they are exposed to during training and production.
How can organizations ensure their AI solutions deliver the intended outcomes?
-Organizations can ensure intended outcomes by establishing robust feedback loops, clearly defining operating conditions, engineering safety controls, monitoring actual usage alongside intended use, and educating users on how the application is meant to be used.
What are some challenges in adopting AI solutions in healthcare?
-Challenges include technical issues such as interpretability, bias, and drift, as well as non-technical issues like clinical validation, workflow integration, privacy concerns, data governance, and deciding whether to build or buy AI solutions.
What is the importance of explainable AI in business applications?
-Explainable AI is crucial as it provides insights into which factors most influence an algorithm's predictions. This helps in making informed decisions about whether those factors are fair, just, and aligned with the business's strategic goals.
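As an illustration of the kind of insight explainable-AI techniques provide, here is a minimal sketch using scikit-learn's permutation importance on a synthetic model; the feature names and data are invented for the example, and permutation importance is just one of several explainability techniques.

```python
# Minimal sketch: which factors most influence a model's predictions?
# Feature names and data are hypothetical; permutation importance is
# one of several explainable-AI techniques.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["income", "tenure_months", "num_accounts", "region_code"]
X = rng.normal(size=(1000, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure the drop in held-out
# accuracy; a large drop marks an influential factor.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

As the answer above notes, the scores tell you which factors drive the predictions; whether relying on those factors is fair or just remains a human decision.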
How can businesses mitigate the inherent risks of AI?
-Businesses can mitigate risks by implementing robust safety controls, investing in explainable AI technologies, ensuring proper oversight, and maintaining a feedback loop to correct errors and improve the system over time.
What are some best practices for adopting AI solutions at scale?
-Best practices include starting with a clear understanding of business strategy, defining and finding value in AI applications, ensuring the basics are done brilliantly, focusing on people capability and culture, and proceeding responsibly while building trust.
What is the role of AI in enhancing healthcare delivery systems?
-AI can enhance healthcare delivery by providing predictive analytics for disease progression and readmission risks, personalized preventative care, natural language processing for unstructured data, and image recognition for various medical imaging needs.
How can companies balance the need for speed in AI adoption with the slower process of cultural change?
-Companies can balance speed with cultural change by prioritizing education and awareness about AI across the organization, leveraging existing talent, and fostering partnerships between industry and academia to develop a skilled workforce.
What are some strategies for finding the right AI talent?
-Strategies include looking within the organization for potential talent, partnering with academic institutions, investing in internal training programs, and considering a mix of STEM and SHAPE (social sciences, humanities, and the arts for people and the economy) skills.
How can organizations prepare for and comply with international, regional, and national regulations regarding AI?
-Organizations should have a dedicated team or individual to understand and monitor regulatory changes, engage with regulators, anticipate future compliance needs, and integrate legal and ethical considerations into AI development and deployment.
What are some indicators of data-science maturity within an organization?
-Indicators of data-science maturity include the ability to build and implement AI solutions that improve quality, service, cost, and provider sustainability, as well as the capacity to communicate effectively with end-users and stakeholders.
Outlines
🌟 Introduction and Setting the Stage for AI Readiness
The webinar begins with Abbie Lundberg introducing the topic of AI adoption in businesses and its growing importance. Abbie outlines the webinar's agenda, which includes discussions on selecting the right AI problems to solve, ensuring business readiness for AI initiatives, understanding AI risks, and sharing expertise across industries with a focus on healthcare. The panel includes Kimberly Nevala from SAS, Fernando Lucini from Accenture Applied Intelligence, and Dr. Tad Funahashi from Kaiser Permanente, each bringing their unique perspectives on AI's strategic value and practical implementation.
🤖 AI Algorithm Characteristics and Business Diligence
Kimberly Nevala starts the discussion by highlighting six key characteristics of AI algorithms that require additional business diligence. These include the abstract nature of AI logic, the probabilistic nature of AI predictions, the adaptability of AI systems to data inputs, the magnification of small errors at scale, the impressionability of AI systems to data, and the incautious nature of AI algorithms. She emphasizes the need for explainable AI, robust safety controls, and the importance of understanding and mitigating AI's inherent risks.
📈 Business Practices for AI Readiness and Scaling
Fernando Lucini presents insights from Accenture's research on AI and business readiness, highlighting the belief among executives that scaling AI is crucial for growth. He discusses the challenges in scaling AI and the risks of not leveraging AI effectively. Fernando categorizes companies into three groups based on their AI maturity: proof-of-concept factories, strategic scalers, and industrial growers. He emphasizes the importance of affordability and accessibility of AI, defining and finding value in AI initiatives, and the need for a clear strategy and multidisciplinary teams to ensure successful AI adoption and scaling.
🏥 Applying AI in Healthcare: Challenges and Opportunities
Dr. Tad Funahashi shares the practitioner's view on AI in healthcare, discussing the use of AI and machine learning in various clinical applications. He talks about the potential of AI in predictive analytics, preventative care, and unlocking valuable information from unstructured data. Tad also addresses the technical challenges in AI, such as interpretability, bias, outlier cases, and the drift of medical practices over time. He stresses the importance of clinical validation, workflow integration, privacy, data governance, and the decision-making process in implementing AI solutions in healthcare.
🛠️ Implementation and Execution of AI in Healthcare
Tad continues the discussion on the hard parts of implementing AI in healthcare, focusing on clinical validation, workflow integration, privacy, data governance, and the build vs. buy dilemma. He emphasizes the need for a collaborative approach between data scientists and medical professionals to ensure the AI models are accurate, timely, and actionable. Tad also discusses the importance of protecting sensitive healthcare information and the challenges of integrating AI models into existing healthcare systems while complying with regulatory requirements.
🌐 Finding and Developing AI Talent
The panelists discuss the challenges of finding AI talent and suggest looking within the organization and partnering with academia to cultivate AI skills. They emphasize the importance of education and creating career paths within the company to develop a skilled workforce in AI. Fernando suggests creating unicorns within the company rather than finding them and stresses the need for a diverse set of skills, including those that understand the business context and can communicate effectively with end-users.
📜 Navigating AI Regulations and Compliance
The panelists address the need for organizations to understand and comply with international, regional, and national regulations regarding AI. They suggest looking inward to find individuals passionate about regulatory compliance and ethics, and outward to regulatory bodies for guidance. Kimberly and Fernando highlight the importance of being proactive and planning for future regulations, such as those requiring audits and assessments of AI systems for potential harms.
📊 Assessing and Monitoring Data Science Maturity
Tad shares his approach to assessing and monitoring the maturity of data science within an organization, emphasizing the importance of measurable outcomes and improvements in quality, patient service, and cost. He discusses the need for data scientists to understand the needs of end-users and the importance of communication between engineers and clinicians. Tad suggests that while there are many metrics to consider, focusing on the impact on healthcare providers and patients is key.
Keywords
💡AI algorithms
💡Business readiness
💡Data governance
💡Health care delivery system
💡Explainable AI
💡Adaptive systems
💡Artificial intelligence and digital transformation
💡Machine learning engineering
💡Bias in AI
💡Strategic value
Highlights
AI will be crucial for most organizations' futures, and understanding how to pick the right problems to solve with AI is essential.
Launching an AI initiative requires business readiness, including having a basic understanding of AI among users and appropriate oversight.
AI algorithms are abstract and probabilistic, with outputs that are predictions and inherently imprecise.
Explainable AI techniques can provide information on influential factors in algorithms' predictions but cannot determine fairness or justice.
AI systems are adaptive and respond to changes in data, but they are not creative and may revert to previously known patterns in unanticipated changes.
AI systems are typically deployed at scale, meaning small errors can magnify and become self-reinforcing without proper feedback loops.
AI algorithms are impressionable and their worldview is based solely on the data they are exposed to during training and production.
AI algorithms can make mistakes and are incautious; they require clearly defined operating conditions and robust safety controls.
Users' understanding of AI applications is critical for AI solutions to deliver intended outcomes, and monitoring their actual use is essential.
AI solutions are becoming more affordable and accessible, with tools like OpenAI's GPT-3 available as APIs for natural language processing.
Defining and finding value in AI initiatives is crucial, with a focus on identifying problems that can bring 10 times the value or savings.
Scaling value in AI requires doing the basics brilliantly, such as having strong data infrastructure and methodology.
AI is everyone's problem in a firm, and successful organizations ensure the right talent mix and alignment between strategy setters and workers.
Proceeding responsibly and building trust in AI involves creating a responsibility framework and ensuring transparency in AI's use.
Health care has a great deal of enthusiasm for AI and machine learning, with vast amounts of electronic health records offering potential for improved predictions and personalized care.
Technical challenges in AI for health care include interpretability, bias, outlier detection, and keeping up with changes in medical practice.
Clinical validation, workflow integration, privacy, data governance, and deciding whether to build or buy AI solutions are critical for successful implementation.
AI and machine learning in health care are part of a larger effort to increase operational efficiency and quality, requiring timely data access and clinical interventions.
Organizations must ensure their AI solutions comply with international, regional, and national regulations, and staying proactive in understanding and adhering to these regulations is key.
Assessing and monitoring data-science maturity in an organization involves measuring real-world outcomes, the capability of data scientists, and their understanding of end users' needs.
Transcripts
- [Abbie] Hello, and welcome to our webinar.
"Adopting AI: Ensuring Business Readiness."
I'm Abbie Lundberg. I'm a business technology researcher
and writer and president of Lundberg Media.
I'll be moderating today's discussion.
One way or another, artificial intelligence
will be important to most organizations' futures.
In Part One of this series,
we explored how to pick the right problems to solve with AI.
In the second installment, we'll examine
what it takes to launch an AI initiative
from a business readiness standpoint.
This includes making sure critical
business enablers are in place,
including a basic level of understanding
of AI among users and appropriate oversight
of AI initiatives and business processes.
It also requires understanding
and mitigating the inherent risks of AI.
Our speakers today will discuss these issues and more.
They'll help you determine
if your organization is ready for AI,
sharing their expertise across a range of industries
with a special deep dive into AI use cases and
challenges in the sector that affects us all: health care.
Kimberly Nevala will start things off.
Kimberly is a strategic advisor at SAS
and an expert in the areas of advanced analytics,
information governance, and data-driven culture.
She helps clients understand
both the strategic value and the practical realities
of artificial intelligence and digital transformation.
Kimberly will be followed by Fernando Lucini,
managing director and global lead for data science
and machine learning engineering
at Accenture Applied Intelligence.
Fernando has spent more than 20 years
creating technologies to automate
and understand text, speech, and video data
and integrating these technologies
into business solutions for Fortune 100 companies
across a wide range of industries.
Dr. Tad Funahashi will provide the practitioner's view.
Tad is a practicing orthopedic surgeon
and the chief innovation officer
for Kaiser Permanente, Southern California.
He leads a team of physicians, consultants,
designers, data scientists,
and engineers who work together
across Kaiser Permanente to envision
and build the health-care delivery system of the future.
Welcome to you all.
And Kimberly, I'll turn it over to you.
- [Kimberly] Thank you, Abbie. All right.
So I'm going to kick things off today
by quickly reviewing six characteristics
of AI algorithms that require us
to apply additional business due diligence
as we design, deploy, and maintain
these systems in the world today.
So the first -- and you're probably
well aware of this -- is
that AI algorithms are abstract.
Unlike rule-based systems,
in which it's fairly easy to follow the logic
to get from A to Z, the inner workings
of AI algorithms can be almost incomprehensibly complex.
And these are probabilistic systems.
So their outputs are predictions
which are imprecise by nature.
Now, techniques to be able to shine a light
on the inner workings of AI algorithms, known
as "explainable AI," are rapidly evolving.
but it's important to note that
while these techniques can give us information
about which factors most influence
in algorithms' predictions,
they cannot make the decision for us
whether those factors are, in fact,
right, fair, or just. In addition --
and this is important to know --
because AI systems are adaptive,
they respond to changes
in the data input they receive.
In other words: They've learned --
and this is both the good news
and the bad news. The solutions are very smart,
but they are not creative.
So if the behaviors or the environment
change in unanticipated ways,
the solution is not going to come up
with a novel offering or response.
It's going to revert to the best-fitting
previously known pattern.
And this is why we saw so many analytics
and AI models initially fail
when COVID came on the scene.
It's also important to note
that AI systems are typically deployed at scale.
And what this means is
that small errors can quickly become magnified,
and in fact, become self-reinforcing over time.
If we don't have really good feedback loops
that tell the algorithm when it's making the right choices
and the wrong or a suboptimal choice,
it is going to assume that the choices
that it makes are correct.
And that will then inform its future choices
and so on and so forth.
And you see where this leads;
you can see this in your day-to-day life
and your social-media feed --
for instance, when they seem like
they very quickly become hyper-focused
on a single theme or topic.
So it's important, then, as we think
about that, to know that these systems
are also highly impressionable.
And what I mean by that
is that they only know what they see.
So the data that an algorithm is exposed to
while it's in training,
and while it's in production, is its entire worldview.
It has no insight. It's completely blind to data
or factors not reflected in that information.
And it is really, really good at picking out,
sometimes just strange correlations
or really thin correlations in that data,
even if those correlations are spurious
or they don't reflect our desired future state.
And this is where we see things
like hiring algorithms that become biased against women
even though gender isn't actually an explicit data point.
Now, in addition to being impressionable,
AI algorithms are a little like teenagers --
and I don't want to anthropomorphize here --
but they're incautious.
They're going to make mistakes,
but because they're not self-aware,
again, they're not going to know unless we tell them.
They also don't apply any level
of independent discretion or tact.
So it's critically important
that we clearly define the operating conditions
for these systems and then engineer
robust safety controls and resiliency into these processes.
And this is particularly important
because AI is increasingly just interwoven
into the fabric of our core business processes.
And this means that it becomes increasingly difficult
for us to always fully understand the downstream impacts
and implications of these systems.
It's also easy
to become over-reliant or overly trusting
in the information being provided.
So if your self-parking car has never made a mistake,
you let your guard down,
even though it may...make a mistake in the future.
And as soon as you let your guard down,
the engagement model that that system
was designed to operate in has changed.
So ensuring that our users not only understand
how an application is intended to be used,
but also watching how they actually use it
is mission-critical to making sure
that AI solutions deliver their intended outcomes.
So there you go: Six characteristics of AI systems
that require increased business due diligence.
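A minimal sketch of the kind of engineered safety control Kimberly describes: a wrapper that enforces defined operating conditions and routes low-confidence predictions to human review instead of acting automatically. The thresholds, ranges, and decision labels are all hypothetical.

```python
# Hypothetical guardrail around a model score: enforce the operating
# conditions the system was designed for, and never act automatically
# on low-confidence predictions. All thresholds are illustrative.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.80            # illustrative confidence threshold
AMOUNT_RANGE = (0.0, 50_000.0)     # range the model saw in training

@dataclass
class Decision:
    action: str  # "auto_approve" | "auto_decline" | "human_review"
    reason: str

def guarded_decision(amount: float, approve_score: float) -> Decision:
    # Operating condition: refuse to act outside the trained data range.
    if not AMOUNT_RANGE[0] <= amount <= AMOUNT_RANGE[1]:
        return Decision("human_review", "outside defined operating conditions")
    # Safety control: act only when the model is confident either way.
    if approve_score >= CONFIDENCE_FLOOR:
        return Decision("auto_approve", "confident approval")
    if approve_score <= 1 - CONFIDENCE_FLOOR:
        return Decision("auto_decline", "confident decline")
    return Decision("human_review", "confidence below floor")

print(guarded_decision(12_000.0, 0.55))   # -> human_review
print(guarded_decision(75_000.0, 0.95))   # -> human_review (out of range)
```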
Now, with that said, I want to turn things
over to Fernando to talk about,
I believe, five best practices
for ensuring your AI solutions are adopted
at scale. Fernando?
- [Fernando] Lovely, thank you very much for that.
I wanted to give you a perspective
as I'm part of Accenture,
which, of course, is a service company.
So we're great observers --
as we are having to live with these things,
we're also great observers.
So let me give you my five lessons
on the good practices we see people have
when they have that
business readiness, right?
But first, if somebody can move me
to the first slide -- we're going to "top
and tail" with this particular slide,
and here's what I want to tell you.
So you understand that when we think about AI,
we have to think about everything that goes with it.
From the beginning to the end of the journey,
it never is just about the model.
It never is just about, you know,
the data, it's always about that journey
all the way from idea
to how you make this work in production.
So keep that in mind; we'll come back to this
but I just want you to --
and you'll have this slide later
so you can do your mental modeling --
but it's important to think of it as a complete problem.
Otherwise, business readiness becomes incomplete
and [inaudible].
So for the first -- first, I wanted to give you a couple
of bits of data from our latest research.
We've done research for around 1,500 companies,
C-level executives, asking them
about AI and their business-readiness.
And a couple of things stood out for me.
First of all, you see there's some interesting things:
that 84% of the executives that we interviewed
believed that they wouldn't achieve their growth objectives
unless they scale AI.
Okay, this was not a leading question.
So it's interesting that it comes like that. Second fact:
that 76% thought they will struggle to scale.
Okay, so "I need to scale to be successful,
but I'm going to struggle to scale" -- that's interesting.
And the last one, which is that 75%
believed they would be risking going out
of business if they didn't, if they didn't get the value.
By the way, to me, this tells me a bunch of things
as a practitioner and as a chief data scientist,
which is that there's a lot of work we have to do
in educating our executives,
because, I mean,
these numbers are almost scare-level numbers.
The other thing is, there's a lot of work to do
across the organization,
so they understand, you know,
how do we get to these, to this value?
But I thought it was interesting for you to see this.
And, Kimberly, you can move me to the next,
the second interesting piece that came out
of this research, which I hope
helps your thinking, is this:
We can really clearly see these customers
in three categories. The category of, the,
where most customers were,
is this first proof-of-concept factory category.
And I've given you the characteristics of each there --
and this is per the research,
as opposed to just my point of view --
where 80 to 85% were effectively
not really seeing the value of the work in AI,
because it was stuck in the proof of concept,
never achieving production.
And there were a bunch of reasons why --
that misalignment with the CEO;
you know, having a lot of labs,
but not a lot of ability
to do the route to [inaudible]; and so on and so forth.
Interestingly, the second category
was the "strategic scalers," where only 10
to 15% sat. They had figured it out;
they had great connection to the strategy,
to the CEO, through to the people
who are actually delivering on the ground.
They had a clear strategy
of what was important -- part
of the first session in this series;
we talked about that. With the multidisciplinary teams,
all the characteristics you see there.
And, finally, the third category,
which was the "industrial growers," a very small amount.
And so those folks, these guys were beyond readiness.
These were now real, you know, fast mode, right?
I always say
AI is a little bit like building a car.
You've got to get the wheels on quick
so you can do miles, right?
So these guys had, you know,
the best car, they were rolling out.
They had a lot of miles behind them.
So I think the challenge for us
in business readiness is that line in between --
between the 85%, that Category One and that Category Two.
So let's talk about these five lessons
as they relate to this.
The first lesson is that
AI is affordable and easily accessible.
And for this, by the way --
in case you haven't read the book,
I quote Dr. Kai-Fu Lee's book about
AI in China. It's an incredible book.
Please read it. But in the first few chapters
he describes very clearly that China's doing great
in terms of using AI because they are using --
not necessarily because they're doing
fundamental research in AI,
but because they are great users of that technology.
And we can talk about, we'll talk about
the moral and the other responsible type
of positions later on this.
But as it relates to this -- and Kimberly,
if you can move into the next one --
an example of this
is a little fun thing
from OpenAI and their product called GPT-3,
which, in case you guys haven't played around with it,
is effectively a great breakthrough
in natural language processing.
But the point is that, you know,
it's something that you can use as an API.
It's readily available.
It can interpret and understand texts
in ways that it can even do simple arithmetic.
And those examples put on the screen are
real examples where I can ask it to complete a sequence --
one, three, five, seven -- and it will finish it,
or a Fibonacci sequence --
one, one, two, three, and so on -- and it will finish it.
And it will do this
on the basis of its understanding of text, pure text.
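For illustration, here is a minimal sketch of that kind of API call, written against the legacy openai Python client used with GPT-3 around the time of the webinar; the engine name, parameters, and output shown are illustrative assumptions.

```python
# Minimal sketch of using GPT-3 as an API for text completion, per
# Fernando's sequence example. Assumes the legacy openai Python client
# and a valid API key; engine name and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    engine="davinci",                          # a GPT-3 base model
    prompt="Continue the sequence: 1, 3, 5, 7,",
    max_tokens=8,
    temperature=0,                             # prefer the most likely completion
)
print(response.choices[0].text)                # e.g. " 9, 11, 13"
```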
So we're in a world where we have all
of these tools available, like GPT-3,
like the APIs for [inaudible].
There is no single data scientist
in the world that builds or writes, line
by line, a model for a regression anymore.
They take it down from the internet and
it's done. It's very important,
if we're talking business readiness,
that we have the muscle of use;
accessibility is important.
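Fernando's point about nobody writing a regression line by line anymore is easy to make concrete: the whole model is a few lines of scikit-learn. The data here is synthetic and purely illustrative.

```python
# A complete off-the-shelf regression: nobody writes this line by line
# anymore. Synthetic data, purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.5, size=100)

model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)   # recovers roughly [3.0] and 2.0
print(model.predict([[5.0]]))          # roughly [17.0]
```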
Second one -- thank you, Kimberly -- is defining
and finding value. People that are doing
very well are very good at defining and finding value.
And let me give you a couple of examples.
Kimberly, if you can move me to the first one,
the first, the next slide: The research tells us
that the people that are doing great defining --
so this...by the way,
by "defining" and "finding," I mean,
"What is it that we need to do?
What's important, what's meaningful?
What's going to change the strategy?
What's going to be material to me, executing the strategy
of the company, that AI happens
to accelerate?"
We found that the people
that were doing it very well
tended to have a great return.
And the way I tend to use this,
as a rule of thumb that I want you to think of --
if you move into the next, Kimberly -- is this idea
of 10 times. It's critical
that as you approach problems in AI,
you think about this: On average,
a bank may have three, four, 500 AI
initiatives that they want to do that have AI in them.
Telecom companies might have a thousand
in that long, long list of strategic things
they want to do, where AI is a key part.
Let's think 10 times. What is the thing
that's going to bring us 10 times the value, 10 times the saving?
Not a marginal benefit or a marginal saving.
Why? Because very rarely do we see people
actually approach problems little by little
and actually get there.
Given that there's so much technology available,
it's almost better to take a problem
that actually is more material and can actually
be real, a real part of your strategy
and that in itself, if it washes its own face,
as we say in the UK, if it has a good business case,
it will bring all the value behind.
So 10 times, not 10%, keep that.
Thank you, Kimberly.
Yes, the next slide.
So the next lesson is this scaling of value.
So if we look at the first one, we know we have to be,
we have to -- it's affordable, easily accessible.
So let's get using. No. 2 is
where we exercise the muscle
of defining what's valuable
and making sure we're doing that.
Third one is: Just do more of it.
How do we do more of this?
And this is a complicated one.
If you move onto the next slide, Kimberly, thank you.
There's many lessons here.
I'm going to give you two
or three, so you can have context.
No. 1 is this idea of "do the basics brilliantly."
It's very, very rare that you can
build a racing car or you can build
any kind of advanced technology without having the basics --
well, the wheels turning correctly,
the brakes working, all these things
that need to be true.
So for AI to work very well
and scale very well, you have to set yourself up
with great data infrastructure,
a great data science methodology.
That is, one that is geared towards getting you
from the beginning to the end --
a great route that allows you
to take that project and make it into production.
A great focus on the kind of technology
you need to scale certain things.
So think of the basics, and within this same sphere,
without moving from this slide,
you also have to think about things that sound
really simple
but are quite complicated at scale --
governance
and other things that are maybe
outside the normal realm of doing this.
So do the basics brilliantly -- buy, build,
borrow strategies.
So all the kind of things that allow you
to have a great base.
So -- and, Kimberly, if you can move me,
if you can move me to the next, please.
So, next lesson is this idea of people, capability,
and culture, which sounds like,
you know, the most obvious thing, right?
But the truth is, if you move onto the next one,
the truth is, we sometimes, we many times, get this wrong.
So what do we mean by this?
We have to bring people along for the journey.
AI is everybody's problem in a firm.
So the firms that do very well
tend to have the following characteristics.
They tend to have the right talent mix.
So I always joke that data scientists don't build products.
It's software engineers,
it's behavioral engineers --
an entire family of people building products.
So let's have that, let's have those families
of people doing our objectives, right?
So look at how you are doing this.
Your organization: Make sure
that the distance between the C-suite
and the people that are setting up
the strategy for the company and the people
that are doing the work is a small distance,
that it's very well-aligned.
We see that as well.
And I'll say my favorite, you know,
the one we use: buy before you build.
So don't try to, don't try to build everything yourself.
And if you look at some of the stats,
which were interesting, I'll give you one of those.
One of the ones that I like
is the share of employees
that fully understand AI at scale:
there's quite a large distance
between the POC group and the scalers, right?
You can observe that
the more we educate the firm
about its role in the AI lifecycle,
the more we see that we become
a strategic scaler, which is the place you want to be,
because that's where
we're actually getting things done
and scaling them up.
So people can do better.
And the last one, which is this idea
of proceed responsibly and build trust,
I mean, this is an incredibly important topic.
I'm sure other people in this room
will agree. But -- if you move me to the next one,
Kimberly, just to introduce a few concepts --
I think what's important is several things:
We have to create
a responsibility framework that adapts
to our company, our citizens, our society,
and we have to be very open and transparent about what this is.
This is Category One.
Let's make sure we're clear
about what it is that we're going to do
with this wonderful technology.
Secondly, we have to put the technological aspects
in place that allow us to have the guardrails to act safely.
There's going to be a lot of people
that are going to be working on AI
in a single company or a single government.
So to think that one individual
or many working together
are always going to get it right is not really fair.
So we really have to think
about the professionalization of AI
and how that professionalization
leads to doing these things responsibly.
And by the way, [inaudible]
and I mean, everybody's well understood,
you know, guard rails, methods,
all of this technical implementations
which connect with some of the numbers
you see there. I specifically like the 60%
of respondents reported that they still wanted
to have a manual override.
Of course you would. And again,
we're not trying to make that number change,
but we're trying, by building responsibility,
to understand the role of everybody.
So we can all be comfortable
about whatever the number is,
that it's actually doing the right thing
for our company, our government, our citizenship,
and the world at large --
which -- Kimberly, if you can move me to the last one --
which, funnily enough, as you can see,
it brings us back to the roadmap.
So it becomes clear that if you,
as you're going through this roadmap,
you're going from ideation to [questions about]:
"Do I have the value on the strategy?
Do I have the right people?
You know, am I setting up myself correctly
in governance to do this?
And am I actually then getting the value,
realizing the value, throughout this?"
These five lessons will bear you out,
which is: Get using. Make sure
you're doing the valuable things,
the things that matter.
Make sure you're setting yourself up
to do more of those things.
And make sure your people
are coming with you on the journey.
And make sure you're watching for whether
it's within the boundaries,
that you've set yourself up morally
and ethically to continue that work.
And then within that, I will give you the challenge
before I hand off to Tad to take your own company
or your own government
through this framework. Start it,
go from start and see if you actually have, today,
the ability to go uninterrupted
to the end of that journey
and see where the journey is interrupted.
Think about why that's interrupted
and whether you can actually
make that continuous to give you that scale.
Kimberly, I think at this point,
I'm handing over to my friend, Tad.
- [Tad] All right, Kimberly. A really great framework
in terms of how to apply AI in the business sense.
What I thought we would do today is to take some
of those learnings that we've applied
to health care and share with you
some of the lessons that we've learned.
So the next slide, please.
So we'll touch on a little bit about
the use of AI and machine learning
in the area of health care.
We'll give you some clinical examples
where we've applied it,
and the challenges that we found in that,
and then some guides, which I think Kimberly
and Fernando have done a really good job
of already, in terms of addressing the challenges
as certainly applies to health care.
Next slide. So there's no doubt
that there's a great deal of enthusiasm
in the use of AI and machine learning in health care.
For example, in our institution,
we've collected electronic records
for almost two decades, amounting
to practically 44 petabytes of health care data.
Just to give you a relative scale of what that is,
it would be roughly a physician that's been in practice
for 170,000 to 180,000 years with perfect recollection
and near-perfect pattern recognition.
So if you could have such a doctor serving you
or taking care of you, there'd be a great advantage, right?
So we want to be able to [inaudible],
make better predictions,
treat illnesses more precisely,
and deliver a much more personalized type
of care for you,
if we could really unlock that data.
And we're also beginning
to look at data in the uses of machine learning
and AI in wearables. Next slide.
So the main areas that we're really looking at,
for applications of AI in health care
in our institution currently,
are: One, predictive analytics --
with all of this experience in terms
of what's happened thus far, we can tap into that
and guess what might happen again in the future.
So looking at things like who's most likely
to progress in the disease,
who is most likely to be readmitted
after a hospitalization or for that matter,
who is most likely to be admitted
even if they haven't been admitted before.
'Cause I think very few of you out there think
to yourselves, "I have a week off next week.
I think I'll go to the hospital," right?
So preventing you
from having to go to the hospital
is something that we believe is much more
of a desired outcome for all of our members,
and personalized risk stratification
is a key part of our use of data.
And then, evidently, it's well known
for preventative care.
Thus far, you could say that we've used
crude instruments such as gender, age,
ethnicity, and whatnot to try to look at
people that are at higher risk for certain diseases.
But if you could leverage that data
and really tailor the prevention
to an even tighter group, you can really focus
on those groups and really have much better health outcomes.
Now, a lot of our health records are,
as you might imagine, locked up
in our unstructured data that is in the notes of doctors,
in the notes of nurses and whatnot.
There's always been this joke
about doctors' [hand]writing, but it used
to not be electronic.
And nobody could read that writing, right?
Very hard to unlock. Well, even in the computer,
if it's natural language, if it's unstructured,
we need natural language processing
and whatnot to really extract that information out of there.
And we're really beginning to develop
some advanced NLP technologies
to be able to really unlock the value
in the critical knowledge
that's locked in the unstructured portions
of our electronic records.
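As a toy illustration of pulling structure out of an unstructured note, here is a minimal spaCy sketch; the note text is invented, and a real clinical pipeline would use a domain-specific medical model rather than the small general-purpose one loaded here.

```python
# Toy sketch: extracting structure from an unstructured clinical note
# with spaCy. The note is invented; production clinical NLP would use
# a domain-specific model, not this small general-purpose one.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

note = ("Patient seen on March 2, 2021. History of type 2 diabetes. "
        "Started metformin 500 mg twice daily; follow up in 6 weeks.")

doc = nlp(note)
for ent in doc.ents:
    print(ent.text, "->", ent.label_)   # dates, quantities, etc.

# Even simple token patterns can surface dosages from free text.
for token in doc[:-1]:
    if token.like_num and doc[token.i + 1].text == "mg":
        print("dose:", token.text, "mg")
```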
And, finally, I think all of you have heard
about image recognition use in health care,
such as retinal imaging,
dermatologic imaging, pathology imaging,
and then radiology imaging,
and we are also looking at all of those areas.
Next slide. So first, we need to talk
a little bit about the technical challenges,
so to speak, of doing modeling
for AI and machine learning in health.
And I think some of these were covered previously very well
by our previous speakers: interpretability
or explainability, bias, outlier or edge cases,
and drift. Let's look a little bit more carefully
at each one of these. Next slide.
So in terms of interpretability:
Doctors and healthcare workers are famous
for not believing about anything
that's sent to them, right?
So there's a new study that comes out
and it's a definitive, randomized, double-blinded study
and they go, "Ha! Let me look
at that to see if it's really right."
And, in fact, that's one
of the reasons that any kind of new discovery in health care
takes about 17 years before it becomes the norm.
Well, imagine if you gave somebody a blinding insight
about what's about to happen,
but it came out of a black box, so doctors...
So we have to figure out a way
to make the data that comes out,
a little bit more interpretable,
and sometimes, interestingly, rather
than creating tremendously complex models,
we sometimes want to simplify the model,
so it's a little bit more explainable
and interpretable to the physicians. Now bias
is something that's innate in health care, right?
Most bias can be good, but some bias can be bad.
That is, we know that certain populations
are higher-risk; that's innate bias
in the statistics, right?
But if the bias is nefarious --
meaning that it doesn't improve the outcome
of those people -- then it's a negative type of bias.
And it could be anything from ethnic groups
and socioeconomic groups and whatnot.
We need to carefully analyze
that to make sure that the historic practices
of the past that had innate biases
that we didn't detect aren't replicated
in the future. Outlier cases, in fact,
are the ones that we stay up at night [about].
The reason you want to come see a doctor, instead
of Doctor Google, is that most of the time,
you're going to be fine with whatever it is
that's common. However, there are certain cases
that are atypical, the red herrings.
And when somebody is in my office
and I'm examining them,
I'm looking not only at what they most likely have,
but what they could possibly have.
And, unfortunately, right now
AI is not that good at detecting those.
It's more of a statistical norm, rather
than looking for outliers.
And as you know, health care practice
changes over the course of time.
We don't practice the way we did a hundred years ago,
50 years ago, or for that matter, even five, 10 years ago.
But remember: AI looks at the empiric data
that is available from our past medical records
and replicates that. I think
Kimberly talked about this earlier.
And so as practice begins to drift,
we need to make sure that our recommendations
that come out of the AI systems
are also updated and continuously improved.
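A minimal sketch of one way to watch for the drift Tad describes: compare the model's live score distribution against its training-time baseline with a population stability index (PSI). The data and the alert threshold are illustrative.

```python
# Minimal drift check: compare live prediction scores against the
# training-time baseline with a Population Stability Index (PSI).
# Data and the 0.2 alert threshold are illustrative conventions.
import numpy as np

def psi(baseline, current, bins=10):
    """PSI between two score samples; > 0.2 is a common 'investigate' level."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    b = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c = np.histogram(current, bins=edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, size=5000)   # scores at validation time
live_scores = rng.beta(3, 4, size=5000)    # practice has drifted since

value = psi(train_scores, live_scores)
print(f"PSI = {value:.3f}", "-> review/retrain" if value > 0.2 else "-> OK")
```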
Next slide. Now my data scientists
and engineers tell me all those problems
that we just talked about are the easy part.
The hard part, in actuality, is
in the implementation and execution.
That is clinical validation,
workflow integration, privacy, data governance
and IP, and a decision
as to whether to build or buy to create these.
Let's take a little bit deeper
look at these, in terms of clinical validation.
We need to make sure that the data scientists
and the doctors work hand in hand closely
so that the predictions that the machine is making
in fact make sense --
that's No. 1. And No. 2, that they're accurate, right?
So that it's not making wrong predictions.
And then, ultimately in that process,
the physicians begin to develop a trust
in that data that's coming. Ultimately AI,
or artificial intelligence,
in health care will most likely
become augmented intelligence --
that is, a pinging of things
that we should be thinking about,
or a broadening of our diagnosis
in clinical arenas, that AI can help us do.
Perhaps one of the most important
is workflow integration.
If the data isn't given to us timely,
if the data isn't relevant,
if it's not actionable or interpretable,
and we don't trust the data, and it's not a part
of our decision-making process
that we typically do in taking care of our patients,
all the fancy arithmetic
and the predictions are not that helpful.
But we need to make sure
that the predictions of the model are presented
to the physicians in a usable manner
that's timely, that's meaningful.
It goes without saying that health care information
is incredibly sensitive.
So privacy issues are critical for us.
If we ever do share data with external companies --
and just to get into the "build or buy" --
there's the protection of that data:
what happens to the data after it leaves,
so to speak, our walls? The governance
of that data, ultimately even the intellectual property
that's associated with it, can really be challenges
in terms of how best to utilize that data,
because sometimes there's great engineering resources
outside, so to speak, our walls of Kaiser Permanente.
And yet, how do we leverage that expertise
without putting any of the privacy
or ownership issues at risk
in terms of build or buy?
There are models that are built
for a variety of hospital systems and
health care systems. And yet geography,
financial incentives, a variety of other things
can have slightly different impact on the data.
And so the models that are built
on that data are not necessarily directly transferable.
So we need to pay attention to that.
And, yeah, if you say we want to build it ourselves,
customized for our database,
it requires tremendous CPU, GPU, and engineers' time,
and it can be extremely expensive to build --
and, obviously, to continue to keep updated.
So that's another area
that we need to keep attention to. Next slide.
And so if we really began to think
about the application of AI
or augmented intelligence for health care
and machine learning, it really is only one part
of the creation towards increased operational efficiency
and quality that we look at in health care.
And so it's a tool that can be potentially helpful,
but even an amazingly accurate tool -- one that's 99% accurate --
isn't really that helpful
if you don't have timely access to the data,
or if there are no clinical interventions
that make any difference.
If you know when it's going to rain,
but you don't have an umbrella,
that doesn't help you, right?
It's the same thing in health care:
We know something's going to happen
but there's nothing we can do about it.
It can be sometimes frustrating
and we need to make sure
that when we do get the data insights
that there are supporting processes
and workflows that allow the intervention to happen.
We need to make sure that there are trained resources
to make that happen.
And then there's appropriate infrastructure
for not only scaling it, but also in terms
of making sure that the model is interoperable
across our entire enterprise.
And, for example, we do that:
We want to make sure we have metrics that look
at both the performance of it,
from making sure that there's no bias,
that the quality is there,
that, in fact, it has good impact on our costs,
and ultimately that it's serving our members
in a much more efficient manner.
The next slide. So if we were to put this all together,
we kind of take a step-wise approach to that.
So as we look at applying AI,
or any large models like this -- First,
we want to make sure that we're not doing it
just because it's fun, but it's actually
solving a specific clinical problem
that we typically couldn't solve as well,
or that could begin, like Fernando said,
to really create a different type of model for health care.
And once we do that, we want to make sure
that the engineers and the physicians
work side by side all along,
that the physicians have faith
in what the engineers and the models are outputting,
and that the engineers truly understand
what they're solving for.
Of course, since we get so much data,
we need to make sure that the data quality is good,
that we adjudicate data that may be coming
from slightly different sources,
that the quality of the data is at the highest level
that it can be. And then, once we build
that model -- steps one, two, three
are all about the model building --
we want the outputs to be objective, right?
And that unwanted bias is not programmed into that model.
And then steps four, five,
and six are really about execution.
We need to make sure
that once we do put that in there,
that there is value in terms of quality outcomes,
service outcomes, cost, whatever it might be,
that it is in fact something
that is a value to the organization.
And as we roll out these things,
we need to make sure that its performance,
even though it may have been good to begin with,
isn't beginning to decay over the course of time.
And I think the last goes without saying,
and certainly in health care, regulatory requirements
and oversight of all of these models
are absolutely essential to make sure
that we are compliant.
So with that let me now turn it back over
to Kimberly and Abbie, I believe,
who're going to jump us into the Q and A section of this.
Thank you, everybody, for your attention
and I look forward to our discussion.
- [Abbie] Great. Thank you for those great presentations.
Really useful information.
And we do have some great questions coming
in from the audience, and we will continue
to take your questions for the remainder of the hour
so you can keep submitting those.
To kick things off,
we've gotten questions
around a couple of different areas
that I wanted to start with: the state of AI right now.
And so this question was how mature
are AI solutions now, can you find one
for any application, and how much
customization or tailoring is required?
So, you know, can you go out and just buy something
for a problem that you're trying to solve?
Or how much do you really need
to be customizing and doing your own work on that
or hiring somebody to do your work on that?
And I guess that's a question
for all of you. Kimberly, you want to start?
- [Kimberly] Yeah, absolutely, and I think Fernando
probably has some good experience in this area.
Well, I think that in a lot of ways,
AI is very ubiquitous, in that
a lot of the solutions that you buy
at the shop today may, in a lot of cases,
have these advanced analytics capabilities
built into them. But as Tad said,
I think in a lot of cases,
what you're looking to do is to buy some
of the core capabilities,
whether that is computer vision capabilities,
natural language processing,
and then customize those
to the specific use case that is there.
So today, I think with build versus buy,
there's probably still a lot more customization
that happens even when we're buying a solution
that's packaged with artificial intelligence.
It's really important in those cases
that you ensure the environment and the operating...
whether that's sort of the customer,
the way it interacts with your customers
or your consumers, your users,
the operating environment that you are deploying into
actually matches the environment it was trained in --
i.e., that it reflects the data that it was trained on.
And there's really no sort of fast track
through that part of the process,
I don't believe. But in a lot of cases,
I think most organizations find that,
especially around core business processes,
whether that's in sales or customer experience,
in these areas there are packaged solutions
that you can begin with, validate,
and customize to your needs.
That being said, there are also a lot
of off-the-shelf packages now --
the data science packages.
So you can get the math,
if you will: You can find
algorithms that come out of the box,
that you can apply to your data,
and that come packaged with interpretability
and explainability and some of these components.
The trick again is really matching those solutions,
so that you're hitting the right problem
with the right hammer.
And that's probably the bigger issue right now --
not so much finding an available tool,
but whether the features in the tools
that are available actually
match the problem you're trying to solve.
Fernando, your thoughts?
- [Fernando] Yeah, I'll start with a different perspective,
which is that, if you think about it,
today, if you're a startup and you don't do some AI,
you're probably not going to get any money.
So there's tools for anything you can think of;
there's AI in, or AI for, most things you can do.
I do a little test that may make you smile,
which is to think about from the moment you wake up
to the moment you go to sleep,
write down where you feel AI
has been part of your life.
If you know a little bit about AI,
like it's my world, the list is enormous.
It's enormous. You give it to my mom and nothing, obviously.
So it's about our perception of this.
So what I will tell you is there's tools for everything.
There's things that you can use really easily
but doing what Tad has described is mega-difficult.
So there's a lot of things available, but
it doesn't mean you're actually going to get what you need.
It's almost an excess, an excess of stuff,
but still, the people that do very well
tend to be super-advanced in putting it together --
which is what you're saying, Kimberly --
and making sure it serves a purpose
in the context of a company
and in the context of all these other things.
So, and that's why I have that first lesson,
which is availability.
I mean, it's there; doesn't mean it's easy, right?
But there, absolutely, totally present.
- [Tad] Yeah, I would agree.
I think the number of tools that are available
in the area of AI is -- I would liken it to going to
an auto shop or a Home Depot, right?
There's tons of tools out there.
And you can build just about anything
with all the tools that are out there.
And, in fact, we do leverage many of those tools.
At the end of the day, though,
you have to use those tools
to build something that you need
that's unique to your particular business, right?
And in order to do that, just having
a collection of tools doesn't do it.
You have to put it all together.
And not only after you put it all together,
you have to have somebody that's willing to drive it,
so to speak, right? The end user.
So not only are the tools important,
the user interface and the method
by which the output of those tools are given
to the end user is really critical, right?
So if you think about driving a car
and we didn't have those simple tools
that allowed you to steer and accelerate and brake,
a car could be incredibly hard to drive.
And if it's incredibly hard to drive, people
aren't going to use it, right?
And if you think about it, though,
all of the components of that car --
incredibly complex; in fact, many of them leverage AI now.
And I think it's not that dissimilar in
most industries and certainly not dissimilar in health care.
- [Abbie] Great, thanks. You know,
we've got a lot of questions coming in
that are, very appropriately --
because we were talking about business readiness --
these questions are around the people part of doing AI.
So the first one is: Who should be on the AI team?
Who do you need on your AI team?
And Tad, if you want to start on that one
that would be great.
- [Tad] Yeah, fair.
So, first of all, the basics, right?
You need the people that understand the computer --
the engineers, the data scientists --
and those are just table stakes.
Without that, there is no data science, right?
Having said that, though, just them in a vacuum
doesn't do it for us.
We actually need the end users, for a variety of reasons.
No. 1, we need to know
and understand exactly what their problem is
and what type of solution would best help their problem.
Right? So we need the end users.
Sometimes, though, the interpretation of that
by engineers can be lacking, so to speak.
So human-centered design people
are really critical not only to translate
what the end users are saying that they need,
but also to create the interface back
from the engineers to those users.
So it becomes very understandable, right?
And interestingly, also on our team --
of course, both consultants
that can manage the project,
but also classic epidemiologists
and statisticians that can begin to
bridge that gap,
for lack of a better word, between traditional statistics,
which we are very comfortable with, and machine learning.
So that physicians begin to understand
what's really happening.
And even with that, the interpretability,
or the explainability can be difficult.
But without it, that trust, you know,
in what's happening, can be lacking.
So certainly in our field,
that's also another critical component.
- [Abbie] Fernando, you talked about the importance of --
I think your slide was people, capabilities,
and culture, and you talked about --
so when you think about that culture piece,
I was really thinking about the fact
that, you know, changing culture takes time, right?
And you also talked about the importance
of moving quickly when you're moving forward with AI.
So how do you balance the importance
of moving quickly with the fact
that culture does take time to change?
Like where do you put your focus
to sort of create, get past that dynamic?
- [Fernando] Yeah, the simple answer for me is education.
It's one of those moments where --
and by the way, COVID has had an interesting effect on this,
which is that before COVID...many CEOs
were thinking, "Oh, my God!
Where's all this money that I spent on AI
that doesn't seem to have produced any value?"
And suddenly COVID hits,
and it's been an acceleration
through survival for many that needed it.
And I'm sure, Tad, you've seen a lot of this
in the health care industry.
So AI has become an accelerator
to that culture point because it was survival.
I don't know if it's
going to continue to burn this bright
and help push the -- obviously,
we want this pandemic to go away, thank you.
But, apart from that,
is it going to have a tail
that continues to help us with this culture?
Abbie, we have the same problem with data,
in that we talk about data-driven culture,
and the truth is data-driven culture
is a culture of self-service:
"I can serve myself, I can do things.
I am a creator of data and a user of data."
AI is a bit more complicated,
because you don't wake up in the morning
and decide you're going to create a neural net,
having figured out the mathematics of it.
But if you follow Tad's great way of putting it,
you can learn what your role is in it.
So I think education across the widest possible base
of companies and governments on the possibilities of AI --
not just the use cases, but a real notional understanding
of what it is; not understanding the mathematics,
but the notions -- will help remove
the humanizing view that we have of this technology,
of these machines, and help us with,
"Ah! I understand more or less
how these things kind of work, so I can
understand why this is going to work a particular way.
I've got this notional view of this."
And that really helps accelerate these things.
And, about education...
if you look at the winners, the people that do really well,
it's all education; it's a hundred percent education.
And I can give you a fun way to think about this --
the people we don't talk about a lot --
which is, if you put five data scientists
in a room for three hours and you come back,
they've done five models in five different machines
in five different languages
that will never work together.
If you put five software engineers in a room
and you come back three hours later,
there'll be no code written
but they'll have worked out how to work together.
They will have worked out
how to share; everything will be sorted.
We need -- there's a set of people
called ML engineers, who do the performance
and scaling engineering of models --
and, you know, to throw my hat out
to that career, they are probably the people
that we need the most today to make that translation.
They're the people that are going to take those two rooms
and remove the partition, so they can be together.
And we need a lot of those.
- [Abbie] It sounds like the last people you talked about,
those are people that you need
to have on your team, inside the company,
but there's other roles that maybe
you've got to go out and get
that have particular expertise
around the data science
and the real technical parts of doing AI.
So this question is from someone in the audience;
the question is: Where do you find
the people to create AI solutions?
And they're thinking in terms of
you know, companies, countries,
talent, marketplaces. Where should people
go to look to get that AI talent?
Kimberly, you're on mute.
- [Kimberly] So let me start again. I'll start,
and then I'll go to Fernando and Tad.
So one of the things I think is becoming increasingly
interesting is partnerships between industry and academia.
And that could be working with folks
that are in college programs,
and even PhD and advanced programs, who come in
and work on your projects
as part of their education.
At the end of last year, I hosted
a panel with the head of, basically, machine learning
and AI deployment for Uber.
And one of the things that they did
that was really interesting,
they essentially have a process by
which folks who are actively
pursuing the research aspect,
which is really important in academia,
are also working part-time for Uber
and looking at the operational application
of these components, and balancing that -- sort of,
you're getting this intersection
of industry expertise and business application
that works in the real world,
without sucking away all the talent
from the R&D side and that sort of basic science
and research that's also required.
And I think more and more,
we're going to start to see those kinds
of -- maybe not in that way --
but those kinds of interactions
and partnerships between things like academia,
some of the research organizations,
and commercial organizations as well.
So I think that's one area to look at.
I also don't think -- Don't underestimate
the people who are interested and willing
within your organization today
to actually be trained up
and to come into these areas to work.
I spoke to Michael Kanaan, who was chair of AI
for the Air Force, and they actually did
an internal survey and realized that they had
a lot of folks that were engineers
and not necessarily computer engineers,
but who had a lot of the skills and talents
and interests in this area.
And so being able to make that training
internally available
really paid dividends for them
in terms of being able to build
a skilled workforce over time.
So those are some, maybe not traditional, ways
of thinking about how we develop
and build -- or buy -- what can be very scarce
and expensive resources.
- [Tad] I would just jump in. Really good comments,
Kimberly, and to help the questioner
look for that talent:
Don't underestimate the talent
that you actually have in your own organization.
Data is a very interesting resource, right?
And you actually need a data maven
in your organization that truly understands
the where, the context, and the availability of the data.
You can have the most advanced software
and data science in the world come into your organization.
And we've certainly done that, right?
And we say, "Okay,
go ahead and hop into our 44 petabytes of data."
But they don't know the context of data,
the availability of data,
the adjudication of the data,
the quality of the data, all of that.
So you need somebody in your company that really understands
all of that, to really leverage the data well.
And then on the flip side,
or rather on the other side,
you also need people that understand
the power of the data and its applicability
in your organization.
Those two sources will come
from within your organization.
And then, so to speak, the toolmakers, right?
The people that really understand
the building of the model,
that you can either hire them in-house --
or, as Kimberly said and I'm sure
Fernando would say, you can outsource that.
But they have to be people that can communicate well
with your data mavens
and people that really understand it,
and the end users. Otherwise, you have this aggregated
team that does not create for you
anything that's particularly effective
for your organization.
- [Fernando] Can I quickly come in on that, Abbie?
Is that all right?
- [Abbie] Yeah, before you do --
- [Fernando] Because that's from a different --
- [Abbie] Before you do, I just want to ask Tad
if you could mute your microphone
when you're not speaking; we're getting
a little bit of feedback.
Go ahead, Fernando.
- [Fernando] Thanks, Abbie. This is interesting for me.
So we talk in Accenture about making unicorns,
not finding unicorns. But the reason we do is
because we've done a study
and a thought process on what the future of data science is --
where data science is going to be in three years.
Not for everybody; for us,
as it relates to us -- very inward-looking.
And the truth is what came out of that
was that if you think three years
in the future, it's very unlikely
that the world is going to be full
of generic data scientists.
Remember, data scientist is a profession.
It's a profession like Doctor Tad, right, is a doctor.
This is his profession.
You don't wake up in the morning
and decide you're a data scientist.
So the best thing we can do
is create those careers in companies,
so we can be proud of having a career track,
which creates for you these wonderful people
that understand your industry problems.
And, you know, to be honest,
Tad: Teaching hospitals
are like the best example in the world of this.
I mean, there's no better example.
So you almost need like your teaching hospital
of AI within firms to create careers for people.
And that's what's going to set you up,
because if in three years we think
we're just going to be able to pick
these unicorns that understand my industry
and understand this, we're going to be in big trouble.
So if I were to give anybody advice --
I did this for a bunch of CEOs in the
last few weeks -- it would be: create a career, create careers.
If you create careers, you're winning;
then you don't need anybody.
You don't need me, you don't need anybody --
you just need to create that career.
- [Kimberly] And I'll take it,
I'll extend that a little further.
One of the things that we talk about --
and I think whether you're building
the sort of internal curriculum
and career paths or borrowing
and leveraging external -- we talk a lot about STEM.
So science, technology, engineering, math --
absolutely critical and core
to what we're talking about here.
But you're also now starting to hear people
talk about, I think the new acronym I've seen is SHAPE,
which is about, you know, sociology,
humanities, ethics, sustainability.
So that's somewhat the softer side of this --
which is actually the harder piece of this
in terms of adoption -- really thinking more critically
about how people want to interact
with the solutions, what the impact
of these solutions is -- not just on users
in terms of how they work
and how it impacts our jobs,
but also on the populations
they're being applied to. I think, Tad,
you spoke to some examples of this in health care,
where we really have to make sure
that it's not just the users
that understand the solution
and are bought into how it's being applied,
but also the populations we're using it
on -- and hopefully for and not against.
And so as you look to develop
these curricula and skills, make sure
that you're not just building
these bastions of STEM; also incorporate
and develop those skills associated with SHAPE, or
however you want to call that out.
- [Abbie] We have time for one or two more quick questions.
This one is a little bit different.
When you're thinking about business readiness,
what do organizations have to do
to understand and comply
with the international, regional,
and national regulations and regulatory bodies
regarding the use of AI?
- [Fernando] I can have a go, if you want.
- [Kimberly] Yeah, please.
- [Fernando] I'm sure, Kim, it's an interesting one,
because every company I know
has a body that actually reads and understands
the regulations for them and every...
single one of those departments
will tell you that nobody seems to really pay attention
in depth to those. So GDPR is a wonderful example.
If you think in Europe about protecting your data
even further because of AI, you think,
"Oh, my God! Have you read GDPR?"
There's no chance that we're going to misuse data --
no chance, no chance, not possible.
So my suggestion is,
please look inside your company;
you will find somebody who cares deeply
about how to use technology like this.
Otherwise, the regulators around the world
are doing a good job of trying
to explain what their position is,
even if their position is very young,
still early in its thinking,
and still moving. Go and look;
it's very easy to find, even on Google,
the regulators' pages for how they believe
AI should be used.
So either look inward and you'll get great advice or
look to the regulators, they'll have great advice as well.
- [Kimberly] Yeah, and some of this is, you know --
a lot of the regulations today:
Fernando referenced GDPR,
which has provisions that then also apply
to AI, or what they call automated decision systems
and automated systems.
There are a lot of emerging regulations.
So there are a lot of principles today,
a lot of fundamental, foundational statements now
about what we want to do.
There aren't necessarily regulations
specific to AI outside
of what have been the traditional
legal-regulatory concerns
around things like data privacy
and security, but they are coming.
So the other thing to be aware of here is: Don't assume.
You have to start looking forward a little bit
and try to start to project
and predict a bit of where this is going to go
and build these things again in advance.
Some of the new EU regulations,
for instance, will require companies
that are using AI-driven, right --
AI-enabled -- systems to actually
audit those systems and perform
some assessment and due diligence
of potential harms, for instance,
and to inventory them and make that inventory available.
None of these are set in stone today,
but having folks, as Fernando said,
that are passionate about it,
working in that space today,
whether it's compliance or ethics, will be key.
But also starting to look forward
and plan proactively will be very important.
I understand that a lot of times,
legal and regulatory compliance
tends to come after harm
has been done or, you know, ill has occurred.
And so hopefully, we can learn some lessons here
and start to get, as organizations,
in front of that with or without
the mandate of regulation,
which I think is required and is coming.
- [Abbie] Tad, I'm going to -- I mean, yes, Tad,
I'm going to throw this one to you.
What is a good practice
to assess and monitor data-science maturity
in your organization?
I assume that's something that keeps changing.
And do you have a process in place
for sort of making sure that the things
that you're trying to do,
you have that maturity to be able to accomplish
what you're trying to do?
- [Tad] Well, first let me take myself off mute.
Wow! That is a really complicated question, right?
Not that easily answered. But, you know,
working backwards, I think we said in our presentation
you want to make sure that the solutions
that are being built have some pragmatic
and real-world measurable outcomes and improvement, right?
Otherwise, it's all an academic exercise, right?
So we do make sure that any application there
has actually had a positive impact
on quality, on patient service, cost,
and provider sustainability.
Provider sustainability is a significant issue
for us right now, especially coming out
of that COVID surge, which really
put a great strain on our providers.
So we do look at those as key metrics
not only for any type of AI, but for almost any type
of [inaudible] process that we put into place.
Of course, the understanding --
so to speak, the capability --
of that data scientist to build the models
that we're looking for is also important.
But assuming that we hired the right people,
with the right qualifications, the other part
that's really key is for those data scientists
to be able, in the health care environment,
to understand, truly understand,
the needs of the end users -- the physicians.
And we put them in front of physicians
and physician groups as you saw earlier.
And if they can't communicate --
the most brilliant data scientist
or engineer can build amazing things for you,
but if they don't understand the needs
or the language of the clinicians
that they're building these tools for,
they're not going to be as effective.
So, just in brief, those are a couple of key things
that we look at. Obviously, there's a multitude
of other metrics that we need to look at,
but I would say those are some of the key things.
- [Abbie] Great. And I wish we had time to get that answer
from everybody, but we are just about out of time.
So, thank you, Kimberly, Fernando, and Tad
for this really interesting and useful talk
and thank you in the audience for your great questions.
I wish we had time for more of them.
And a final thank you to SAS for sponsoring this webinar.
We hope you'll all join us for the next in the series
which is on April 22nd, where we'll cover
how to ensure the technical readiness
of your organization for AI.
Thank you so much.