Unlock potential of Generative AI by Conquering the Spooky Mountain | Nikhil Bhojwani | TEDxBoston
Summary
TL;DR: This talk delves into the societal apprehensions surrounding AI, drawing parallels to historical resistance to new technologies like writing. It acknowledges AI's potential to improve governance and address biases and privacy issues, but highlights the 'spooky' psychological barrier AI faces due to its human-like capabilities. The speaker emphasizes the importance of redesigning processes and organizations to integrate AI effectively, ensuring transparency, trust, and human-centricity to overcome fear and embrace AI's benefits in sectors like healthcare.
Takeaways
- 🤖 AI as a Double-Edged Sword: The script discusses the potential of AI to both improve and challenge human reliance on technology, similar to Socrates' concerns about writing in the 5th Century BC.
- 🧐 The Illusion of Knowledge: AI can create an illusion of knowledge without genuine understanding, blurring the lines between fact and fiction.
- 🔍 Societal Backlash: Every new technology, including AI, has faced societal backlash, which is a natural part of the adaptation process.
- 🛠 Addressing AI Criticism: AI criticism is often valid and leads to improved governance and performance, addressing issues like errors, bias, and privacy concerns.
- 🧠 Redefining Intelligence: AI challenges traditional notions of intelligence, as seen with achievements in games and creative tasks that were once considered uniquely human.
- 👥 The Uncanny Valley of AI: As AI becomes more human-like, it can evoke psychological fear and discomfort, which may hinder its acceptance and benefits.
- 🏥 Healthcare and AI: AI can assist in healthcare by taking on tasks like reading X-rays, but generative AI that engages in nuanced conversations raises ethical and practical questions.
- 🛑 The Spooky Mountain: The fear of AI encroaching on human territory, especially in areas like healthcare, is a significant barrier to its implementation.
- 🛑 Risk-Aversion in Healthcare: The potential for AI to cause harm in healthcare is a major concern due to the industry's high risk-aversion.
- 🌐 Accelerating AI Adoption: To address current healthcare challenges, there is a need to find ways to accelerate the adoption of AI, despite the psychological barriers.
- 🔄 Redesigning for AI: For AI to be effectively integrated, processes, organizations, and value chains must be redesigned to accommodate its capabilities and potential.
- 💡 Empathy in Change: Empathy is crucial in the change process involving AI, ensuring transparency, trust, and human-centric design to overcome the fear of AI.
Q & A
What is the main concern Socrates expressed about the information technology of his time, which he compared to modern AI?
-Socrates was concerned that the information technology of his time, specifically writing, would lead people to rely on external sources instead of their own memory and understanding. He believed it created the illusion of knowledge without genuine understanding, similar to the concerns some have with modern AI.
Why does the speaker believe that generative AI presents a distinct challenge compared to other forms of AI?
-The speaker believes generative AI presents a distinct challenge because it can engage in nuanced conversations, write complex content, and mimic human-like interactions, which can create psychological fear and unsettle people, as it treads on territory traditionally thought to belong to humans.
What is the 'uncanny valley' concept mentioned in the script, and how does it relate to AI?
-The 'uncanny valley' is the idea that as robots or AI become more human-like, we find them more appealing until a point where they are almost human but not quite, causing an eerie feeling and a loss of trust. The speaker relates this to generative AI, which can mimic human interactions closely, potentially causing discomfort.
How does the speaker describe the potential impact of AI on healthcare?
-The speaker describes AI as having the potential to help in areas such as reading X-rays, role-playing difficult conversations, and assisting clinicians with personalized care. However, the speaker also acknowledges the fear and resistance that may arise from AI's ability to perform tasks traditionally done by humans.
What does the speaker refer to as the 'spooky mountain'?
-The 'spooky mountain' is a metaphor used by the speaker to describe the psychological fear and resistance that people may experience when AI starts to perform tasks that were traditionally the domain of humans, such as having nuanced conversations or making ethical decisions.
Why does the speaker suggest that the adoption of AI in healthcare could be slow despite its potential benefits?
-The speaker suggests that the slow adoption of AI in healthcare could be due to the industry's risk-aversion, the psychological fear of AI's capabilities, and the time it takes for innovation to become part of routine clinical practice.
What are some of the challenges the speaker identifies in the current healthcare system that AI could potentially address?
-The speaker identifies challenges such as insufficient capacity to meet patient needs, burned-out clinicians, high costs, and inconsistent quality. AI could help by improving efficiency, reducing administrative burdens, and enhancing personalized care.
What does the speaker propose as a way to overcome the psychological fear and resistance to AI?
-The speaker proposes redesigning processes and organizations to incorporate AI effectively, ensuring transparency and trustworthiness, and infusing the change process with empathy to make it more human-centric and to keep humans in the driver's seat.
How does the speaker suggest redesigning processes to incorporate AI?
-The speaker suggests thinking about where AI can do the same tasks differently, better, faster, or cheaper, and modifying processes accordingly. Additionally, designing new processes around AI's ability to do different things that were not possible before.
What role does the speaker see for humans in the AI-integrated systems of the future?
-The speaker emphasizes that humans should always be in the loop and in charge, with AI serving as an assistant or tool to enhance human capabilities and decision-making, rather than replacing human judgment and interaction.
Why is it important to ensure that AI-integrated systems are rooted in purpose according to the speaker?
-Ensuring that AI-integrated systems are rooted in purpose is important to take people along the journey of change, to address their concerns, and to ensure that the technology serves a meaningful and beneficial role in society, thereby conquering the 'spooky mountain' of fear and resistance.
Outlines
📚 The Illusion of Knowledge and AI's Societal Impact
This paragraph discusses the historical skepticism towards new information technologies, like writing, which Socrates criticized for potentially leading people to rely on external sources instead of their own memory and understanding. The speaker draws a parallel to modern AI, noting that while it faces valid criticism, it also has the potential to improve governance and performance. The paragraph introduces the concept of 'generative AI' and its ability to mimic human-like interactions, which can be unsettling and may create psychological barriers to its adoption, a phenomenon referred to as the 'spooky mountain'.
🤖 Generative AI's Challenges and Opportunities in Healthcare
The second paragraph delves into the unique challenges and opportunities that generative AI presents in the healthcare sector. It contrasts structured AI applications, like analyzing medical images, with unstructured ones, such as role-playing complex conversations. The speaker emphasizes the importance of human involvement in AI processes and the need to address the fear and resistance that generative AI may provoke. The paragraph also highlights the potential benefits of AI in healthcare, such as improving efficiency, reducing costs, and enhancing patient care, while acknowledging the difficulty of integrating AI into risk-averse environments.
🛠️ Overcoming Fear and Embracing AI Transformation
The final paragraph focuses on the necessary transformations at various levels—processes, organizations, and value chains—to fully leverage AI's potential. It calls for a redesign of processes to better integrate AI, a restructuring of organizations to challenge traditional models, and a reevaluation of industry value chains to create more interconnected ecosystems. The speaker stresses the importance of empathy and effective communication in managing the change process, ensuring transparency, trustworthiness, and a human-centric approach to AI implementation. The goal is to overcome the psychological fear associated with AI and to create a future where AI supports and enhances human endeavors.
Keywords
💡Information Technology
💡Wisdom
💡Generative AI
💡Uncanny Valley
💡Artificial Intelligence (AI)
💡Healthcare
💡Innovation
💡Psychological Fear
💡Ethics
💡Organizational Change
💡Empathy
Highlights
Technology can lead to reliance on external sources rather than personal memory and understanding.
Wisdom is derived from questioning and refining ideas, unlike the illusion of knowledge created by technology.
Socrates' critique of writing as an information technology in the 5th Century BC is analogous to modern concerns about AI.
Every new technology, from writing to smartphones, has faced societal backlash, and AI is no exception.
AI criticism is often valid and leads to improved governance and better performance, addressing errors, bias, and privacy concerns.
Generative AI presents a new challenge in implementation, distinct from traditional AI applications.
AI's ability to mimic human intelligence, such as playing games or creating art, challenges our understanding of what constitutes intelligence.
The 'uncanny valley' effect is applied to AI, where it becomes unsettlingly human-like, causing psychological fear.
The term 'spooky mountain' is introduced to describe the fear and resistance to AI that mimics human abilities.
AI's potential in healthcare includes reading X-rays and classifying images, but also raises ethical and moral questions.
Generative AI in healthcare could role-play difficult conversations, requiring a nuanced understanding of human emotion and morality.
The fear of AI taking over human roles, especially in sensitive fields like healthcare, is a significant barrier to adoption.
Health systems worldwide are facing challenges that AI could help address, such as capacity, costs, and quality of care.
AI has the potential to provide personalized care, reduce administrative burden, and revitalize the healthcare system.
To fully leverage AI, we must be willing to redesign processes, organizations, and value chains, challenging traditional models.
The role of empathy in change processes is crucial when integrating AI to ensure transparency, trust, and human oversight.
AI's impact on value chains and industry structures requires rethinking boundaries and relationships within ecosystems.
Overcoming the psychological fear of AI and ensuring human-centric design are key to successful integration and adoption.
Transcripts
So the problem with this information technology is that it could lead people to rely on external sources instead of their own memory and understanding. And in fact, while wisdom comes from questioning, challenging, and refining ideas, this technology creates the illusion of knowledge without genuine understanding. And in fact, it's just as good at producing fiction as fact.

And I could be talking about ChatGPT or any of the large language models, but I'm not. These words are from Socrates in the 5th century BC, and he is talking about an information technology that we call writing. And why do we know that? Because his student Plato wrote it down.
So every new technology since, from writing to smartphones, has faced societal backlash, and AI is no different. Why should it be? And granted, a lot of the criticism that AI has been receiving is for good reason. In fact, it is going to result in better performance and improved governance, and that's going to address some of the risks around errors, bias, or privacy concerns. Now, it's not going to be easy, but we can see a pathway for technology and governance to take us in that direction. We've got private companies, governments, and coalitions working on that.
But I believe that generative AI has raised a new, distinct, and different impediment to implementation. After all, AI is not new. We are comfortable using it to choose a movie on Netflix. We're happy to let our cell phones and our computers give us autocorrect, at least most of the time.

But that's because we've always been able to think about intelligence as whatever machines have not done yet. And you know, think about it: when Deep Blue defeated Kasparov, we said that's not intelligence, that's just brute-force analytics; let's see if a computer can beat us at Go. And then when AlphaGo came up with a program that defeated Lee Sedol, we said, well, that's just a machine that learned how to play really well by playing against itself many times; that's not intelligence.

But we can't hide behind Tesler's theorem anymore, not if you've played with ChatGPT or you've used Midjourney or DALL·E to create art.
Here's why. We all know about the uncanny valley, right? It's that idea that when robots become more and more human-like in appearance, we find them more appealing, but only up to a point. There comes a point when they're almost human but not quite, and then they just look eerie; our survival instinct kicks in, and we lose our trust in the robot.

Now think of what's happening with AI, and generative AI in particular. We have AI that can engage in nuanced conversation with you, AI that can write complex emails, and AI that can come up with dad jokes. Well, bad ones; trust me, I'm an expert. And this one says: why don't scientists trust atoms? Because they make up everything.

Now, you know, ChatGPT could have been talking about itself. But that's not the problem I'm talking about here. The problem I'm talking about is that when a computer can do this, it's really unsettling. It may not be as visceral as a response to that image, but the AI is treading on territory that we thought belonged to us humans. That creates psychological fear, and I believe that that fear is going to be a significant impediment to our being able to gain the benefits of AI in the future. I like to call this the spooky mountain, and it is particularly important in areas such as healthcare that are highly human-centered.
So, you know, the AI that we're not afraid of, the Netflix kind, we can use in healthcare. So we use it to read an X-ray. It's a defined problem: let's say we've got to classify mammograms into high risk versus low risk. We can train the model, using machine learning, on a set of images that either have or don't have abnormalities and are appropriately annotated. So that's pretty simple, and the machine gives us an answer. There's no morality or ethics or anything else built into that answer. And of course, we can have a discussion about the advantages and drawbacks of using an AI in that situation, but that's really about the capabilities and costs.
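[Editor's note: the supervised workflow the speaker describes, training on annotated examples and sorting new cases into high-risk vs. low-risk, can be sketched with a toy classifier. Everything below is illustrative and hypothetical: real radiology systems use deep learning on annotated medical images, whereas this sketch uses synthetic feature vectors and a simple nearest-centroid rule just to show the shape of the idea.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each row stands in for features extracted from
# an image; label 1 = annotated abnormality, 0 = no abnormality.
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# A minimal "model": the mean feature vector (centroid) of each class.
centroids = {c: X_train[y_train == c].mean(axis=0) for c in (0, 1)}

def classify(x):
    # Assign the case to whichever class centroid is closer.
    distances = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    return min(distances, key=distances.get)

# New, unlabeled cases get triaged into risk buckets with no ethics or
# morality involved -- just a distance computation, as the talk notes.
X_new = rng.normal(size=(3, 5))
labels = ["high risk" if classify(x) == 1 else "low risk" for x in X_new]
```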
Now, generative AI allows you to use the AI in a completely different type of use case, for example, to role-play a difficult conversation between a patient and a doctor. This is obviously an unstructured problem that requires the AI to be trained on a general model. But more importantly, it's collaborative, and the AI now needs to converse with you in a human-like way. And it's not just about logic: it needs to listen to your emotion, and it needs to mimic emotion and morality in what it gives back to you. That is spooky AI, and we find it threatening.

Just imagine that you're the doctor who would have had that conversation. What does it mean, what is your role, when a computer can have that conversation for you? If you're a researcher, what's your role if a computer can not only invent but also file the patent? So this is obviously very hard to deal with as humans. But most importantly, in healthcare, what if that AI causes harm? We are risk-averse in healthcare, and for very good reason. But because of that, partly because of that, sometimes it can take more than a decade for innovation to find its way into broad routine clinical practice.
But health systems around the world today are suffering. We have insufficient capacity to meet the needs of all our patients. We have burned-out clinicians, which only makes that capacity problem worse. We have high costs and huge administrative burden, and we have inconsistent quality. AI can help in every one of those places. So we are hurting patients today if we don't figure out a way to accelerate the adoption of AI.

And if we can, just imagine a world in which you or your loved ones can gain high-quality, personalized care from an AI-assisted clinician whenever you need it, for as long as you need it, knowing that the right doctor can step in at the right time. Imagine that you are that doctor, and now you have time to focus on what's most important for your patients, knowing that you have a tireless AI assistant to back you up on your decisions, but also to take care of all the drudgery. Or you're a public health professional, and you have the tools to pinpoint population risk and target your interventions to prevent the next pandemic. And I could go on with examples like this, but suffice it to say that in this world you'd have rapid innovation, dramatically reduced administrative burden and therefore costs, and a revitalized education system that's better suited to the needs of tomorrow.
But to get there, we really have to change everything, and be willing to change everything.

So first, we need to redesign our processes. Let's think about where it is that AI can do the same thing differently (by that I mean better, faster, cheaper) and let's modify our processes to do that. But then let's also think about where AI can do different things, and design new processes around that.

We should also restructure our organizations. Why should our organizations of the future be designed on models that were developed in the 20th century? Let's challenge our ideas around who should be on an executive team, how we do governance, and what the structure of an organization is today, one based on legacy function and location, which matter less. Let's also think about the role of the individual, the role of a team, and how we collaborate and supervise. I mean, just think about it: why should we use outdated norms around how many people a manager can manage, the so-called span of control, when AI is going to change the very nature of work and output and supervision? So let's question all of that.
But it's not enough just to change within organizations. AI is going to affect entire value chains, and if you think about it, the structure of our industries is based on those value chains, and so is how sectors within an industry interrelate. In the case of healthcare, that would be payers, providers, life sciences, and so forth. We need to question those boundaries between the firms, redraw them, and think about how these organizations interrelate within an ecosystem. Easy, right?
Okay, so change is not easy, and this is going to be really hard on people, particularly because of what I said earlier about the psychological fear of change that all this AI is going to bring. So what do we do?

Well, first, we make sure that we are passionate about effectively communicating the vision at the other end of this. But even as we do that, as we change our processes, organizations, and systems to incorporate AI, we need to dig into that most human of emotions, or characteristics: empathy, and infuse our change processes with that.

So what do I mean by this? When I said change processes earlier, I bet a lot of you were thinking efficiency, effectiveness, etc. But no: we have to make sure that our new processes that incorporate AI are transparent and trustworthy. We have to make sure that when we design organizations that use AI, humans are always in the loop; actually, not just in the loop, but humans are always in charge, in the driver's seat. And as we redesign our systems, let's pause and think deeply about making sure that they are rooted in purpose. Because only if we do that, only if we do that, can we take people with us and conquer the spooky mountain.

Thank you.
[Applause]