Unlock potential of Generative AI by Conquering the Spooky Mountain | Nikhil Bhojwani | TEDxBoston

TEDx Talks
30 May 2023 · 10:44

Summary

TL;DR: This talk examines the societal apprehensions surrounding AI, drawing parallels to historical resistance to new technologies such as writing. It acknowledges AI's potential to improve governance and address bias and privacy issues, but highlights the 'spooky' psychological barrier AI faces because of its human-like capabilities. The speaker emphasizes the importance of redesigning processes and organizations to integrate AI effectively, ensuring transparency, trust, and human-centricity, so that we can overcome fear and embrace AI's benefits in sectors like healthcare.

Takeaways

  • 🤖 AI as a Double-Edged Sword: The script discusses the potential of AI to both improve and challenge human reliance on technology, similar to Socrates' concerns about writing in the 5th Century BC.
  • 🧐 The Illusion of Knowledge: AI can create an illusion of knowledge without genuine understanding, blurring the lines between fact and fiction.
  • 🔍 Societal Backlash: Every new technology, including AI, has faced societal backlash, which is a natural part of the adaptation process.
  • 🛠 Addressing AI Criticism: AI criticism is often valid and leads to improved governance and performance, addressing issues like errors, bias, and privacy concerns.
  • 🧠 Redefining Intelligence: AI challenges traditional notions of intelligence, as seen with achievements in games and creative tasks that were once considered uniquely human.
  • 👥 The Uncanny Valley of AI: As AI becomes more human-like, it can evoke psychological fear and discomfort, which may hinder its acceptance and benefits.
  • 🏥 Healthcare and AI: AI can assist in healthcare by taking on tasks like reading X-rays, but generative AI that engages in nuanced conversations raises ethical and practical questions.
  • 🛑 The Spooky Mountain: The fear of AI encroaching on human territory, especially in areas like healthcare, is a significant barrier to its implementation.
  • 🛑 Risk-Aversion in Healthcare: The potential for AI to cause harm in healthcare is a major concern due to the industry's high risk-aversion.
  • 🌐 Accelerating AI Adoption: To address current healthcare challenges, there is a need to find ways to accelerate the adoption of AI, despite the psychological barriers.
  • 🔄 Redesigning for AI: For AI to be effectively integrated, processes, organizations, and value chains must be redesigned to accommodate its capabilities and potential.
  • 💡 Empathy in Change: Empathy is crucial in the change process involving AI, ensuring transparency, trust, and human-centric design to overcome the fear of AI.

Q & A

  • What is the main concern Socrates expressed about the information technology of his time, which he compared to modern AI?

    -Socrates was concerned that the information technology of his time, specifically writing, would lead people to rely on external sources instead of their own memory and understanding. He believed it created the illusion of knowledge without genuine understanding, similar to the concerns some have with modern AI.

  • Why does the speaker believe that generative AI presents a distinct challenge compared to other forms of AI?

    -The speaker believes generative AI presents a distinct challenge because it can engage in nuanced conversations, write complex content, and mimic human-like interactions, which can create psychological fear and unsettle people, as it treads on territory traditionally thought to belong to humans.

  • What is the 'uncanny valley' concept mentioned in the script, and how does it relate to AI?

    -The 'uncanny valley' is the idea that as robots or AI become more human-like, we find them more appealing only up to a point; when they are almost human but not quite, they evoke an eerie feeling and a loss of trust. The speaker relates this to generative AI, which can mimic human interactions closely and may therefore cause discomfort.

  • How does the speaker describe the potential impact of AI on healthcare?

    -The speaker describes AI as having the potential to help in areas such as reading X-rays, role-playing difficult conversations, and assisting clinicians with personalized care. However, the speaker also acknowledges the fear and resistance that may arise from AI's ability to perform tasks traditionally done by humans.

  • What does the speaker refer to as the 'spooky Mountain'?

    -The 'spooky Mountain' is a metaphor used by the speaker to describe the psychological fear and resistance that people may experience when AI starts to perform tasks that were traditionally the domain of humans, such as having nuanced conversations or making ethical decisions.

  • Why does the speaker suggest that the adoption of AI in healthcare could be slow despite its potential benefits?

    -The speaker suggests that the slow adoption of AI in healthcare could be due to the industry's risk-aversion, the psychological fear of AI's capabilities, and the time it takes for innovation to become part of routine clinical practice.

  • What are some of the challenges the speaker identifies in the current healthcare system that AI could potentially address?

    -The speaker identifies challenges such as insufficient capacity to meet patient needs, burned-out clinicians, high costs, and inconsistent quality. AI could help by improving efficiency, reducing administrative burdens, and enhancing personalized care.

  • What does the speaker propose as a way to overcome the psychological fear and resistance to AI?

    -The speaker proposes redesigning processes and organizations to incorporate AI effectively, ensuring transparency and trustworthiness, and infusing the change process with empathy to make it more human-centric and to keep humans in the driver's seat.

  • How does the speaker suggest redesigning processes to incorporate AI?

    -The speaker suggests thinking about where AI can do the same tasks differently (better, faster, or cheaper) and modifying processes accordingly, and additionally designing new processes around AI's ability to do things that were not possible before.

  • What role does the speaker see for humans in the AI-integrated systems of the future?

    -The speaker emphasizes that humans should always be in the loop and in charge, with AI serving as an assistant or tool to enhance human capabilities and decision-making, rather than replacing human judgment and interaction.

  • Why is it important to ensure that AI-integrated systems are rooted in purpose according to the speaker?

    -Ensuring that AI-integrated systems are rooted in purpose is important to take people along the journey of change, to address their concerns, and to ensure that the technology serves a meaningful and beneficial role in society, thereby conquering the 'spooky Mountain' of fear and resistance.

Outlines

00:00

📚 The Illusion of Knowledge and AI's Societal Impact

This paragraph discusses the historical skepticism towards new information technologies, like writing, which Socrates criticized for potentially leading people to rely on external sources instead of their own memory and understanding. The speaker draws a parallel to modern AI, noting that while it faces valid criticism, it also has the potential to improve governance and performance. The paragraph introduces the concept of 'generative AI' and its ability to mimic human-like interactions, which can be unsettling and may create psychological barriers to its adoption, a phenomenon referred to as the 'spooky mountain'.

05:02

🤖 Generative AI's Challenges and Opportunities in Healthcare

The second paragraph delves into the unique challenges and opportunities that generative AI presents in the healthcare sector. It contrasts structured AI applications, like analyzing medical images, with unstructured ones, such as role-playing complex conversations. The speaker emphasizes the importance of human involvement in AI processes and the need to address the fear and resistance that generative AI may provoke. The paragraph also highlights the potential benefits of AI in healthcare, such as improving efficiency, reducing costs, and enhancing patient care, while acknowledging the difficulty of integrating AI into risk-averse environments.

10:04

🛠️ Overcoming Fear and Embracing AI Transformation

The final paragraph focuses on the necessary transformations at various levels—processes, organizations, and value chains—to fully leverage AI's potential. It calls for a redesign of processes to better integrate AI, a restructuring of organizations to challenge traditional models, and a reevaluation of industry value chains to create more interconnected ecosystems. The speaker stresses the importance of empathy and effective communication in managing the change process, ensuring transparency, trustworthiness, and a human-centric approach to AI implementation. The goal is to overcome the psychological fear associated with AI and to create a future where AI supports and enhances human endeavors.

Keywords

💡Information Technology

Information Technology refers to the use of computers, software, and networks for the storage, retrieval, and transmission of information. In the video, it is discussed as a tool that can lead to reliance on external sources rather than personal memory and understanding, which echoes Socrates' concerns about writing in the 5th Century BC.

💡Wisdom

Wisdom is the deep understanding of knowledge, experience, and insight that involves questioning, challenging, and refining ideas. The video contrasts wisdom with the illusion of knowledge created by technology, suggesting that true wisdom comes from a genuine understanding rather than superficial access to information.

💡Generative AI

Generative AI refers to artificial intelligence systems that can create new content, such as text, images, or music. The video discusses generative AI as a distinct advancement that raises new challenges and opportunities, particularly in the context of creating a sense of unease or 'spooky' factor due to its human-like capabilities.

💡Uncanny Valley

The Uncanny Valley is a concept where human replicas, such as robots, elicit a response of unease or eeriness when they appear almost, but not quite, human. The video uses this concept to describe the discomfort people may feel when AI starts to mimic human conversation and creativity, blurring the line between human and machine.

💡Artificial Intelligence (AI)

Artificial Intelligence is the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The video discusses AI's evolving capabilities and societal fears, as well as its potential to revolutionize various fields, including healthcare.

💡Healthcare

Healthcare is the organized provision of medical services to individuals or communities through various health professionals and allied health fields. The video highlights the potential of AI to assist in healthcare by taking on tasks such as reading X-rays or role-playing difficult conversations, which can help address current challenges in the industry.

💡Innovation

Innovation refers to the process of translating an idea or invention into a good or service that creates value or for which customers will pay. The video discusses the slow adoption of AI in healthcare due to risk-aversion, and how overcoming this fear could lead to rapid innovation and improved patient care.

💡Psychological Fear

Psychological fear is an emotional response to the perception of danger or threat. The video identifies psychological fear as a significant barrier to the adoption of AI, particularly when AI begins to perform tasks traditionally associated with human intelligence and interaction.

💡Ethics

Ethics are the moral principles that govern a person's or group's behavior. In the context of AI, ethics involve considerations of how AI should be used responsibly, ensuring fairness, avoiding bias, and maintaining privacy. The video touches on the importance of ethics in AI, especially in sensitive areas like healthcare.

💡Organizational Change

Organizational change refers to the process of making significant transformations in the way an organization operates. The video calls for a redesign of processes and restructuring of organizations to accommodate AI, emphasizing the need for transparency, trust, and human-centered design in these changes.

💡Empathy

Empathy is the ability to understand and share the feelings of others. The video suggests that empathy should be a key component in the change processes involving AI, ensuring that the transition is human-centric and addresses the emotional responses of those affected by the changes.

Highlights

Technology can lead to reliance on external sources rather than personal memory and understanding.

Wisdom is derived from questioning and refining ideas, unlike the illusion of knowledge created by technology.

Socrates' critique of writing as an information technology in the 5th Century BC is analogous to modern concerns about AI.

Every new technology, from writing to smartphones, has faced societal backlash, and AI is no exception.

AI criticism is often valid and leads to improved governance and better performance, addressing errors, bias, and privacy concerns.

Generative AI presents a new challenge in implementation, distinct from traditional AI applications.

AI's ability to mimic human intelligence, such as playing games or creating art, challenges our understanding of what constitutes intelligence.

The 'uncanny valley' effect is applied to AI, where it becomes unsettlingly human-like, causing psychological fear.

The term 'spooky Mountain' is introduced to describe the fear and resistance to AI that mimics human abilities.

AI's potential in healthcare includes reading X-rays and classifying images, but also raises ethical and moral questions.

Generative AI in healthcare could role-play difficult conversations, requiring a nuanced understanding of human emotion and morality.

The fear of AI taking over human roles, especially in sensitive fields like healthcare, is a significant barrier to adoption.

Health systems worldwide are facing challenges that AI could help address, such as capacity, costs, and quality of care.

AI has the potential to provide personalized care, reduce administrative burden, and revitalize the healthcare system.

To fully leverage AI, we must be willing to redesign processes, organizations, and value chains, challenging traditional models.

The role of empathy in change processes is crucial when integrating AI to ensure transparency, trust, and human oversight.

AI's impact on value chains and industry structures requires rethinking boundaries and relationships within ecosystems.

Overcoming the psychological fear of AI and ensuring human-centric design are key to successful integration and adoption.

Transcripts

00:09

So the problem with this information technology is that it could lead people to rely on external sources instead of their own memory and understanding. And in fact, while wisdom comes from questioning, challenging, and refining ideas, this technology creates the illusion of knowledge without genuine understanding. In fact, it's just as good at producing fiction as fact. And I could be talking about ChatGPT or any of the large language models, but I'm not. These words are from Socrates in the 5th Century BC, and he is talking about an information technology that we call writing. And why do we know that? Because his student Plato wrote it down.

00:56

So every new technology, from writing to smartphones, has faced societal backlash, and AI is no different. Why should it be? And granted, a lot of the criticism that AI has been receiving is for good reason. In fact, it is going to result in better performance and improved governance, and that's going to address some of the risks around errors, bias, or privacy concerns. Now, it's not going to be easy, but we can see a pathway for technology and governance to take us in that direction. We've got private companies, governments, and coalitions working on that.

01:37

But I believe that generative AI has raised a new, distinct, and different impediment to implementation. After all, AI is not new. We are comfortable using it to choose a movie on Netflix. We're happy to let our cell phones and our computers give us autocorrect, at least most of the time. But that's because we've always been able to think about intelligence as whatever machines have not done yet.

02:07

And you know, think about it: when Deep Blue defeated Kasparov, we said that's not intelligence, that's just brute-force analytics; let's see if a computer can beat us at Go. And then when AlphaGo came along and defeated Lee Sedol, we said, well, that's just a machine that learned how to play really well by playing against itself many times; that's not intelligence. But we can't hide behind Tesler's theorem anymore, not if you've played with ChatGPT or used Midjourney or DALL-E to create art.

02:42

Here's why. We all know about the uncanny valley, right? It's the idea that as robots become more and more human-like in appearance, we find them more appealing, but only up to a point. There comes a point when they're almost human but not quite, and then they just look eerie; our survival instinct kicks in and we lose our trust in the robot.

03:08

Now think of what's happening with AI, and generative AI in particular. We have AI that can engage in nuanced conversation with you, AI that can write complex emails, and AI that can come up with dad jokes. Well, bad ones; trust me, I'm an expert. This one says: why don't scientists trust atoms? Because they make up everything. Now, ChatGPT could have been talking about itself, but that's not the problem I'm talking about here. The problem I'm talking about is that when a computer can do this, it's really unsettling. It may not be as visceral as the response to that image, but the AI is treading on territory that we thought belonged to us humans. That creates psychological fear, and I believe that fear is going to be a significant impediment to our being able to gain the benefits of AI in the future.

04:06

I like to call this the spooky mountain, and it is particularly important in areas such as healthcare that are highly human-centered.

04:18

So the AI that we're not afraid of, the Netflix kind, we can use in healthcare; we use it to read an X-ray. It's a defined problem: let's say we've got to classify mammograms into high risk versus low risk. We can train the model, using machine learning, on a set of images that either have or don't have abnormalities and are appropriately annotated. That's pretty simple, and the machine gives us an answer. There's no morality or ethics or anything else built into that answer. And of course we can have a discussion about the advantages and drawbacks of using an AI in that situation, but that's really about capabilities and costs.
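
The defined problem the speaker describes here (annotated images in, a high-risk or low-risk label out) is a standard supervised-learning setup. The sketch below is only an editorial illustration of that pattern, not anything from the talk: it assumes PyTorch and torchvision, and uses torchvision's FakeData as a stand-in for a real, appropriately annotated mammogram dataset.

```python
# Minimal sketch of supervised image classification (high-risk vs. low-risk).
# FakeData is a placeholder; a real system would use expert-annotated mammograms.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.FakeData(size=256, image_size=(1, 64, 64), num_classes=2,
                              transform=transform)  # stand-in dataset
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Small CNN: two conv/pool stages feeding a 2-way classification head.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # logits for low-risk vs. high-risk
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):  # a couple of passes, purely to illustrate the loop
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

In practice the placeholder data would be replaced by a curated, annotated dataset, and any such model would need clinical validation; the point of the sketch is simply that the output is a label, with no morality or ethics built into the answer, exactly as the speaker notes.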

04:58

Now, when you move to generative AI, it allows you to use AI in a completely different type of use case: for example, to role-play a difficult conversation between a patient and a doctor. This is obviously an unstructured problem that requires training on a general model. But more importantly, it's collaborative, and the AI now needs to converse with you in a human-like way. And it's not just about logic; it needs to listen to your emotion, and it needs to mimic emotion and morality in what it gives back to you. That is spooky AI.
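
The collaborative role-play the speaker describes is typically driven by a general chat model steered with a prompt. Below is a minimal sketch of how that might be wired up; it is not the speaker's tool, and it assumes the `openai` Python client, an `OPENAI_API_KEY` in the environment, and a chat-capable model name you actually have access to.

```python
# Minimal sketch: using a general chat model to role-play a difficult
# patient conversation so a clinician can practice it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # The system prompt sets up the role-play: the model plays the patient.
    {"role": "system", "content": (
        "You are role-playing a patient who has just been told their mammogram "
        "is high risk. Respond with realistic emotion so a clinician can "
        "practice the conversation. Stay in character."
    )},
    {"role": "user", "content": "Hello, I'm Dr. Rivera. Thank you for coming in today."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute any chat-capable model
    messages=messages,
)
print(response.choices[0].message.content)
```

The system prompt is what turns a generic model into the collaborative, emotion-mimicking conversation partner described above; a practice tool would continue the exchange over multiple turns by appending each reply to the message list.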

05:35

And we find it threatening. Just imagine that you're the doctor who would have had that conversation. What does it mean, what is your role, when a computer can have that conversation for you? If you're a researcher, what's your role if a computer can not only invent but also file the patent? This is obviously very hard to deal with as humans. But most importantly, in healthcare, what if that AI causes harm? We are risk-averse in healthcare, and for very good reason. But partly because of that, it can sometimes take more than a decade for innovation to find its way into broad routine clinical practice.

06:16

Yet health systems around the world today are suffering. We have insufficient capacity to meet the needs of all our patients. We have burned-out clinicians, which only makes that capacity problem worse. We have high costs and a huge administrative burden, and we have inconsistent quality. AI can help in every one of those places, so we are hurting patients today if we don't figure out a way to accelerate the adoption of AI.

06:41

And if we can, just imagine a world in which you or your loved ones can get high-quality, personalized care from an AI-assisted clinician whenever you need it, for as long as you need it, knowing that the right doctor can step in at the right time. Imagine that you are that doctor, and now you have time to focus on what's most important for your patients, knowing that you have a tireless AI assistant to back you up on your decisions but also to take care of all the drudgery. Or you're a public health professional, and you have the tools to pinpoint population risk and target your interventions to prevent the next pandemic. I could go on with examples like this, but suffice it to say that in this world you'd have rapid innovation, dramatically reduced administrative burden and therefore costs, and a revitalized education system better suited to the needs of tomorrow.

07:34

But to get there, we really have to change everything, and be willing to change everything. First, we need to redesign our processes. Let's think about where AI can do the same thing differently, by which I mean better, faster, or cheaper, and let's modify our processes to do that. But then let's also think about where AI can do different things, and design new processes around that.

08:00

We should also restructure our organizations. Why should the organizations of the future be designed on models that were developed in the 20th century? Let's challenge our ideas about who should be on an executive team, how we do governance, and the structure of an organization that today is based on legacy function and location, which matter less. Let's also rethink the role of the individual, the role of a team, and how we collaborate and supervise. Just think about it: why should we use outdated norms for how many people a manager can manage, the so-called span of control, when AI is going to change the very nature of work, output, and supervision? Let's question all of that.

08:41

But it's not enough just to change within organizations. AI is going to affect entire value chains, and if you think about it, the structure of our industries is based on those value chains, on how sectors within an industry interrelate. In the case of healthcare, that would be payers, providers, life sciences, and so forth. We need to question the boundaries between those firms, redraw them, and think about how these organizations interrelate within an ecosystem.

09:09

Easy, right? Okay, so change is not easy, and this is going to be really hard on people, particularly because of what I said earlier about the psychological fear of change that all this AI is going to bring. So what do we do? Well, first, we make sure that we are passionate about effectively communicating the vision at the other end of this. But even as we do that, as we change our processes, organizations, and systems to incorporate AI, we need to dig into that most human of emotions, or characteristics, empathy, and infuse our change processes with it.

09:52

So what do I mean by this? When I said "change processes" earlier, I bet a lot of you were thinking efficiency, effectiveness, and so on. But no: we have to make sure that our new processes that incorporate AI are transparent and trustworthy. We have to make sure that when we design organizations that use AI, humans are always in the loop; actually, not just in the loop, but always in charge, in the driver's seat. And as we redesign our systems, let's pause and think deeply about making sure they are rooted in purpose. Because only if we do that, only if we do that, can we take people with us and conquer the spooky mountain.

10:39

Thank you.

[Applause]


Related Tags
AI Challenges, Technology Ethics, Healthcare Innovation, Generative AI, Empathy, Future of Work, Organizational Change, Psychological Fear, AI Governance, Spooky AI