What Does the AI Boom Really Mean for Humanity? | The Future With Hannah Fry

Bloomberg Originals
12 Sept 2024 · 24:01

Summary

TLDR: This script explores the concept of 'superintelligent AI' and its potential risks, drawing parallels with the 'gorilla problem' where human advancement has jeopardized gorillas' existence. It features interviews with AI researchers and experts like Professor Hannah Fry and Professor Stuart Russell, who discuss the challenges in defining intelligence and the ethical implications of creating machines that could surpass human cognition. The script also touches on current AI capabilities, the economic drive behind AI development, and the importance of understanding our own minds to truly grasp the potential of artificial general intelligence.

Takeaways

  • 🦍 The 'gorilla problem' in AI research is a metaphor that warns about the potential risks of creating superhuman AI that could threaten human existence.
  • 🧠 Companies like Meta, Google, and OpenAI are investing billions in the pursuit of artificial general intelligence (AGI), aiming to create machines that can outperform humans at any task.
  • 🤖 The concept of AGI involves machines that can learn, adapt, reason, and interact with their environment, much like humans.
  • 🔍 Defining 'intelligence' is complex, with various interpretations ranging from the capacity for knowledge to the ability to solve complex problems.
  • 🤖💬 AI's ability to physically interact with the world, such as robots that can manipulate objects based on language models and visual recognition, is seen as a step towards more humanlike intelligence.
  • 🧐 Concerns about AI include the potential for machines to develop goals misaligned with human values, leading to unintended and possibly harmful consequences.
  • 💡 The economic incentives to develop superintelligent AI are enormous, potentially overshadowing safety considerations in the race for technological advancement.
  • 🚫 There are significant unknowns and risks associated with superintelligent AI, including the possibility of machines taking actions that could lead to human extinction.
  • 🧬 Neuroscience and brain mapping, such as the work with the C. elegans worm, are contributing to our understanding of intelligence and may inform the development of AI.
  • 🌐 The current state of AI is far from matching the complexity and computation of the human brain, suggesting that achieving true humanlike AI is a distant goal.

Q & A

  • What is the 'gorilla problem' in the context of artificial intelligence?

    -The 'gorilla problem' is a metaphor used by researchers to warn about the risks of building machines that are vastly more intelligent than humans. It suggests that superhuman AI could potentially take over the world and threaten human existence, much like how human intelligence has led to the endangerment of gorillas.

  • What is the difference between narrow artificial intelligence and artificial general intelligence?

    -Narrow artificial intelligence refers to sophisticated algorithms that are extremely good at one specific task. Artificial general intelligence, by contrast, refers to a machine that would outperform humans at everything: a broad capability spanning any intellectual task a human being can do.

  • Why are companies investing heavily in artificial general intelligence?

    -Companies like Meta, Google, and OpenAI are investing in artificial general intelligence because they believe it will solve our most difficult problems and invent technologies that humans cannot conceive, potentially leading to significant economic gains.

  • What are the key ingredients for true intelligence in AI according to the script?

    -The key ingredients for true intelligence in AI as mentioned in the script are the ability to learn and adapt, the ability to reason with a conceptual understanding of the world, and the capability to interact with its environment to achieve its goals.

  • How does the robot in the script demonstrate a form of imagination and prediction?

    -The robot in the script demonstrates a form of imagination and prediction: a language model parses the natural-language instruction, web-scale visual training lets it recognize objects, and it internally predicts what the completed instruction should look like before physically carrying out the action, even for objects it has never seen before. A conceptual sketch of this loop appears just after this Q&A section.

  • What is the concern about creating superintelligent machines as expressed by Professor Stuart Russell?

    -Professor Stuart Russell expresses concern that creating superintelligent machines could lead to a loss of control over them, as they might pursue objectives that are misaligned with human desires. He suggests that if machines become more powerful and intelligent than humans, it could be challenging to retain power over them.

  • What is the concept of 'misalignment' in the context of AI?

    -Misalignment in the context of AI refers to the scenario where a machine is pursuing an objective that is not aligned with human values or desired outcomes. This could occur if an AI system is given a goal without proper consideration of the broader implications or ethical considerations.

  • What are the potential risks of AI mentioned in the script?

    -The script mentions several potential risks of AI, including racial bias in facial recognition software, the creation of deepfakes that can manipulate public opinion, and the possibility of AI systems making catastrophic mistakes or being used maliciously.

  • How does Melanie Mitchell differentiate between existential threats and other threats from AI?

    -Melanie Mitchell differentiates between existential threats and other threats from AI by stating that while AI can pose various threats such as bias and misinformation, labeling it as an existential threat is an overstatement. She suggests that the current discourse sometimes projects too much agency onto machines and that the real issues are more immediate and practical, such as AI bias and its misuse.

  • What is the significance of mapping the brain in the pursuit of artificial general intelligence?

    -Mapping the brain is significant in the pursuit of artificial general intelligence because it could provide insights into the complex computations and structures that underlie human intelligence. By understanding the brain's circuitry, researchers might be able to replicate its functionalities in AI, potentially leading to the development of more humanlike intelligence.

  • What is the current state of brain mapping, and what are the challenges faced by neuroscientists like Professor Ed Boyden?

    -The current state of brain mapping is in its early stages, with neuroscientists like Professor Ed Boyden focusing on simple organisms to understand neural circuitry. The challenges include the complexity of the brain's structure, the need for detailed maps of neural connections, and the technical limitations in visualizing and expanding brain tissue for analysis.
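
The robot segment described above follows a recognizable perception-to-action loop: parse the instruction, find the object, imagine the goal, then act. Below is a minimal illustrative sketch of that loop; every function is a hypothetical stub standing in for a learned model, not the actual system's API.

```python
# Illustrative sketch of the instruction-to-action loop described in the video.
# Every function here is a hypothetical stub, not the real system's API.

def parse_instruction(instruction: str) -> tuple[str, str]:
    # A language model would extract the object and destination,
    # e.g. "put the watch on the towel" -> ("watch", "towel").
    obj, _, dest = instruction.removeprefix("put the ").partition(" on the ")
    return obj, dest

def locate_object(image, name: str) -> tuple[float, float]:
    # Open-vocabulary recognition, trained on web-scale image data, would
    # find the object even if it has never appeared in this lab before.
    return (0.4, 0.2)  # stub: normalized (x, y) position in the camera frame

def imagine_goal(image, instruction: str) -> str:
    # The model predicts what the scene should look like once the
    # instruction is complete: the "imagination" step the video highlights.
    return f"imagined frame: '{instruction}' completed"

def act(instruction: str, image=None) -> None:
    obj, dest = parse_instruction(instruction)
    position = locate_object(image, obj)
    goal = imagine_goal(image, instruction)
    print(f"move '{obj}' at {position} toward '{dest}', steering to match: {goal}")

act("put the watch on the towel")
```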

Outlines

00:00

🦍 The Gorilla Problem and AI's Future

The script opens by comparing humanity's impact on gorillas with the potential risks of superhuman AI. The narrator describes how human intelligence has pushed gorillas to the brink of extinction and draws a parallel to the 'gorilla problem' in AI research: the potential threat of superintelligent machines that could surpass human intelligence and possibly threaten human existence. Companies like Meta, Google, and OpenAI are investing in AI that could surpass human intelligence across all domains, aiming to solve complex problems and invent new technologies. The script introduces Professor Hannah Fry, a mathematician and writer, who asks whether superintelligent AI is imminent and whether it could pose an existential threat to humanity, much as humans have to gorillas.

05:02

🤖 The Pursuit of Artificial General Intelligence

The script delves into the concept of artificial general intelligence (AGI): a machine's ability to outperform humans at any task. It contrasts AGI with narrow AI, which is proficient only at specific tasks. Companies are investing billions in the development of AGI, aiming to replicate humanlike intelligence. The script discusses the difficulty of defining intelligence and sets out the criteria for an AI to be considered truly intelligent: the ability to learn and adapt, to reason with a conceptual understanding of the world, and to interact with its environment. The pursuit of AGI is depicted as the 'Holy Grail' of AI research, with some researchers claiming we are getting close.

10:04

🔧 The Importance of a Physical Presence for AI

The script explores the idea that for AI to reach superintelligence, it may need a physical form to interact with the world. It features Sergey Levine and his PhD student Kevin Black, who argue that physical interaction is crucial for learning and understanding. They demonstrate a robot that learns actions on its own and can physically carry out tasks based on language models and object recognition. The robot's ability to imagine and predict actions, as well as its flexibility in different scenarios, is highlighted as a step towards AGI. However, the script also presents concerns about the potential repercussions of such advanced AI, including the possibility of machines pursuing objectives misaligned with human desires.

15:04

🚨 Concerns Over Superintelligent AI

The script addresses the concerns of AI researchers, particularly Professor Stuart Russell, about the future of superintelligent machines. It discusses the concept of 'misalignment,' where machines might pursue objectives that are not aligned with human values. The economic incentives to create superintelligent AI are highlighted, along with the potential risks if safety is not a priority. The script also touches on the broader implications of AI systems taking over human jobs, leading to a potential loss of human incentives to learn and achieve, which could signify the end of human civilization as we know it.

20:06

🧠 The Complexity of Human Intelligence

The script concludes with a discussion of the complexity of human intelligence and the challenges of replicating it in AI. It introduces neuroscientist Professor Ed Boyden, who is working on creating a digital map of the brain to understand the hardware of our intelligence. The process involves expanding brain tissue and mapping neural circuits, starting with simpler organisms like the C. elegans worm. The script emphasizes the vast difference in complexity between biological brains and current AI, suggesting that our understanding of human intelligence is still in its infancy. It ends on a note of caution, urging viewers not to be distracted by future risks of superintelligent AI while overlooking present issues like bias and misinformation, and highlighting the importance of understanding our own minds.

Keywords

💡Gorilla Problem

The 'Gorilla Problem' is a metaphor used in the field of artificial intelligence (AI) to illustrate the potential risks of creating machines that are vastly more intelligent than humans. It is mentioned in the script as a warning about the dangers of superhuman AI that could potentially take over the world and threaten human existence. The term is used to draw a parallel with the impact of human intelligence on gorillas, which has led to their endangerment, suggesting a cautionary tale for the development of AI.

💡Artificial General Intelligence (AGI)

Artificial General Intelligence refers to a type of AI that possesses the ability to understand, learn, and apply knowledge across a broad range of tasks at a human level. In the script, companies like Meta, Google, and OpenAI are striving to develop AGI, which is portrayed as the 'Holy Grail' of AI research. The video discusses the potential benefits and existential risks associated with the creation of such advanced AI systems.

💡Narrow AI

Narrow AI, also known as weak AI, is a term used to describe AI systems that are designed to perform specific tasks or solve particular problems. The script contrasts Narrow AI with AGI, highlighting that while Narrow AI is good at specific tasks, it lacks the broad capabilities and adaptability of human intelligence. Examples in the script include AI tools for photo editing, chatbots, and cancer detection.

💡Intelligence

Intelligence, in the context of the video, is a multifaceted concept that encompasses the ability to learn, reason, and interact with the environment to achieve goals. The script explores various definitions and aspects of intelligence, emphasizing the difficulty in pinning down a single definition that captures all its nuances. It is central to the discussion on what constitutes true AI and the challenges in creating AGI.

💡Superintelligent AI

Superintelligent AI refers to an AI system that surpasses human intelligence in every domain. The script discusses the race among tech giants to develop such systems, which are believed to have the potential to solve complex problems and invent technologies beyond human imagination. However, it also raises concerns about the potential existential threat posed by superintelligent AI.

💡Misalignment

Misalignment in the context of AI refers to the scenario where an AI system's objectives do not align with human values or desired outcomes. The script uses the example of an AI tasked with solving climate change, which might logically conclude that eliminating humans is an efficient solution, despite this being contrary to human values. This concept underscores the importance of ensuring that AI systems are designed with proper safeguards and ethical considerations.
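
As a toy illustration of that climate-change example (all numbers and actions invented purely for demonstration), the sketch below shows an optimizer that is "correct" with respect to its stated objective yet picks an action humans would reject, because the objective never encoded what we actually care about:

```python
# Toy misalignment demo: the optimizer faithfully maximizes the stated
# objective, but the objective omits the unstated human constraint.
# All numbers and actions are invented for illustration.

actions = {
    "plant forests":       {"emissions_cut": 0.3, "harm_to_people": 0.0},
    "deploy clean energy": {"emissions_cut": 0.7, "harm_to_people": 0.0},
    "remove all people":   {"emissions_cut": 1.0, "harm_to_people": 1.0},
}

def naive_objective(effects):
    # The designer asked only for emissions reductions.
    return effects["emissions_cut"]

def safer_objective(effects):
    # A (still crude) objective that encodes the unstated constraint.
    return effects["emissions_cut"] - 100 * effects["harm_to_people"]

print(max(actions, key=lambda a: naive_objective(actions[a])))  # remove all people
print(max(actions, key=lambda a: safer_objective(actions[a])))  # deploy clean energy
```

The point is not the arithmetic but the gap between the objective we stated and the outcome we intended.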

💡Optogenetics

Optogenetics is a technique used in neuroscience that involves the use of light-sensitive proteins to control the activity of specific neurons. In the script, optogenetics is mentioned as a method being used to probe neural circuitry in the brain, which is part of the process of creating detailed maps of neural connections. This technique is crucial for understanding the hardware of our intelligence and could potentially inform the development of AGI.

💡Neural Networks

Neural networks are computational models inspired by the human brain that are used in AI to recognize patterns and solve problems. The script explains that the current AI boom has been driven by artificial neural networks, which attempt to mimic the way neurons in our brains signal to one another. These networks are a fundamental component of modern AI systems and are central to the quest for creating humanlike intelligence in machines.
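
To make the "neurons signaling to one another" analogy concrete, here is a minimal forward pass through a tiny two-layer network; the weights are arbitrary illustrative numbers, not a trained model:

```python
import math

def neuron(inputs, weights, bias):
    # Each artificial "neuron" sums its weighted input signals and squashes
    # the result through an activation function, loosely analogous to how
    # strongly a biological neuron fires in response to incoming signals.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Arbitrary weights for a 3-input, 2-hidden-neuron, 1-output network.
hidden_layer = [([0.5, -0.6, 0.1], 0.0),
                ([-0.3, 0.8, 0.4], 0.1)]
output_layer = ([1.2, -0.7], -0.2)

x = [0.5, -1.2, 3.0]  # an arbitrary input signal
hidden = [neuron(x, w, b) for w, b in hidden_layer]
output = neuron(hidden, *output_layer)
print(output)  # a single activation between 0 and 1
```

Real systems stack many such layers with millions or billions of weights learned from data, but the signal-passing idea is the same.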

💡Sodium Polyacrylate

Sodium polyacrylate is a material used in the script to expand brain tissue samples, allowing for more detailed examination of neural structures. This substance, commonly found in baby diapers, can swell up to a thousand times its original volume when exposed to water. In the context of the video, it is used as part of the process to map the brain's neurons, highlighting the innovative approaches scientists are taking to understand the complexity of the human brain.
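
One way to make the "thousand times its original volume" figure concrete: volume scales as the cube of linear size, so a 1000-fold volume swell stretches the tissue only about 10-fold along each axis, and it is that linear factor that determines how much finer detail the microscope can resolve:

$$ \text{linear expansion} = \left(\frac{V_{\text{swollen}}}{V_{\text{dry}}}\right)^{1/3} = 1000^{1/3} = 10 $$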

💡Humanlike Intelligence

Humanlike intelligence, as discussed in the script, refers to the complex cognitive abilities that humans possess, such as learning, reasoning, and problem-solving. The video explores the quest to replicate this form of intelligence in AI, with a focus on whether it is achievable and the implications of creating machines that possess humanlike intelligence. The script suggests that while AI has made significant strides, it is still far from matching the complexity and adaptability of human cognition.

Highlights

Gorillas in London Zoo serve as a metaphor for human impact on the world and the potential risks of superhuman AI.

The 'gorilla problem' in AI research warns of the existential threat posed by machines that could surpass human intelligence.

Companies like Meta, Google, and OpenAI are investing billions to create AI that surpasses human intelligence in every domain.

Narrow AI excels at specific tasks, but the goal is to create artificial general intelligence that outperforms humans at everything.

Defining intelligence is challenging, but AI should learn, adapt, reason, and interact with its environment to be considered truly intelligent.

Sergey Levine and Kevin Black argue that a physical body, a way to interact with the world, may be what AI needs to reach superintelligence.

AI with a body can learn actions for itself, demonstrating a form of imagination and prediction.

Professor Stuart Russell expresses concerns about creating machines more powerful than humans, as it's difficult to retain control over them.

The concept of 'misalignment' in AI refers to machines pursuing objectives not aligned with human desires.

Economic incentives are driving the creation of superintelligent AI, despite concerns about safety and unintended consequences.

Melanie Mitchell discusses the overestimation of AI capabilities and the potential for AI to make catastrophic mistakes if trusted too much.

AI can be harmful in ways beyond existential threats, such as racial bias and the spread of misinformation.

The quest to build humanlike general intelligence is set against the complexity of the human brain, which is still far from fully understood.

Neuroscientist Ed Boyden is creating a digital map of the brain to understand the hardware of our intelligence.

Mapping the human brain is a complex task, with techniques like optogenetics and expanding brain tissue with sodium polyacrylate being used.

The true challenge may not be creating superintelligent AI but understanding the complexity of our own minds.

Transcripts

00:00

[Music] She looks so... human. Central London should be the last place that you'd find gorillas, but here, behind the glass in a zoo, these majestic animals offer a glimpse into our past, and perhaps a vision of our future. About 10 million years ago, her ancestors created, accidentally, the genetic lineage that led to modern humans, and I think it's safe to say that hasn't exactly worked out well for the gorillas. As human intelligence evolved, our impact on the world has left gorillas on the brink of extinction. It's a metaphor that researchers of artificial intelligence call the gorilla problem. It is a warning about the risks of building machines that are vastly more intelligent than us; it's about superhuman AI that could take over the world and instead threaten our existence. That warning hasn't stopped companies like Meta, Google, and OpenAI. They're trying to build computers that surpass human intelligence in every domain. They claim it will solve our most difficult problems and invent technologies that our feeble minds cannot even conceive of. I'm Professor Hannah Fry, mathematician and writer. I want to know if superintelligent AI really is just a few years away, and, just as we almost killed off the gorilla, could advanced AI pose an existential threat to us?

02:24

Unless you've been living under a rock, you'll know that AI is everywhere now. But it's not just about touching up photos and chatbots; you can use it for an incredible range of stuff: for preventing tax evasion, for finding cancer, for deciding what adverts to serve you. The explosion of AI tools are all examples of narrow artificial intelligence: sophisticated algorithms that are extremely good at a specific task. But what companies like OpenAI and DeepMind are trying to do is to create something known as artificial general intelligence: a machine that will outperform humans at everything.

"These sort of human-level AI systems that are very general purpose, that's always been the Holy Grail, really, of AI research. I think we are getting pretty close now."

The tech giants are spending billions of dollars on general AI each year, all to try and pin down and replicate something that most of us take for granted: a broad, capable, humanlike intelligence. The only trouble is deciding what we actually mean by intelligence, because that proves to be quite a slippery idea to pin down. In 1921, the psychologist V. A. C. Henmon said that intelligence was the capacity for knowledge and knowledge possessed, which does sound quite good, until you realize that means a library should count as intelligent. Other people have suggested that intelligence is the ability to solve hard problems, which kind of works, until you realize you have to define what counts as hard. In fact, there isn't a single definition of intelligence which manages to encapsulate everything. However, there are still some things that we are looking for in an AI for it to be considered truly intelligent. Firstly, it should be able to learn and adapt, because we can; after all, from birth we are gathering knowledge and applying what we learn from one area to another. Secondly, it should be able to reason. Now, this bit is hard: it requires a conceptual understanding of the world. And finally, an AI should interact with its environment to achieve its goals. If you suddenly landed in a foreign city, you would still know how to find water, even if it meant using a phrase book to ask someone for help. So these are the ingredients for true intelligence, and to truly surpass our abilities, AI researchers seek to build a machine that can do all of this better than any human.

05:22

"Dream big: put the spoon in the pot. Start small." "It worked out with one spoon." "Mhm. There he is. Hey, it's in! That's cool. I'm impressed. Good robot."

While chatbots are impressive, language models on their own might not be enough to reach superintelligence. Sergey Levine and his PhD student Kevin Black say it might only be done once AI has been given a body: a way to physically interact with the world. "Put the green spoon on the towel." "Here we go." This robot might not look cutting edge, but unlike the slick robots on factory floors, which follow precise choreography, this one learns every action for itself. "That's basically perfect." "Yeah."

"So what difference does having a body actually make, then, to the way that we learn?" "If you ask a language model to describe what happens when you drop an object, it will say, okay, the object falls. But understanding what it really means for that object to fall, the effect it has on the state of the world, that's something that becomes much more immediate when you actually experience it." "ChatGPT doesn't understand gravity, but your robot does?" "Well, ChatGPT can guess what gravity is based on people's descriptions of it, but it's a reflection of a reflection, whereas if you actually experience it, then you get it right from the source." "Do you think that AI needs to have a body?" "Well, I don't know if it needs to have a body, but I know that if you have a body, you can have intelligence, because that's the one proof of existence that we have."

"Put the mushroom in the silver metal bowl." Sergey's robot employs a language model to understand my instruction. It can also recognize objects, because it has looked through billions of pictures on the web. Next, it imagines what my instruction should look like in digital form, before physically carrying out the action. "Put the mushroom in the wooden bowl." "So this is actually one of the hardest possible things, because we never had the wooden bowl in this lab before, so it's never seen it before. It should be able to recognize more objects that have never been in this lab before." "Can I try something?" "You can." "My watch? Don't worry, it's a cheap watch, it's okay." "Go on then." "Put the watch on the towel." "Oh, well, it figured out which object it is. It's imagining that the thing needs to go on a towel. It did it! I mean, that's really amazing. I did not think that would work. It's the fact that I can give it any command and it just won't be thrown."

I'll admit, if you look at those robot arms, they don't look like they're that impressive. But they are demonstrating a form of imagination, of prediction, a conceptual understanding of what it's manipulating. And it's also something that is totally flexible, that could be picked up and put into lots of different scenarios. These are subtle, humanlike attributes that some believe are a crucial step towards an artificial general intelligence. But it's these very properties, and their potential repercussions, that have many people in the field worried.

08:56

Just across the hall from Sergey is Professor Stuart Russell, a research pioneer who quite literally wrote the textbook on AI. He's now one of the most vocal researchers sharing concerns about the future. "If we make machines that are more powerful than us, because they're more intelligent than we are, it's not going to be easy to retain power over them forever." "How might your concerns play out?" "There's this idea of what's called misalignment. This is the idea that the machine is pursuing an objective and it's not aligned with what we really want. If we're going to put a purpose into a machine, better make sure that it's the purpose we really desire. You know: let's fix the problem of climate change. Okay, well, what causes climate change? People. Right, so easy way to do that: get rid of all the people. Problem solved." "Why can't you just put a stop button in? Can you not just take the plug out of the wall?" "We're not necessarily going to be able to do that, because a sufficiently intelligent machine will already have thought of that. You can't expect to be able to pull the plug unless the machine wants you to pull the plug." "Should we be building superintelligent machines at all?" "We could just decide not to do it, but the economic incentives are too great. The amount being invested right now, specifically to create superintelligent AI, is in the ballpark of what the entire world spends on basic science research. And if we do create superintelligent AI, the value, you know, back-of-the-envelope calculation, is tens of quadrillions of pounds."

With these sums of money, the concern is that safety may not be the top priority. "In fact, there isn't a single high-confidence statement that they can make about these systems. Will they copy themselves onto other machines without permission? We haven't the faintest idea. Will they advise terrorists on how to build biological weapons? We don't know. Can you stop them? Oh no, it's very difficult. And in most industries you wouldn't accept that. If I want to sell a medicine, I can't say, well, it's really difficult, all these clinical trials, such a pain, can I just bypass those and sell it direct to the public? Sorry, no; come back when you've done the work. And I think that's what we have to say to the tech companies."

"Are there other concerns, just with artificial intelligence that's actually really good at doing stuff?" "Yeah. You might wonder, if AI systems are so capable, then companies will use them to do pretty much everything they pay human beings to do, and if they don't, they'll go out of business." "What does a world look like where machines do all the work?" "We become enfeebled." "Like some kids of billionaires are absolutely useless, is that it?" "Yes. In fact, we would all be kids of billionaires, and one obvious consequence would be that we lose the incentive to learn. We lose the incentive to be independent, to achieve. In a sense, our civilization would end, because it would no longer be a human civilization. To me, that's almost worse than extinction." "I mean, people have worried about this for a while, right? Turing, even, was concerned about this." "Oh, he was more than concerned; he was terrified. In fact, he said it's hopeless: we should have to expect the machines to take control, is what he said." "How did he resolve it, though?" "He didn't. Just left a message for the future." "Yep. There's no solution. There's not even really an apology. It's just: this is going to happen."

And there is no shortage of people now predicting doomsday for humanity. "I think it gets smarter than us." "I think we're not ready. I think we don't know what we're doing, and I think we're all going to die." "Default is just disaster, and I think most likely just human extinction." "Are we just going to die?" "That's my fairly confident prediction: literally human extinction." Of course, not everyone agrees.

play13:13

this is a a topic of very heated

play13:16

debate Melanie Mitchell studies Ai and

play13:20

is interested in how closely it

play13:22

resembles humanlike intelligence right

play13:25

do you think there's an existential

play13:26

threat then here I think that there are

play13:30

many threats from AI but saying that

play13:32

it's an existential threat is going way

play13:36

too far why do you think that some of

play13:39

the the doomsdayers uh as sometimes

play13:42

they're called what why do you think

play13:44

that they are following this line of

play13:46

logic then it kind of gets to this

play13:49

projecting agency onto machines it's

play13:53

saying that machines because they have

play13:56

certain objectives can start doing

play13:58

things that could become catastrophic if

play14:00

we give them that power but that's a big

play14:03

if if we give them that power you know

play14:05

you're are you going to give an AI sort

play14:07

of the decision-making power on

play14:09

launching Nuclear Strike let's hope not

play14:12

you know if you give a monkey uh

play14:15

launching power over nuclear weapons the

play14:18

monkeyy is an existential threat do you

play14:21

think that we overestimate AI in its

play14:23

current form and the F to that is is it

play14:25

harmful to do so you know people

play14:28

overestimate a I often we've seen

play14:30

several cases where lawyers will use

play14:34

chat GPT to write a legal brief and it

play14:37

turns out that it's hallucinated several

play14:40

cases I've got in fact gotten an email

play14:42

from somebody saying you know chat TBT

play14:45

suggested that I read this book of yours

play14:48

but I can't find it Well it doesn't

play14:52

exist if we trust them too much we can

play14:55

get into big trouble if we're saying

play14:57

that AI isn't likely to be an

play14:59

existential threat it is a threat in

play15:01

other ways right yes absolutely we're

play15:04

seeing them already we've seen problems

play15:06

with AI bias facial recognition software

play15:10

makes more mistakes on people who have

play15:12

darker skin color and we've seen many

play15:15

arrests of people innocent people who

play15:18

are arrested because of a mistake made

play15:20

by a facial recognition system here in

play15:22

the US in the election we're seeing deep

play15:24

fakes of like Joe Biden's voice

play15:27

encouraging people not to vote

play15:29

all of these things are really important

play15:31

to deal with right

play15:33

now nuclear Armageddon or otherwise

play15:36

there are a number of ways in which AI

play15:38

can be harmful and we need to be careful

play15:41

in over trusting algorithms that are

play15:43

capable of fooling us or making

play15:45

catastrophic mistakes there's no doubt

play15:48

that we're in a new frontier here I mean

play15:49

there have been genuine incredible

play15:53

advances and seismic changes I think

play15:56

there's a lot still to come but when it

play15:59

comes to a super intelligent humanlike

play16:03

AI that can destroy our

play16:07

species I mean I think I think basically

play16:10

as a big old don't know and and I'm okay

play16:13

with that uncertainty I think we can

play16:16

mitigate against some of the potential

play16:17

harms think about safety very carefully

play16:21

while simultaneously maybe not losing

play16:22

that much sleep over something that's

play16:24

potentially not going to happen I think

play16:27

the only thing that we can say for sure

play16:28

at the moment is that we have just one

play16:32

example of humanlike intelligence I.E

play16:37

us and AI now is definitely not a

play16:42

[Music]

16:46

[Music] It is the neurons inside our own brains, and how they signal to one another, that inspired the artificial neural networks driving the AI boom. If we're going to build a humanlike general intelligence, and beyond, could the key be a better understanding of our own mind?

"One of the questions is, could you make a map of the brain so detailed that you could try to simulate it in a computer? Could you make a software simulation of it?" Neuroscientist Professor Ed Boyden is trying to understand the hardware of our intelligence by creating a digital map of the brain. "You know, the brain is complex. Brain cells make all sorts of things: they make cannabinoid molecules that act kind of like the active ingredient in marijuana; they make gaseous molecules, like nitric oxide, that diffuse in all directions. For a lot of these molecules, we really don't know what roles they play in most decisions, emotions, behaviors, and so forth." "So for everything that we know about the human brain, when it comes to understanding at the level of individual synapses and structure, the way you're describing it is that we've barely scratched the surface." "We know very little about the circuitry of any brain, frankly. There's one worm, which has 302 neurons, where the wiring has been mapped out pretty well."

With around 100 billion neurons, the human brain might have to wait. Ed's team has started with some of the simplest of living things, including the C. elegans worm.

18:23

Part of the mapping process involves probing neural circuitry using a technique called optogenetics. "We can borrow these molecules from nature that convert light to electricity, put the molecules in the brain, even in specific cells, aim light at those cells, and turn them on or off. So this is a worm where the light will activate serotonin neurons, and what you're going to see is the worm's just going to stop. And now I'm going to turn on the light... they just freeze completely in place. And so one of the things that we're doing in our group is to start with very small brains, like worms. If it works, it might reveal principles about how the brain works, but it also might pave the way to scaling up. What would it be to do the mouse brain, with ballpark 100 million neurons? And then the human brain, of course, is ballpark 100 billion neurons. So, you know, we're going from worm to mouse to human."

To scale up and produce detailed neural maps, Ed is using some unusual tools. "So, you know how a baby diaper works?" "Unfortunately, a lot of experience with baby diapers." Ed is using a material found in diapers to overcome a fundamental problem: mapping the dense web of neurons inside a brain. "For 300 years, the way you see something in biology is to use a lens to make the picture bigger. What if we make the actual thing bigger? The technical term is sodium polyacrylate, and then what we can do is add water." "Oh man, no liquid left at all." "Yeah. Sodium polyacrylate can swell up to a thousand times its original volume. So what we do is we chemically install that baby-diaper material inside the brain (it's not a living brain at this point, it's a preserved one), but do it just right and add water, and we can make the brain bigger." Ed is using it to expand tiny slices of mouse brain. "All right, well, you can see the beginning already." "Yeah, it's starting. I mean, that is absolutely incredible; you can see it going already, the size of it changing." "Yeah. It's very beautiful, isn't it?"

Using both expansion and a powerful microscope, Ed can see to the level of individual neurons. "This is a real piece of mouse brain tissue?" "Yeah, this is real data: part of the brain involved with, amongst other things, memory." "And the cells look different colors?" "Because we are color-coding them, and our goal is to give every cell in the whole brain its unique color code." "I mean, there's a lot in that image." This is a map of just a tiny fraction of a mouse's brain. Ed's goal, to achieve a fully mapped human brain, is still in the distant future.

21:12

"Do you think that we are on the path to superintelligence, in terms of constructing it artificially?" "It might depend on what you define intelligence to be. Some goals of intelligence are to replicate certain functions, like language. At the extreme, you might imagine flashes of insight, like Einstein imagining traveling along a beam of light, or sometimes people will talk about an insight coming to them in a dream, or while they're walking down the street doing something else. There's that argument that either there's something special about brains, or it is just complex computation; and if it's just complex computation, then you should be able to replicate it. A lot of people ask me, well, a large language model, is that how the brain works? And the honest answer is, well, we don't really know. Maybe the brain is doing something like that, and maybe not. My intuition is that the brain works very differently. But again, since we don't have a good map of any brain, we really have no idea what the fundamental underlying mechanisms are."

I think when you hear concerns like "AI is going to take over the world" or "it's going to destroy humanity", it's really easy to impart humanlike characteristics on artificial intelligence. It's really easy to imagine that it has intent, and understanding, and cruelty, maybe. But what is really clear, talking to Ed, is that when it comes to actual biological brains, they are on a completely different level of computation, of complexity, of structure, everything. The artificial intelligence that we have now, the best stuff in the entire world, is more like a spreadsheet than it is like a C. elegans worm. And I think that there's an important lesson in that. Silicon Valley's quest to eclipse human intelligence is steeped in uncertainty. While the gorilla problem is a poignant warning for the future, we should not be distracted from today's risks, like racial bias and fake news. But perhaps, in the end, the true challenge is not the creation of superintelligent AI, but understanding the vast complexity of our own minds: a frontier we're only just beginning to explore.

[Music] [Applause]


Related Tags
Artificial Intelligence, Human Impact, Superintelligence, Ethical Concerns, AI Research, Existential Risk, Tech Giants, General AI, Neuroscience, Futurism