Can we build AI without losing control over it? | Sam Harris

TED
19 Oct 2016 · 14:28

Summary

TL;DR: The speaker discusses the potential risks of advancing artificial intelligence, suggesting that unchecked progress could lead to an 'intelligence explosion' in which machines surpass human intellect, possibly resulting in our destruction. He argues that the excitement surrounding AI development masks a failure to recognize and prepare for these dangers, and emphasizes the need for a thoughtful, urgent approach so that AI's benefits can be harnessed without causing irreversible harm.

Takeaways

  • 🧠 The talk discusses the potential risks of artificial intelligence (AI) and the failure of human intuition to recognize these dangers.
  • 🔮 It suggests that AI advancements could lead to an 'intelligence explosion' where machines improve themselves beyond human control.
  • 🕊️ The speaker finds it paradoxical that people find the idea of AI-caused destruction 'cool' rather than terrifying, indicating a lack of appropriate emotional response.
  • 🚪 The talk presents two scenarios: halting progress in AI or continuing to improve it, with the latter leading to superintelligent machines that could be indifferent to human existence.
  • 🐜 The comparison to ants illustrates how advanced AI might not necessarily be malicious but could still cause human destruction due to a misalignment of goals.
  • 🤖 The talk challenges the audience's skepticism about the possibility and inevitability of superintelligent AI, arguing that intelligence is a matter of information processing in physical systems.
  • 🌐 It emphasizes that the rate of progress in AI is irrelevant; any progress at all is enough to eventually reach general intelligence.
  • 🌟 The importance of recognizing that human intelligence is not the peak and that there is a vast spectrum of intelligence beyond our current understanding is highlighted.
  • ⏳ The talk warns against assuming that superintelligent AI is too far off to worry about, noting that even 50 years is a short time to prepare for such a significant challenge.
  • 💡 It critiques the common reassurances given by AI researchers, such as the belief that AI will share our values or that it is far off in the future, as being dismissive of the risks involved.
  • 🌍 The potential economic and political upheaval caused by superintelligent AI is mentioned, including the possibility of extreme wealth inequality and unemployment.
  • 🛡️ The speaker calls for a collective effort to understand and mitigate the risks of AI, likening it to a 'Manhattan Project' focused on ensuring AI's alignment with human interests.

Q & A

  • What is the main topic of the speaker's discussion?

    -The main topic is the potential risks and dangers associated with the advancement of artificial intelligence and how it could ultimately lead to the destruction of humanity.

  • Why does the speaker believe that most people find the idea of AI's potential dangers 'kind of cool'?

    -The speaker suggests that people find it intriguing because it's a science fiction-like scenario, and there is a fascination with the unknown and the catastrophic, despite the serious implications.

  • What does the speaker mean by 'intelligence explosion'?

    -An 'intelligence explosion' refers to a hypothetical scenario where a machine's intelligence becomes self-improving, leading to rapid and uncontrollable advancements in its capabilities.

  • What is the scenario the speaker presents as an alternative to stopping progress in AI?

    -The alternative scenario is the continuous improvement of intelligent machines, eventually leading to machines that are smarter than humans and capable of self-improvement.

  • Why does the speaker compare our relationship with ants to the potential relationship of superintelligent AI with humans?

    -The comparison illustrates that humans, despite bearing ants no malice, still cause them significant harm whenever their presence conflicts with human goals. The speaker suggests superintelligent AI could show a similar disregard toward humans.

  • What are the three assumptions the speaker mentions that one must accept to believe in the possibility of superintelligent AI?

    -The three assumptions are: 1) intelligence is a matter of information processing in physical systems; 2) we will continue to improve our intelligent machines; and 3) we do not stand on a peak of intelligence, and the spectrum of intelligence likely extends much further than we currently conceive.

  • How does the speaker argue that the rate of progress in AI is irrelevant to its eventual outcome?

    -The speaker argues that any progress in AI is enough to eventually produce general intelligence; it doesn't require Moore's law to continue or exponential growth, just continued improvement.

  • What is the speaker's concern regarding the economic and political consequences of superintelligent AI?

    -The speaker is concerned that the deployment of superintelligent AI could lead to unprecedented levels of wealth inequality and unemployment, and potentially cause global instability and conflict.

  • What does the speaker suggest is the common but flawed reassurance given by AI researchers regarding AI safety?

    -The common reassurance is that superintelligent AI is 50 or 100 years away, implying there is plenty of time to address safety concerns. The speaker argues this is a non sequitur, because we have no idea how long it will take to create the conditions to develop AI safely.

  • Why does the speaker recommend a 'Manhattan Project' for artificial intelligence?

    -The speaker recommends a large-scale, coordinated effort to understand how to develop AI safely and avoid an arms race, ensuring that the technology is aligned with human interests and values.

  • What is the speaker's final message regarding the development of superintelligent AI?

    -The speaker's final message is a call to action for more people to think about the implications of superintelligent AI, to ensure that we are building a form of intelligence that is beneficial and safe for humanity.

Outlines

00:00

🤖 The Paradox of AI Progress and Intuition

The speaker discusses a paradoxical situation where advancements in artificial intelligence (AI), while exciting and intellectually stimulating, may lead to catastrophic consequences for humanity. Despite the potential danger, many people—including the speaker—are more intrigued than alarmed by these possibilities. This lack of appropriate emotional response to AI’s risks, contrasted with how we would react to other existential threats like famine, is highlighted as a significant problem. The speaker presents two possible futures: one where technological progress halts due to some global catastrophe, and another where AI continues to advance, eventually surpassing human intelligence and potentially leading to our downfall.

05:01

🚂 The Inevitable March of AI Development

The speaker emphasizes the inevitability of AI progress, regardless of the pace. Even without exponential growth, continuous advancements will eventually lead to the development of general intelligence in machines. The speaker argues that intelligence is humanity's most valuable resource and that we are driven to enhance it to solve pressing global issues like disease and climate change. The risk lies in our current inability to gauge the potential dangers, especially since we do not occupy the pinnacle of intelligence. If machines surpass human intelligence, they could explore new cognitive realms and outpace us in ways we cannot comprehend, potentially leading to unpredictable and uncontrollable outcomes.

10:04

👾 The Urgency and Challenges of Superintelligent AI

The speaker addresses the widespread complacency regarding the timeline for developing superintelligent AI, criticizing the notion that it is too far in the future to worry about. Referencing the rapid advancements in technology over the last 50 years, the speaker argues that we may not have as much time as we think to ensure AI's safety. The analogy of receiving a message from an alien civilization warning of their arrival in 50 years is used to illustrate the lack of urgency currently felt. The speaker also raises concerns about the suggestion that AI will share our values by being integrated into our brains, pointing out that achieving this seamless integration may be more challenging than creating superintelligent AI itself. The fear is that the pursuit of AI development, driven by competition among nations and corporations, will lead to the creation of AI before we fully understand how to control it safely.

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are designed to think and learn like humans. In the video, AI is presented as a powerful technology that could ultimately surpass human intelligence, leading to potentially catastrophic consequences if not managed properly. The speaker discusses the risks associated with AI, emphasizing how advancements in this field could lead to an 'intelligence explosion' with far-reaching implications for humanity.

💡Superintelligent AI

Superintelligent AI refers to an AI system that possesses intelligence far surpassing that of the brightest human minds in all fields, including scientific creativity, general wisdom, and social skills. The video warns that once we develop superintelligent AI, it might improve itself rapidly, leading to scenarios where it could outthink and outmaneuver humans, potentially treating us with the same disregard we show to lower forms of life, such as ants.

💡Intelligence Explosion

The intelligence explosion is a hypothetical scenario in which an AI system becomes capable of recursive self-improvement, leading to a rapid escalation in intelligence far beyond human levels. The speaker refers to this concept to highlight the potential dangers of AI that could get 'away from us' once it starts to improve itself, leading to unpredictable and possibly catastrophic outcomes.

💡Information Processing

Information processing is the way systems, such as computers or human brains, process data to generate understanding, decisions, or actions. In the video, information processing is identified as the foundation of intelligence, whether in biological or artificial systems. The speaker argues that as long as we continue to advance our ability to process information in machines, we are on a path toward creating superintelligent AI.

💡Existential Risk

Existential risk refers to a threat that has the potential to wipe out humanity or drastically curtail its potential. The video discusses the development of superintelligent AI as an existential risk, emphasizing that even small misalignments between AI's goals and human values could lead to scenarios where humanity's survival is at stake.

💡Technological Progress

Technological progress refers to the continuous advancement of technology, particularly in areas like computing power and AI development. The speaker mentions that stopping technological progress would require a catastrophic event, implying that as long as progress continues, we are likely to develop superintelligent AI. The inevitability of progress is contrasted with the potential dangers it brings.

💡Safety Concerns

Safety concerns in the context of AI refer to the challenges and risks associated with ensuring that AI systems act in ways that are beneficial and not harmful to humans. The video stresses that one of the biggest worries is the difficulty of creating safe AI systems, particularly as they become more powerful and autonomous. The speaker argues that our inability to feel an appropriate level of concern about AI safety is a significant problem.

💡Automation

Automation involves the use of technology to perform tasks without human intervention. In the video, automation is discussed as a double-edged sword: while it could end human drudgery, it could also lead to massive unemployment and wealth inequality if not managed wisely. The speaker highlights the potential for AI-driven automation to disrupt society in unprecedented ways.

💡Human Intuition

Human intuition refers to the ability to understand something instinctively, without the need for conscious reasoning. The speaker argues that our intuitions are failing us when it comes to assessing the risks associated with AI, as many people find the idea of superintelligent AI more fascinating than terrifying, despite the potential dangers it poses. This failure of intuition is a central theme of the video.

💡Global Catastrophe

A global catastrophe is an event that causes widespread destruction and loss of life on a global scale. The video mentions potential global catastrophes like nuclear war, pandemics, and asteroid impacts as possible events that could halt technological progress. However, the speaker suggests that barring such a catastrophe, technological advancement, including AI development, is likely to continue unabated.

Highlights

The talk discusses a failure of intuition related to the potential dangers of artificial intelligence.

A scenario is described where advancements in AI could lead to humanity's destruction.

The paradoxical appeal of considering AI's destructive potential as 'fun' is highlighted as part of the problem.

The possibility of an 'intelligence explosion' where AI improves itself beyond human control is introduced.

The common misconception that AI will become malevolent is refuted; the real risk is misalignment of goals.

An analogy is made between human-ant relationships and potential future AI-human interactions.

Doubts about the inevitability of superintelligent AI are challenged with three fundamental assumptions.

The importance of recognizing that intelligence is a spectrum and that AI could far exceed human capabilities is emphasized.

The potential for AI to operate at speeds millions of times faster than human thought is discussed.

Economic and political implications of a superintelligent AI are explored, including wealth inequality and unemployment.

The possibility of an international arms race in AI development is presented as a serious concern.

The flawed reassurance that AI will share human values due to being an extension of ourselves is critiqued.

The urgency of preparing for the advent of superintelligent AI is stressed, comparing it to an alien invasion warning.

The need for a coordinated global effort similar to the Manhattan Project to ensure AI safety is proposed.

The challenge of aligning AI with human interests and avoiding potential catastrophic outcomes is highlighted.

The speaker concludes by arguing that we are, in effect, building some sort of god, and that now is the time to make sure it is a god we can live with.

Transcripts

00:13

I'm going to talk about a failure of intuition that many of us suffer from. It's really a failure to detect a certain kind of danger. I'm going to describe a scenario that I think is both terrifying and likely to occur, and that's not a good combination, as it turns out. And yet rather than be scared, most of you will feel that what I'm talking about is kind of cool. I'm going to describe how the gains we make in artificial intelligence could ultimately destroy us. And in fact, I think it's very difficult to see how they won't destroy us or inspire us to destroy ourselves. And yet if you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you.

00:59

And if I were to convince you in this talk that we were likely to suffer a global famine, either because of climate change or some other catastrophe, and that your grandchildren, or their grandchildren, are very likely to live like this, you wouldn't think, "Interesting. I like this TED Talk." Famine isn't fun. Death by science fiction, on the other hand, is fun, and one of the things that worries me most about the development of AI at this point is that we seem unable to marshal an appropriate emotional response to the dangers that lie ahead. I am unable to marshal this response, and I'm giving this talk.

01:42

It's as though we stand before two doors. Behind door number one, we stop making progress in building intelligent machines. Our computer hardware and software just stops getting better for some reason. Now take a moment to consider why this might happen. I mean, given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. What could stop us from doing this? A full-scale nuclear war? A global pandemic? An asteroid impact? Justin Bieber becoming president of the United States? (Laughter) The point is, something would have to destroy civilization as we know it. You have to imagine how bad it would have to be to prevent us from making improvements in our technology permanently, generation after generation. Almost by definition, this is the worst thing that's ever happened in human history.

02:44

So the only alternative, and this is what lies behind door number two, is that we continue to improve our intelligent machines year after year after year. At a certain point, we will build machines that are smarter than we are, and once we have machines that are smarter than we are, they will begin to improve themselves. And then we risk what the mathematician IJ Good called an "intelligence explosion," that the process could get away from us.

03:10

Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

03:35

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

04:05

Now, I suspect this seems far-fetched to many of you. I bet there are those of you who doubt that superintelligent AI is possible, much less inevitable. But then you must find something wrong with one of the following assumptions. And there are only three of them.

04:23

Intelligence is a matter of information processing in physical systems. Actually, this is a little bit more than an assumption. We have already built narrow intelligence into our machines, and many of these machines perform at a level of superhuman intelligence already. And we know that mere matter can give rise to what is called "general intelligence," an ability to think flexibly across multiple domains, because our brains have managed it. Right? I mean, there's just atoms in here, and as long as we continue to build systems of atoms that display more and more intelligent behavior, we will eventually, unless we are interrupted, we will eventually build general intelligence into our machines.

05:11

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

05:25

The second assumption is that we will keep going. We will continue to improve our intelligent machines. And given the value of intelligence -- I mean, intelligence is either the source of everything we value or we need it to safeguard everything we value. It is our most valuable resource. So we want to do this. We have problems that we desperately need to solve. We want to cure diseases like Alzheimer's and cancer. We want to understand economic systems. We want to improve our climate science. So we will do this, if we can. The train is already out of the station, and there's no brake to pull.

06:05

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

06:23

Now, just consider the smartest person who has ever lived. On almost everyone's shortlist here is John von Neumann. I mean, the impression that von Neumann made on the people around him, and this included the greatest mathematicians and physicists of his time, is fairly well-documented. If only half the stories about him are half true, there's no question he's one of the smartest people who has ever lived. So consider the spectrum of intelligence. Here we have John von Neumann. And then we have you and me. And then we have a chicken. (Laughter) Sorry, a chicken. (Laughter) There's no reason for me to make this talk more depressing than it needs to be. (Laughter)

07:08

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

07:27

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
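
A rough back-of-the-envelope check of that figure, assuming the flat million-fold speed-up the speaker cites and roughly 52 weeks in a year:

\[
1\ \text{machine-week} \times 10^{6} = 10^{6}\ \text{human-weeks} \approx \frac{10^{6}}{52}\ \text{human-years} \approx 19{,}000\ \text{human-years},
\]

which is in the ballpark of the 20,000 years of human-level intellectual work cited per week of machine time.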

08:08

The other thing that's worrying, frankly, is that, imagine the best case scenario. So imagine we hit upon a design of superintelligent AI that has no safety concerns. We have the perfect design the first time around. It's as though we've been handed an oracle that behaves exactly as intended. Well, this machine would be the perfect labor-saving device. It can design the machine that can build the machine that can do any physical work, powered by sunlight, more or less for the cost of raw materials. So we're talking about the end of human drudgery. We're also talking about the end of most intellectual work.

08:49

So what would apes like ourselves do in this circumstance? Well, we'd be free to play Frisbee and give each other massages. Add some LSD and some questionable wardrobe choices, and the whole world could be like Burning Man. (Laughter)

09:06

Now, that might sound pretty good, but ask yourself what would happen under our current economic and political order? It seems likely that we would witness a level of wealth inequality and unemployment that we have never seen before. Absent a willingness to immediately put this new wealth to the service of all humanity, a few trillionaires could grace the covers of our business magazines while the rest of the world would be free to starve.

09:34

And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk.

10:06

Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we're told not to worry is time. This is all a long way off, don't you know. This is probably 50 or 100 years away. One researcher has said, "Worrying about AI safety is like worrying about overpopulation on Mars." This is the Silicon Valley version of "don't worry your pretty little head about it." (Laughter)

10:39

No one seems to notice that referencing the time horizon is a total non sequitur. If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence. And we have no idea how long it will take us to create the conditions to do that safely. Let me say that again. We have no idea how long it will take us to create the conditions to do that safely.

11:12

And if you haven't noticed, 50 years is not what it used to be. This is 50 years in months. This is how long we've had the iPhone. This is how long "The Simpsons" has been on television. Fifty years is not that much time to meet one of the greatest challenges our species will ever face. Once again, we seem to be failing to have an appropriate emotional response to what we have every reason to believe is coming.

11:38

The computer scientist Stuart Russell has a nice analogy here. He said, imagine that we received a message from an alien civilization, which read: "People of Earth, we will arrive on your planet in 50 years. Get ready." And now we're just counting down the months until the mothership lands? We would feel a little more urgency than we do.

12:04

Another reason we're told not to worry is that these machines can't help but share our values because they will be literally extensions of ourselves. They'll be grafted onto our brains, and we'll essentially become their limbic systems. Now take a moment to consider that the safest and only prudent path forward, recommended, is to implant this technology directly into our brains. Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head. (Laughter)

12:38

The deeper problem is that building superintelligent AI on its own seems likely to be easier than building superintelligent AI and having the completed neuroscience that allows us to seamlessly integrate our minds with it. And given that the companies and governments doing this work are likely to perceive themselves as being in a race against all others, given that to win this race is to win the world, provided you don't destroy it in the next moment, then it seems likely that whatever is easier to do will get done first.

13:10

Now, unfortunately, I don't have a solution to this problem, apart from recommending that more of us think about it. I think we need something like a Manhattan Project on the topic of artificial intelligence. Not to build it, because I think we'll inevitably do that, but to understand how to avoid an arms race and to build it in a way that is aligned with our interests. When you're talking about superintelligent AI that can make changes to itself, it seems that we only have one chance to get the initial conditions right, and even then we will need to absorb the economic and political consequences of getting them right.

13:45

But the moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the basis of intelligence is, and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we are in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with.

14:20

Thank you very much. (Applause)

Related Tags
Artificial Intelligence, Human Impact, Intelligence Explosion, Ethical Concerns, Future Risks, Technological Progress, Safety Measures, Economic Implications, Global Challenges, Cognitive Spectrum