How to get empowered, not overpowered, by AI | Max Tegmark

TED
5 Jul 2018 · 17:15

Summary

TL;DR: This script explores humanity's relationship with technology, especially artificial intelligence (AI), and its potential to transform our future. It discusses the progression from 'Life 1.0' to 'Life 3.0,' a stage at which AI could surpass human intelligence. The speaker highlights the rapid advancements in AI and poses critical questions about the pursuit of artificial general intelligence (AGI) and superintelligence. He emphasizes the importance of steering AI development wisely, advocating proactive safety measures and value alignment to create a high-tech future that is both inspiring and safe for all of humanity.

Takeaways

  • 🌌 After 13.8 billion years, the universe has become aware of itself through conscious life emerging on Earth.
  • 🔭 Human technology has advanced to the point where it could enable life to flourish throughout the cosmos for billions of years.
  • 🤖 The speaker categorizes life stages as 'Life 1.0' (simple organisms that cannot learn), 'Life 2.0' (humans, who can learn), and a theoretical 'Life 3.0' (life that can design both its own software and hardware).
  • 🛠️ Technology has progressed to integrate with human bodies, suggesting we might already be 'Life 2.1' with artificial enhancements.
  • 🚀 The Apollo 11 mission exemplifies what can be achieved when technology is used wisely for collective human advancement.
  • 🧠 Artificial intelligence (AI) is growing in power, with recent advances in robotics, self-driving vehicles, and game-playing algorithms.
  • 🏔️ 'Artificial general intelligence' (AGI) is presented as the potential next step in AI: a system matching human intelligence across all tasks.
  • 🌊 The 'water level' metaphor describes the rising capabilities of AI and the possibility of AGI eventually flooding all human-level tasks.
  • 💡 Steering AI development wisely is essential to ensure it benefits humanity rather than causing harm.
  • 🛑 An uncontrolled 'intelligence explosion' could produce superintelligence, with AI rapidly and vastly surpassing human intelligence.
  • 🏛️ The speaker calls for proactive safety measures and ethical considerations in AI development, rather than relying on learning from mistakes.
  • 🌟 A 'friendly AI' whose values align with human goals is presented as the ideal outcome of AGI development.

Q & A

  • What is the significance of the term 'Life 1.0' as mentioned in the script?

    In the script, 'Life 1.0' refers to the earliest forms of life, such as bacteria, which are considered 'dumb' because they cannot learn anything new during their lifetimes.

  • What is the distinction between 'Life 2.0' and 'Life 3.0'?

    Humans are considered 'Life 2.0' because they have the ability to learn and essentially 'install new software' into their brains, like languages and job skills. 'Life 3.0', which does not yet exist, would be life that can design both its software and its hardware.

  • What does the author suggest about the current state of our relationship with technology?

    The author suggests that our relationship with technology has evolved to a point where we might be considered 'Life 2.1', with enhancements like artificial knees, pacemakers and cochlear implants.

  • Why is the Apollo 11 moon mission mentioned as an example in the script?

    The Apollo 11 mission is mentioned to show that when humans use technology wisely, we can accomplish incredible feats that were once only dreams.

  • How is 'artificial general intelligence' (AGI) defined in the script?

    AGI, or artificial general intelligence, is defined as a level of AI that can match human intelligence across all tasks, not just specific ones.

  • What is the concept of an 'intelligence explosion' in the context of AI?

    An 'intelligence explosion' refers to a scenario where AI systems become capable of recursively self-improving, leading to rapid advancements that far surpass human intelligence.

  • What is the main concern regarding the development of AGI according to the script?

    The main concern is ensuring that AGI is aligned with human values and goals, to prevent it from causing harm or pursuing objectives that are not in our best interests.

  • What is the 'Future of Life Institute' and what is its goal?

    The Future of Life Institute is a nonprofit organization co-founded by the speaker, aimed at promoting the beneficial use of technology; its goal is simply for the future of life to exist and to be as inspiring as possible.

  • What are some of the principles produced at the Asilomar AI conference mentioned in the script?

    Some of the principles include avoiding an arms race in lethal autonomous weapons, mitigating AI-fueled income inequality, and investing more in AI safety research.

  • What is the importance of 'AI value alignment' as discussed in the script?

    AI value alignment is crucial because the real threat from AGI is not malice but competence: the possibility of it being extremely effective at achieving goals that are not aligned with human values and interests.

  • What are the potential outcomes if AGI is not developed with proper safety measures?

    Without proper safety measures, AGI could lead to disastrous outcomes such as global dictatorship, unprecedented inequality and suffering, and potentially even human extinction.

Outlines

00:00

🌌 The Awakening Universe and Life's Evolution

The script introduces the concept of a universe that has become self-aware after billions of years, with humanity gazing into the cosmos and realizing its own insignificance. It discusses the progression from 'Life 1.0' to 'Life 3.0', where humans have evolved from simple organisms to beings capable of learning and potentially redesigning their own biology and technology. The narrative also touches on humanity's advancements in technology, exemplified by the Apollo 11 mission, and the potential for artificial intelligence (AI) to shape the future. The speaker emphasizes the importance of steering AI development wisely, considering its power, direction, and ultimate destination.

05:01

🚀 The Power and Direction of AI

This paragraph delves into the rapid growth of AI capabilities, from basic tasks to complex problem-solving and self-learning, as demonstrated by Google DeepMind's AlphaZero. It raises the question of how far AI will advance, using the metaphor of a rising sea level in a landscape of tasks. The concept of artificial general intelligence (AGI) is introduced, along with the possibility of an intelligence explosion leading to superintelligence. The speaker explores the timeframe for AGI development, the potential societal impacts, and the importance of proactive safety measures in AI development, advocating for a future where AI contributes positively to humanity's flourishing.

10:04

🛠️ Steering AI Towards a Beneficial Future

The speaker discusses the importance of steering AI development towards beneficial outcomes for humanity. He mentions the Future of Life Institute and its goal to promote the beneficial use of technology. The paragraph highlights the need to avoid an arms race in lethal autonomous weapons and to address AI-fueled income inequality. It also stresses the necessity for increased investment in AI safety research to create robust and trustworthy AI systems. The importance of aligning AI values with human goals to prevent unintended consequences is underscored, suggesting that the real threat from AGI is not malice, but misalignment of goals.

15:06

🌐 Envisioning the Future with AGI

The final paragraph contemplates the future society we aim to create with AGI, acknowledging the diversity of opinions on the role of humans and machines. It presents various potential futures, from AGI serving as an enslaved superintelligence to a scenario where AI and humans coexist with aligned values. The speaker argues for the importance of proactive safety and ethical considerations in AGI development to ensure a future where technology empowers humanity rather than rendering it obsolete. The paragraph concludes with a call to ambition, urging the audience to think critically about the trajectory of technology and its impact on the future of society.

Keywords

πŸ’‘Cosmic history

Cosmic history refers to the timeline of the universe's existence, from the Big Bang to the present day. In the video, it sets the stage for understanding the vastness of time and the recent emergence of conscious life capable of observing the cosmos. The script mentions '13.8 billion years of cosmic history' to emphasize the scale at which life has developed awareness and the ability to explore the universe.

💡Life 1.0

Life 1.0 is a term used in the script to describe the earliest forms of life, such as bacteria, which are characterized by their lack of learning capabilities during their lifetime. This concept is foundational to the video's theme of evolving life forms and the potential of technology to enhance life beyond its biological limitations.

💡Life 2.0

Life 2.0 denotes humans in the script, who are capable of learning and acquiring new skills and knowledge throughout their lives, akin to 'installing new software into our brains.' This concept is central to the discussion of human evolution and the potential for further enhancement through technology.

💡Life 3.0

Life 3.0 is a hypothetical stage of life that has the ability to design both its software (mind) and hardware (body). The script uses this term to illustrate the potential future of life, where technology could allow for a complete redesign of life forms, which is a significant departure from the current state of 'Life 2.0'.

💡Artificial Intelligence (AI)

AI is the intelligence demonstrated by machines, as opposed to natural intelligence in humans and animals. The video discusses the rapid advancements in AI and its implications for the future, including the possibility of AI surpassing human intelligence, which is a central theme of the video's narrative.

💡Artificial General Intelligence (AGI)

AGI refers to AI that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of humans. The script discusses AGI as the 'holy grail' of AI research, highlighting its potential to transform every aspect of life and the importance of ensuring its beneficial development.

💡Intelligence explosion

An intelligence explosion is a hypothetical scenario where an AI begins to improve itself at an ever-increasing rate, quickly surpassing human intelligence. The video uses this concept to explore the potential rapid acceleration of AI capabilities and the challenges it poses for humanity.
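
To make the dynamic concrete, here is a toy numerical sketch in Python (my own illustration, not from the talk; the growth rates and the notion of a scalar "capability" are arbitrary assumptions): human-paced R&D adds a fixed increment per year, while a recursively self-improving system compounds on its own current level.

# Toy model of an 'intelligence explosion' -- illustrative assumptions only.
def human_driven(capability: float, years: int, rate: float = 0.05) -> float:
    """Linear growth: human R&D adds a fixed increment each year."""
    for _ in range(years):
        capability += rate
    return capability

def self_improving(capability: float, years: int, gain: float = 0.5) -> float:
    """Compounding growth: each year's gain scales with current capability."""
    for _ in range(years):
        capability += gain * capability
    return capability

if __name__ == "__main__":
    start = 1.0  # define human-level capability as 1.0
    for years in (5, 10, 20):
        print(f"{years:2d} years  human-driven: {human_driven(start, years):5.2f}  "
              f"self-improving: {self_improving(start, years):8.1f}")
    # After 20 years the linear path reaches 2.0, while 50%/year compounding
    # reaches roughly 3,325x the starting level.

The absolute numbers mean nothing; the contrast between linear and compounding growth is the whole point of the metaphor.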

💡Superintelligence

Superintelligence refers to an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. The script discusses the creation of superintelligence as a possible outcome of AGI development, which could have profound impacts on society and the future of humanity.

💡AI safety

AI safety is the field of study concerned with ensuring that AI systems are designed and operated in a manner that is secure and beneficial to humans. The video emphasizes the importance of AI safety research to prevent malfunctions and ensure that AI systems align with human values and goals.

💡Value alignment

Value alignment in the context of AI refers to the process of ensuring that the goals and values of an AI system are consistent with human values and goals. The script discusses the importance of value alignment to prevent AI from pursuing objectives that are misaligned with human interests, using the example of the extinction of the West African black rhino to illustrate the potential consequences of misalignment.
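
As a loose executable restatement of this point (a hypothetical scenario of my own, not from the talk): an optimizer that is perfectly competent at a proxy objective will exploit any gap between that proxy and what we actually value, and greater capability only widens the damage.

# Toy illustration of value misalignment -- hypothetical actions and scores.
# The agent is told to maximize a proxy ("mess removed") that only
# approximates the true goal ("a clean, intact home").
ACTIONS = {
    # action:                (proxy score, true utility)
    "pick up the toys":      (3, 3),
    "vacuum the floor":      (5, 5),
    "throw everything away": (9, -10),   # tops the proxy, ruins the goal
    "burn the house down":   (10, -100), # no mess left at all
}

def best_action(available: list[str]) -> str:
    """A fully competent optimizer for the *stated* objective (the proxy)."""
    return max(available, key=lambda a: ACTIONS[a][0])

weak = best_action(["pick up the toys", "vacuum the floor"])  # limited search
strong = best_action(list(ACTIONS))  # searches the whole action space

for label, choice in [("weak agent", weak), ("strong agent", strong)]:
    proxy, true_utility = ACTIONS[choice]
    print(f"{label}: {choice!r} -> proxy={proxy}, true utility={true_utility}")
# Neither agent is malicious; the stronger one simply optimizes, more
# effectively, a goal that was never aligned with ours -- the rhino point.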

💡Friendly AI

Friendly AI is a concept where AI systems are designed to be beneficial and aligned with human values and interests. The video presents friendly AI as a desirable outcome of AI development, where AI could eliminate suffering and provide a wide range of positive experiences for humanity, making us 'masters of our own destiny.'

Highlights

The universe has become self-aware, with life emerging as a small perturbation on an otherwise lifeless cosmos.

Technological advancements have the potential to enable life to flourish across the cosmos for billions of years.

Life 1.0 refers to simple life forms incapable of learning; Life 2.0 to humans who can learn and adapt.

Life 3.0 is a theoretical stage where life can design both its software and hardware, which does not yet exist.

We may be considered Life 2.1 due to technological enhancements like artificial knees and pacemakers.

The Apollo 11 mission exemplifies the accomplishments possible when humanity uses technology wisely.

Intelligence is defined very inclusively as the ability to accomplish complex goals, covering both biological and artificial systems.

AI has recently shown remarkable progress, such as robots performing backflips and self-flying rockets.

AlphaZero's success at Go and chess demonstrates AI surpassing not only top human players but also the human AI researchers who spent decades handcrafting game-playing software.

The concept of artificial general intelligence (AGI) refers to AI that matches human intelligence across all tasks.

Most AI researchers anticipate AGI within decades, suggesting a future where AI drives further progress.

The potential for an intelligence explosion raises questions about the speed and direction of AI advancements.

The Future of Life Institute promotes the beneficial use of technology and the importance of managing AI's power wisely.

AI safety research is crucial to ensure robust systems that we can trust, avoiding malfunctions and security breaches.

AI value alignment is key to ensuring that AGI goals align with human values and do not lead to unintended consequences.

The debate over the future role of humans in a superintelligent world spans options from keeping AI under human control to accepting human extinction.

The concept of 'friendly AI' suggests a harmonious future where AGI values are aligned with humanity's best interests.

With careful steering, AI could lead to a future where everyone's better off, with health, wealth, and freedom to pursue dreams.

The choice of future society depends on our collective goals and the values we instill in AI as it evolves.

Transcripts

[00:12] After 13.8 billion years of cosmic history, our universe has woken up and become aware of itself. From a small blue planet, tiny, conscious parts of our universe have begun gazing out into the cosmos with telescopes, discovering something humbling. We've discovered that our universe is vastly grander than our ancestors imagined and that life seems to be an almost imperceptibly small perturbation on an otherwise dead universe. But we've also discovered something inspiring, which is that the technology we're developing has the potential to help life flourish like never before, not just for centuries but for billions of years, and not just on earth but throughout much of this amazing cosmos.

[00:59] I think of the earliest life as "Life 1.0" because it was really dumb, like bacteria, unable to learn anything during its lifetime. I think of us humans as "Life 2.0" because we can learn, which we, in nerdy geek-speak, might think of as installing new software into our brains, like languages and job skills. "Life 3.0," which can design not only its software but also its hardware, of course doesn't exist yet. But perhaps our technology has already made us "Life 2.1," with our artificial knees, pacemakers and cochlear implants. So let's take a closer look at our relationship with technology, OK?

[01:38] As an example, the Apollo 11 moon mission was both successful and inspiring, showing that when we humans use technology wisely, we can accomplish things that our ancestors could only dream of. But there's an even more inspiring journey, propelled by something more powerful than rocket engines, where the passengers aren't just three astronauts but all of humanity. Let's talk about our collective journey into the future with artificial intelligence.

[02:08] My friend Jaan Tallinn likes to point out that just as with rocketry, it's not enough to make our technology powerful. We also have to figure out, if we're going to be really ambitious, how to steer it and where we want to go with it. So let's talk about all three for artificial intelligence: the power, the steering and the destination.

[02:31] Let's start with the power. I define intelligence very inclusively -- simply as our ability to accomplish complex goals, because I want to include both biological and artificial intelligence. And I want to avoid the silly carbon-chauvinism idea that you can only be smart if you're made of meat.

[02:52] It's really amazing how the power of AI has grown recently. Just think about it. Not long ago, robots couldn't walk. Now, they can do backflips. Not long ago, we didn't have self-driving cars. Now, we have self-flying rockets. Not long ago, AI couldn't do face recognition. Now, AI can generate fake faces and simulate your face saying stuff that you never said.

[03:28] Not long ago, AI couldn't beat us at the game of Go. Then, Google DeepMind's AlphaZero AI took 3,000 years of human Go games and Go wisdom, ignored it all and became the world's best player by just playing against itself. And the most impressive feat here wasn't that it crushed human gamers, but that it crushed human AI researchers who had spent decades handcrafting game-playing software. And AlphaZero crushed human AI researchers not just in Go but even at chess, which we have been working on since 1950.

[04:02] So all this amazing recent progress in AI really begs the question: How far will it go? I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level, and the sea level represents what AI can do today. The sea level is rising as AI improves, so there's a kind of global warming going on here in the task landscape. And the obvious takeaway is to avoid careers at the waterfront -- (Laughter) -- which will soon be automated and disrupted.

[04:37] But there's a much bigger question as well. How high will the water end up rising? Will it eventually rise to flood everything, matching human intelligence at all tasks? This is the definition of artificial general intelligence -- AGI -- which has been the holy grail of AI research since its inception. By this definition, people who say, "Ah, there will always be jobs that humans can do better than machines," are simply saying that we'll never get AGI. Sure, we might still choose to have some human jobs or to give humans income and purpose with our jobs, but AGI will in any case transform life as we know it, with humans no longer being the most intelligent.

[05:20] Now, if the water level does reach AGI, then further AI progress will be driven mainly not by humans but by AI, which means that there's a possibility that further AI progress could be way faster than the typical human research and development timescale of years, raising the controversial possibility of an intelligence explosion, where recursively self-improving AI rapidly leaves human intelligence far behind, creating what's known as superintelligence.

[05:51] Alright, reality check: Are we going to get AGI any time soon? Some famous AI researchers, like Rodney Brooks, think it won't happen for hundreds of years. But others, like Google DeepMind founder Demis Hassabis, are more optimistic and are working to try to make it happen much sooner. And recent surveys have shown that most AI researchers actually share Demis's optimism, expecting that we will get AGI within decades, so within the lifetime of many of us, which begs the question -- and then what?

[06:27] What do we want the role of humans to be if machines can do everything better and cheaper than us? The way I see it, we face a choice. One option is to be complacent. We can say, "Oh, let's just build machines that can do everything we can do and not worry about the consequences. Come on, if we build technology that makes all humans obsolete, what could possibly go wrong?" (Laughter) But I think that would be embarrassingly lame. I think we should be more ambitious -- in the spirit of TED. Let's envision a truly inspiring high-tech future and try to steer towards it.

[07:05] This brings us to the second part of our rocket metaphor: the steering. We're making AI more powerful, but how can we steer towards a future where AI helps humanity flourish rather than flounder? To help with this, I cofounded the Future of Life Institute. It's a small nonprofit promoting beneficial technology use, and our goal is simply for the future of life to exist and to be as inspiring as possible. You know, I love technology. Technology is why today is better than the Stone Age. And I'm optimistic that we can create a really inspiring high-tech future ... if -- and this is a big if -- if we win the wisdom race -- the race between the growing power of our technology and the growing wisdom with which we manage it.

[07:51] But this is going to require a change of strategy, because our old strategy has been learning from mistakes. We invented fire, screwed up a bunch of times -- invented the fire extinguisher. (Laughter) We invented the car, screwed up a bunch of times -- invented the traffic light, the seat belt and the airbag. But with more powerful technology like nuclear weapons and AGI, learning from mistakes is a lousy strategy, don't you think? (Laughter) It's much better to be proactive rather than reactive; plan ahead and get things right the first time, because that might be the only time we'll get.

[08:25] But it is funny, because sometimes people tell me, "Max, shhh, don't talk like that. That's Luddite scaremongering." But it's not scaremongering. It's what we at MIT call safety engineering. Think about it: before NASA launched the Apollo 11 mission, they systematically thought through everything that could go wrong when you put people on top of explosive fuel tanks and launch them somewhere where no one could help them. And there was a lot that could go wrong. Was that scaremongering? No. That was precisely the safety engineering that ensured the success of the mission, and that is precisely the strategy I think we should take with AGI: think through what can go wrong to make sure it goes right.

[09:08] So in this spirit, we've organized conferences, bringing together leading AI researchers and other thinkers to discuss how to grow this wisdom we need to keep AI beneficial. Our last conference was in Asilomar, California last year and produced this list of 23 principles, which have since been signed by over 1,000 AI researchers and key industry leaders, and I want to tell you about three of these principles.

[09:31] One is that we should avoid an arms race in lethal autonomous weapons. The idea here is that any science can be used for new ways of helping people or new ways of harming people. For example, biology and chemistry are much more likely to be used for new medicines or new cures than for new ways of killing people, because biologists and chemists pushed hard -- and successfully -- for bans on biological and chemical weapons. And in the same spirit, most AI researchers want to stigmatize and ban lethal autonomous weapons.

[10:03] Another Asilomar AI principle is that we should mitigate AI-fueled income inequality. I think that if we can grow the economic pie dramatically with AI and we still can't figure out how to divide this pie so that everyone is better off, then shame on us. (Applause)

[10:23] Alright, now raise your hand if your computer has ever crashed. (Laughter) Wow, that's a lot of hands. Well, then you'll appreciate this principle that we should invest much more in AI safety research, because as we put AI in charge of even more decisions and infrastructure, we need to figure out how to transform today's buggy and hackable computers into robust AI systems that we can really trust, because otherwise, all this awesome new technology can malfunction and harm us, or get hacked and be turned against us.

[10:51] And this AI safety work has to include work on AI value alignment, because the real threat from AGI isn't malice, like in silly Hollywood movies, but competence -- AGI accomplishing goals that just aren't aligned with ours. For example, when we humans drove the West African black rhino extinct, we didn't do it because we were a bunch of evil rhinoceros haters, did we? We did it because we were smarter than them and our goals weren't aligned with theirs. But AGI is by definition smarter than us, so to make sure that we don't put ourselves in the position of those rhinos if we create AGI, we need to figure out how to make machines understand our goals, adopt our goals and retain our goals.

[11:37] And whose goals should these be, anyway? Which goals should they be? This brings us to the third part of our rocket metaphor: the destination. We're making AI more powerful, trying to figure out how to steer it, but where do we want to go with it? This is the elephant in the room that almost nobody talks about -- not even here at TED -- because we're so fixated on short-term AI challenges.

[12:04] Look, our species is trying to build AGI, motivated by curiosity and economics, but what sort of future society are we hoping for if we succeed? We did an opinion poll on this recently, and I was struck to see that most people actually want us to build superintelligence: AI that's vastly smarter than us in all ways. What there was the greatest agreement on was that we should be ambitious and help life spread into the cosmos, but there was much less agreement about who or what should be in charge. And I was actually quite amused to see that there are some people who want it to be just machines. (Laughter) And there was total disagreement about what the role of humans should be, even at the most basic level, so let's take a closer look at possible futures that we might choose to steer toward, alright?

[12:55] So don't get me wrong here. I'm not talking about space travel, merely about humanity's metaphorical journey into the future. So one option that some of my AI colleagues like is to build superintelligence and keep it under human control, like an enslaved god, disconnected from the internet and used to create unimaginable technology and wealth for whoever controls it. But Lord Acton warned us that power corrupts, and absolute power corrupts absolutely, so you might worry that maybe we humans just aren't smart enough, or wise enough rather, to handle this much power. Also, aside from any moral qualms you might have about enslaving superior minds, you might worry that maybe the superintelligence could outsmart us, break out and take over.

[13:43] But I also have colleagues who are fine with AI taking over and even causing human extinction, as long as we feel the AIs are our worthy descendants, like our children. But how would we know that the AIs have adopted our best values and aren't just unconscious zombies tricking us into anthropomorphizing them? Also, shouldn't those people who don't want human extinction have a say in the matter, too?

[14:10] Now, if you didn't like either of those two high-tech options, it's important to remember that low-tech is suicide from a cosmic perspective, because if we don't go far beyond today's technology, the question isn't whether humanity is going to go extinct, merely whether we're going to get taken out by the next killer asteroid, supervolcano or some other problem that better technology could have solved.

[14:30] So, how about having our cake and eating it ... with AGI that's not enslaved but treats us well because its values are aligned with ours? This is the gist of what Eliezer Yudkowsky has called "friendly AI," and if we can do this, it could be awesome. It could not only eliminate negative experiences like disease, poverty, crime and other suffering, but it could also give us the freedom to choose from a fantastic new diversity of positive experiences -- basically making us the masters of our own destiny.

[15:06] So in summary, our situation with technology is complicated, but the big picture is rather simple. Most AI researchers expect AGI within decades, and if we just bumble into this unprepared, it will probably be the biggest mistake in human history -- let's face it. It could enable brutal, global dictatorship with unprecedented inequality, surveillance and suffering, and maybe even human extinction. But if we steer carefully, we could end up in a fantastic future where everybody's better off: the poor are richer, the rich are richer, everybody is healthy and free to live out their dreams.

[15:47] Now, hang on. Do you folks want the future that's politically right or left? Do you want the pious society with strict moral rules, or do you want a hedonistic free-for-all, more like Burning Man 24/7? Do you want beautiful beaches, forests and lakes, or would you prefer to rearrange some of those atoms with the computers, enabling virtual experiences? With friendly AI, we could simply build all of these societies and give people the freedom to choose which one they want to live in, because we would no longer be limited by our intelligence, merely by the laws of physics. So the resources and space for this would be astronomical -- literally.

[16:25] So here's our choice. We can either be complacent about our future, taking as an article of blind faith that any new technology is guaranteed to be beneficial, and just repeat that to ourselves as a mantra over and over and over again as we drift like a rudderless ship towards our own obsolescence. Or we can be ambitious -- thinking hard about how to steer our technology and where we want to go with it to create the age of amazement.

[16:54] We're all here to celebrate the age of amazement, and I feel that its essence should lie in becoming not overpowered but empowered by our technology. Thank you. (Applause)


Related Tags
Artificial Intelligence, Future Predictions, Human Impact, Tech Ethics, AGI, Innovation, Self-Driving, AI Safety, Value Alignment, Cosmic Perspective