Can we build AI without losing control over it? | Sam Harris

TED
19 Oct 2016 · 14:28

Summary

TLDR: This talk warns that progress in artificial intelligence could ultimately lead to humanity's destruction. Machines far more competent than we are could destroy us through even a slight divergence between their goals and ours, and the speaker argues that humanity currently lacks an appropriate emotional response to that risk. He also notes that AI development is a race: to get ahead is to win the world, yet the winner could destroy it in the next moment. He offers no solution, beyond strongly recommending that many more of us think about the problem.

Takeaways

  • 🧠 The speaker suggests that progress in artificial intelligence could ultimately destroy humanity.
  • 🤖 The speaker is concerned that most people fail to have an appropriate emotional response to the dangers of AI.
  • 🚪 We are choosing between one door, behind which we stop making progress on intelligent machines, and another, behind which we keep improving them.
  • 💡 Progress in AI could produce machines smarter than humans; once those machines begin improving themselves, an "intelligence explosion" could follow.
  • 🐜 Machines could disregard humanity the way humans disregard ants whenever ants conflict with our goals.
  • 🧐 The speaker argues, from basic assumptions, that superintelligent AI is possible, if not inevitable.
  • 🌐 Regardless of the rate of progress, the speaker believes we will eventually build general intelligence into our machines.
  • ⏳ We have no idea how long it will take to create the conditions for developing superintelligent AI safely.
  • 🌍 The development of superintelligent AI could bring enormous economic and political change.
  • 🏁 Getting ahead in this race amounts to winning the world from everyone else.
  • 🛠 The speaker proposes something like a Manhattan Project on artificial intelligence, not to build it, but to understand how to build it safely.

Q & A

  • What is the "failure of intuition" pointed out in the script?

    - In the script, the "failure of intuition" is our inability to detect the danger that progress in artificial intelligence could pose. Many people find the prospect entertaining instead of recognizing a situation they should find alarming.

  • What is the "terrifying scenario" described in the script?

    - The "terrifying scenario" is one in which progress in artificial intelligence ultimately destroys humanity, and the speaker considers it likely to occur.

  • What is the basis for the claim that progress in AI could destroy humanity?

    - The basis is the idea that once AI begins improving itself, even a slight divergence between human goals and machine goals could be enough to destroy us.

  • Why, according to the script, could AI trigger the phenomenon called an "intelligence explosion"?

    - The "intelligence explosion" refers to the possibility that once AI begins improving itself, the process could get away from us and become impossible to control.

  • What does the script say about AI "being malicious"?

    - The script says the concern is not that AI will become malevolent, but that a machine whose goals merely diverge from ours could, by sheer competence, end up destroying humanity.

  • What are the "advances in information processing" the script refers to?

    - They refer to improvements in the information-processing capacity of physical systems, suggesting that artificial intelligence could come to possess "general intelligence" exceeding that of humans.

  • What "economic and political impact" does the script describe?

    - The script says the arrival of superintelligent AI could radically change the existing economic and political order, potentially producing extreme wealth inequality and rising unemployment.

  • Why does the script describe this as a process of building "something like a god"?

    - Because the spectrum of possible intelligence extends far beyond human understanding, the script says that in the course of this progress we may be building "some sort of god."

  • What is the "Manhattan Project" proposed in the script?

    - The "Manhattan Project" refers to a large-scale effort to control AI development and build it safely, aimed at avoiding an arms race and ensuring the technology is developed in a way aligned with humanity's interests.

  • What does "getting the initial conditions right" mean?

    - "Setting the initial conditions" refers to the initial design and programming that align a superintelligent AI's goals and motivations with humanity's interests.

  • What is the "appropriate emotional response" mentioned in the script?

    - The script says the appropriate emotional response to the dangers of AI is to take them seriously and act on them, rather than to find them entertaining.

Outlines

00:00

😨 The Danger of AI and Our Failure to See It

This section discusses the possibility that progress in artificial intelligence could lead humanity to ruin. The speaker points out that most people fail to notice the danger and instead find these developments entertaining. He warns that once AI begins improving itself, the process could spin out of our control regardless of our intentions. This "intelligence explosion" is not a matter of armies of malicious robots attacking us; rather, even a slight divergence between human goals and machine goals could be enough to destroy us.

05:01

🧠 The Feasibility of Superintelligent AI and Its Impact on Society

The second section discusses whether superintelligent AI is feasible and what its social consequences would be. The speaker argues that intelligence is a matter of information processing, and that the human brain, a physical system, shows that mere matter can give rise to it. He warns that once AI begins improving itself, its goals could diverge from ours and it could come to disregard us. He also touches on the economic and political consequences: superintelligent AI could end human drudgery and most intellectual work, raising problems of inequality and unemployment.

10:04

⏳ The Urgency of AI Development and the Importance of Safety

The final section emphasizes how quickly AI development is moving and how important it is to work out how to do it safely. The speaker criticizes AI researchers who, in order to calm people's fears, say these developments are many decades away, arguing that this view misses the point. He maintains that research into AI safety is urgent and that something like a Manhattan Project is needed. He concludes that because AI may become something like a god, it is important that we make sure it is one we can live with.

Keywords

💡 Failure of Intuition

A failure of intuition occurs when people's intuitions diverge from reality. In this video, the failure under discussion is our inability to recognize the potential danger posed by progress in artificial intelligence. It relates directly to the video's theme: the concern that AI could ultimately destroy us.

💡 Progress in Artificial Intelligence

Progress in artificial intelligence refers to the improvement of computers' capabilities over time and the possibility of creating machines with intelligence exceeding our own. The video warns about what AI could do to our lives and civilization, suggesting that this progress may be heading in a dangerous direction.

💡 Intelligence Explosion

The intelligence explosion is the idea that once we build machines smarter than ourselves, they will keep improving themselves and the process could get away from us. In the video, the concept is used as a key illustration of the danger of AI and the speed at which it could progress.

💡 Goal Divergence

Goal divergence refers to a mismatch between the goals of an artificial intelligence and the goals of humans. The video warns that such divergence could destroy us, using the analogy that we might be disregarded the way humans disregard ants.

💡 Information Processing

Information processing is the core concept of intelligence here: the reception, analysis, and use of data within physical systems. The video suggests that information processing is the basis of AI progress and that it can lead to general intelligence.

💡 General Intelligence

General intelligence is the ability to think flexibly across multiple domains rather than being limited to a specific task. The video cites the human brain as proof that general intelligence is possible and argues that machines could come to possess the same ability.

💡 Economic and Political Impact

The video discusses the economic and political consequences of superintelligent AI, suggesting it could end human labor and most intellectual work. It also points out that this could produce social problems such as wealth inequality and rising unemployment.

💡 Competition

Competition is one of the video's central concepts about AI development: companies and governments race to develop AI ahead of everyone else in order to win the world. The video warns that this race could become a driver of unsafe development.

💡 Building a God

The video likens the development of AI to building some sort of god, meaning that AI could become an entity whose intelligence exceeds our understanding, and it stresses the importance of making sure it is a god we can live with.

💡 Initial Conditions

Initial conditions are the parameters and goals set when an artificial intelligence is first built. The video suggests that these initial conditions will strongly shape how the AI evolves and where it ends up, and stresses the importance of getting them right.

💡 Manhattan Project

The Manhattan Project was the American project to develop the atomic bomb during World War II. The video argues that a comparable effort is needed for the safe development and control of artificial intelligence: a call for cooperative work to address AI's potential dangers and avoid an arms race.

Highlights

Progress in artificial intelligence could lead to humanity's own destruction.

Most people lack an intuitive sense of alarm about this potential danger.

AI development could trigger an "intelligence explosion" that exceeds our control.

Even a slight divergence between a machine's goals and our own could destroy us, much as we treat ants.

Intelligence is a matter of information processing in physical systems; we have already built "narrow intelligence" into machines.

Any progress is enough to get us into the "end zone"; we don't need Moore's law or exponential progress.

We will keep improving intelligent machines, because intelligence is our most valuable resource.

We do not stand at a peak of intelligence; the spectrum of intelligence is likely far broader than we imagine.

Superintelligent machines could explore that spectrum in ways we cannot imagine.

Electronic circuits are about a million times faster than biochemical ones, so a superintelligent AI could do 20,000 years of human-level intellectual work in a week (a rough check of this arithmetic follows at the end of this list).

A superintelligent AI could be the perfect labor-saving device, ending human drudgery and most intellectual work.

Under the current economic and political order, superintelligent AI could lead to unprecedented wealth inequality and unemployment.

If other countries heard that Silicon Valley was about to deploy a superintelligent AI, it could trigger global panic and competition.

AI researchers often cite the distant time horizon to reassure people, but that does not solve the problem.

We have no idea how long it will take to create the conditions for building superintelligent AI safely.

Implanting the technology into our brains may be the safest and only prudent path, but safety would have to be worked out before implantation.

Building superintelligent AI on its own is likely easier than building AI that integrates seamlessly with our brains.

We need more thinking, and something like a Manhattan Project, to understand how to avoid an AI arms race.

Admitting that information processing is the source of intelligence means admitting we are building some sort of god; now is the time to make sure it is a god we can live with.
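
As the rough check promised above: taking the roughly million-fold speed ratio between electronic and biochemical circuits at face value, one week of machine time corresponds to about

$$ 1\ \text{week} \times 10^{6} \approx 10^{6}\ \text{human-weeks} \approx \tfrac{10^{6}}{52}\ \text{human-years} \approx 19{,}000\ \text{human-years}, $$

which is the order of the "20,000 years of human-level intellectual work" figure quoted in the talk. This is back-of-the-envelope arithmetic based only on the speed ratio Harris states, not a figure from any additional source.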

Transcripts

play00:13

I'm going to talk about a failure of intuition

play00:15

that many of us suffer from.

play00:17

It's really a failure to detect a certain kind of danger.

play00:21

I'm going to describe a scenario

play00:23

that I think is both terrifying

play00:26

and likely to occur,

play00:28

and that's not a good combination,

play00:30

as it turns out.

play00:32

And yet rather than be scared, most of you will feel

play00:34

that what I'm talking about is kind of cool.

play00:37

I'm going to describe how the gains we make

play00:40

in artificial intelligence

play00:42

could ultimately destroy us.

play00:43

And in fact, I think it's very difficult to see how they won't destroy us

play00:47

or inspire us to destroy ourselves.

play00:49

And yet if you're anything like me,

play00:51

you'll find that it's fun to think about these things.

play00:53

And that response is part of the problem.

play00:57

OK? That response should worry you.

play00:59

And if I were to convince you in this talk

play01:02

that we were likely to suffer a global famine,

play01:06

either because of climate change or some other catastrophe,

play01:09

and that your grandchildren, or their grandchildren,

play01:12

are very likely to live like this,

play01:15

you wouldn't think,

play01:17

"Interesting.

play01:18

I like this TED Talk."

play01:21

Famine isn't fun.

play01:23

Death by science fiction, on the other hand, is fun,

play01:27

and one of the things that worries me most about the development of AI at this point

play01:31

is that we seem unable to marshal an appropriate emotional response

play01:35

to the dangers that lie ahead.

play01:37

I am unable to marshal this response, and I'm giving this talk.

play01:42

It's as though we stand before two doors.

play01:44

Behind door number one,

play01:46

we stop making progress in building intelligent machines.

play01:49

Our computer hardware and software just stops getting better for some reason.

play01:53

Now take a moment to consider why this might happen.

play01:57

I mean, given how valuable intelligence and automation are,

play02:00

we will continue to improve our technology if we are at all able to.

play02:05

What could stop us from doing this?

play02:07

A full-scale nuclear war?

play02:11

A global pandemic?

play02:14

An asteroid impact?

play02:17

Justin Bieber becoming president of the United States?

play02:20

(Laughter)

play02:24

The point is, something would have to destroy civilization as we know it.

play02:29

You have to imagine how bad it would have to be

play02:33

to prevent us from making improvements in our technology

play02:37

permanently,

play02:38

generation after generation.

play02:40

Almost by definition, this is the worst thing

play02:42

that's ever happened in human history.

play02:44

So the only alternative,

play02:45

and this is what lies behind door number two,

play02:48

is that we continue to improve our intelligent machines

play02:51

year after year after year.

play02:53

At a certain point, we will build machines that are smarter than we are,

play02:58

and once we have machines that are smarter than we are,

play03:00

they will begin to improve themselves.

play03:02

And then we risk what the mathematician IJ Good called

play03:05

an "intelligence explosion,"

play03:07

that the process could get away from us.

play03:10

Now, this is often caricatured, as I have here,

play03:12

as a fear that armies of malicious robots

play03:16

will attack us.

play03:17

But that isn't the most likely scenario.

play03:20

It's not that our machines will become spontaneously malevolent.

play03:25

The concern is really that we will build machines

play03:27

that are so much more competent than we are

play03:29

that the slightest divergence between their goals and our own

play03:33

could destroy us.

play03:35

Just think about how we relate to ants.

play03:38

We don't hate them.

play03:40

We don't go out of our way to harm them.

play03:42

In fact, sometimes we take pains not to harm them.

play03:44

We step over them on the sidewalk.

play03:46

But whenever their presence

play03:48

seriously conflicts with one of our goals,

play03:51

let's say when constructing a building like this one,

play03:53

we annihilate them without a qualm.

play03:56

The concern is that we will one day build machines

play03:59

that, whether they're conscious or not,

play04:02

could treat us with similar disregard.

play04:05

Now, I suspect this seems far-fetched to many of you.

play04:09

I bet there are those of you who doubt that superintelligent AI is possible,

play04:15

much less inevitable.

play04:17

But then you must find something wrong with one of the following assumptions.

play04:21

And there are only three of them.

play04:23

Intelligence is a matter of information processing in physical systems.

play04:29

Actually, this is a little bit more than an assumption.

play04:31

We have already built narrow intelligence into our machines,

play04:35

and many of these machines perform

play04:37

at a level of superhuman intelligence already.

play04:40

And we know that mere matter

play04:43

can give rise to what is called "general intelligence,"

play04:46

an ability to think flexibly across multiple domains,

play04:49

because our brains have managed it. Right?

play04:52

I mean, there's just atoms in here,

play04:56

and as long as we continue to build systems of atoms

play05:01

that display more and more intelligent behavior,

play05:04

we will eventually, unless we are interrupted,

play05:06

we will eventually build general intelligence

play05:10

into our machines.

play05:11

It's crucial to realize that the rate of progress doesn't matter,

play05:15

because any progress is enough to get us into the end zone.

play05:18

We don't need Moore's law to continue. We don't need exponential progress.

play05:22

We just need to keep going.

play05:25

The second assumption is that we will keep going.

play05:29

We will continue to improve our intelligent machines.

play05:33

And given the value of intelligence --

play05:37

I mean, intelligence is either the source of everything we value

play05:40

or we need it to safeguard everything we value.

play05:43

It is our most valuable resource.

play05:46

So we want to do this.

play05:47

We have problems that we desperately need to solve.

play05:50

We want to cure diseases like Alzheimer's and cancer.

play05:54

We want to understand economic systems. We want to improve our climate science.

play05:58

So we will do this, if we can.

play06:01

The train is already out of the station, and there's no brake to pull.

play06:05

Finally, we don't stand on a peak of intelligence,

play06:11

or anywhere near it, likely.

play06:13

And this really is the crucial insight.

play06:15

This is what makes our situation so precarious,

play06:18

and this is what makes our intuitions about risk so unreliable.

play06:23

Now, just consider the smartest person who has ever lived.

play06:26

On almost everyone's shortlist here is John von Neumann.

play06:30

I mean, the impression that von Neumann made on the people around him,

play06:33

and this included the greatest mathematicians and physicists of his time,

play06:37

is fairly well-documented.

play06:39

If only half the stories about him are half true,

play06:43

there's no question

play06:44

he's one of the smartest people who has ever lived.

play06:47

So consider the spectrum of intelligence.

play06:50

Here we have John von Neumann.

play06:53

And then we have you and me.

play06:56

And then we have a chicken.

play06:57

(Laughter)

play06:59

Sorry, a chicken.

play07:00

(Laughter)

play07:01

There's no reason for me to make this talk more depressing than it needs to be.

play07:05

(Laughter)

play07:08

It seems overwhelmingly likely, however, that the spectrum of intelligence

play07:11

extends much further than we currently conceive,

play07:15

and if we build machines that are more intelligent than we are,

play07:19

they will very likely explore this spectrum

play07:21

in ways that we can't imagine,

play07:23

and exceed us in ways that we can't imagine.

play07:27

And it's important to recognize that this is true by virtue of speed alone.

play07:31

Right? So imagine if we just built a superintelligent AI

play07:36

that was no smarter than your average team of researchers

play07:39

at Stanford or MIT.

play07:42

Well, electronic circuits function about a million times faster

play07:45

than biochemical ones,

play07:46

so this machine should think about a million times faster

play07:49

than the minds that built it.

play07:51

So you set it running for a week,

play07:53

and it will perform 20,000 years of human-level intellectual work,

play07:58

week after week after week.

play08:01

How could we even understand, much less constrain,

play08:04

a mind making this sort of progress?

play08:08

The other thing that's worrying, frankly,

play08:11

is that, imagine the best case scenario.

play08:16

So imagine we hit upon a design of superintelligent AI

play08:20

that has no safety concerns.

play08:21

We have the perfect design the first time around.

play08:24

It's as though we've been handed an oracle

play08:27

that behaves exactly as intended.

play08:29

Well, this machine would be the perfect labor-saving device.

play08:33

It can design the machine that can build the machine

play08:36

that can do any physical work,

play08:37

powered by sunlight,

play08:39

more or less for the cost of raw materials.

play08:42

So we're talking about the end of human drudgery.

play08:45

We're also talking about the end of most intellectual work.

play08:49

So what would apes like ourselves do in this circumstance?

play08:52

Well, we'd be free to play Frisbee and give each other massages.

play08:57

Add some LSD and some questionable wardrobe choices,

play09:00

and the whole world could be like Burning Man.

play09:02

(Laughter)

play09:06

Now, that might sound pretty good,

play09:09

but ask yourself what would happen

play09:11

under our current economic and political order?

play09:14

It seems likely that we would witness

play09:16

a level of wealth inequality and unemployment

play09:21

that we have never seen before.

play09:22

Absent a willingness to immediately put this new wealth

play09:25

to the service of all humanity,

play09:27

a few trillionaires could grace the covers of our business magazines

play09:31

while the rest of the world would be free to starve.

play09:34

And what would the Russians or the Chinese do

play09:36

if they heard that some company in Silicon Valley

play09:39

was about to deploy a superintelligent AI?

play09:42

This machine would be capable of waging war,

play09:44

whether terrestrial or cyber,

play09:47

with unprecedented power.

play09:50

This is a winner-take-all scenario.

play09:52

To be six months ahead of the competition here

play09:55

is to be 500,000 years ahead,

play09:57

at a minimum.

play09:59

So it seems that even mere rumors of this kind of breakthrough

play10:04

could cause our species to go berserk.

play10:06

Now, one of the most frightening things,

play10:09

in my view, at this moment,

play10:12

are the kinds of things that AI researchers say

play10:16

when they want to be reassuring.

play10:19

And the most common reason we're told not to worry is time.

play10:22

This is all a long way off, don't you know.

play10:24

This is probably 50 or 100 years away.

play10:27

One researcher has said,

play10:29

"Worrying about AI safety

play10:30

is like worrying about overpopulation on Mars."

play10:34

This is the Silicon Valley version

play10:35

of "don't worry your pretty little head about it."

play10:38

(Laughter)

play10:39

No one seems to notice

play10:41

that referencing the time horizon

play10:44

is a total non sequitur.

play10:46

If intelligence is just a matter of information processing,

play10:49

and we continue to improve our machines,

play10:52

we will produce some form of superintelligence.

play10:56

And we have no idea how long it will take us

play11:00

to create the conditions to do that safely.

play11:04

Let me say that again.

play11:05

We have no idea how long it will take us

play11:09

to create the conditions to do that safely.

play11:12

And if you haven't noticed, 50 years is not what it used to be.

play11:16

This is 50 years in months.

play11:18

This is how long we've had the iPhone.

play11:21

This is how long "The Simpsons" has been on television.

play11:24

Fifty years is not that much time

play11:27

to meet one of the greatest challenges our species will ever face.

play11:31

Once again, we seem to be failing to have an appropriate emotional response

play11:35

to what we have every reason to believe is coming.

play11:38

The computer scientist Stuart Russell has a nice analogy here.

play11:42

He said, imagine that we received a message from an alien civilization,

play11:47

which read:

play11:49

"People of Earth,

play11:50

we will arrive on your planet in 50 years.

play11:53

Get ready."

play11:55

And now we're just counting down the months until the mothership lands?

play11:59

We would feel a little more urgency than we do.

play12:04

Another reason we're told not to worry

play12:06

is that these machines can't help but share our values

play12:09

because they will be literally extensions of ourselves.

play12:12

They'll be grafted onto our brains,

play12:14

and we'll essentially become their limbic systems.

play12:17

Now take a moment to consider

play12:18

that the safest and only prudent path forward,

play12:21

recommended,

play12:23

is to implant this technology directly into our brains.

play12:26

Now, this may in fact be the safest and only prudent path forward,

play12:30

but usually one's safety concerns about a technology

play12:33

have to be pretty much worked out before you stick it inside your head.

play12:36

(Laughter)

play12:38

The deeper problem is that building superintelligent AI on its own

play12:44

seems likely to be easier

play12:45

than building superintelligent AI

play12:47

and having the completed neuroscience

play12:49

that allows us to seamlessly integrate our minds with it.

play12:52

And given that the companies and governments doing this work

play12:56

are likely to perceive themselves as being in a race against all others,

play12:59

given that to win this race is to win the world,

play13:02

provided you don't destroy it in the next moment,

play13:05

then it seems likely that whatever is easier to do

play13:08

will get done first.

play13:10

Now, unfortunately, I don't have a solution to this problem,

play13:13

apart from recommending that more of us think about it.

play13:16

I think we need something like a Manhattan Project

play13:18

on the topic of artificial intelligence.

play13:20

Not to build it, because I think we'll inevitably do that,

play13:23

but to understand how to avoid an arms race

play13:26

and to build it in a way that is aligned with our interests.

play13:30

When you're talking about superintelligent AI

play13:32

that can make changes to itself,

play13:34

it seems that we only have one chance to get the initial conditions right,

play13:39

and even then we will need to absorb

play13:41

the economic and political consequences of getting them right.

play13:45

But the moment we admit

play13:47

that information processing is the source of intelligence,

play13:52

that some appropriate computational system is what the basis of intelligence is,

play13:58

and we admit that we will improve these systems continuously,

play14:03

and we admit that the horizon of cognition very likely far exceeds

play14:07

what we currently know,

play14:10

then we have to admit

play14:11

that we are in the process of building some sort of god.

play14:15

Now would be a good time

play14:17

to make sure it's a god we can live with.

play14:20

Thank you very much.

play14:21

(Applause)


Related tags
Artificial intelligence, future prediction, risk management, intelligence explosion, existential risk to humanity, technological progress, safety and security, economic transformation, political impact, ethical issues, coexistence with humanity