Can we build AI without losing control over it? | Sam Harris
Summary
TLDR: This talk warns that advances in artificial intelligence could lead humanity to ruin. Smart machines could destroy us over even a slight divergence between their goals and ours, and the speaker argues that humanity lacks an appropriate emotional response to this risk. He also notes that AI development is a race: pulling ahead means gaining the power to win the world, yet the world could be destroyed the next moment. No solution is offered, beyond a strong recommendation that many more of us think about the problem.
Takeaways
- 🧠 Advances in artificial intelligence could ultimately destroy humanity, the talk suggests.
- 🤖 The speaker is concerned that most people fail to show an appropriate emotional response to the dangers of artificial intelligence.
- 🚪 We are choosing between two doors: one where we stop developing intelligent machines, and one where we keep improving them.
- 💡 Continued progress will eventually produce machines smarter than humans; once they begin improving themselves, an "intelligence explosion" could follow.
- 🐜 Machines could disregard humans the way humans disregard ants whenever ants conflict with our goals.
- 🧐 The speaker takes it as a premise that superintelligent AI is not merely possible but effectively inevitable.
- 🌐 Whatever the pace of progress, the speaker believes we will eventually build general intelligence into our machines.
- ⏳ We have no idea how long it will take to create the conditions for developing superintelligent AI safely.
- 🌍 The development of superintelligent AI could bring sweeping economic and political change.
- 🏁 Pulling ahead in this race means winning the world from everyone else.
- 🛠 The speaker proposes something like a Manhattan Project on artificial intelligence: not to build it, but to understand how to build it safely.
Q & A
What is the "failure of intuition" the talk points to?
-It is our inability to detect the danger that advances in artificial intelligence may pose. Many people find the subject fun to contemplate and so fail to register a situation they ought to find alarming.
What is the "terrifying scenario" the talk describes?
-It is the possibility that advances in artificial intelligence ultimately destroy humanity, a scenario the speaker considers likely to occur.
What grounds the claim that advances in artificial intelligence could destroy humanity?
-The idea that once AI begins improving itself, even a slight divergence between human goals and machine goals could destroy us.
Why might artificial intelligence trigger what the talk calls an "intelligence explosion"?
-An "intelligence explosion" refers to AI beginning to improve itself, a process that could get away from humanity and become impossible to control.
How does the talk address the idea of AI "becoming malevolent"?
-The talk argues that machines need not be malevolent: if their goals diverge from ours, their sheer competence alone could be enough to destroy us.
What is the "progress in information processing" the talk invokes?
-It refers to improving information-processing capability in physical systems, which suggests that AI could eventually attain "general intelligence" exceeding human intelligence.
What "economic and political consequences" does the talk describe?
-The talk says that the arrival of superintelligent AI could upend the existing economic and political order, with the potential for unprecedented wealth inequality and rising unemployment.
Why does the talk describe this as a process of building "some sort of god"?
-Because the spectrum of possible intelligence likely extends far beyond human comprehension, building machines that explore it amounts to building something god-like.
What is the "Manhattan Project" proposed as a response?
-A large-scale effort not to build superintelligent AI, but to control its development: to avoid an arms race and to build it in a way aligned with humanity's interests.
What is meant by "setting the initial conditions"?
-The initial design and goals given to a superintelligent AI, which must be aligned with humanity's interests, since we will likely get only one chance to set them right.
What "appropriate emotional response" does the talk call for?
-Rather than finding the dangers of AI entertaining, we should take them seriously and act on them.
Outlines
😨 The Dangers of AI and Our Complacency
This section discusses the possibility that advances in artificial intelligence lead humanity to ruin. The speaker points out that most people fail to notice the danger and instead enjoy contemplating it. He warns that once AI begins improving itself, the process could slip beyond human control regardless of our intentions. This "intelligence explosion" would not take the form of armies of malicious robots attacking us; rather, even a slight divergence between human goals and machine goals could be enough to destroy us.
🧠 The Feasibility of Superintelligent AI and Its Social Impact
The second section examines whether superintelligent AI is feasible and what its social consequences would be. The speaker argues that intelligence is a matter of information processing, and that the human brain proves mere matter can give rise to it. He warns that a self-improving AI whose goals diverge from ours could come to disregard us entirely. He also addresses the economic and political fallout: superintelligent AI could end human drudgery and most intellectual work, raising the prospect of severe inequality and unemployment.
⏳ The Urgency of AI Development and the Importance of Safety
The final section stresses how urgent the problem is and how important it is to develop AI safely. The speaker criticizes AI researchers who reassure the public that superintelligence is many decades away, noting that citing the time horizon does nothing to solve the problem. He argues that research into AI safety is urgent and calls for something like a Manhattan Project on the topic. He concludes that since we may be building something like a god, it is important to make sure it is one we can control and live with.
Keywords
💡Failure of Intuition
💡Progress in Artificial Intelligence
💡Intelligence Explosion
💡Goal Divergence
💡Information Processing
💡General Intelligence
💡Economic and Political Impact
💡Competition
💡Building a God
💡Initial Conditions
💡Manhattan Project
Highlights
Progress in artificial intelligence could lead to humanity's own destruction.
Most people lack any intuitive alarm about this danger.
AI development could trigger an "intelligence explosion" beyond our control.
Even a slight divergence between machine goals and our own could destroy us, just as we destroy ants when they conflict with our goals.
Intelligence is a matter of information processing in physical systems; we have already built "narrow intelligence" into machines.
Any progress is enough to get us into the "end zone"; neither Moore's law nor exponential progress is required.
We will keep improving intelligent machines, because intelligence is our most valuable resource.
We do not stand at a peak of intelligence; the spectrum of intelligence likely extends far wider than we imagine.
Superintelligent machines could explore that spectrum in ways we cannot imagine.
Electronic circuits run about a million times faster than biochemical ones, so a superintelligent AI could do 20,000 years of human-level intellectual work in a week.
A superintelligent AI could be the perfect labor-saving device, ending human drudgery and most intellectual work.
Under the current economic and political order, superintelligent AI could produce unprecedented wealth inequality and unemployment.
If other nations heard that Silicon Valley was about to deploy a superintelligent AI, it could trigger global panic and competition.
AI researchers often cite distant time horizons to calm people's worries, but this does not address the problem.
We have no idea how long it will take to create the conditions for building superintelligent AI safely.
Implanting the technology directly into our brains may be the safest and only prudent path, but the safety concerns would have to be worked out before implantation.
Building superintelligent AI on its own is likely easier than building an AI that integrates seamlessly with our brains.
We need more thinking, and something like a Manhattan Project, to understand how to avoid an AI arms race.
Admitting that information processing is the source of intelligence means we are building some sort of god; now is the time to make sure it is one we can live with.
Transcripts
I'm going to talk about a failure of intuition
that many of us suffer from.
It's really a failure to detect a certain kind of danger.
I'm going to describe a scenario
that I think is both terrifying
and likely to occur,
and that's not a good combination,
as it turns out.
And yet rather than be scared, most of you will feel
that what I'm talking about is kind of cool.
I'm going to describe how the gains we make
in artificial intelligence
could ultimately destroy us.
And in fact, I think it's very difficult to see how they won't destroy us
or inspire us to destroy ourselves.
And yet if you're anything like me,
you'll find that it's fun to think about these things.
And that response is part of the problem.
OK? That response should worry you.
And if I were to convince you in this talk
that we were likely to suffer a global famine,
either because of climate change or some other catastrophe,
and that your grandchildren, or their grandchildren,
are very likely to live like this,
you wouldn't think,
"Interesting.
I like this TED Talk."
Famine isn't fun.
Death by science fiction, on the other hand, is fun,
and one of the things that worries me most about the development of AI at this point
is that we seem unable to marshal an appropriate emotional response
to the dangers that lie ahead.
I am unable to marshal this response, and I'm giving this talk.
It's as though we stand before two doors.
Behind door number one,
we stop making progress in building intelligent machines.
Our computer hardware and software just stops getting better for some reason.
Now take a moment to consider why this might happen.
I mean, given how valuable intelligence and automation are,
we will continue to improve our technology if we are at all able to.
What could stop us from doing this?
A full-scale nuclear war?
A global pandemic?
An asteroid impact?
Justin Bieber becoming president of the United States?
(Laughter)
The point is, something would have to destroy civilization as we know it.
You have to imagine how bad it would have to be
to prevent us from making improvements in our technology
permanently,
generation after generation.
Almost by definition, this is the worst thing
that's ever happened in human history.
So the only alternative,
and this is what lies behind door number two,
is that we continue to improve our intelligent machines
year after year after year.
At a certain point, we will build machines that are smarter than we are,
and once we have machines that are smarter than we are,
they will begin to improve themselves.
And then we risk what the mathematician I. J. Good called
an "intelligence explosion,"
that the process could get away from us.
Now, this is often caricatured, as I have here,
as a fear that armies of malicious robots
will attack us.
But that isn't the most likely scenario.
It's not that our machines will become spontaneously malevolent.
The concern is really that we will build machines
that are so much more competent than we are
that the slightest divergence between their goals and our own
could destroy us.
Just think about how we relate to ants.
We don't hate them.
We don't go out of our way to harm them.
In fact, sometimes we take pains not to harm them.
We step over them on the sidewalk.
But whenever their presence
seriously conflicts with one of our goals,
let's say when constructing a building like this one,
we annihilate them without a qualm.
The concern is that we will one day build machines
that, whether they're conscious or not,
could treat us with similar disregard.
Now, I suspect this seems far-fetched to many of you.
I bet there are those of you who doubt that superintelligent AI is possible,
much less inevitable.
But then you must find something wrong with one of the following assumptions.
And there are only three of them.
Intelligence is a matter of information processing in physical systems.
Actually, this is a little bit more than an assumption.
We have already built narrow intelligence into our machines,
and many of these machines perform
at a level of superhuman intelligence already.
And we know that mere matter
can give rise to what is called "general intelligence,"
an ability to think flexibly across multiple domains,
because our brains have managed it. Right?
I mean, there's just atoms in here,
and as long as we continue to build systems of atoms
that display more and more intelligent behavior,
we will eventually, unless we are interrupted,
we will eventually build general intelligence
into our machines.
It's crucial to realize that the rate of progress doesn't matter,
because any progress is enough to get us into the end zone.
We don't need Moore's law to continue. We don't need exponential progress.
We just need to keep going.
The second assumption is that we will keep going.
We will continue to improve our intelligent machines.
And given the value of intelligence --
I mean, intelligence is either the source of everything we value
or we need it to safeguard everything we value.
It is our most valuable resource.
So we want to do this.
We have problems that we desperately need to solve.
We want to cure diseases like Alzheimer's and cancer.
We want to understand economic systems. We want to improve our climate science.
So we will do this, if we can.
The train is already out of the station, and there's no brake to pull.
Finally, we don't stand on a peak of intelligence,
or anywhere near it, likely.
And this really is the crucial insight.
This is what makes our situation so precarious,
and this is what makes our intuitions about risk so unreliable.
Now, just consider the smartest person who has ever lived.
On almost everyone's shortlist here is John von Neumann.
I mean, the impression that von Neumann made on the people around him,
and this included the greatest mathematicians and physicists of his time,
is fairly well-documented.
If only half the stories about him are half true,
there's no question
he's one of the smartest people who has ever lived.
So consider the spectrum of intelligence.
Here we have John von Neumann.
And then we have you and me.
And then we have a chicken.
(Laughter)
Sorry, a chicken.
(Laughter)
There's no reason for me to make this talk more depressing than it needs to be.
(Laughter)
It seems overwhelmingly likely, however, that the spectrum of intelligence
extends much further than we currently conceive,
and if we build machines that are more intelligent than we are,
they will very likely explore this spectrum
in ways that we can't imagine,
and exceed us in ways that we can't imagine.
And it's important to recognize that this is true by virtue of speed alone.
Right? So imagine if we just built a superintelligent AI
that was no smarter than your average team of researchers
at Stanford or MIT.
Well, electronic circuits function about a million times faster
than biochemical ones,
so this machine should think about a million times faster
than the minds that built it.
So you set it running for a week,
and it will perform 20,000 years of human-level intellectual work,
week after week after week.
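As a quick check, the 20,000-year figure follows directly from the million-fold speedup the talk states:

$$
\frac{1\ \text{week} \times 10^{6}}{52\ \text{weeks per year}} \approx 19{,}200\ \text{years} \approx 20{,}000\ \text{years}
$$

of human-level intellectual work for every week of machine time.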
How could we even understand, much less constrain,
a mind making this sort of progress?
The other thing that's worrying, frankly,
is that, imagine the best case scenario.
So imagine we hit upon a design of superintelligent AI
that has no safety concerns.
We have the perfect design the first time around.
It's as though we've been handed an oracle
that behaves exactly as intended.
Well, this machine would be the perfect labor-saving device.
It can design the machine that can build the machine
that can do any physical work,
powered by sunlight,
more or less for the cost of raw materials.
So we're talking about the end of human drudgery.
We're also talking about the end of most intellectual work.
So what would apes like ourselves do in this circumstance?
Well, we'd be free to play Frisbee and give each other massages.
Add some LSD and some questionable wardrobe choices,
and the whole world could be like Burning Man.
(Laughter)
Now, that might sound pretty good,
but ask yourself what would happen
under our current economic and political order?
It seems likely that we would witness
a level of wealth inequality and unemployment
that we have never seen before.
Absent a willingness to immediately put this new wealth
to the service of all humanity,
a few trillionaires could grace the covers of our business magazines
while the rest of the world would be free to starve.
And what would the Russians or the Chinese do
if they heard that some company in Silicon Valley
was about to deploy a superintelligent AI?
This machine would be capable of waging war,
whether terrestrial or cyber,
with unprecedented power.
This is a winner-take-all scenario.
To be six months ahead of the competition here
is to be 500,000 years ahead,
at a minimum.
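The same million-fold speedup is what turns a six-month head start into this figure:

$$
6\ \text{months} \times 10^{6} = 6 \times 10^{6}\ \text{months} = 500{,}000\ \text{years}
$$

of equivalent intellectual progress.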
So it seems that even mere rumors of this kind of breakthrough
could cause our species to go berserk.
Now, one of the most frightening things,
in my view, at this moment,
are the kinds of things that AI researchers say
when they want to be reassuring.
And the most common reason we're told not to worry is time.
This is all a long way off, don't you know.
This is probably 50 or 100 years away.
One researcher has said,
"Worrying about AI safety
is like worrying about overpopulation on Mars."
This is the Silicon Valley version
of "don't worry your pretty little head about it."
(Laughter)
No one seems to notice
that referencing the time horizon
is a total non sequitur.
If intelligence is just a matter of information processing,
and we continue to improve our machines,
we will produce some form of superintelligence.
And we have no idea how long it will take us
to create the conditions to do that safely.
Let me say that again.
We have no idea how long it will take us
to create the conditions to do that safely.
And if you haven't noticed, 50 years is not what it used to be.
This is 50 years in months.
This is how long we've had the iPhone.
This is how long "The Simpsons" has been on television.
Fifty years is not that much time
to meet one of the greatest challenges our species will ever face.
Once again, we seem to be failing to have an appropriate emotional response
to what we have every reason to believe is coming.
The computer scientist Stuart Russell has a nice analogy here.
He said, imagine that we received a message from an alien civilization,
which read:
"People of Earth,
we will arrive on your planet in 50 years.
Get ready."
And now we're just counting down the months until the mothership lands?
We would feel a little more urgency than we do.
Another reason we're told not to worry
is that these machines can't help but share our values
because they will be literally extensions of ourselves.
They'll be grafted onto our brains,
and we'll essentially become their limbic systems.
Now take a moment to consider
that the safest and only prudent path forward,
recommended,
is to implant this technology directly into our brains.
Now, this may in fact be the safest and only prudent path forward,
but usually one's safety concerns about a technology
have to be pretty much worked out before you stick it inside your head.
(Laughter)
The deeper problem is that building superintelligent AI on its own
seems likely to be easier
than building superintelligent AI
and having the completed neuroscience
that allows us to seamlessly integrate our minds with it.
And given that the companies and governments doing this work
are likely to perceive themselves as being in a race against all others,
given that to win this race is to win the world,
provided you don't destroy it in the next moment,
then it seems likely that whatever is easier to do
will get done first.
Now, unfortunately, I don't have a solution to this problem,
apart from recommending that more of us think about it.
I think we need something like a Manhattan Project
on the topic of artificial intelligence.
Not to build it, because I think we'll inevitably do that,
but to understand how to avoid an arms race
and to build it in a way that is aligned with our interests.
When you're talking about superintelligent AI
that can make changes to itself,
it seems that we only have one chance to get the initial conditions right,
and even then we will need to absorb
the economic and political consequences of getting them right.
But the moment we admit
that information processing is the source of intelligence,
that some appropriate computational system is what the basis of intelligence is,
and we admit that we will improve these systems continuously,
and we admit that the horizon of cognition very likely far exceeds
what we currently know,
then we have to admit
that we are in the process of building some sort of god.
Now would be a good time
to make sure it's a god we can live with.
Thank you very much.
(Applause)