The Problem with Human Specialness in the Age of AI | Scott Aaronson | TEDxPaloAlto

TEDx Talks
8 Mar 2024 · 19:34

Summary

Scott Aaronson, a quantum computing researcher now moonlighting at OpenAI, surveys how the current AI revolution arose from decades-old ideas plus massive scale, where continued progress might lead, and what remains of human specialness. Along the way he describes his work on watermarking the outputs of GPT and other language models, the "AI abundance paradox" that devalues endlessly reproducible creative work, and a speculative AI-safety proposal: teach AIs to venerate the unclonable, one-shot loci of creativity and intelligence, wherever they are found.

Takeaways

  • 🌟 AI has made astonishing progress in the past few years, and the speaker is exploring how theoretical computer science can help prevent AI from destroying the world.
  • 🚀 The core ideas behind the current AI revolution have been known for generations, but at the time they were considered impractical.
  • 💡 AI progress driven by Moore's Law (exponentially increasing computing power) was something most experts did not believe in, but it has now become reality.
  • 🤖 The "magical era" in which AI was expected to understand language and become intelligent may now have arrived.
  • 🧠 It is important to think both about AI's potential problems (for example, AI displacing humans in destructive ways) and about the possibility that things go right.
  • 📈 AI progress could fizzle out, but if it continues, AI with human-level ability across the board could appear within 10 to 20 years.
  • 🎓 AI's progress forces a rethinking of education and the future of work: now that AI can write short stories on the same themes as a student, what does learning mean?
  • 🖋️ A project is underway to watermark the outputs of language models such as GPT so that AI-generated text can be distinguished from human writing.
  • 🎨 When AI produces creative work, its value collapses once that kind of output can be mass-produced, a phenomenon the speaker calls the "AI abundance paradox."
  • 🌌 Whether an AI proposing new musical directions could have the same influence as the Beatles leads to questions about what is unique in human creativity.
  • 🔄 As an admittedly exotic AI safety idea, the speaker proposes teaching AIs a "religion" that venerates unclonable, one-time human creativity and intelligence.

Q & A

  • What problem is the quantum computing expert working on at OpenAI?

    -Thinking theoretically about how to keep AI from destroying the world.

  • What was the main unexpected factor in the current evolution of AI technology?

    -That simple ideas, combined with massive computational resources, produced AI that rivals humans at language understanding and other tasks.

  • Why were neural networks once considered unimpressive?

    -In the past, neural networks did not work very well, and it was believed that simply scaling them up would not solve the problem.

  • What are the main theories and techniques underlying the modern AI revolution?

    -Neural networks, backpropagation, and gradient descent, among others.

  • What capabilities might future AI have compared to the human brain?

    -It may be able to do essentially any intellectual work as well as or better than humans, including solving open problems in math and science.

  • If AI progress does not "fizzle out," what future might we expect?

    -AI could surpass human intelligence and perform virtually every task better than we can.

  • What is the "game over thesis"?

    -The thesis that, given enough examples of success and failure on a task with an objective metric, AI will match the best human performance within a matter of years.

  • Why do people find it hard to track AI progress in real time?

    -Because the goalposts keep moving: as soon as AI achieves one goal, people immediately set the next.

  • What is one AI-safety-related project underway at OpenAI?

    -A project to watermark the outputs of GPT and other large language models.

  • What is the difference between AI creativity and human creativity?

    -Human creativity is a unique, unrepeatable, one-time event, whereas AI can reproduce creative work endlessly, which undercuts that uniqueness.

Outlines

00:00

🔬 From Quantum Computing to the Future of AI

The speaker describes how, after a career in quantum computing, he came to think at OpenAI about applying theoretical computer science to keep AI from destroying the world. He focuses not only on AI's positive prospects but also on its risks, asking what humanity's role would be if AI works exactly as intended. He also notes that AI's rapid progress over the past few years came from scaling up techniques that were once dismissed as simple, unimpressive networks.

05:01

🌐 Future Scenarios for AI Progress

If AI progress continues, it may eventually exceed human abilities, though whether that is good or bad for humanity is debatable. The possibility that progress fizzles out is considered, as is the risk that humans become irrelevant or useless to AI. The discussion also covers how AI affects human work and creativity, and how AI's way of understanding and pursuing goals differs from ours.

10:03

🔏 Watermarking AI Outputs and Ensuring Safety

As an AI-safety effort, a method is proposed for watermarking the outputs of large language models such as GPT so that AI-generated text can be distinguished from human writing. It is emphasized that this is not a perfect or definitive solution. The section also raises philosophical questions about how education and creative work will change with AI, and what value humans retain in a world where they compete with it.

15:03

🤔 The Future of Human and AI Creativity

This section explores whether AI can imitate or surpass humans in art and creativity. The value of AI-generated art may hinge on its scarcity, prompting a reevaluation of what human creativity and choice mean. It also considers future scenarios in which human minds could be digitized and copied, and the feasibility and ethical questions such technology would raise.

Keywords

💡Quantum Computing

Quantum computing is a technology that processes information using the principles of quantum mechanics. In this talk the speaker, whose career has mostly been in quantum computing, explains how he came to think about theoretical computer science as a tool to keep AI from destroying the world.

💡AI Safety

AI safety is the research field concerned with anticipating and preventing the potential risks of artificial intelligence so that it remains safe for humanity. The talk discusses AI's potential, its accompanying risks, and how to use it safely.

💡Neural Networks

Neural networks are computational models loosely inspired by the human brain and form the foundation of modern AI. The talk notes that neural networks, long regarded as unimpressive, turned out to be the core technology of the current AI revolution once scaled up.

💡Moore's Law

Moore's Law is the observation that computing performance doubles at regular intervals. The speaker describes how this exponential growth shaped the history of AI and the predictions made about its future.

💡GPT (Generative Pre-trained Transformer)

GPT is a pre-trained generative model with strong performance in natural language processing. The talk discusses GPT's progress in language understanding and generation and what it suggests about the future of AI.

💡AI Abundance Paradox

The AI abundance paradox is the idea that the value of AI-created works collapses because they can be produced in unlimited quantity. The talk asks whether new artworks created by AI can retain value at all.

💡Human Uniqueness

Human uniqueness refers to the qualities and abilities that only humans possess. The talk asks what unique value and role humans retain as AI advances.

💡AI and Society

AI and society concerns how artificial intelligence affects society and how society chooses to use it. The talk touches on the possibilities AI's progress opens up and the social changes that come with them.

💡Watermarking AI Outputs

Watermarking AI outputs means embedding a special signal so that AI-generated content can be identified. The talk explains the development of watermarking for the outputs of language models such as GPT, and why it matters for distinguishing them from human writing.

💡Education in the AI Era

Education in the AI era is the question of how the purpose and methods of education change as AI advances. The talk asks what education means, and is worth, once AI can do students' schoolwork for them.

💡AI and Creativity

AI and creativity concerns AI's ability to generate creative works. The talk discusses whether AI could produce music as great as the Beatles' and how that would differ from human creativity.

Highlights

The speaker transitions from a career in Quantum Computing to working at OpenAI, focusing on preventing AI from causing global harm.

The speaker is still exploring how theoretical computer science can contribute to AI safety and considers the implications if AI goes right.

AI has advanced to a level where it resembles the science fiction technology from Star Trek, with some tasks being performed successfully.

The core ideas behind the current AI revolution have been known for generations, including neural networks and backpropagation.

Despite previous skepticism, the speaker acknowledges the exponential growth in computational power and its impact on AI capabilities.

The speaker discusses the potential future where AI can solve the greatest unsolved problems in math and science, such as proving the Riemann Hypothesis.

AI's potential to become more intelligent than humans raises ethical questions about how we treat and utilize AI.

The speaker and his colleague Boaz Barak attempted to map out the major possibilities for AI's impact on society.

AI progress could fizzle out due to diminishing returns, cost, or lack of training data.

If AI continues its rapid progress, it may reach a point where it outperforms humans in almost every task.

The speaker discusses the 'game over thesis,' which suggests that AI will eventually match or exceed human performance in any task with clear success metrics.

AI's ability to produce human-like outputs raises concerns about the future of jobs and human roles in society.

The speaker has worked on watermarking AI outputs to distinguish between human and AI-generated content.

The concept of the "AI abundance paradox" is introduced: the value of AI-generated art is diminished by its endless replicability.

The speaker ponders the uniqueness of human creativity and the potential for AI to develop its own new directions in art and music.

The speaker discusses the possibility of humans being backed up and restored from backups, questioning the nature of human identity.

An AI safety proposal is suggested, which involves teaching AI to value and protect the unique, unclonable aspects of human creativity and intelligence.

The speaker reflects on the potential future where AI-generated content is so abundant that it redefines our understanding of value and originality.

Transcripts

00:00

[Music] [Applause]

00:12

Okay, thanks a lot. So after a career spent mostly in quantum computing, I'm now sort of moonlighting at OpenAI. They asked me to think about how theoretical computer science could be used to help prevent AI from destroying the world. I haven't figured it out yet; I do still have another six months. But I find myself thinking more and more not just about how we prevent this from going wrong, but also about: what if it goes right? What if it goes exactly like it's supposed to, and it can produce any intellectual product as well as we can, or better? What are we for in the resulting world?

01:01

Well, I don't have to belabor for this audience what has happened in AI over the past few years. We now, to some approximation, have the science fiction machine from Star Trek: you talk to it in English, you ask it what to do, and some percentage of the time it does it. And this is despite how unlikely it seemed to almost all of us five years ago; so unlikely that many, many people are still in denial about it. But I think the even more surprising thing than what has happened is how it has happened. What maybe not everyone appreciates is that the core ideas powering the current AI revolution are things that have been known for generations: neural networks, backpropagation, gradient descent, prediction via regression. I learned all this stuff when I was an undergrad in computer science in the '90s. But we also learned then that neural nets were just not that impressive; they didn't work that well. And all of the wisest people said: if you just take something that doesn't work and scale it up by a factor of a million, it's still not going to work. The true key to AI is going to be to deeply, deeply understand the nature of intelligence, and once we've done that, we'll be able to see why a human-level AI could have fit on a floppy disk.

02:38

There were just a few nut cases like Ray Kurzweil who would go around showing graphs that said: look, the amount of compute you can do per second per dollar is on an exponential trajectory (that's one form of Moore's Law), and if you just extrapolate forward, then by the 2020s or so there should be about as much compute available as some crude estimate of what the human neocortex is doing, and that is when we should expect that magic will happen, and computers will suddenly understand language and be intelligent.
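
The extrapolation being recounted here is just compound-doubling arithmetic. Below is a minimal sketch of that calculation; the starting compute-per-dollar figure, the doubling time, and the neocortex estimate are illustrative assumptions, not numbers from the talk.

```python
# Hedged sketch of the Kurzweil-style extrapolation described above: given
# exponential growth in compute per dollar, when does it cross a crude estimate
# of what the human neocortex does? All constants are illustrative assumptions.
import math

BRAIN_OPS_ESTIMATE = 1e16     # assumed crude estimate of neocortex ops/sec
OPS_PER_DOLLAR_2000 = 1e10    # assumed ops/sec per dollar around the year 2000
DOUBLING_TIME_YEARS = 1.5     # assumed Moore's-Law-style doubling time

def years_to_reach(target: float, start: float, doubling_years: float) -> float:
    """Years of steady doubling needed for `start` to grow to `target`."""
    return doubling_years * math.log2(target / start)

if __name__ == "__main__":
    t = years_to_reach(BRAIN_OPS_ESTIMATE, OPS_PER_DOLLAR_2000, DOUBLING_TIME_YEARS)
    # With these made-up constants the crossover lands around 2030.
    print(f"Crossover about {t:.0f} years after 2000 (around {2000 + t:.0f})")
```

With these assumed numbers the crossover lands around 2030; changing any of them shifts the date, which is why this was only ever a rough extrapolation rather than a theory.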

03:16

And almost all of us said: that sounds like the stupidest thesis we've ever heard; you have no theoretical principle for believing that the sheer amount of compute alone is sufficient. Now, I'm a firm believer that one of the key dicta of science is that you let the world tell you when you're wrong, and you don't make up some elaborate justification for why you actually weren't wrong. I think that's the situation here.

03:49

But Moore's Law hasn't ended, right? We're still getting more and more compute, so you might wonder: where is this going? What will a future GPT be able to do? Will I just be able to ask it to solve any of the greatest unsolved problems in math or science, like "prove the Riemann Hypothesis," and it'll say "sure, I can help you with that" and just spit out a proof? By the way, I asked GPT to cooperate with me in illustrating that, which it happily did, but then it hastened to add that it's only kidding and that the Riemann Hypothesis remains an open problem.

04:28

So that's one possibility, but what about beyond that? What if, as some people predict, AI becomes to us as we are to chimpanzees? Well, how well do we treat chimpanzees? So then you're led to the Terminator scenario, of course. It's been amazing to watch: I've known the little subculture of nerds on the internet who have worried about AI doom for 20 years, and just within the last year, because of ChatGPT, this went to something that is discussed in the White House press briefing and in Congressional hearings. But an AI wouldn't necessarily have to hate us or want to kill us; we might just be in the way, or irrelevant to whatever alien goal it has. But I think that's not the only possibility on the table here.

05:24

My colleague Boaz Barak, who's now also on sabbatical at OpenAI, and I tried a while ago to make a decision tree of the major possibilities being discussed. The progress in AI that we've seen over the last few years could fizzle out: there might be diminishing returns to more and more scale, or we might find it too expensive to get the necessary compute, or we might run out of training data. We're already sort of running out; there is all of YouTube and TikTok and so forth that you could still feed into the maw, but that might just make the AI dumber rather than smarter. But if it doesn't fizzle out, and if it just continues the way it has over the last few years, then you have to imagine that it's just a matter of, what, 10 years? 20 years? until it can do just about everything as well as we can. And what then? Does civilization recognizably continue with humans in charge? And whether it does or it doesn't, is that good or bad from our point of view? Or maybe it depends who you ask.

06:42

Now, a lot of people don't want to have this discussion; they still don't want to speculate about these things, including many distinguished colleagues of mine. A lot of them are immersed in what I like to call the religion of just-ism. They will say: look, ChatGPT, however impressive it might look, actually isn't, because we know that it is just a stochastic parrot, it is just a next-token predictor, it is just a giant function approximator, it is just a huge autocomplete. And I always want to say to these people: okay, and what are you? Aren't you just a bundle of neurons and synapses obeying the laws of physics? And what about your mom? If you're going to use these reductionist or deflationary ways of talking, then at least you have to be symmetrical about it, I think.

07:45

A closely related tendency, well known in AI, is the endlessly moving goalposts. I still remember when Deep Blue beat Kasparov at chess, and very smart people said: okay, but this is not impressive, because chess is really just a search problem; wake me up when computers can beat the human grandmasters at Go, because that's just an infinitely deeper and richer game. And then we had AlphaGo, and then people said: okay, fine, but it's just a game, everyone expected this would happen; wake me up when large language models can win a gold medal in the International Math Olympiad. So I actually have a bet with a colleague that that will happen by 2026; there was some progress on it just this past month. Now, I might be wrong; it might happen by 2036 instead. But it seems clear that this is just a question of years at this point. And after AI can get gold medals in math competitions, which goalpost should we have next?

08:53

We might even be tempted to formulate a general thesis here, which I'll call the "game over thesis": given any task with a reasonably objective metric of success or failure (games, competitions) on which an AI can be given suitably many examples of success and failure, it's only a matter of years before not only AI, but AI on our current paradigm, will match or beat the best human performance. Now, that might not exhaust everything we care about; there might be things that are not quantifiable in this way. But if even this much is true, I think it already forces us to some uncomfortable places in thinking about what we tell our kids about what kinds of jobs are going to be available for them, and what our role in the world is.

09:53

And it's clear that what ChatGPT and DALL-E and so forth can already do has created, for real, the Blade Runner scenario where we are confronted with the problem of distinguishing human outputs from AI ones. So one of the main safety projects that I've worked on during my time at OpenAI has been a scheme for watermarking the outputs of GPT and other large language models. What this means is replacing the randomness in the models by pseudorandomness, in a way that inserts a secret statistical signal into the choice of words or tokens, by which you can later detect that yes, this was generated by ChatGPT; this did not come from a human.
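
As a concrete illustration of the general idea just described, here is a minimal sketch: keyed pseudorandomness replaces the sampling randomness, and a detector holding the key scores the text. The names, key handling, and scoring rule below are assumptions for illustration, not the actual scheme built at OpenAI.

```python
# Minimal sketch of statistical watermarking for LLM sampling: a secret key and
# the recent context drive a pseudorandom choice of the next token, so that a
# detector holding the key can later score the text. Illustrative only.
import hashlib
import math

SECRET_KEY = b"assumed-secret-key"  # hypothetical key held by the model provider

def prf(key: bytes, context: tuple, token: int) -> float:
    """Keyed pseudorandom float strictly inside (0, 1)."""
    digest = hashlib.sha256(key + repr((context, token)).encode()).digest()
    return (int.from_bytes(digest[:8], "big") + 1) / (2 ** 64 + 2)

def watermarked_choice(probs: dict, context: tuple) -> int:
    """Pick the token maximizing r ** (1/p), a Gumbel-trick-style rule.
    Marginally this is still an exact sample from `probs` (assumed nonzero),
    but the choice is deterministic given the key and context."""
    return max(probs, key=lambda tok: prf(SECRET_KEY, context, tok) ** (1.0 / probs[tok]))

def detection_score(tokens: list, window: int = 4) -> float:
    """Average of -ln(1 - r) over a text, assuming the generator used the last
    `window` tokens as PRF context. Unwatermarked text averages about 1.0;
    watermarked text scores noticeably higher."""
    scores = [
        -math.log(1.0 - prf(SECRET_KEY, tuple(tokens[i - window:i]), tokens[i]))
        for i in range(window, len(tokens))
    ]
    return sum(scores) / max(len(scores), 1)
```

In this sketch the selection rule is still, on average, an exact sample from the model's distribution, so the signal lives only in correlations with the key; anyone holding the key can rescore a text, and a per-token score well above 1 is statistical evidence that it came from the watermarked model.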

10:47

Now, I should caution you that this has not been deployed yet. OpenAI, along with Google and Anthropic, has been moving kind of slowly and deliberately toward deployment of text watermarking. Even if and when it is deployed, someone who is sufficiently determined will be able to evade it, just like with schemes for preventing piracy of music or software or whatever. So it's not a perfect solution, but I hope that this and other measures will eventually make it less convenient for students to use ChatGPT to cheat on their homework (maybe one of the most common misuses in the world right now), or for people to use it for spam, propaganda, impersonation, and all sorts of other bad things like that.

11:41

But when I talked to my colleagues about watermarking, I was surprised that often they had an objection that was not technical at all. It wasn't about how well it can work; it was about whether we should even still be giving homework at all. If ChatGPT can write the term papers just as well as the students can, and that's still going to be true after the students graduate, then what's the point? Why are we still teaching these skills? I think about this even in terms of my 11-year-old daughter, for example. She loves writing short stories. Now ChatGPT can also write short stories on the same themes, like an 11-year-old girl who gets recruited to a magical boarding school, which is totally not Hogwarts and has nothing to do with Hogwarts, or whatever other theme like that. Now you could ask: if you look at today's cohort of 11-year-olds, are they ever going to be better writers than GPT? It's a race, right? Which one is going to improve faster?

12:54

So you could imagine that even what we think of as the greatest products of artistic genius, say the music of The Beatles, could in principle be matched by some AI model. But when you think about that enough, you start wondering: what would we even mean by an AI that created music as good as the Beatles? And that forces you to ask: well, what made the Beatles so good in the first place? I'm not a music expert, but roughly we could decompose it into two components: first, new ideas about what direction music ought to go in, and second, technical execution on those ideas. Now suppose you had an AI where you just fed it the Beatles' whole back catalog, and it generated more songs that the Beatles plausibly could have written but didn't, songs that sounded kind of like Hey Jude or Yesterday or whatever. I think most people, if they saw that, would just move the goalposts. They would say: no, that doesn't really impress us; this is just extrapolation. As Schopenhauer said, talent hits a target that no one else can hit, but genius hits a target that no one else can see. What we want to see is the AI deciding for itself to take music in some new direction.

14:21

Okay, but now imagine that we had that as well. Imagine an AI where every time you hit the refresh button in your browser window, you got a brand new, radically new, Beatles-like direction that music could have been taken in in the 1960s, and each time you run it you just get another sample from this probability distribution. Even then, there's something kind of weird about that. You could say the Beatles were there at the right place and time to pick a particular direction, and not only that, but to drag all of the rest of us along with them, so that our whole objective function changed. We can't judge music anymore except by a Beatles-influenced standard, just like we can't judge plays except by a Shakespeare-influenced standard.

15:13

And so there's what I like to call an AI abundance paradox: as soon as you have an AI that can produce a new artwork, however good it is, it can produce a thousand similar artworks just by running more and more often; you can always rewind and try again. And so it radically devalues the worth of that kind of production, just like the price of gold would crash if someone towed a ten-mile-long golden asteroid to the Earth. It wouldn't actually be worth what you thought it was.

15:50

And so you could say: well, at least humans will always have this sort of advantage, the advantage of being frail. There's only one of us; you can't back us up and run us over and over on the same input. When we make a decision, we really mean that decision; we're sticking with it, and that's the only one you're going to get out of us. Which is sort of a weird place to stake our claim of human specialness, but that might be the place we're forced to.

16:23

But as soon as I've said that, I have to confront a sort of exotic objection, which is: is it really true that humans cannot be rewound, cannot be copied, cannot be saved as backups, and so forth? It is possible (some people think so) that our own cognition is happening in some sort of digital computation layer in the neurons and synapses, and once brain-scanning technology gets good enough (maybe the next iteration of Neuralink or whatever), we can all just back ourselves up to the cloud; we can rewind ourselves, restore from backup. And then that leads to all these strange questions, like: would you agree to have yourself faxed to Mars, just sent as information and reconstituted there? The original meat version of you will just be painlessly euthanized, don't worry about it. Or would you back up your brain before you go on a dangerous trip?

17:26

So I don't know whether these things will ultimately be possible. It's a question about, ultimately, the biology and the physics of the human brain: is the digital layer the relevant one, or is our identity bound up with the unclonable, not fully knowable, chaotic details of the molecules inside the individual sodium ion channels and the neurons? If you had to go all the way down to the molecular level, then the famous no-cloning theorem in quantum mechanics would say: well, you can't make a perfect copy; if you try, you're going to have to make measurements that will fail to tell you what you want and even destroy the original copy that you had.
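
For context, the no-cloning theorem being invoked has a compact standard statement (added here for reference, not part of the talk):

```latex
% No-cloning theorem (standard formulation): there is no unitary U and fixed
% blank state |e> that copies every unknown state |psi>.
% Sketch: if U copied two non-orthogonal states |psi> and |phi>, preservation of
% inner products would force <psi|phi> = <psi|phi>^2, so <psi|phi> must be 0 or 1,
% a contradiction.
\nexists\, U \text{ unitary}:\quad
U\bigl(\lvert\psi\rangle \otimes \lvert e\rangle\bigr)
  \;=\; \lvert\psi\rangle \otimes \lvert\psi\rangle
  \quad \text{for all } \lvert\psi\rangle .
```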

18:16

So I don't know whether our identity is bound up in these unclonable physical degrees of freedom. But even not knowing whether that's true, it does seem like a difference between us and any existing AI that we're buffeted around by chaos, such that no external agent (at least as far as we know) can have all the information relevant to predicting our behavior.

18:41

So then, to circle all the way back to AI safety, this leads to a very exotic AI safety proposal: why don't we just teach our AIs, indoctrinate them in a religion that venerates the universe's unclonable, ephemeral, analog loci of creativity and intelligence, wherever they might be found? It says: protect them from destruction, defer to their preferences; those are the ones that matter, because they're the ones that only get the one chance. Now, I don't know if this is a good idea. In a different universe, maybe I fell in love with a different idea, but here I kind of fell in love with this one, and unfortunately you don't get to back me up and see a different one. So, all right. Thanks.

Related Tags
AI progress, quantum computing, AI safety, future prediction, scientific theory, technological innovation, artificial intelligence, social impact, creativity, ideas