Ilya Sutskever | AI will be omnipotent in the future | Everything impossible becomes possible

Me&ChatGPT
2 Jun 2024 · 21:10

Summary

TLDR: This video discusses the development of deep learning and the future of artificial general intelligence (AGI). The speaker explains his conviction that large neural networks can exhibit astonishing behavior, resting on two key beliefs: that the human brain is large (and larger brains are more capable), and that artificial neurons are similar enough to biological neurons. He also discusses the potential of the Transformer architecture, how much the choice of algorithm matters, and the effect of growing model scale on performance. Finally, the discussion turns to AI safety, including the challenges posed by superintelligence and how to keep AI development on a healthy path.

Takeaways

  • 😀 The conviction behind deep learning: it rests on the size of the brain and the similarity between artificial and biological neurons.
  • 😀 Definition of AGI: a computer system that can automate the great majority of intellectual labor — in effect, a computer coworker.
  • 😀 Limits of current models: today's architectures can still be improved significantly, though even the current ones can take us very far.
  • 😀 On Transformers: powerful as they are, there is room for improvement, and more efficient architectures may yet appear.
  • 😀 Understanding of scaling laws: our understanding is imperfect, but predictions are quite good for some specific tasks.
  • 😀 Unexpected capabilities: the rapid improvement in neural networks' coding ability has been striking.
  • 😀 Surprise that neural networks work: early neural networks barely worked; their progress since has exceeded expectations.
  • 😀 Three AI-safety concerns: the alignment problem, the problem of who controls the technology, and the challenge of natural selection.
  • 😀 The promise and risk of superintelligence: superintelligence could bring unimaginable power and benefits, but also enormous risks that must be handled with care.
  • 😀 The future of AI: AI may help solve the very challenges it creates, making possible lives that are hard to imagine today.

Q & A

  • Why did you believe so early that scaling up deep learning models would produce unexpected behavior?

    - Two beliefs are needed: first, the human brain is big, and across species brain size tracks capability; second, artificial neurons may be similar enough to biological neurons in their essential information processing, even though biological neurons are far more complex.

  • What is the definition of AGI?

    - An AGI is a computer system that can automate the great majority of intellectual labor — intuitively, a computer coworker as smart as a person.

  • Do we already have all the ingredients needed to reach AGI?

    - The current stack is already complicated, and models such as Transformers are quite powerful, but there is still room for improvement; more efficient models or training methods may be needed.

  • Would different algorithms, such as LSTMs and Transformers, end up in the same place once scaled up?

    - Almost certainly: with suitable modifications, such as enlarging the hidden state, and proper training, an LSTM could achieve similar results, even if less efficiently.

  • How well do we understand what capabilities emerge as models scale?

    - We have some understanding of scaling laws, but predicting a model's specific capabilities, especially emergent behavior, remains a challenge (see the formula sketch after this list).

  • Which emergent capabilities were the most surprising as models scaled?

    - The biggest surprise was that neural networks work at all, along with how quickly their ability at tasks such as programming improved.

  • How should we think about AI safety?

    - As AI grows more capable, safety becomes critically important. We need to address the alignment problem, the problem of human interests and control, and the challenge of natural selection.

  • How does superintelligence differ from AGI?

    - Superintelligence means capability far beyond AGI, possibly far beyond human imagination, and its vast power calls for special safety measures.

  • How can the alignment problem for superintelligence be addressed?

    - International organizations may need to set high standards to ensure that superintelligence develops in line with human interests.

  • How could humanity use superintelligence to solve the challenges it creates?

    - The hope is that a superintelligence, understanding reality more deeply than we do, could help us solve the problems its own power creates.

  • What does the challenge of natural selection mean in the age of superintelligence?

    - Even if the alignment and human-interest problems are solved, natural selection will still drive change; new solutions, perhaps human-machine merging, may be needed to adapt.
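
As a concrete reference for the scaling-law answer above, the empirical literature (e.g. Kaplan et al., 2020) typically writes the relationship Sutskever describes — inputs to the network versus a simple performance measure — as a power law. The form below is from that literature, not something stated in the video:

```latex
% Empirical power-law form of a neural scaling law (Kaplan et al., 2020).
% L is a simple performance measure such as next-word prediction loss,
% N is the parameter count; N_c and \alpha_N are fitted constants.
L(N) = \left( \frac{N_c}{N} \right)^{\alpha_N}
```

The strength of this fit is what "that relationship is very strong" refers to in the transcript; what it does not give you is the emergent, qualitative capabilities discussed later in the Q&A.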

Outlines

00:00

🤖 The conviction behind deep learning and the definition of AGI

In the first segment, the speaker explains his conviction about deep learning, in particular his belief that large neural networks can exhibit unexpected behavior. Two beliefs are needed: that brain size scales with capability, and that artificial neurons may be similar enough to biological neurons in their essential information processing. He also shares his definition of artificial general intelligence (AGI) — a computer system that can automate the great majority of intellectual labor — and his intuition for it: a computer coworker as smart as a person.

05:01

🔧 Deep learning architectures and their scalability

In the second segment, the speaker discusses whether the Transformer architecture is all we need. He argues that Transformers are already quite good, though more efficient or faster improvements are likely possible. He compares LSTMs with Transformers, noting that with suitable modification and training effort, LSTMs could also go very far, though probably not as far as Transformers. He then turns to our understanding of model scaling, including how confidently the performance of current models can be predicted.

10:02

📈 Neural scaling laws and their limits

In the third segment, the speaker digs into neural scaling laws, pointing out that while they capture the relationship between inputs and simple performance metrics, predicting a model's higher-level capabilities remains a challenge. He mentions OpenAI's work in the run-up to GPT-4 on a scaling law for accuracy at solving coding problems, a metric more relevant and valuable than plain next-word prediction accuracy.

15:02

😲 Surprising capabilities and AI safety

In the fourth segment, the speaker expresses his surprise that neural networks work at all, and especially at how quickly their coding ability improved. He recounts the history of program synthesis and how deep learning achieved, almost overnight, what that field had long failed to reach. He then turns to AI safety, stressing that as AI becomes more capable, ensuring its safe use grows in importance, and mentions OpenAI's recent post on superintelligence and Sam Altman's testimony before Congress.

20:02

🌐 The challenges of superintelligence and the road ahead

In the final segment, the speaker discusses the challenges posed by superintelligence: the alignment problem, conflicts of human interest, and the effects of natural selection. He stresses superintelligence's potential: if these challenges can be overcome, unimaginably good lives become possible. He also mentions possible solutions, such as standards set by an international organization and the superintelligence itself helping us solve the challenges it creates.

Keywords

💡Deep Learning

Deep learning is a machine-learning approach that processes complex data using networks loosely modeled on the structure and function of the brain. The video notes that deep learning showed its promise early, and the speaker was convinced that scaling models up would reveal unexpected and interesting behavior.

💡Artificial Neural Network

An artificial neural network is the foundation of deep learning: it consists of many simple processing units (neurons) that learn patterns in data. The video notes that although biological neurons are complex, artificial neurons may be similar to them in their essential information processing — signals in, signal out.
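
To make "signals in, signal out" concrete, here is a minimal sketch of a single artificial neuron — a weighted sum followed by a nonlinearity. The weights and inputs are illustrative values, not anything from the video:

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs, then a nonlinearity."""
    z = np.dot(w, x) + b             # aggregate the incoming "signals"
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid activation: the outgoing "signal"

# Illustrative values (hypothetical, chosen only for the example).
x = np.array([0.5, -1.2, 3.0])   # incoming signals
w = np.array([0.4, 0.1, -0.6])   # learned connection strengths
print(neuron(x, w, b=0.2))       # a single output signal in (0, 1)
```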

💡AGI (Artificial General Intelligence)

AGI refers to a computer system that can perform intellectual tasks at a level comparable to human intelligence. The video defines it as a system that can automate the great majority of intellectual labor — one of the ultimate goals of deep learning research.

💡Transformers

The Transformer is a deep learning architecture that is especially good at sequence data such as natural language. The video discusses the potential of Transformer models and how their performance keeps improving as they are scaled up.
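
The distinguishing operation inside a Transformer is scaled dot-product self-attention. The sketch below shows that operation in isolation, with random matrices standing in for learned projections — a minimal illustration, not the models discussed in the video:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (T, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # each position mixes all others

# Illustrative shapes: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```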

💡LSTM (Long Short-Term Memory)

The LSTM is a type of recurrent neural network that can learn long-range dependencies. The video suggests that with suitable modifications and training, LSTMs might also reach performance in the neighborhood of Transformers, though likely somewhat worse.
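
For reference, one step of a standard LSTM cell is sketched below; the hidden-state size `d` is the quantity Sutskever suggests enlarging. All shapes and values are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. x: input; (h, c): hidden and cell state of size d."""
    z = W @ x + U @ h + b                          # all four gates at once
    i, f, o, g = np.split(z, 4)                    # input, forget, output, candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # update the cell memory
    h = sigmoid(o) * np.tanh(c)                    # expose the new hidden state
    return h, c

d, n = 16, 8                                       # d is the hidden-state size
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * d, n))
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
h, c = lstm_step(rng.normal(size=n), h, c, W, U, b)
print(h.shape, c.shape)  # (16,) (16,)
```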

💡Scaling Laws

Scaling laws describe the relationship between a neural network's scale and its performance. The video notes that while this relationship is understood to a degree, predicting certain emergent behaviors of models remains a challenge.
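
To show how such a law is used in practice, the sketch below fits a power law L(N) = a · N^(−α) to made-up (parameter count, loss) points and extrapolates it. The data is entirely synthetic, chosen only to illustrate the workflow:

```python
import numpy as np

# Hypothetical (parameter count, loss) measurements -- synthetic, not real runs.
N = np.array([1e6, 1e7, 1e8, 1e9])
L = np.array([4.0, 3.1, 2.4, 1.9])

# Fit log L = log a - alpha * log N with least squares.
slope, intercept = np.polyfit(np.log(N), np.log(L), 1)
alpha, a = -slope, np.exp(intercept)
print(f"L(N) ~ {a:.2f} * N^-{alpha:.3f}")

# Extrapolating to a larger model: the step that works well for simple metrics
# like next-word loss, but does not predict emergent capabilities.
print("predicted loss at N=1e10:", a * (1e10) ** (-alpha))
```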

💡AI Safety

AI safety is about ensuring that AI systems behave in line with human values and intentions. The video argues that as AI grows more capable, ensuring its safety becomes ever more important.

💡Superintelligence

Superintelligence refers to AI capability far beyond human intelligence. The video notes that superintelligence could bring enormous change, but it also raises the alignment problem: how to ensure that its behavior serves humanity's best interests.

💡Coding Ability

Coding ability is an AI system's capacity to generate or understand code. The video notes that with the progress of deep learning, AI coding ability has improved dramatically — one of the field's most surprising advances.

💡How Neural Networks Work

Neural networks learn patterns in data through large numbers of simple processing units (neurons) and the connections between them. The video discusses the similarity between artificial and biological neurons, and how that similarity underpins the belief in neural networks' potential.

💡Natural Selection

Natural selection is the biological process by which species adapt to their environment and evolve. The speaker notes that even if we successfully manage the challenges of superintelligence, natural selection may still drive unpredictable change — a long-term issue worth considering.

Highlights

A deep learning maximalist from early on, the speaker pushed for larger models in the hope of discovering unexpected and interesting behavior.

Belief in large neural networks rests on two ideas: the human brain is large, and artificial neurons may be similar to biological neurons in their essential information processing.

AGI is defined as a computer system that can automate the great majority of intellectual labor — a computer as smart as a person.

Transformer models work well, but that does not mean they are the only route to AGI; better models may come.

LSTMs and Transformers are largely interchangeable in principle, but Transformers have proved more scalable in practice.

We understand model scaling to a degree, but predicting a model's specific behaviors remains a challenge.

That neural networks are practical and effective at all came as a surprise, since early on they were not taken seriously.

The rapid rise in coding ability, particularly in program synthesis, is one of deep learning's most striking advances.

AI safety grows in importance as AI grows in capability, above all when AI becomes extremely powerful.

Superintelligence denotes AI capability far beyond human intelligence; it could bring enormous change and enormous challenges.

The alignment problem is the key challenge of superintelligence safety: ensuring the AI's goals match humanity's.

International organizations may play a key role in setting global standards and rules for superintelligence.

The risks of humans controlling superintelligence may need to be addressed with the help of the intelligence itself.

The challenge of natural selection: over time, AI and human society may have to adapt to an ever-changing environment.

Overcoming these challenges would let us create unimaginably good lives.

The development and application of AI must balance innovation with safety.

Transcripts

00:00
Interviewer: All right, welcome. There were lots of people who had lots of interesting questions, so I gave myself some note cards so I'll be prepared. Maybe we start with this: you have always been a deep learning maximalist, even very, very early on. What gave you the conviction to say, look, if you just push this to larger and larger models, we're going to see really unexpected, interesting behavior? What gave you that conviction early on?

00:46
Ilya Sutskever: So I claim that to get this conviction, to believe that large neural networks can do amazing things, you need to have two beliefs. One of the beliefs is a little bit harder to get to; the other one is easier. The easy belief is that the human brain is big. The human brain is big, the brain of a cat is smaller, and the brain of an insect is smaller still, and we correspondingly see that humans can do things which cats cannot do, and so on. That's easy.

The hard part is to say: well, maybe an artificial neuron — the kind of neuron that we have in artificial neural networks — is not that different from a biological neuron as far as the essential information processing is concerned. In other words, of course the biological neuron is very complicated and it does so many different things, but when it comes down to it, you have signals in, signal out. Maybe you can explain a lot with a pretty simple artificial neuron. And if you just allow yourself to say, yeah, they're different, biological neurons are more complex, but suppose they are similar enough — then you say, okay, we now have an existence proof that large neural nets — all of us — can do all these amazing things. So the existence proof is there. Can we then somehow make it? For that we need to be able to train them. That's the kind of chain of reasoning which, in the environment of my graduate school with Jeff [Hinton], where we were thinking about neural nets, was perhaps more feasible to arrive at than it would have been elsewhere.

02:57
Interviewer: Yeah. Certainly we tried neural nets before, and we didn't quite get to the same results because we were doing it at a much smaller scale, and so on. Interesting. Let's start with this: what's your definition of AGI? What's your mental picture?

03:18
Ilya Sutskever: So, AGI. At OpenAI we have a document which we call the OpenAI Charter, which outlines the goal of OpenAI, and there we offer a definition of AGI. We say that an AGI is a computer system which can automate the great majority of intellectual labor. That's one useful definition. In some sense the intuition is that an AGI is a computer that's as smart as a person — you might, for example, have a coworker that's a computer. So that would be a definition of AGI which I think is intuitively satisfying. The term is a bit ambiguous, because in AGI the G means general. So is it generality that we care about in the AGI? It's actually a bit more than generality: we care about generality and competence. It needs to be general in the sense that it can respond sensibly when you throw things at it, but it needs to be competent, so that when you ask it a question or ask it to do something, it will do it.

04:37
Interviewer: Yeah, I like the very practical definition, because at the end of the day it gives you some measurement by which you can figure out how close you are. Do you think we have all the ingredients to get to AGI? If not, what's missing in the stack? It's a complicated stack already. Are Transformers really all we need — paying homage to the famous attention paper?

05:10
Ilya Sutskever: You know, I won't be overly specific in my answer to this question, but I'll comment on the second part: is Transformers all we need? I think the question is a bit wrong, because it implies something binary — it implies Transformers are either good enough or not good enough. I think it's better to think about it in terms of a tax. We have Transformers, and they're pretty good. Maybe we could have something better, maybe more efficient or maybe faster. But as we know, when you make the Transformers large, they still become better; they might just be becoming better more slowly. So while I am totally sure that it will be possible to improve very significantly on the current architectures that we have, even if we didn't, we would be able to go extremely far.

06:17
Interviewer: Do you think it matters what the algorithm is? So, for example, an LSTM versus a Transformer, just scaled up sufficiently — maybe that's an efficiency delta or something like that, but don't we end up in the same place at the end?

06:37
Ilya Sutskever: So I would say almost entirely yes, with a caveat. I'm just thinking about what level of detail to go into here — how many people in the audience know what an LSTM is? Oh, I see — quite a few around here, so I think we're mostly okay. Let's dig in, then.

I would argue that if we made a few simple modifications to the LSTM — their hidden states are quite small, so if you somehow made them larger — and then went through the trouble of figuring out how to train them... Because LSTMs are recurrent neural networks, and we kind of forgot about them; we haven't put in the effort. You know how neural net training works: you have the hyperparameters — well, how do you set them? You don't know. How do you set your learning rates? If it doesn't learn, can you explain why? This kind of work has not been done for LSTMs, so our ability to train them is more limited. But had we done that work, so that we were able to train the LSTMs, and we just did some simple things to increase their hidden state size, I think they would be worse than Transformers, but we would still be able to go extremely far with them also.

08:07
Interviewer: Okay. How good is our understanding of scaling laws? If we scale these models up, how confident are you in being able to predict the capabilities of these particular models? How good is that science?

08:21
Ilya Sutskever: That's a very good question. The answer is: so-so.

Interviewer: I was hoping for a more definitive answer.

Ilya Sutskever: Well, "so-so" is a very definitive answer. It means we are not great, but we are not absolutely terrible either. But we are not great — definitely not great. What the scaling law tells you is a relationship between the inputs that you put into the neural network and some kind of simple-to-measure, simple-to-evaluate performance measure, like your next-word prediction accuracy. And that relationship is very strong. What is challenging is that we don't really care about next-word prediction; we care about it indirectly. We care about the other, incidental benefits that we get out of it. For example, you all know that if you predict the next word accurately enough, you get all kinds of interesting emergent properties. Those have been quite hard to predict — or at least I'll say I'm not aware of such work, and if anyone is looking for interesting research problems to work on, that would be one.

I will mention one example, something that we've done at OpenAI in our run-up to GPT-4, where we tried to do a scaling law for a more interesting task, which is predicting accuracy at solving coding problems. We were able to do that very accurately, and that's a pretty good thing, because it's a more tangible metric — an improvement over next-word prediction accuracy, as far as things that are relevant to us. In other words, it's more relevant to us to know what the coding accuracy is going to be — the ability to solve coding problems — compared to just the ability to predict the next word. But it still doesn't answer the really important question: can you predict some emergent behavior that you haven't seen before?

10:39
Interviewer: Okay. Speaking of these emerging capabilities — which one surprised you the most as these models scaled? What was the thing where you said, well, I'm kind of astonished these models can do this?

10:56
Ilya Sutskever: It's a very difficult question to answer, because it's too easy to get used to where things are. There definitely have been times when I was surprised, but you adapt so fast — it's kind of crazy. I think the big surprise for me — and it may sound a little odd, probably, to most people in this audience — is that neural networks work at all. Because when I was starting my work in this area, they didn't work. Or, let's define what it means to work at all: they could work a little bit, but not really, not in any serious way, not in a way that anyone except the most intense enthusiasts would care about. And now we see: those neural nets work. So I guess the artificial neuron really is at least somewhat related to the biological neuron — or at least that basic assumption has been validated to some degree.

12:10
Interviewer: What about an emergent property that sticks out to you — for example, I don't know, code generation? Or maybe it was different in your mind: maybe once you saw, hey, neural nets can work and they can scale, then of course all these properties will emerge, because at the limit point we're building a human brain, and humans know how to code and know how to reason about tasks and so on. Did you just expect all of that?

12:39
Ilya Sutskever: I've definitely been surprised, and I'll mention why. The human brain can do those things, it's true — but does it follow that our training process will produce something similar? So it was definitely very amazing, I think. Seeing the coding ability improve quickly — that was quite a sight to be seen. For coding in particular, because it went from no one having ever seen a computer code anything at all, ever. There was a little area of computer science called program synthesis, and it was very niche — very niche because they couldn't have any accomplishments; they had a very difficult experience. And then this neural net came in and said: oh yeah, code synthesis — we're going to accomplish what you were hoping to achieve one day, like, tomorrow. So that was... yeah. Deep learning.

13:46
Interviewer: Just out of curiosity, when you write code, how much of your code is yours and how much is, I mean, collaboration?

13:57
Ilya Sutskever: I do enjoy it when the neural net writes most of it.

14:04
Interviewer: All right, let's switch tack here a little bit. As these models get more and more powerful, it's worthwhile to also talk about AI safety. OpenAI has released a document just recently where you're one of the signers, and Sam has testified in front of Congress. What worries you most about AI safety?

14:37
Ilya Sutskever: Yeah, I can talk about that. So let's take a step back and talk about the state of the world. You've had AI research happening, and it was exciting, and now you have the GPT models, and now you all get to play with all the different chatbots and assistants — you know, B— and ChatGPT — and you say, okay, that's pretty cool, it can do things. And indeed, you can already start perhaps worrying about the implications of the tools that we have today, and I think that is a very valid thing to do. But that's not where I allocate my concern.

The place where things get really tricky is when you imagine fast-forwarding some number of years — a decade, let's say. How powerful will AI be? Of course, with this incredible future power of AI, which I think will be difficult to imagine, frankly — with an AI this powerful you could do incredible, amazing things that are perhaps even outside of our dreams. But the place where things get challenging is directly connected to the power of the AI. It is going to be unbelievably powerful, and it is because of this power that the safety issues come up. And I'll mention three — I personally see three.

You alluded to the letter that we posted at OpenAI a few days ago — actually yesterday — about some ideas that we think would be good to implement to navigate the challenges of superintelligence. Now, what is superintelligence? Why did we choose to use that term? The reason is that superintelligence is meant to convey something that's not just like an AGI. With AGI we said, well, you have something kind of like a person, kind of like a coworker. Superintelligence is meant to convey something far more capable than that. When you have such a capability, can we even imagine how it will be? But without question it's going to be unbelievably powerful. It could be used to solve incomprehensibly hard problems, if it is used well. If we navigate the challenges that superintelligence poses, we could radically improve the quality of life. But the power of superintelligence is so vast.

So, the concerns. Concern number one has been expressed a lot, and this is the scientific problem of alignment. You might want to think of it as an analog to nuclear safety: you build a nuclear reactor, you want to get the energy, and you need to make sure that it won't melt down even if there's an earthquake, and even if someone tries to, I don't know, smash a truck into it. This is superintelligence safety, and it must be addressed in order to contain the vast power of the superintelligence. This is called the alignment problem. One of the suggestions that we had in the post was an approach where an international organization could create various standards at this very high level of capability. And I want to make this other point about the post, and also about our CEO Sam Altman's Congressional testimony, where he advocated for regulation of AI: the intention is primarily to put rules and standards of various kinds on the very high level of capability. You could maybe start looking at GPT-4, but that's not really what is interesting or relevant here — it's something vastly more powerful than that. When you have a technology so powerful, it becomes obvious that you need to do something about this power. That's the first concern, the first challenge to overcome.

The second challenge to overcome is that, of course, we are people, we are humans with interests, and if you have superintelligences controlled by people — well, who knows what's going to happen. I do hope that at this point we will have the superintelligence itself try to help us solve the challenging world that it creates. This is no longer an unreasonable thing to say: if you imagine a superintelligence that indeed sees things more deeply than we do — much more deeply — that understands reality better than us, we could use it to help us solve the challenges that it creates.

Then there is the third challenge, which is maybe the challenge of natural selection. You know what the Buddhists say: change is the only constant. So even if you do have your superintelligences in the world, and we managed to solve alignment, we managed to make sure no one wants to use them in very destructive ways, and we managed to create a life of unbelievable abundance — not just material abundance, but health, longevity, all the things we don't even try dreaming about because they're so obviously impossible — if you've got to this point, then there is the third challenge of natural selection. Things change. You know that natural selection applies to ideas, to organizations, and that's a challenge as well. Maybe the Neuralink solution of people becoming part AI will be one way we will choose to address this; I don't know. But I would say that this kind of describes my concern. And specifically: just as the concerns are big, if you manage them, it is so worthwhile to overcome them, because then we could create truly unbelievable lives for ourselves — lives that are completely unimaginable. So it is a challenge that's really, really worth overcoming.


Related Tags
Deep Learning · Artificial Intelligence · AI Safety · Neural Networks · AGI Definition · Transformer Models · Scaling Laws · Technology Development · Innovation Breakthroughs · Future Trends · Intelligent Collaboration