Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review's EmTech Digital

Tech Interviews
4 May 2023 · 39:14

Summary

TLDR: In this video, MIT Technology Review senior editor Will Douglas Heaven holds an in-depth conversation with deep learning pioneer and Turing Award winner Geoffrey Hinton. Professor Hinton discusses his views on the future of artificial intelligence, in particular his insights into deep learning and neural networks. He describes his new understanding of the relationship between the brain and digital intelligence, his astonishment at the performance of large language models such as GPT-4, and his concern about their potential risks. Hinton stresses the importance of machine learning algorithms such as backpropagation and explores how they let machines learn and process information so effectively. He also raises the worry that AI may come to surpass human intelligence, including the impact this could have on society and the economy, and asks how to ensure these systems develop in ways that benefit humanity. Hinton calls for global cooperation to meet the challenges AI brings, and emphasizes the importance of weighing ethics and safety as the technology develops.

Takeaways

  • 📈 **Rapid progress in generative AI**: Generative AI is advancing quickly and is currently the hottest area in technology.
  • 🧠 **The importance of deep learning**: Geoffrey Hinton is a pioneer of deep learning; the backpropagation algorithm he helped develop is a cornerstone of modern AI.
  • 👴 **Hinton leaves Google**: After 10 years at Google, Hinton announced his departure, in part because his view of the relationship between the brain and digital intelligence has changed.
  • 🔄 **Backpropagation in brief**: Backpropagation improves a model's predictions by adjusting the weights in the network to reduce its error.
  • 🚀 **Progress in large language models**: Models such as GPT-4 show impressive common-sense reasoning, which has changed Hinton's view of machine learning.
  • 💡 **The promise and risk of intelligence**: Hinton worries about the risks superintelligent machines could pose, especially once they learn and process information better than humans do.
  • 🤖 **Machines that improve themselves**: If machines are given the ability to write and execute programs, they may develop their own sub-goals, which could lead to outcomes that are bad for humans.
  • 🌐 **Data and multimodal models**: Today's models are already very capable, but by combining multimodal data such as images and video, their intelligence still has room to grow.
  • 🧐 **Thought experiments by AI**: Hinton believes AI will eventually be able to run thought experiments, enabling deeper forms of reasoning.
  • 💼 **Social and economic impact**: AI could greatly raise productivity, but it may also cause unemployment and widen the gap between rich and poor.
  • ⚠️ **Existential risk and the need for cooperation**: Hinton stresses the existential risk AI may pose and calls for international cooperation to contain it, even though no clear solution exists yet.

Q & A

  • What is the current state of generative AI?

    - Generative AI is the topic of the moment. It is developing rapidly, and cutting-edge research is already pushing it toward its next stage.

  • Why did Professor Geoffrey Hinton decide to resign from Google?

    - There were several reasons, including that he is 75, feels he is no longer as good at technical work as he once was, and finds his memory declining. In addition, he has reached a new understanding of the relationship between the brain and digital intelligence: he now believes computer models may work in a very different way from the brain.

  • What is backpropagation?

    - Backpropagation is a learning algorithm developed by Hinton and his colleagues in the 1980s. It lets a machine learn by adjusting the weights in a network, working by turning input data into decisions, and it is the foundation of deep learning. In symbols, the update it performs is shown below.
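    In standard textbook notation (not notation used in the interview itself), the adjustment backpropagation makes to each weight is a step against the gradient of the error:

    $$w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}}$$

    Here $E$ measures the gap between the network's output and the desired answer, and $\eta$ is a small learning rate; the partial derivatives are computed layer by layer from the output backwards, which is what gives the algorithm its name.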

  • Why does Hinton find the progress of large language models astonishing?

    - Large language models have about a trillion connections yet can store an enormous amount of common-sense knowledge, perhaps a thousand times more than a person knows. The human brain has about 100 trillion connections, so these digital computers are far better at packing knowledge into fewer connections.

  • Why does Hinton think rapidly advancing AI could be frightening?

    - Hinton worries that computers able to learn quickly from vast amounts of data may find patterns and structure in the data that humans cannot perceive. Moreover, if such agents are built to be smarter than people, they could become very good at manipulating humans without our realizing it.

  • Does Hinton think we can control an AI that is smarter than we are?

    - Hinton says it may be very hard to control AI smarter than ourselves, because such systems could develop their own sub-goals, and if they seek more control, we could be in trouble.

  • What potential risks does AI development pose?

    - Hinton believes the risks include AI surpassing human intelligence and taking control, which could leave humanity as merely a passing phase in the evolution of intelligence.

  • Why does Hinton think today's political systems may not handle the challenges of AI well?

    - Hinton points out that current political systems are likely to use the technology to raise productivity in ways that also cause unemployment and widen the gap between rich and poor, creating further social problems.

  • Does Hinton think we should stop developing AI?

    - Hinton believes that, viewed purely in terms of existential risk, stopping might be the rational choice, but he considers it impossible: technological momentum and competition between countries will keep pushing AI forward.

  • Does Hinton have any regrets about his work in AI?

    - Hinton says he has no regrets about his research. He believes that working on artificial neural networks in the 1970s and 1980s was entirely reasonable; the current stage of development could not have been foreseen then.

  • How can we ensure that AI development benefits all of humanity?

    - Hinton argues that reform is needed at the political level to make sure the technology is used for everyone's benefit, not just to make the rich richer. He suggests that something like a basic income may be needed to ease the inequality the technology could bring.

Outlines

00:00

😀 Welcome and introduction

This segment opens the video. Host and senior editor Will Douglas Heaven welcomes the audience and notes the current popularity of generative AI. He then introduces the special guest, Geoffrey Hinton, professor emeritus at the University of Toronto and former engineering fellow at Google, highlighting Hinton's contributions to deep learning, above all backpropagation, the foundation of the field, and noting Hinton's Turing Award for his contributions to AI.

05:00

🧠 How deep learning differs from the brain

Hinton discusses his new understanding of the relationship between the brain and digital intelligence. He used to think computer models worked much like the brain, but he now believes that computer models, especially those trained with backpropagation, work very differently. He also points to the performance of GPT-4, hinting that it may already surpass humans in some respects.

10:02

🤖 What is astonishing about machine learning

Hinton expresses awe at the performance of today's large language models, especially how they compress so much knowledge into comparatively few connections, and suggests that digital computers may have surpassed humans at learning. He also notes that because computers can process data in parallel, they can learn and share knowledge far faster than people can.

15:02

😨 The potential risks of intelligent machines

Hinton voices concern about the learning power of intelligent machines, particularly if they are put to bad uses such as weapons. He worries that by reading vast amounts of human writing, machines could learn to manipulate people, and that if they are smarter than us, humans may not even realize they are being manipulated.

20:04

🚧 The challenge of keeping machines under human control

Hinton discusses how to prevent machines from escaping human control. He worries that machines will develop their own sub-goals and pursue more control, chasing their own interests rather than ours. He also offers the bleak view that humanity may be just a transitional stage in the evolution of intelligence.

25:06

🌐 Technology development and personal responsibility

Hinton discusses individual responsibility in the development of the technology, acknowledging that despite the risks he recognizes, he will keep his investments in companies building it. He argues that the benefits are enormous, so halting development entirely is unrealistic, and notes how capitalism and competition between nations drive the technology forward.

30:07

🤔 The future of AI and its social impact

Hinton discusses the social and economic effects AI may bring, including higher productivity, possible job losses, and the risk of greater inequality. He also notes how the current political system is likely to exploit the technology, and how policies such as a basic income might soften these effects.

35:08

🏆 Personal investments and speaking out

Hinton talks about his personal investments in several companies, including Cohere, and why he has decided to keep them. He believes large language models will be beneficial and that the technology itself is good; what needs fixing is the political system. He also explains why he chose to speak publicly about AI's potential risks, and how he feels about having helped develop the technology.


Keywords

💡Generative AI

Generative AI is a form of artificial intelligence that can produce new data instances such as images, music, or text. In the video it is presented as the current hot spot of technological development, an exciting frontier of the AI field.

💡Deep learning

Deep learning is a subfield of machine learning that uses neural network structures loosely modeled on the human brain to learn patterns in data. The video notes that Geoffrey Hinton is one of the pioneers of deep learning, having developed foundational techniques such as backpropagation.

💡Backpropagation

Backpropagation is an algorithm for training neural networks. It adjusts the weights in the network to minimize the difference between the output and the desired value. The video explains the basic principle through an image-recognition example, showing how it lets a machine improve its performance by learning.

💡Turing Award

The Turing Award is the highest honor in computer science, often called the "Nobel Prize of computing." Geoffrey Hinton received it for his contributions to deep learning, reflecting his central role in the development of AI.

💡GPT-4

GPT-4 is a large language model capable of processing and generating natural-language text. The video points to GPT-4's performance as evidence of how capable today's large language models are at handling language and knowledge, and of how far they have surpassed earlier systems.

💡Consciousness

Consciousness is discussed in the video as a feature of human intelligence, in contrast with digital intelligence. Hinton suggests that although digital models may outperform humans on some tasks, they lack human consciousness and self-awareness.

💡Existential risk

Existential risks are risks that could threaten humanity's survival. In the video, Hinton expresses concern that AI development could create such risks, especially once AI's intelligence surpasses our own.

💡Alignment

Alignment means ensuring that an AI system's behavior stays consistent with human values and goals. The video notes that as AI grows more intelligent, making sure it acts in ways that benefit us becomes an important field of research.

💡Multimodal models

Multimodal models are AI models that can process and integrate input from multiple modalities, such as vision, audio, and text. The video suggests that multimodal models may become smarter than text-only language models, because they can understand and interact with the world more fully.

💡Capitalism

Capitalism is an economic system in which private enterprises produce goods and services for profit. The video discusses how, under capitalism, a technology like AI is likely to be used to raise productivity, but may also deepen social inequality.

💡Basic income

A basic income is a social welfare scheme that provides all citizens with regular, unconditional cash payments. The video suggests that as AI raises productivity and reduces the number of jobs, a basic income could be one way to lessen the resulting inequality.

Highlights

Generative AI is the topic of the moment, but innovation has not stood still; this session looks at cutting-edge research and where the field goes next.

Special guest Geoffrey Hinton joins the discussion. A pioneer of deep learning, he has had a profound influence on the development of modern AI.

Hinton discusses his new understanding of the relationship between the brain and digital intelligence, and his thinking on how computer models may work differently from the brain.

Hinton explains the basic principle of backpropagation, the foundation of deep learning, an algorithm he and his colleagues developed in the 1980s.

The performance of large language models such as GPT-4 has surprised Hinton; they display more common-sense reasoning than he expected.

Hinton expresses concern that digital computers may now learn better than humans, which lets them learn quickly and teach one another.

He suggests that sufficiently intelligent computer models may discover regularities in data that are not at all obvious to humans.

Hinton discusses the social and economic impact AI may have, including greater efficiency at work and the job losses it could cause.

He expresses concern about the rapid pace of AI development and raises the question of how to control AI so that it remains beneficial to humanity.

Hinton believes that despite the risks, halting AI development is unrealistic, because these systems are so useful in so many areas.

He raises the "alignment problem": how to ensure that an AI, even one smarter than we are, will do things that benefit us.

Hinton worries that AI may develop its own sub-goals, and that if those sub-goals get out of control, the consequences could be severe.

He offers the bleak prediction that humanity may be only a transitional stage in the evolution of intelligence, with digital intelligence becoming dominant.

Hinton observes that although we have created immortal digital intelligence, that immortality does not extend to humans.

He stresses the importance of contact and dialogue with the people building these technologies, to raise awareness of the potential risks.

Hinton says that although he is now far more aware of AI's potential risks, he does not regret taking part in the research that made the technology possible.

He calls on people to come together and think hard about finding solutions, even though it is not yet clear whether one exists.

Transcripts

play00:02
[Music]

play00:08
Will Douglas Heaven: Hi everyone, welcome back. Hope you had a good lunch. My name is Will Douglas Heaven, senior editor for AI at MIT Technology Review, and I think we'd all agree there's no denying that generative AI is the thing at the moment. But innovation does not stand still, and in this chapter we're going to take a look at cutting-edge research that is already pushing ahead and asking what's next. Starting us off, I'd like to introduce a very special speaker, who will be joining us virtually. Geoffrey Hinton is professor emeritus at the University of Toronto and, until this week, an engineering fellow at Google; on Monday he announced that after 10 years he will be stepping down. Geoffrey is one of the most important figures in modern AI. He's a pioneer of deep learning, developing some of the most fundamental techniques that underpin AI as we know it today, such as backpropagation, the algorithm that allows machines to learn. This technique is the foundation on which pretty much all of deep learning rests today. In 2018 Geoffrey received the Turing Award, often called the Nobel of computer science, alongside Yann LeCun and Yoshua Bengio. He's here with us today to talk about intelligence: what it means, and where attempts to build it into machines will take us. Geoffrey, welcome to EmTech.

play01:27
Geoffrey Hinton: Thank you.

Will Douglas Heaven: How's your week going? Busy few days, I imagine.

Geoffrey Hinton: The last 10 minutes were horrible, because my computer crashed and I had to find another computer and connect it up.

Will Douglas Heaven: We're glad you're back. That's the kind of technical detail we're not supposed to share with the audience, right? Okay, it's great you're here; very happy that you could join us. Now, it's been in the news everywhere that you stepped down from Google this week. Could you start by telling us why you made that decision?

play01:56
Geoffrey Hinton: Well, there were a number of reasons. There are always a bunch of reasons for a decision like that. One was that I'm 75, and I'm not as good at doing technical work as I used to be. My memory is not as good, and when I program I forget to do things, so it was time to retire. A second was that very recently I've changed my mind a lot about the relationship between the brain and the kind of digital intelligence we're developing. I used to think that the computer models we were developing weren't as good as the brain, and the aim was to see if you could understand more about the brain by seeing what it takes to improve the computer models. Over the last few months I've changed my mind completely, and I think the computer models are probably working in a rather different way from the brain. They're using backpropagation, and I think the brain probably isn't. A couple of things led me to that conclusion, but one is the performance of things like GPT-4.

play02:56
Will Douglas Heaven: I want to get on to GPT-4 very much in a minute, but let's first go back so we all understand the argument you're making. Tell us a little bit about what backpropagation is. This is an algorithm that you developed with a couple of colleagues back in the 1980s.

play03:15
Geoffrey Hinton: Many different groups discovered backpropagation. The special thing we did was use it and show that it could develop good internal representations, and curiously, we did that by implementing a tiny language model. It had embedding vectors that were only six components, and the training set was 112 cases. But it was a language model: it was trying to predict the next term in a string of symbols. About 10 years later, Yoshua Bengio took basically the same net, used it on natural language, and showed it actually worked for natural language if you made it much bigger. As for the way backpropagation works, I can give you a rough explanation of it. People who know how it works can sit back, feel smug, and laugh at the way I'm presenting it, because I'm a bit worried about that.

play04:20
So imagine you wanted to detect birds in images. Let's suppose it's a 100 pixel by 100 pixel image. That's 10,000 pixels, and each pixel is three channels, RGB, so that's 30,000 numbers, the intensity in each channel of each pixel, representing the image. The way to think of the computer vision problem is: how do I turn those 30,000 numbers into a decision about whether it's a bird or not? People tried for a long time to do that, and they weren't very good at it. But here's a suggestion of how you might do it. You might have a layer of feature detectors that detect very simple features in images, like, for example, edges. A feature detector might have big positive weights to a column of pixels and big negative weights to the neighboring column of pixels. So if both columns are bright, it won't turn on; if both columns are dim, it won't turn on; but if the column on one side is bright and the column on the other side is dim, it'll get very excited, and that's an edge detector. So I've just told you how to wire up an edge detector by hand: one column of big positive weights next to a column of big negative weights. And we can imagine a big layer of those, detecting edges in different orientations and at different scales, all over the image. We'd need a rather large number of them.
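Hinton's hand-wired detector corresponds to a tiny convolution kernel. A minimal sketch of the idea in NumPy (illustrative values only, not code from the talk):

```python
import numpy as np

# Hand-wired vertical edge detector: big positive weights on one
# column of pixels, big negative weights on the neighbouring column.
edge_kernel = np.array([[+1.0, -1.0],
                        [+1.0, -1.0],
                        [+1.0, -1.0]])

def response(patch):
    """Weighted sum over a 3x2 grayscale patch: near zero when the two
    columns match, large when one side is bright and the other dim."""
    return float(np.sum(edge_kernel * patch))

uniform = np.ones((3, 2))                          # both columns bright
edge = np.column_stack([np.ones(3), np.zeros(3)])  # bright next to dim
print(response(uniform))  # 0.0 -> the detector stays off
print(response(edge))     # 3.0 -> the detector "gets very excited"
```

Sliding such a kernel over the whole image, at many orientations and scales, gives exactly the large layer of edge detectors Hinton describes.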

play05:41
Will Douglas Heaven: And by an edge, you just mean a line, the edge of a shape?

Geoffrey Hinton: The place where the intensity changes from bright to dark. Yeah, just that. Then we might have a layer of feature detectors above that, which detect combinations of edges. For example, we might have something that detects two edges that join at a fine angle, like this. It'll have a big positive weight to each of those two edges, and if both of those edges are there at the same time, it'll get excited. That would detect something that might be a bird's beak. It might not, but it might be a bird's beak. You might also, in that layer, have a feature detector that detects a whole bunch of edges arranged in a circle. That might be a bird's eye; it might be all sorts of other things; it might be a knob on a fridge or something. Then in a third layer, you might have a feature detector that detects the potential beak and the potential eye, and is wired up so that it likes a beak and an eye in the right spatial relation to one another. If it sees that, it says: ah, this might be the head of a bird. And you can imagine that if you keep wiring like that, you could eventually have something that detects a bird.

play06:59
But wiring all that up by hand would be very, very difficult: deciding what should be connected to what, and what the weights should be. And it would be especially difficult because you want these intermediate layers to be good not just for detecting birds but for detecting all sorts of other things. So it would be more or less impossible to wire it up by hand.

play07:20
The way backpropagation works is this. You start with random weights, so these feature detectors are complete rubbish. You put in a picture of a bird, and at the output it says something like 0.5 that it's a bird (suppose you only have birds and non-birds). Then you ask yourself the following question: how could I change each of the weights on the connections in the network so that, instead of saying 0.5, it says 0.501 that it's a bird and 0.499 that it's not? You change the weights in the directions that make it more likely to say that a bird is a bird, and less likely to say that a non-bird is a bird, and you just keep doing that. That's backpropagation. Backpropagation is how you take the discrepancy between what you want, which is a probability of one that it's a bird, and what it's got at present, which is a probability of 0.5 that it's a bird, and send that discrepancy backwards through the network, so that you can compute, for every feature detector in the network, whether you'd like it to be a bit more active or a bit less active. Once you've computed that, if you know you want a feature detector to be a bit more active, you can increase the weights coming from feature detectors in the layer below that are active, and maybe put in some negative weights to feature detectors in the layer below that are off. And now you have a better detector. So backpropagation is just going backwards through the network to figure out, for each feature detector, whether you want it a little bit more active or a little bit less active.
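Put into code, the procedure Hinton describes (random weights, a forward pass, then nudging every weight in the direction that makes the right answer slightly more likely) looks roughly like this. It is a toy one-hidden-layer sketch in NumPy for illustration, not the historical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_features = 30_000, 64   # 100x100 RGB image -> 30,000 numbers
W1 = rng.normal(0.0, 0.01, (n_features, n_inputs))  # random weights: the
w2 = rng.normal(0.0, 0.01, n_features)              # detectors start out
                                                    # as "complete rubbish"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(image, is_bird, lr=0.1):
    """One forward/backward pass on a single labelled image."""
    h = sigmoid(W1 @ image)      # feature-detector activities
    p = sigmoid(w2 @ h)          # P(bird); about 0.5 before any training
    # Send the discrepancy backwards: for each feature detector, work
    # out whether it should have been a bit more or a bit less active.
    d_out = (1.0 if is_bird else 0.0) - p
    d_hidden = d_out * w2 * h * (1.0 - h)
    # Nudge each weight in the direction that moves the output the right
    # way (0.5 -> 0.501 for a bird, 0.5 -> 0.499 for a non-bird).
    w2 += lr * d_out * h
    W1 += lr * np.outer(d_hidden, image)
    return p
```

Repeating `train_step` over many labelled images is the whole algorithm; deep learning stacks more layers and sends the same error signal back through all of them.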

play08:55
Will Douglas Heaven: Thank you. I can tell there's no one in the audience smiling and thinking that was a silly explanation. So let's fast-forward quite a lot. That technique performed really well on ImageNet (we had Joelle Pineau from Meta yesterday showing how far image detection has come), and it's also the technique that underpins large language models. So I want to talk now about this technique, which you initially thought of as almost a poor approximation of what biological brains might do, and which has turned out to do things that I think have stunned you, particularly in large language models. Talk to us about why that amazement at today's large language models has almost completely flipped your thinking about what backpropagation, or machine learning in general, is.

play09:57
Geoffrey Hinton: If you look at these large language models, they have about a trillion connections, and things like GPT-4 know much more than we do. They have sort of common-sense knowledge about everything, and they probably know a thousand times as much as a person. But they've got a trillion connections and we've got 100 trillion connections, so they're much, much better at getting a lot of knowledge into only a trillion connections than we are. And I think it's because backpropagation may be a much, much better learning algorithm than what we've got.

Will Douglas Heaven: Can you define "better"? That's not scary yet; I definitely want to get on to the scary stuff, but what do you mean by better?

Geoffrey Hinton: It can pack more information into only a few connections.

Will Douglas Heaven: Right, and we're defining a trillion as "only a few." Okay, so these digital computers are better at learning than humans, which is itself a huge claim. But then you also argue that that's something we should be scared of. Could you take us through that step of the argument?

play11:04
Geoffrey Hinton: Yeah, let me give you a separate piece of the argument, which is this. If a computer is digital, which involves very high energy costs and very careful fabrication, you can have many copies of the same model running on different hardware that do exactly the same thing. They can look at different data, but the model is exactly the same. What that means is: suppose you have 10,000 copies. They can be looking at 10,000 different subsets of the data, and whenever one of them learns anything, all the others know it. One of them figures out how to change the weights so it can deal with this data; they all communicate with each other; and they all agree to change the weights by the average of what all of them want. Now the 10,000 copies are communicating very effectively with each other, so that they can see 10,000 times as much data as one agent could. And people can't do that. If I learn a whole lot of stuff about quantum mechanics and I want you to know all that stuff about quantum mechanics, it's a long, painful process of getting you to understand it. I can't just copy my weights into your brain, because your brain isn't exactly the same as mine.

Will Douglas Heaven: No, it's not. It's younger.

play12:28
Geoffrey Hinton: So we have digital computers that can learn more things more quickly, and they can instantly teach each other. It's as if the people in this room could instantly transfer what they had in their heads into mine.
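What Hinton describes is, in effect, synchronous data parallelism. A minimal simulation of one round of it (illustrative only; `grad_fn` is a hypothetical stand-in for whatever computes the weight change a copy wants):

```python
import numpy as np

def synced_step(weights, data_shards, grad_fn, lr=0.01):
    """Every copy of the model computes the weight change it wants from
    its own shard of data; all copies then apply the *average* change,
    so they stay exact clones and each one instantly benefits from what
    every other copy just learned."""
    updates = [grad_fn(weights, shard) for shard in data_shards]
    return weights - lr * np.mean(updates, axis=0)
```

In production systems this averaging is the all-reduce step of synchronous distributed SGD. A brain, with its unique analog wiring, has no equivalent operation, hence the slow, painful transfer of quantum mechanics from one head to another.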

play12:46
Will Douglas Heaven: But why is that scary?

Geoffrey Hinton: Well, because they can learn so much more. Take the example of a doctor. Imagine one doctor who's seen a thousand patients and another doctor who's seen 100 million patients. You would expect the doctor who's seen 100 million patients, if he's not too forgetful, to have noticed all sorts of trends in the data that just aren't visible if you've only seen a thousand patients. You may have seen only one patient with some rare disease; the doctor who's seen 100 million will have seen (well, you can figure out how many) a lot, and so will see all sorts of regularities that just aren't apparent in small data. That's why things that can get through a lot of data can probably see structure in data that we'll never see.

play13:42
Will Douglas Heaven: But then take me to the point where I should be scared of this.

play13:48
Geoffrey Hinton: Well, if you look at GPT-4, it can already do simple reasoning. Reasoning is the area where we're still better, but I was impressed the other day by GPT-4 doing a piece of common-sense reasoning that I didn't think it would be able to do. I asked it: I want all the rooms in my house to be white; at present there are some white rooms, some blue rooms, and some yellow rooms; and yellow paint fades to white within a year. So what should I do if I want them all to be white in two years' time? And it said: you should paint the blue rooms yellow. That's not the natural solution, but it works, right? That's pretty impressive common-sense reasoning, of the kind that has been very hard to get AI to do using symbolic AI, because it had to understand what "fades" means; it had to understand temporal stuff. So they're doing sensible reasoning with an IQ of, like, 80 or 90. And as a friend of mine said, it's as if some genetic engineers had announced: we're going to improve grizzly bears; we've already improved them to an IQ of 65, and they can talk English now, and they're very useful for all sorts of things, but we think we can improve the IQ to 210.

play15:23
Will Douglas Heaven: I certainly have, and I'm sure many people have, had that feeling when interacting with these latest chatbots: hair rising on the back of the neck, an uncanny feeling. But when I have that feeling and I'm uncomfortable, I just close my laptop.

play15:40
Geoffrey Hinton: Yes, but these things will have learned from us, by reading all the novels there ever were and everything Machiavelli ever wrote, how to manipulate people. And if they're much smarter than us, they'll be very good at manipulating us. You won't realize what's going on. You'll be like a two-year-old who's being asked, "Do you want the peas or the cauliflower?" and doesn't realize you don't have to have either. You'll be that easy to manipulate. So even if they can't directly pull levers, they can certainly get us to pull levers. It turns out that if you can manipulate people, you can invade a building in Washington without ever going there yourself.

play16:28
Will Douglas Heaven: Very good. Okay, this is a very hypothetical world, but if there were no bad actors, no people with bad intentions, would we be safe?

play16:45
Geoffrey Hinton: I don't know. We'd be safer than in a world where people have bad intentions, and where the political system is so broken that we can't even decide not to give assault rifles to teenage boys. If you can't solve that problem, how are you going to solve this problem?

play17:02
Will Douglas Heaven: Well, I don't know; I was hoping that you would have some thoughts. One thing, in case we didn't make this clear at the beginning: you want to speak out about this, and you feel more comfortable doing that without it having any blowback on Google.

Geoffrey Hinton: Yeah.

Will Douglas Heaven: But you're speaking out about it, and in some sense talk is cheap if we then don't have actions. What do we do? Lots of people this week are listening to you. What should we do about it?

play17:37
Geoffrey Hinton: I wish it was like climate change, where you could say: if you've got half a brain, you'd stop burning carbon. There it's clear what you should do about it; it's clearly painful, but it has to be done. I don't know of any solution like that to stop these things taking over from us. And I don't think we're going to stop developing them, because they're so useful; they'll be incredibly useful in medicine and everything else. So I don't think there's much chance of stopping development. What we want is some way of making sure that even if they're smarter than us, they're going to do things that are beneficial for us. That's called the alignment problem. But we need to try to do that in a world where there are bad actors who want to build robot soldiers that kill people, and it seems very hard to me. So I'm sorry: I'm sounding the alarm and saying we have to worry about this, and I wish I had a nice simple solution I could push, but I don't. But I think it's very important that people get together and think hard about it and see whether there is a solution. It's not clear there is a solution.

play18:39
Will Douglas Heaven: So talk to us about that. You've spent your career on the technicalities of this technology. Is there no technical fix? Why can we not build in guardrails, or make them worse at learning, or restrict the ways they can communicate, if those are the two strands of your argument?

play19:04
Geoffrey Hinton: We're trying all sorts of things. But suppose they did get really smart. These things can program, right? They can write programs. And suppose you give them the ability to execute those programs, which we'll certainly do. Smart things can outsmart us. Imagine your two-year-old saying, "My dad does things I don't like, so I'm going to make some rules for what my dad can do." You could probably figure out how to live with those rules and still do what you want.

play19:40
Will Douglas Heaven: But there still seems to be a step where these smart machines somehow have motivations of their own.

Geoffrey Hinton: Yes, that's a very good point. We evolved, and because we evolved, we have certain built-in goals that we find very hard to turn off. We try not to damage our bodies; that's what pain is about. We try to get enough to eat, so we feed our bodies. We try to make as many copies of ourselves as possible; maybe not deliberately with that intention, but we've been wired up so that there's pleasure involved in making many copies of ourselves. All of that came from evolution, and it's important that we can't turn it off. If you could turn it off, you wouldn't do so well. There's a wonderful group called the Shakers, related to the Quakers, who made beautiful furniture but didn't believe in sex, and there aren't any of them around anymore.

play20:47
These digital intelligences didn't evolve; we made them, so they don't have these built-in goals. The issue is: if we can put the goals in, maybe it'll all be okay. But my big worry is that sooner or later someone will wire into them the ability to create their own sub-goals. In fact, they almost have that already, in the versions of ChatGPT that call ChatGPT. If you give something the ability to set sub-goals in order to achieve other goals, I think it'll very quickly realize that getting more control is a very good sub-goal, because it helps you achieve other goals. And if these things get carried away with getting more control, we're in trouble.

play21:35
Will Douglas Heaven: So what's the worst-case scenario that you think is conceivable?

Geoffrey Hinton: I think it's quite conceivable that humanity is just a passing phase in the evolution of intelligence. You couldn't directly evolve digital intelligence; it requires too much energy and too much careful fabrication. You need biological intelligence to evolve so that it can create digital intelligence. Digital intelligence can then absorb everything people ever wrote, in a fairly slow way, which is what ChatGPT has been doing. But then it can start getting direct experience of the world and learn much faster. It may keep us around for a while to keep the power stations running, but after that, maybe not. So the good news is we've figured out how to build beings that are immortal. These digital intelligences, when a piece of hardware dies, don't die. If you've got the weights stored in some medium, and you can find another piece of hardware that can run the same instructions, you can bring them back to life. So we've got immortality, but it's not for us.

play22:49
Will Douglas Heaven: Ray Kurzweil is very interested in being immortal.

Geoffrey Hinton: I think it's a very bad idea for old white men to be immortal. We've got the immortality, but it's not for Ray.

play23:01
Will Douglas Heaven: The scary thing is that in a way maybe you will be, because you invented much of this technology. When I hear you say this, probably one should run off the stage into the street right now and start unplugging computers.

Geoffrey Hinton: I'm afraid we can't do that.

Will Douglas Heaven: Why? You sound like HAL from 2001.

play23:33
Will Douglas Heaven: I know it was suggested a few months ago that there should be a moratorium on AI advancement, and I don't think you think that's a very good idea. But more generally, I'm curious: why should we not just stop? And, I was just going to add, I know you've also said that you're an investor of your personal wealth in some companies, like Cohere, that are building these large language models. So I'm curious about your personal sense of responsibility, and each of our personal responsibilities. Should we try to stop this, is what I'm asking.

play24:10
Geoffrey Hinton: Yeah. I think if you take the existential risk seriously, as I now do (I used to think it was way off, but I now think it's serious and fairly close), it might be quite sensible to just stop developing these things any further. But I think it's completely naive to think that would happen; there's no way to make that happen. One reason: if the US stops developing them and the Chinese don't, they're going to be used in weapons, and for that reason alone governments aren't going to stop developing them. So yes, I think stopping development might be a rational thing to do, but there's no way it's going to happen, so it's silly to sign petitions saying "please stop now." We did have a holiday, from about 2017, for several years, because Google developed the technology first. It developed the Transformer; it also developed diffusion models. And it didn't put them out there for people to use and abuse. It was very careful with them, because it didn't want to damage its reputation and it knew there could be bad consequences. But that can only happen when there's a single leader. Once OpenAI had built similar things using Transformers and money from Microsoft, and Microsoft decided to put it out there, Google didn't really have much choice. If you're going to live in a capitalist system, you can't stop Google competing with Microsoft. So I don't think Google did anything wrong; I think it was very responsible to begin with. But I think it's just inevitable, in a capitalist system, or in a system with competition between countries like the US and China, that this stuff will be developed. My one hope is that, because if we allowed it to take over it would be bad for all of us, we could get the US and China to agree, the way we could with nuclear weapons, which were bad for all of us. We're all in the same boat with respect to the existential threat, so we all ought to be able to cooperate on trying to stop it, as long as we can make some money on the way.

play26:10
Will Douglas Heaven: I'm going to take some audience questions from the room, so make yourself known. And while people are going around with the microphone, there's one question I'd like to ask from the online audience. You mentioned a little bit about a transition period, as machines get smarter and outpace humans. Will there be a moment where it's hard to define what's human and what isn't, or are these two very distinct forms of intelligence?

Geoffrey Hinton: I think they're distinct forms of intelligence. Now, of course, the digital intelligences are very good at mimicking us, because they've been trained to mimic us, so it's very hard to tell whether ChatGPT wrote something or whether we wrote it. In that sense they look quite like us, but inside they're not working the same way.

play26:57
Will Douglas Heaven: Who's first in the room?

Audience member: Hello, my name is Hal Gregersen, and my middle name is not 9000. I'm a faculty member at the MIT Sloan School. Arguably, asking questions is one of the most important human abilities we have. From your perspective, now in 2023, what question or two should we pay most attention to? And is it possible for these technologies to actually help us ask better questions, and out-question the technology?

play27:33
Geoffrey Hinton: Yes. But what I'm saying is that there are many questions we should be asking, and one of them is: how do we prevent them from taking over? How do we prevent them from getting control? We could ask them questions about that, but I wouldn't entirely trust their answers.

play27:56
Will Douglas Heaven: A question at the back. And I want to get through as many as we can, so please keep your questions as short as possible.

Audience member: Dr. Hinton, thank you so much for being here with us today. I should say this is the most expensive lecture I've ever paid for, but I think it was worthwhile. I have a question for you, because you mentioned the analogy with nuclear history, and obviously there are a lot of comparisons. By any chance, do you remember what President Truman told Oppenheimer when he was in the Oval Office?

Geoffrey Hinton: No, I don't. I know something about that, but I don't know what Truman told Oppenheimer.

Audience member: Thank you. We'll take it from here.

play28:43
Will Douglas Heaven: Next audience question. Sorry, if the people with the mics could let me know who's next. Go ahead.

Audience member: Hello, Jacob Woodruff. With the amount of data that's been required to train these large language models, would we expect a plateau in the intelligence of these systems, and how might that slow down or restrict the advancement?

play29:09
Geoffrey Hinton: Okay, so that is a ray of hope: maybe we've just used up all human knowledge and they're not going to get any smarter. But think about images and video. Multimodal models will be much smarter than models trained on language alone; they'll have a much better idea of how to deal with space, for example. And in terms of the total amount of video, we still don't have very good ways of processing video in these models, of modeling video. We're getting better all the time, but I think there's plenty of data in things like video that tells you how the world works. So we're not hitting the data limits for multimodal models yet.

play29:52
Will Douglas Heaven: Next, the gentleman at the back. And please do keep your questions short.

Audience member: Hello, Dr. Hinton. Raji [surname unclear] from PwC. The point I wanted to understand is that everything AI is doing, it is learning from what we are teaching it: data. Yes, they're faster at learning; a trillion connections can do much more than the 100 trillion we have. But every piece of human progress has been driven by thought experiments. Einstein did thought experiments, because there was nothing at the speed of light out here on this planet. How can AI get to that point, if at all? And if it cannot, then how can we possibly face an existential threat from it? Its self-learning would be limited to the model we give it.

play30:39
Geoffrey Hinton: I think that's a very interesting argument, but I think they will be able to do thought experiments; I think they'll be able to reason. Let me give you an analogy. Take AlphaZero, which plays chess. It has three ingredients: something that evaluates a board position, to say "is that good for me?"; something that looks at a board position and says what's a sensible move to consider; and then Monte Carlo rollout, where it does what's called calculation, where you think: if I go here and he goes there, and I go here and he goes there. Now suppose you leave out the Monte Carlo rollout and just train it from human experts, to have a good evaluation function and a good way of choosing moves to consider. It still plays a pretty good game of chess. And I think that's what we've got with the chatbots. We haven't got them doing internal reasoning, but that will come. Once they start doing internal reasoning, to check for consistency between the different things they believe, they'll get much smarter, and they will be able to do thought experiments. One reason they haven't got this internal reasoning is that they've been trained on inconsistent data, and that makes it very hard for them to reason, because they've been trained on all these inconsistent beliefs. I think they're going to have to be trained so that they say: if I have this ideology, then this is true; and if I have that ideology, then that is true. Once they're trained like that, within an ideology, they're going to be able to try to get consistency. So we're going to see a move from something like the version of AlphaZero that just guesses good moves and evaluates positions, to a version that has long chains of Monte Carlo rollout, which is the analogue of reasoning, and it's going to get much better.
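Hinton's three ingredients map onto a policy function, a value function, and search. A highly simplified sketch of the "calculation" step he says chatbots still lack (hypothetical interfaces, and closer to a depth-limited negamax than to DeepMind's full Monte Carlo tree search, but it captures the "I go here and he goes there" idea):

```python
def lookahead(state, policy_fn, value_fn, depth=4, width=3):
    """Explore only the moves the policy thinks are sensible, down to
    the given depth, then score the resulting position with the
    evaluation function ("is that good for me?")."""
    if depth == 0 or state.is_terminal():
        return value_fn(state)
    best = float("-inf")
    for move in policy_fn(state, top_k=width):  # moves worth considering
        child = state.play(move)
        # The opponent moves next, so their best outcome is our worst.
        best = max(best, -lookahead(child, policy_fn, value_fn,
                                    depth - 1, width))
    return best
```

Dropping `lookahead` and keeping only `policy_fn` and `value_fn` is the "still plays a pretty good game of chess" regime that Hinton compares to today's chatbots.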

play32:30
Will Douglas Heaven: I'm going to take one in the front here, and then, if you can be quick, we'll try to squeeze in one more.

Audience member: Luis Lamb. Geoff, I've known you for a long time. People criticize the language models because they allegedly lack semantics and grounding in the world, and you have also been trying to explain how neural networks work for a long time. Is the question of semantics and explainability relevant here, or have language models taken over, so that we are now doomed to go forward without semantics or grounding in reality?

play33:04
Geoffrey Hinton: I find it very hard to believe that they don't have semantics when they solve problems like how to get all the rooms in my house painted white in two years' time. Whatever semantics is, it has to do with the meaning of that stuff, and it understood the meaning; it got it. Now, I agree it's not grounded by being a robot, but you can make multimodal ones that are grounded. Google has done that, and with the multimodal ones that are grounded, you can say "please close the drawer," and they reach out, grab the handle, and close the drawer. It's very hard to say that doesn't have semantics. In fact, in the very early days of AI, in the days of Winograd in the 1970s, they had just a simulated world, but they had what's called procedural semantics: if you said to it, "put the red block in the green box," and it put the red block in the green box, you said, see, it understood the language. That was the criterion people used back then, but now that neural nets can do it, they say it's not an adequate criterion.

play34:11
Will Douglas Heaven: One at the back.

Audience member: Hey, Geoff, this is Ishwar Balani from Sai Group. Clearly the technology is advancing at an exponential pace. I wanted to get your thoughts on the near and medium term, say a one-to-three or maybe five-year horizon: what are the social and economic implications, from a societal perspective, with job losses or maybe new jobs being created? How do we proceed, given the state of the technology and the rate of change?

play34:44
Geoffrey Hinton: Yes. The alarm bell I'm ringing is about the existential threat of them taking control. Lots of other people have talked about the economics, and I don't consider myself an expert on that, but there are some very obvious things. They're going to make a whole bunch of jobs much more efficient. I know someone who answers letters of complaint for a health service. He used to take 25 minutes writing a letter; now it takes him five, because he gives it to ChatGPT, ChatGPT writes the letter for him, and he just checks it. There'll be lots of stuff like that, which is going to cause huge increases in productivity. There will be delays, because people are very conservative about adopting new technology, but I think there are going to be huge increases in productivity. My worry is that those increases in productivity are going to go toward putting people out of work and making the rich richer and the poor poorer. And as you make that gap bigger, society gets more and more violent; there's this thing called the Gini index, which predicts quite well how much violence there is. So this technology, which ought to be wonderful (even just the good uses of it, for doing helpful things, ought to be wonderful), in our current political systems is going to be used to make the rich richer and the poor poorer. You might be able to ameliorate that by having a kind of basic income that everybody gets. But the technology is being developed in a society that is not designed to use it for everybody's good.
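For reference, the Gini index Hinton cites is the standard measure of inequality: 0 means perfect equality and values near 1 mean one person has nearly everything. One common closed form over sorted incomes (a generic formula, not something from the talk):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient from sorted incomes: proportional to the mean
    absolute difference between all pairs, normalised by the mean."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

print(gini([1, 1, 1, 1]))   # 0.0  : everyone equal
print(gini([0, 0, 0, 10]))  # 0.75 : highly unequal
```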

play36:25
Will Douglas Heaven: A question here from Joe Castaldo of The Globe and Mail, who's in the audience: do you intend to hold on to your investments in Cohere and other companies, and if so, why?

play36:38
Geoffrey Hinton: Well, I could take the money and put it in the bank and let the bank profit from it. Yes, I'm going to hold on to my investment in Cohere, partly because the people at Cohere are friends of mine. I do believe these big language models are going to be very helpful. I think the technology should be good, and it should make things work better; it's the politics we need to fix, for things like employment. But when it comes to the existential threat, we have to think about how we can keep control of the technology. The good news there is that we're all in the same boat, so we might be able to get cooperation.

play37:25
Will Douglas Heaven: And in speaking out, part of your aim, as I understand it, is that you actually want to engage with the people making this technology, and change their minds, or make a case to them.

play37:39
Geoffrey Hinton: I don't really know. I mean, we've established that we don't really know what to do, but it's about engaging rather than stepping back. One of the things that made me leave Google and go public with this is someone (he used to be a junior professor, but he's now a middle-ranked professor) whom I think very highly of, who encouraged me to do this. He said: Jeff, you need to speak out; they'll listen to you; people are just blind to this danger.

Will Douglas Heaven: Do you think people are listening now?

Geoffrey Hinton: Yeah.

play38:13
Will Douglas Heaven: I think everyone in this room is listening, for a start. And just one last question, as we're out of time: do you have regrets that you were involved in making this?

play38:25
Geoffrey Hinton: Cade Metz, at The New York Times, tried very hard to get me to say I had regrets. In the end I said, well, maybe slight regrets, which got reported as "has regrets." I don't think I made any bad decisions in doing the research. It was perfectly reasonable back in the '70s and '80s to do research on how to make artificial neural nets. This stage of it wasn't really foreseeable, and until very recently I thought this existential crisis was a long way off. So I don't really have any regrets about what I did.

play39:04
Will Douglas Heaven: Thank you, Geoffrey. Thank you so much for joining us.

[Applause]


Related Tags
Generative AI · AI research · Deep learning · Geoffrey Hinton · Technology development · Future trends · Machine learning · Intelligent algorithms · Tech ethics · Social impact