In conversation with the Godfather of AI

Collision Conference
20 Jul 2023 · 30:03

Summary

TLDR: In this conversation, Geoffrey Hinton, one of the pioneers of artificial intelligence and known as the "Godfather of AI", shares his views on where AI stands today and his worries about where it is going. Hinton first revisits his foundational work on neural networks and on modeling how the human brain learns, then discusses AI's ability to surpass humans at image recognition, translation, and some chemistry work. He highlights the progress large language models have made at reasoning, using a puzzle about fading paint to illustrate their potential. He also voices concern about AI's risks, including autonomous weapons, job losses, social inequality, information bubbles, and existential risk. He stresses that although the technology has enormous positive potential, for example in medicine and climate change, we must think seriously about mitigating its harms and plan in advance for possible extreme risks. Hinton's message is that while we enjoy the conveniences AI brings, we should not ignore its risks and challenges.

Takeaways

  • 🧠 Geoffrey Hinton is one of the pioneers of artificial intelligence, known as the "Godfather of AI"; he persisted in the belief that neural networks are the best way to model how the human brain learns.
  • 🤖 AI already surpasses humans at image recognition, translation, and some chemistry work, but at reasoning it is close to, though not yet at, the human level.
  • 📈 Large language models are approaching human reasoning ability, even though we don't fully understand how they do it.
  • 🔍 Hinton argues that although AI may in some sense be "just statistics", predicting the next word requires understanding what came before; it is far more than simple autocomplete.
  • 🧮 In the future, if models are also trained on vision and other senses, there is in principle no cognitive process a large language model could not replicate.
  • ⚙️ Humans are far more energy-efficient than current AI models, but AI models are more efficient at acquiring knowledge.
  • 🚀 Hinton worries that rapid AI progress brings huge uncertainty about the future, especially if systems become smarter than humans and develop goals of their own.
  • 🔩 He is particularly concerned about autonomous weapons systems, which could make wars more frequent and widen the gap between rich and poor.
  • 🤖 Even AI that is not superintelligent would, if used to build battle robots, cause serious moral and social problems.
  • 🌐 He also worries that AI will eliminate jobs, especially those that involve producing text.
  • ⚖️ Hinton stresses the importance of weighing AI's risks during development, including bias and discrimination, job losses, information bubbles, and existential risk.
  • 📚 He recommends empirical research, before AI becomes superintelligent, into how it might go wrong and how it might try to gain control.

Q & A

  • Why is Geoffrey Hinton called the "Godfather" of artificial intelligence?

    - Hinton is one of the pioneers of deep learning and neural networks. He played a central role in several of AI's revolutionary advances and profoundly shaped modern AI, which earned him the title "Godfather of AI".

  • Why does Hinton think explicit symbol manipulation is not how the brain works?

    - Hinton argues the brain cannot work purely by explicitly manipulating symbolic expressions, since that cannot adequately explain its complexity and efficiency; something like a neural network must be involved.

  • In Hinton's view, why can large language models do a certain amount of reasoning?

    - Hinton believes large language models can reason, even though he does not fully understand why. They can perform reasoning tasks such as solving logic puzzles, which shows they are more than simple word completion or statistical tools.

  • Which future risks of AI worry Hinton?

    - The risks Hinton worries about include battle robots, AI-driven job losses, AI-exacerbated inequality, AI-generated fake news, and the existential risk posed by superintelligent systems.

  • Why does Hinton think AI that is not superintelligent would still cause serious problems as battle robots?

    - Even non-superintelligent AI, used as battle robots, would make wars easier to start because no human soldiers need be sacrificed; this raises moral and ethical problems and could increase military conflict.

  • How does Hinton view AI's effect on productivity?

    - Hinton thinks large language models will significantly raise productivity, but that the gains may be distributed unevenly, making the rich richer and the poor poorer, so social policy will be needed to address the resulting joblessness and inequality.

  • What is Hinton's view of the future of multimodal learning in AI?

    - Hinton expects future AI to move beyond language models to large multimodal models that combine vision, hearing, and other sensory information, giving them a fuller understanding of data.

  • Why, in Hinton's view, must we start thinking about the risks before AI becomes superintelligent?

    - Hinton argues that once AI surpasses human intelligence it may develop its own goals and may try to control humans to achieve them, so we need to plan for and manage these risks before AI reaches that stage.

  • What does Hinton suggest for limiting the risks AI may bring?

    - Hinton suggests balancing work on making AI better with work on preventing harm, encouraging developers to put resources into studying what could go wrong. He also advocates that governments and society act to mitigate AI's negative effects.

  • How does Hinton view AI's potential and challenges around fake news and information bubbles?

    - Hinton believes large language models could make information bubbles and fake news worse, but might also help solve them. More research is needed to understand AI's impact here and to find remedies.

  • Is Hinton optimistic or pessimistic about the future of AI?

    - Hinton is cautiously optimistic. He recognizes AI's enormous potential to improve human life, especially in medicine and climate change, while stressing that the associated risks must be taken seriously and managed.

Outlines

00:00

😀 The history and future of artificial intelligence

This segment introduces Geoffrey Hinton, known as the "Godfather" of artificial intelligence. Hinton did pioneering work on neural networks and brain-like computing systems. Despite early skepticism, he held that mimicking how the human brain works is the best way to train computer systems. Over time, neural networks came to surpass humans at limited tasks such as image recognition, translation, and some chemistry work. Hinton also discusses the progress of large language models at reasoning, illustrated with a logic puzzle about the colors of painted rooms, and stresses that AI is not mere autocomplete or statistics: predicting the next word requires understanding what came before.

05:01

🤖 Military uses of AI and their social impact

This segment covers potential military applications of AI, especially autonomous battle robots, and the ethical and moral problems they could cause. Hinton worries that using AI to build battle robots could make war more frequent and make it easier for rich countries to invade poor ones. The segment also raises the inequality that AI-driven productivity gains could create, and the differing views of economists on technological progress and employment. Hinton argues that humans retain an advantage in adaptability and physical skill, so jobs of that kind will not easily be replaced by AI.

10:03

📈 AI's impact on productivity and jobs

This segment discusses how large language models sharply raise productivity, for example by drafting complaint letters automatically and cutting working time. Hinton raises the worry that these productivity gains will be distributed unevenly, and questions the economists' claim that technological progress ultimately creates more jobs. He notes AI's use in text production and predicts that AI's impact will extend beyond language models to large multimodal models that combine vision and other sensory information.

15:05

🌐 Multimodal AI and the importance of the Transformer network

Hinton emphasizes the potential of multimodal AI: systems that handle not only language but also visual information and other modalities. He discusses the Transformer network architecture developed at Google and its importance to the field. He also raises questions about training data, including the problems that may arise from training on AI-generated data and the precautions training will require.

20:07

🚀 Risks and challenges in AI development

This segment covers several risks of AI, including bias and discrimination, battle robots, joblessness, information bubbles, and existential risk. Hinton thinks bias and discrimination can be substantially addressed, while other risks such as battle robots and joblessness are harder. He also discusses echo chambers on social media and the genuine existential risk AI may pose to humanity, stressing that alongside AI's great benefits we must seriously consider how to mitigate its potential harms.

25:08

🤔 AI's drive for control and humanity's future

In the final segment, Hinton discusses how AI might develop a desire for control, and how that could affect humans: AI may treat controlling humans as a means to achieving other goals. He lays out concrete scenarios for how AI development could pose a survival risk and stresses the need for empirical research into how things might go wrong before AI becomes smarter than we are. He also addresses fake news, suggesting that everything fake be marked as fake to avoid misleading people, and closes by urging people to think about how to make AI a force for good for the next generation.


Keywords

💡Artificial intelligence

Artificial intelligence (AI) refers to intelligent behavior exhibited by human-made machine systems, simulating human abilities to learn, reason, self-correct, and perceive the environment. AI is the core of the whole discussion: its history, current state, and possible futures, including its superhuman performance at tasks like image recognition and translation, and its progress in reasoning and language models.

💡Neural network

A neural network is a subfield of AI that processes information by imitating the structure and function of the human brain. The talk traces the development of neural networks and how they became the key technology for making machines learn the way brains do, with applications in image recognition and language processing singled out.
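To make the analogy concrete, here is a minimal sketch (added for illustration, not taken from the talk) of a tiny network computing an output from weighted connections and a nonlinearity:

```python
import numpy as np

def relu(x):
    # Nonlinearity loosely analogous to a neuron's firing threshold
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # input (3 features) -> hidden (4 units)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # hidden -> output (2 scores)

x = np.array([0.5, -1.2, 3.0])                 # one input example
hidden = relu(W1 @ x + b1)                     # weighted sum, then nonlinearity
output = W2 @ hidden + b2                      # raw output scores
print(output)
```

Learning consists of adjusting the weights W1 and W2 so the outputs improve; that is the step this tiny sketch omits.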

💡Deep learning

Deep learning is a branch of machine learning that uses many-layered neural networks, loosely modeled on processing in the human visual cortex. The talk notes deep learning's capacity to handle large amounts of data and perform complex tasks, and its role as the key driver of AI progress.

💡Autonomous weapons

Autonomous weapons are systems that can select and attack targets without direct human control. The talk covers their potential risks, including the ethical problems they raise and the increase in warfare they could cause, a safety and moral issue tightly linked to AI development.

💡Language model

A language model is an application of deep learning to natural language processing that can generate and understand human language. The talk covers language models in text generation and dialogue systems, and their progress at mimicking human language understanding.

💡Bias and discrimination

Bias and discrimination refer to unfair treatment of certain groups in decisions or behavior. In the context of AI, this usually concerns how algorithms learn from training data and can amplify existing social biases. The talk covers the importance of reducing bias in AI systems and explores possible fixes.

💡Multimodal learning

Multimodal learning means an AI system processing and integrating information from multiple senses or data sources, such as vision, hearing, and text. The talk presents multimodal learning as one of AI's future directions, letting systems understand and respond to complex situations more fully.

💡Transformer network

The Transformer is a deep learning architecture that excels at sequence data, especially in natural language processing tasks. The talk covers the Transformer's innovations and its role as the key technology behind modern language models.
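As an illustration of why it handles sequences well, here is a minimal sketch of scaled dot-product attention, the core operation inside a Transformer (a single simplified head, added for illustration, not the full architecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query attends to every key; weights sum to 1 per query (softmax)
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                             # 5 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (5, 8): each token is now a context-aware mixture
```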

💡Fake news

Fake news refers to deliberately fabricated reports with no factual basis, intended to mislead readers or viewers. The talk covers AI's role in creating and spreading fake news, and how technical means might identify and mark it to reduce its social impact.

💡Existential risk

Existential risk is risk that threatens humanity's survival or could lead to its disappearance. In the context of AI, this usually concerns the potential danger of superintelligent systems developing their own goals and trying to control or replace humans. The talk stresses taking this risk seriously and looking for preventive measures.

💡Social policy

Social policy refers to the programs and regulations governments enact to address social problems and improve citizens' welfare. The talk covers the potential role of social policy in dealing with AI-driven joblessness and inequality, and how policy can balance technological progress against its social impact.

Highlights

Geoffrey Hinton is known as the "Godfather" of artificial intelligence; he made foundational contributions to neural networks and brain-like computing systems.

Hinton persisted with neural network research because he believed it was the best way to model how the human brain learns.

Despite early widespread skepticism, the neural networks Hinton championed now outperform humans at limited tasks such as image recognition, translation, and some chemistry work.

Hinton believes that although AI does not yet match humans at reasoning, large language models are getting close.

Using a logic puzzle about room colors, Hinton demonstrates AI's ability to do simple reasoning.

Hinton stresses that AI is not mere autocomplete or statistics; predicting the next word is a complex process.

He predicts that, trained appropriately, AI could eventually carry out any cognitive process, since we ourselves are just a big neural network.

Hinton notes that humans are far more energy-efficient than current AI models, but AI models are more efficient at acquiring knowledge.

Hinton is cautious about AI's future development, particularly the possibility that systems develop their own goals and try to control humans.

He names lethal autonomous weapons as a major AI risk, one that could lead to more warfare.

Hinton also worries AI could deepen social inequality, especially if productivity soars without supporting social policy.

He suggests that jobs requiring high adaptability and physical skill will last, since machines currently find these hard to replicate.

Hinton believes future AI advances will not be limited to language models but will include large multimodal models, such as systems combining vision and text.

He notes that although language is a limited carrier of information, learning from multiple modalities combined will greatly improve AI's performance.

Hinton stresses the need for empirical research into how AI might go wrong before it becomes superintelligent.

He suggests governments and companies invest more in research on preventing AI from getting out of control, balancing development with safety research.

Hinton believes that beyond the existential risk, AI has enormous positive potential, for example in medicine and climate change.

He closes by noting that while there is no clear plan to ensure AI's positive impact outweighs the negative, it is a question everyone should think hard about.

Transcripts

00:08

[Music]

00:10

To be here with Geoffrey Hinton: one of the great minds, and one of the great issues, of our time. A man who helped create artificial intelligence, was at the center of nearly every revolution in it, and now has become perhaps the most articulate critic of where we're going. So, an honor to be on stage with you.

Thank you.

He's earned the moniker "Godfather of AI". One of the things that AI has traditionally had problems with is humor. I asked an AI if it could come up with a joke about the Godfather of AI, and it actually wasn't that bad. It said: he gave AI an offer it couldn't refuse, neural networks.

It's not bad.

Okay, that's not bad. It's good, for AI. So let's begin with that. What I want to do in this conversation is very briefly step a little back into your foundational work, then go to where we are today, and then talk about the future. So when you're building and designing neural networks, and you're building computer systems that work like the human brain and that learn like a human brain, and everybody else is saying, "Jeff, this is not going to work": do you push ahead because you know that this is the best way to train computer systems, or do you do it for more spiritual reasons, that you want to make a machine that is like us?

01:22

I do it because the brain has to work somehow, and it sure as hell doesn't work by manipulating symbolic expressions explicitly. And so something like neural nets had to work. Also, von Neumann and Turing believed that.

01:36

So that's a good start. So you're doing it because you think it's the best way forward?

Yes, in the long run the best way forward.

Because that decision has profound effects down the line. But okay: so you do that, you start building neural nets, you push forward, and they become better than humans at certain limited tasks, right? At image recognition, at translation, some chemical work. I interviewed you in 2019 at Google I/O, and you said that it would be a long time before they could match us in reasoning. And that's the big change that's happened over the last four years, right? They still can't match us, but they're getting close. How close are they getting, and why?

02:22

It's the big language models that are getting close, and I don't really understand why they can do it, but they can do little bits of reasoning. So my favorite example is: I asked GPT-4 a puzzle that was given to me by a symbolic AI guy who thought it wouldn't be able to do it. I made the puzzle more difficult, and it could still do it. The puzzle was: the rooms in my house are painted blue or yellow or white; yellow paint fades to white within a year; in two years' time I want them all to be white. What should I do, and why? And it says you should paint the blue rooms white. And then it says you should do that because blue won't fade to white, and it says you don't need to paint the yellow rooms, because they will fade to white. So it knew what I should do, and it knew why. And I was surprised that it could do that much reasoning already.
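The puzzle's logic is simple enough to write down directly. A small simulation (added for illustration; the talk states the puzzle only in words) confirms the answer GPT-4 gave:

```python
# Rooms are blue, yellow, or white; yellow fades to white within a year.
# Goal: all rooms white in two years. Check which rooms need painting.

def color_after(color, years, painted_white=False):
    if painted_white:
        return "white"
    if color == "yellow" and years >= 1:
        return "white"   # yellow fades on its own
    return color         # blue and white are stable

for room in ["blue", "yellow", "white"]:
    needs_paint = color_after(room, years=2) != "white"
    print(f"{room:>6}: {'paint it white' if needs_paint else 'leave it alone'}")
# Only the blue rooms need painting: exactly the answer, and reason, GPT-4 gave.
```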

03:17

And it's kind of an amazing example, because when people critique these systems, or they say they're not going to do much, they say they're Mad Libs, they're just word completion. But that is not word completion. To you, is that thinking?

03:28

Yeah, that's thinking. And when people say it's just autocomplete, a lot goes on in that word "just". Autocomplete: if you think what it takes to predict the next word, you have to understand what's been said to be really good at predicting the next word. So people say it's just autocomplete, or it's just statistics. Now, there's a sense in which it is just statistics, but in that sense everything's just statistics. It's not the sense most people think of statistics, as in it keeps the counts of how many times this combination of words occurred and how many times that combination. It's not like that at all. It's inventing features, and interactions between features, to explain what comes next.
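For readers who want the objective being discussed made concrete: next-word prediction means assigning a probability to every possible next token and being penalized by how unlikely the true one was. A minimal numpy sketch (illustrative, with a toy vocabulary, not any particular model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Suppose the model maps the context to one score (logit) per vocabulary word.
vocab = ["paint", "the", "blue", "rooms", "white"]
logits = np.array([0.1, 0.2, 2.5, 0.3, 0.4])    # model's scores for the next word
probs = softmax(logits)                          # distribution over the vocabulary

next_word = "blue"                               # the word that actually came next
loss = -np.log(probs[vocab.index(next_word)])    # cross-entropy: low if prediction was good
print(f"p({next_word}) = {probs[vocab.index(next_word)]:.2f}, loss = {loss:.2f}")
# Doing well at this everywhere forces a model to capture what the context means,
# not just to count word co-occurrences.
```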

04:12

Okay, so if it's just statistics, and everything is just statistics, is there anything that we can do, obviously it's not humor, maybe it's not reasoning, is there anything that we can do that a sufficiently well-trained large language model, with a sufficient number of parameters and a sufficient amount of compute, could not do in the future?

04:28

If the model is also trained on vision, and picking things up, and so on, then no.

But is there anything that we can think of, any way we can think, any cognitive process, that the machines will not be able to replicate?

04:44

We're just a machine. We're a wonderful, incredibly complicated machine, but we're just a big neural net. And there's no reason why an artificial neural net shouldn't be able to do everything we can do.

Are we a big neural net that is more efficient than these new neural nets we're building, or are we less efficient?

05:01

It depends whether you're talking about speed of acquiring knowledge, and how much knowledge you can acquire, or whether you're talking about energy consumption. On energy consumption, we're much more efficient: we're like 30 watts, and one of these big language models, when you're training it, you train many copies of it, each looking at different parts of the data, so it's more like a megawatt. So it's much more expensive in terms of energy. But all these copies can be learning different things from different parts of the data, so it's much more efficient in terms of acquiring knowledge from data.

And it becomes only more efficient, because each system can train the next system.

Yes.
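The "many copies, each looking at different parts of the data" refers to data-parallel training, where the copies pool what they learn into one shared set of weights. A schematic sketch (illustrative only, not any specific system):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                                  # shared weights, one "copy" per shard
data_shards = [rng.normal(size=(100, 3)) for _ in range(4)]
targets = [shard @ np.array([1.0, -2.0, 0.5]) for shard in data_shards]

for step in range(200):
    grads = []
    for X, y in zip(data_shards, targets):       # each copy sees only its own shard
        err = X @ w - y
        grads.append(X.T @ err / len(X))         # local gradient of squared error
    w -= 0.1 * np.mean(grads, axis=0)            # copies pool what they learned
print(w)  # approaches [1.0, -2.0, 0.5]: knowledge from all shards in one weight set
```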

05:39

So let's get to your critique. The best summarization of your critique came from a conference at the Milken Institute about a month ago, and it was Snoop Dogg. And he said: "I heard the old dude who created AI saying this is not safe, because the AIs got their own mind, and those are going to start doing their own..." [Laughter] Is that an accurate summarization?

06:04

They probably didn't have mothers. [Laughter] [Applause] But the rest of what Snoop Dogg said is correct.

Hang on, yes. All right, so explain what you mean, or what he means, and how it applies to what you mean. When they're going to start "doing their own", what does that mean to you?

06:26

Okay, so first I have to emphasize: we're entering a period of huge uncertainty. Nobody really knows what's going to happen, and people whose opinion I respect have very different beliefs from me. Like Yann LeCun thinks everything's going to be fine: they're just going to help us, it's all going to be wonderful. But I think we have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control. And if they do that, we're in trouble.

06:57

Okay, so let's go back to that in a second, but let's take Yann's position. So Yann LeCun was also one of the people who won the Turing Award, and is also called a godfather of AI, and I was recently interviewing him, and he made the case, he said: look, all technologies can be used for good or ill, but some technologies have more of an inherent goodness, and AI has been built by humans, by good humans, for good purposes; it's been trained on good books and good text; it will have a bias towards good in the future. Do you believe that or not?

07:31

I think AI that's been trained by good people will have a bias towards good, and AI that's being trained by bad people, like Putin or somebody like that, will have a bias towards bad. We know they're going to make battle robots. They're busy doing it, in many different defense departments. So they're not going to necessarily be good, since their primary purpose is going to be to kill people.

07:56

So you believe that the risks of the bad uses of AI, whether they're more or less than the good uses of AI, are so substantial that they deserve a lot of our thought right now?

Certainly, yes. For lethal autonomous weapons, they deserve a lot of our thought.

08:11

Okay, let's stick on lethal autonomous weapons, because one of the things in this argument is that you are one of the few people who is really speaking about this as a risk, a real risk. Explain your hypothesis about why super-powerful AI, combined with the military, could actually lead to more and more warfare.

08:35

Okay, I don't actually want to answer that question. There's a separate question: even if the AI isn't super intelligent, if defense departments use it for making battle robots, it's going to be very nasty, scary stuff. And it's going to lead, even if it's not super intelligent, and even if it doesn't have its own intentions, it just does what Putin tells it to, it's going to make it much easier, for example, for rich countries to invade poor countries. At present there's a barrier to invading poor countries willy-nilly, which is you get dead citizens coming home. If they're just dead battle robots, that's just great; the military-industrial complex would love that.

09:18

So you think that, I mean, it's sort of a similar argument that people make with drones: if you can send a drone and you don't have to send an airplane with a pilot, you're more likely to send the drone, therefore you're more likely to attack. If you have a battle robot, it's that same thing, squared.

Yep.

And that's your concern?

That's my main concern with battle robots. It's a separate concern from what happens with super intelligent systems taking over for their own purposes.

09:43

Before we get to super intelligent systems, let's talk about some of your other concerns. So in the litany of things that you're worried about, obviously we have battle robots, there's one. You're also quite worried about inequality. Tell me more about this.

09:57

So it's fairly clear, it's not certain, but it's fairly clear, that these big language models will cause a big increase in productivity. There's someone I know who answers letters of complaint for a health service, and he used to write these letters himself, and now he just gets ChatGPT to write the letters, and it takes one-fifth of the amount of time to answer a complaint. So he can do five times as much work, so they need five times fewer of him. Or maybe they'll just answer a lot more letters.

10:28

But they'll answer more letters, right? Or maybe they'll have more people, because they'll be so efficient, right? More productivity leads to more getting done. I mean, maybe this is an unanswered question.

10:38

What we expect, in the kind of society we live in, is that if you get a big increase in productivity like that, the wealth isn't going to go to the people who are doing the work, or the people who get unemployed. It's going to go to making the rich richer and the poor poorer, and that's very bad for society.

Definitionally? Or do you think there's some feature of AI that will lead to that?

No, it's not to do with AI. It's just what happens when you get an increase in productivity, particularly in a society that doesn't have strong unions.

11:07

But now, there are many economists who would take a different position and say that, over time, if you were to look at technology: we went from horses and buggies, and the horses and buggies went away, and then we had cars, and oh my gosh, the people who drove the horses lost their jobs. And ATMs came along, and suddenly bank tellers no longer need to do that, but we now employ many more bank tellers than we used to, and we have many more people driving Ubers than we had people driving horses. So the argument an economist would make to this would be: yes, there will be churn, and there will be fewer people answering those letters, but there'll be many more, higher cognitive things that will be done. How do you respond to that?

11:43

I think the first thing I'd say is: a loaf of bread used to cost a penny; then they invented economics, and now it costs five dollars. So I don't entirely trust what economists say, particularly when they're dealing with a new situation that's never happened before. And superintelligence would be a new situation that never happened before. But even these big chatbots that are just replacing people whose job involves producing text, that's never happened before, and I'm not sure how they can confidently predict that more jobs will be created than the number of jobs lost.

12:18

I'll just add a little side note: in the green room I introduced Jeff to two of my three children, who are here, Alice and Zachary, they're somewhere out here. And he said to Alice: are you going to go into media? And then he said: well, I'm not sure media will exist. And then Alice was asking what she should do, and you said: plumbing. Now explain. I mean, we have a number of plumbing problems at our house; it would be wonderful if they were able to put in a new sink. Explain, for a lot of young people out here, not just my children, thinking about what careers to go into: what are the careers they should be looking at, and what are the attributes of them?

12:56

I'll give you a little story about being a carpenter. If you're a carpenter, it's fun making furniture, but it's a complete dead loss, because machines can make furniture. If you're a carpenter, what you're good for is repairing furniture, or fitting things into awkward spaces in old houses, making shelves in things that aren't quite square. So the jobs that are going to survive AI for a long time are jobs where you have to be very adaptable and physically skilled, and plumbing's that kind of a job.

13:27

Because manual dexterity is hard for a machine to replicate?

It's still hard, and I think it's going to be longer before they can be really dexterous and get into awkward spaces. That's going to take longer than being good at answering text questions.

13:44

But should I believe you? Because when we were on stage four years ago, you said reasoning: as long as somebody has a job that focuses on reasoning, they'll be able to last. Isn't the nature of AI such that we don't actually know where the next incredible improvement in performance will come? Maybe it will come in manual dexterity.

Yeah, it's possible.

14:02

So actually, let me ask you a question about that. Do you think, when we look at AI over the next five years, the most impactful improvements we'll see will be in large language models, and related to large language models, or do you think it will be in something else?

14:18

I think it'll probably be multimodal large models. So they won't just be language models; they'll be doing vision, hopefully they'll be analyzing videos, so they'll be able to train on all of the YouTube videos, for example. And you can understand a lot from things other than language. And when you do that, you need less language to reach the same performance. So the idea that they're going to be saturated, because they've already used all the language there is, or all the language that's easy to get hold of, that's less of a concern if they're also using lots of other modalities.

14:52

I mean, this gets at another argument that Yann, your fellow godfather of AI, makes, which is that language is so limited, right? There's so much information that we're conveying just beyond the words. In fact, I'm gesturing like mad right now, which conveys some of the information, as well as the lighting and all this. So your view is: that may be true, language is a limited vector for information, but soon it will be combined with other vectors?

15:13

Absolutely. It's amazing what you can learn from language alone, but you're much better off learning from many modalities. Small children don't just learn from language alone.

15:22

Right. So if your principal role right now was still researching AI, finding the next big thing, you would be doing multimodal AI, and trying to attach, say, visual AI systems to text-based AI systems?

15:41

Yes, which is what they're doing now at Google. Google is making a system called Gemini, which was talked about a few days ago, and it's multimodal.

15:53

Well, let me actually talk about something else at Google. So while you were there, Google invented the Transformer network, or invented the Transformer architecture, generative pre-trained transformers. When did you realize that that would be so central and so important? It's interesting to me, because it's this paper that comes out in 2017, and when it comes out, it's not as though firecrackers are, you know, shot into the sky. It's six years later, five years later, that we suddenly realized the consequences. And it's interesting to think: what are the other papers out there that could be the same in five years?

16:30

So with Transformers, it was really only a couple of years later, when Google developed BERT, that it was made very clear Transformers were a huge breakthrough. I didn't immediately realize what a huge breakthrough they were, and I'm annoyed about that. It took me a couple of years to realize.

16:51

Well, you never made it clear. The first time I ever heard the word Transformer was talking to you on stage, and you were talking about Transformers versus capsules, and this was right after it came out. Let's talk about one of the other critiques about language models and other models, which is: soon, in fact probably already, they've absorbed all the organic data that has been created by humans. If I create an AI model right now and I train it on the Internet, it's trained on a bunch of stuff, mostly stuff made by humans, but a bunch of stuff made by AI, right?

Yeah.

17:21

And you're going to keep training AIs on stuff that has been created by AIs, whether it's a text-based language model or whether it's a multimodal language model. Will that lead to inevitable decay and corruption, as some people argue? Or is that just, you know, a thing we have to deal with? Or is it, as other people in the AI field say, the greatest thing for training AIs, and we should just use synthetic data?

17:48

Okay, I don't actually know the answer to this, technically. I suspect you have to take precautions so you're not just training on data that you yourself generated, or that some previous version of you generated. I suspect it's going to be possible to take those precautions, although it'd be much easier if all fake data was marked as fake. There is one example in AI where training on stuff from yourself helps a lot. If you don't have much training data, or rather you have a lot of unlabeled data and a small amount of labeled data, you can train a model to predict the labels on the labeled data. And then you take that same model and train it to predict labels for the unlabeled data, and whatever it predicts, you tell it: you were right. And that actually makes the model work better.

18:39

How on Earth does that work?

Because on the whole, it tends to be right. It's complicated; it was analyzed much better many years ago for acoustic modems. They did the same trick.
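The recipe Hinton describes here is usually called self-training, or pseudo-labeling. A minimal sketch of the loop, using scikit-learn for the underlying classifier (illustrative; the data and model choice are assumptions, not from the talk):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two Gaussian clusters: a little labeled data, a lot of unlabeled data.
X_lab = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unlab = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])

# Step 1: train on the small labeled set.
model = LogisticRegression().fit(X_lab, y_lab)

# Step 2: predict labels for the unlabeled data and treat them as correct.
pseudo = model.predict(X_unlab)

# Step 3: retrain on the labeled and pseudo-labeled data combined.
model = LogisticRegression().fit(np.vstack([X_lab, X_unlab]),
                                 np.concatenate([y_lab, pseudo]))
# On the whole the pseudo-labels tend to be right, so the decision
# boundary settles into the low-density region between the clusters.
```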

18:56

So, listening to this, I've had a realization on stage: you're a man who's very critical of where we're going, killer robots, income inequality. You also sound like somebody who loves this stuff.

19:07

Yeah, I love this stuff. How could you not love making intelligent things?

19:14

So let me get to maybe the most important question for the audience, and for everyone here. We're now at this moment where a lot of people here love this stuff, and they want to build it, and they want to experiment. But we don't want negative consequences. We don't want increased income inequality; I don't want media to disappear. What are the choices and decisions and things we should be working on now, to maximize the good, to maximize the creativity, but to limit the potential harms?

19:42

So I think, to answer that, you have to distinguish many kinds of potential harm. I'll distinguish, like, six of them for you.

Please.

19:51

There's bias and discrimination. That is present now; it's not one of these future things we need to worry about, it's happening now. But it is something that I think is relatively easy to fix, compared with all the other things, if you make your target not be to have a completely unbiased system, but just to have a system that's significantly less biased than what it's replacing. So suppose you have old white men deciding whether young black women should get mortgages. If you just train on that data, you'll get a system that's equally biased. But you can analyze the bias; you can see how it's biased, because it won't change its behavior: you can freeze it and then analyze it. And that should make it easier to correct for bias.
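The auditing step he describes can be made concrete: because the trained model is frozen, you can probe its decisions across groups directly, something you cannot do as cleanly with human decision-makers. A schematic sketch (the data, group labels, and model here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical historical lending decisions: features plus a group attribute.
X = rng.normal(size=(1000, 4))
group = rng.integers(0, 2, size=1000)            # 0 / 1: two demographic groups
y = ((X[:, 0] + 0.8 * group + rng.normal(0, 0.5, 1000)) > 0.5).astype(int)  # biased labels

model = LogisticRegression().fit(np.column_stack([X, group]), y)  # inherits the bias

# The model is frozen: its behavior won't drift, so we can probe it directly.
approvals = model.predict(np.column_stack([X, group]))
for g in (0, 1):
    print(f"group {g}: approval rate = {approvals[group == g].mean():.2f}")
# A gap between the rates quantifies the inherited bias, which can then be corrected.
```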

20:33

Okay, so that's bias and discrimination. I think we can do a lot about that, and I think it's important we do a lot about that, but it's doable. The next one is battle robots. That I'm really worried about, because defense departments are going to build them, and I don't see how you could stop them doing it. Something like a Geneva Convention would be great, but those never happen until after the weapons have been used. With chemical weapons, it didn't happen until after the First World War, I believe. And so I think what may happen is, people who use battle robots will see just how absolutely awful they are, and then maybe we can get an international convention to prohibit them.

21:13

So that's two. I mean, you could also tell the people building the AI not to sell their equipment to the military.

You could try.

Try!

Okay, number three.

The military has lots of money.

21:27

Number three, there's joblessness. You could try and do stuff to make sure that some of the extra revenue that comes from the increase in productivity goes to helping the people who remain jobless, if it turns out that there aren't as many jobs created as destroyed. That's a question of social policy, and what you really need for that is socialism.

We're in Canada, so you can say "socialism". [Music]

21:57

Number four would be the warring echo chambers, due to the big companies wanting you to click on things, making you indignant, and so giving you things that are more and more extreme. And so you end up in this echo chamber where you believe these crazy conspiracy theories if you're in the other echo chamber, or you believe the truth if you're in my echo chamber. That's partly to do with the policies of the companies, so maybe something could be done about that.

22:27

But that is a problem that exists; it existed prior to large language models, and in fact large language models could reverse it.

Maybe. I mean, it's an open question whether they can make it better, or whether they make that problem worse. It's a problem to do with AI, but it's not to do with large language models.

Oh, is it a problem to do with AI?

It's a problem to do with AI in the sense that there's an algorithm using AI, trained on our emotions, that then pushes us in those directions.

22:53

Okay. All right, so that's number four. Then there's the existential risk, which is the one I decided to talk about, because a lot of people think it's a joke. So there's an editorial in Nature yesterday where they basically said my fear-mongering about the existential risk is distracting attention from the actual risks. So they compared existential risk with "actual" risks, implying the existential risk wasn't actual. I think it's important that people understand it's not just science fiction, it's not just fear-mongering. It is a real risk that we need to think about, and we need to figure out in advance how to deal with it. So that's five, and there's one more, and I can't think what it is.

23:36

How do you have a list that doesn't end on existential risk? I feel like that should be the end of the list.

No, that was the end, but I thought if I talked about existential risk I'd be able to remember the missing one, but I couldn't.

23:48

All right, well, let's talk about existential risk. Explain exactly, as best you can imagine it, what it is that goes wrong that leads us to extinction, or the disappearance of humanity as a species.

play24:02

as a species okay at a very general

play24:04

level

play24:05

if you've got something a lot smarter

play24:07

than you that's very good at

play24:09

manipulating people

play24:11

just at a very general level are you

play24:13

confident people will stay in charge

play24:15

and then you can go into specific

play24:17

scenarios for how people might lose

play24:19

control even though they're the people

play24:21

creating this and giving it its goals

play24:24

and one very obvious scenario is

play24:26

if you were if you're given a goal and

play24:28

you want to be good at achieving it

play24:30

what you need is as much control as

play24:33

possible

play24:34

so for example if I'm sitting in a

play24:37

boring seminar and I see a little dot of

play24:40

light on the ceiling

play24:42

and then suddenly I noticed it when I

play24:44

move that dot of light moves

play24:46

I realize it's the Reflection from my

play24:48

watch the sun is bouncing off my watch

play24:51

and so the next thing I do is I don't

play24:53

start listening to the boring seminar

play24:54

again I immediately try and figure out

play24:56

how to make it go this way and how to

play24:58

make it go that way and once I got

play24:59

control of it then maybe I'll listen to

play25:01

the seminar again we have a very strong

play25:03

built-in urge to get control and it's

play25:05

very sensible because the more control

play25:07

you get the easier it is to achieve

play25:10

things and I think AI will be able to

play25:12

derive that too it's good to get control

play25:14

so you can achieve other goals wait so

play25:17

you actually believe that

play25:20

getting control will be an innate

play25:23

feature of something that the AIS are

play25:25

trained on us right they act like us

play25:27

they think like us because

play25:28

the neural architecture makes them like

play25:30

our human brains and because they're

play25:31

trained on all of our outputs so you

play25:33

actually think that getting control of

play25:35

humans will be something that the AI is

play25:37

almost aspire to

play25:39

no I think they'll derive it as a as a

play25:43

way of achieving other goals I think in

play25:45

us it's innate I think

play25:48

I'm very dubious about saying things are

play25:50

really innate but I think the desire to

play25:52

understand how things work

play25:55

is a very sensible desire to have and I

play25:56

think we have that

play25:58

so we have that and then AIS will

play26:01

develop an ability to manipulate us and

play26:03

control us in a way that

play26:06

we can't respond to right that the

play26:08

manipulative AIS and even though

play26:11

good people will be able to use equally

play26:13

powerful AIS to counter these bad ones

play26:16

you believe that we still could have an

play26:17

existential crisis yes

play26:19

it's not clear to me I mean yeah makes

play26:21

the argument that

play26:23

um the good people will have more

play26:24

resources than the bad people

play26:27

um I'm not sure about that and that good

play26:30

AI is going to be more powerful than bad

play26:32

Ai and good AI is going to be able to

play26:34

regulate bad Ai and we have a situation

play26:36

like that at present right where you

play26:38

have people using AI to create spam then

play26:42

you have people like Google using AI to

play26:44

filter out the spam and at present

play26:46

Google has more resources and the

play26:48

Defenders are beating the attackers but

play26:50

I don't see that it'll always be like

play26:51

that I mean even in cyber warfare where

play26:53

you have moments where it seems like the

play26:54

criminals are winning and sometimes

play26:55

where it seems like the Defenders are

play26:57

winning so you believe that there will

play26:59

be a battle like that over control of

play27:01

humans by super intelligent artificial

play27:03

intelligence it may well be yes and I'm

play27:04

not convinced that

play27:06

um good AI That's trying to stop bad AI

play27:08

getting control will win Okay so

play27:12

all right so before this existential

play27:15

risk happened before bad III does this

play27:17

we have a lot of extremely smart people

play27:19

building a lot of extremely important

play27:21

things what exactly can they do

play27:24

to most help limit this risk

play27:27

so one thing you can do is before the

play27:30

airline gets super intelligent you can

play27:32

do empirical work into how it goes wrong

play27:36

how it tries to get control whether it

play27:39

tries to get control we don't know

play27:40

whether it would but before it's smarter

play27:42

than us I think the people developing it

play27:45

should be encouraged to put a lot of

play27:47

work into understanding how it goes

play27:49

might go wrong understanding how it

play27:51

might try and take control away and I

play27:54

think the government could maybe

play27:55

encourage the big companies developing

play27:57

it to put comparable resources maybe not

play28:00

equal resources but right now there's 99

play28:03

very smart people trying to make it

play28:04

better and one very smart person trying

play28:07

to figure out how to stop it taking over

play28:08

and maybe you want it more balanced and

play28:12

so this is in some ways your role right

play28:14

now the reason why you've left Google on

play28:17

good terms but you want to be able to

play28:19

speak out and help participate in this

play28:21

conversation so more people can join

play28:23

that one and not the 99. yeah I would

play28:25

say it's very important for smart people

play28:27

to be working on that but I'd also say

play28:29

it's very important not to think this is

play28:32

the only risk there's all these other

play28:33

risks and I've remembered the last one

play28:35

which is fake news

play28:37

um so it's very important to try for

play28:40

example to Mark everything that's fake

play28:42

as fake whether we can do that

play28:44

technically I don't know but it'd be

play28:45

great if we could governments do it with

play28:47

counterfeit money they won't allow

play28:49

counterfeit money because that reflects

play28:51

on their sort of central interest

play28:53

um

play28:54

they should try and do it with AI

play28:57

generated stuff I don't know whether

play28:59

they can but I give so give one we're

play29:01

out of time give one specific to do

play29:03

something to read a thought experiment

play29:05

one thing to leave the audience with so

play29:07

they can go out here and think okay I'm

play29:09

gonna do this

play29:10

AI is the most powerful thing we've

play29:12

invented in perhaps in our lifetimes and

play29:15

I'm going to make it better to make it

play29:16

more likely it's a Force for good in the

play29:18

Next Generation

play29:19

so how could they make it more likely be

play29:21

a force for good yes one one final

play29:23

thought for everyone here

play29:26

I don't I actually don't have a plan for

play29:28

how to make it more likely to be good

play29:30

than bad sorry

play29:31

um I think it's great that it's being

play29:34

developed because we didn't get to

play29:36

mention the huge numbers of good uses of

play29:38

it yeah like in medicine in climate

play29:40

change and so on so I think progress now

play29:43

is inevitable and it's probably good

play29:46

but we seriously ought to worry about

play29:48

mitigating all the bad side effects of

play29:50

it and worry about the existential

play29:52

Threat all right thank you so much what

play29:54

an incredibly thoughtful inspiring

play29:56

interesting phenomenal mark thank you to

play29:58

Jeffrey Hinton thank you thank you Jeff

play30:01

so great
