“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Amanpour and Company
9 May 2023 · 18:09

Summary

TLDR: In this video, Geoffrey Hinton, known as the "Godfather of A.I.," discusses the rapid development of artificial intelligence and its potential risks. Although he originally believed that building computer models of how the brain learns would help us understand the brain better and, as a side effect, improve machine learning, he recently realized that digital intelligences on computers may already learn better than the human brain in some respects. Hinton stresses the "existential risk" of A.I.: the possibility that it becomes more intelligent than humans and ultimately takes control. He also covers other risks, including job displacement and the spread of fake news and misinformation. Despite the uncertainty, Hinton argues that the most important thing now is to put substantial resources into understanding and controlling A.I.'s development, to secure its benefits and minimize its potential harms.

Takeaways

  • 🧠 Geoffrey Hinton believes the threat from artificial intelligence (A.I.) may be even more urgent than climate change; he recently left Google so he could speak freely and raise awareness of the risks.
  • 🚀 Hinton is a pioneer of the field who long believed that building computer models of how the brain learns would deepen our understanding of the brain and, as a side effect, improve machine learning.
  • 🔄 Recently, however, Hinton realized that digital intelligences on computers may learn better than the brain, which changed his view that the way to improve them was to make them more brain-like.
  • 🤖 Hinton used Google's PaLM system as a test and was struck that it could explain why jokes were funny, which suggests these systems may have a better way of processing information than humans do.
  • 📈 Although artificial neural nets have only about a hundredth of the brain's connection strengths, they know thousands of times more common-sense knowledge than any human, which suggests they store and process information far more efficiently.
  • 💡 Hinton believes the brain may not be using as good a learning algorithm as digital intelligences, because brains cannot exchange information nearly as fast.
  • 🌐 Digital intelligences can run on many different pieces of hardware and learn from one another simply by copying their weights, something brains cannot do (see the sketch after this list).
  • 🤖 A.I. works completely differently from what people expected 50 years ago: it understands things by learning large patterns of neural activity, not by logical inference.
  • 🚨 Hinton highlights several threats from A.I., including job losses, a widening gap between rich and poor, and fake news and fake videos; he argues strong government regulation is needed to deal with them.
  • 🌍 On the existential threat of A.I., Hinton believes companies and countries around the world are likely to cooperate, because none of them want a superintelligence to take control.
  • 💭 Hinton expresses deep uncertainty about A.I.'s future: we are in uncharted territory, and the best strategy is to work as hard as we can to ensure that whatever happens is as good as it can be.
  • 🏆 Hinton speaks well of Google, saying it has behaved responsibly in A.I., and that he left so he could express his own views more freely.
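
The weight-copying idea in the 🌐 takeaway is essentially how data-parallel training works. Below is a minimal illustrative sketch in plain Python (all names and numbers are hypothetical; real systems use frameworks such as PyTorch DDP or JAX pmap) of model replicas that learn from different data shards and then share what they learned by averaging their weights:

```python
# Minimal sketch of the weight-sharing Hinton describes (hypothetical
# names and values, for illustration only).

def average_weights(replicas):
    """Pool what every replica learned by averaging weights elementwise."""
    n = len(replicas)
    return [sum(weights) / n for weights in zip(*replicas)]

# Identical copies of one model, each trained on a different shard of data.
replica_a = [0.10, -0.20, 0.30]  # weights after seeing shard A
replica_b = [0.12, -0.18, 0.28]  # weights after seeing shard B
replica_c = [0.08, -0.22, 0.32]  # weights after seeing shard C

# Every copy can now adopt the averaged weights and "know" what the others
# learned -- the fast exchange that brains cannot do.
print(average_weights([replica_a, replica_b, replica_c]))  # ≈ [0.10, -0.20, 0.30]
```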

Q & A

  • Why did Geoffrey Hinton leave Google?

    -He left Google so that he could speak more freely about the risks of artificial intelligence (A.I.) and raise public awareness of them.

  • How does Hinton describe his original expectations for how computers would learn?

    -He originally thought that if we built computer models of how the brain learns, we would understand more about how the brain learns, and as a side effect we would get better machine learning on computers.

  • What factors does Hinton say changed his view of digital intelligence?

    -He cites three factors: 1) Google's PaLM system could explain why jokes were funny; 2) chatbots such as ChatGPT know far more common-sense knowledge than any human, yet their artificial neural nets have only about a trillion connection strengths while the human brain has about 100 trillion; 3) he became convinced that the brain is not using as good a learning algorithm as these digital intelligences. (See the arithmetic sketch after this Q&A section.)

  • How does Hinton view the future development of artificial intelligence?

    -He believes we are entering a period of huge uncertainty; we should be neither optimistic nor pessimistic about A.I.'s future, because so much is unknown.

  • What threats from artificial intelligence does Hinton mention?

    -He mentions several: A.I. may surpass human intelligence and take control (the existential risk), it may eliminate jobs, and it may produce large volumes of fake news and misinformation that destabilize society and politics.

  • How does Hinton think the misinformation problem should be managed?

    -He argues it should be treated like counterfeit money: strong government regulation should make producing and knowingly passing on fake videos, fake voices, and fake images a serious crime.

  • What is Hinton's view on whether A.I. can reach human-level consciousness?

    -He thinks bringing consciousness into the discussion just clouds the issue. There is no clear definition of what counts as "sentient," so debating whether A.I. is conscious does not help when asking whether it will become smarter than us.

  • What positive uses of A.I. in society does Hinton mention?

    -He says A.I. could be tremendously useful in medicine, in designing new nanomaterials, in predicting floods and earthquakes, in improving climate and weather projections, and in understanding climate change.

  • What role does Hinton see for tech companies in setting the rules for A.I. development?

    -He believes the engineers and researchers developing these intelligent systems should run many small experiments as they build them, to learn what happens during development and how to keep control before the systems get out of control.

  • Does Hinton support the proposed pause on A.I. development?

    -No. He considers it unrealistic: A.I. has enormous potential and usefulness across many fields, so its development is inevitable.

  • What is Hinton's attitude toward humanity's ability to meet the challenge of A.I.?

    -He believes we are in a time of great uncertainty; predicting the future is like looking into fog, and all we can do is try to ensure that whatever happens turns out as well as possible.

  • After leaving Google, what topics can Hinton discuss more freely?

    -Having left Google, he can speak more freely about singularities and other sensitive topics related to A.I. development, without having to weigh the implications for the company.
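
As a quick check on the numbers in the answer about the three factors above, here is the back-of-the-envelope arithmetic behind "a hundredth as much storage capacity" (illustrative figures quoted in the interview, not measurements of any specific model):

```python
# Back-of-the-envelope check of the capacity comparison Hinton cites.
brain_connections = 100e12  # ~100 trillion connection strengths in the brain
model_connections = 1e12    # ~1 trillion weights in a large language model

print(model_connections / brain_connections)  # 0.01, i.e. about a hundredth
```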

Outlines

00:00

🚀 The Threat and Promise of A.I.: Geoffrey Hinton's Insights

In this segment, Geoffrey Hinton, known as the Godfather of A.I., discusses the rapid development of artificial intelligence and its potential threats, which he believes may be even more urgent than climate change. Hinton worked at Google for years but left so he could speak freely and raise public awareness of A.I.'s risks. He shares his thinking about how computers learn, and the recent realizations that convinced him the digital intelligences we are building may learn better than the human brain. He also discusses how A.I. learns human intuition through huge neural network models, and points to the social and ethical challenges A.I. may bring, including job losses, the spread of misinformation, and threats to authenticity.

05:02

🤖 A.I. Consciousness and Decision-Making: Intuition Beyond Logic

In the second segment, Hinton digs into how A.I. makes decisions, using a hypothetical question about the genders of cats and dogs to show how humans decide by intuition rather than logical reasoning. He notes that A.I. simulates human intuition by learning large patterns of neural activity, letting it "know" things directly without reasoning through them. Hinton also addresses the debate over whether A.I. is conscious, arguing that bringing consciousness into the question of whether A.I. will become smarter than us only clouds the issue. He goes on to discuss A.I.'s impact on white-collar work and how government regulation could curb fake news and misinformation.
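
Hinton's "large patterns of neural activity" correspond to what modern models call embeddings. Here is a minimal sketch of the cat/dog comparison he describes, using made-up three-dimensional vectors purely for illustration (real models learn thousands of dimensions from data):

```python
import math

# Toy embedding vectors, invented for illustration only.
cat   = [0.8, 0.3, 0.5]
woman = [0.7, 0.4, 0.6]
man   = [0.1, 0.9, 0.2]

def cosine(u, v):
    """Cosine similarity: how alike two activity patterns are."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(cat, woman))  # ≈ 0.99: the "cat" pattern is closer to "woman"
print(cosine(cat, man))    # ≈ 0.49: and further from "man"
```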

10:04

🌐 International Cooperation and the Future of A.I.

In the third segment, Hinton discusses the prospects for cooperation among countries and companies on A.I. He argues that in the face of the "existential risk" (A.I. becoming smarter than humans and taking control), companies and countries worldwide have an incentive to cooperate, because nobody wants a superintelligence to take over. Hinton stresses that despite political obstacles, international rules and standards are needed to contain fake news and protect democracy. He also explains his decision to leave Google, which lets him discuss A.I. freely without weighing the company's position.

15:06

🔬 Controllability of A.I. and the Challenges Ahead

In the final segment, Hinton expresses uncertainty about A.I.'s future. While A.I. holds enormous promise in medicine, materials design, and climate projection, there is also a risk that superintelligence cannot be controlled. He criticizes calls to pause A.I. development as unrealistic, since development cannot be stopped. Hinton suggests devoting far more resources to understanding and controlling A.I.'s harmful effects, not just to developing it. He closes on a note of cautious realism, stressing that we must work to ensure A.I.'s development turns out well, while acknowledging that humanity may be just a passing phase in the evolution of intelligence, and the future may belong entirely to digital intelligences.

Keywords

💡Artificial Intelligence

Artificial intelligence (AI) refers to intelligence exhibited by machine systems built by humans. AI is the core topic of the video, which discusses its development, potential risks, and impact on society. Examples include AI's applications in medicine, designing new materials, and predicting natural disasters, alongside its possible threats, such as displacing human jobs and generating fake news.

💡Deep Learning

Deep learning is a subfield of machine learning that uses neural-network structures loosely modeled on the human brain to recognize patterns and complex relationships in data. The video touches on the progress deep learning has made in building digital intelligences, and how these may come to learn more efficiently than the human brain.

💡Neural Network

A neural network is a computational model inspired by the networks of neurons in the human brain, used to process and analyze data. The video notes that although an AI's neural network has far fewer connection strengths than the brain, it can nonetheless store and process far more common-sense knowledge.
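
For readers new to the term, here is a minimal sketch of a single artificial neuron (illustrative values only; the models discussed in the video chain billions of these units together):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# The "connection strengths" Hinton refers to are these weights.
print(neuron([0.5, 0.2], weights=[0.9, -0.4], bias=0.1))  # ≈ 0.62
```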

💡Fake News

Fake news refers to deliberately fabricated reports with no factual basis, intended to mislead readers. The video discusses the potential social impact of AI-generated fake news and the need for governments to step in with rules for labeling and regulating it.

💡Consciousness

Consciousness usually refers to an individual's self-awareness and ability to perceive their surroundings. The video mentions the debate over whether AI is conscious, and notes that this framing can cloud the question of how intelligent AI will become.

💡Superintelligence

Superintelligence refers to AI systems that surpass human intelligence. The video raises the worry that a superintelligence could take control from humans, and the need for global cooperation to prevent this.

💡Self-Driving

Self-driving involves using AI to control vehicles without a human driver. The video does not mention self-driving directly, but it is a practical example of AI's potential to replace human work.

💡Climate Change

Climate change refers to long-term shifts in global climate patterns caused by natural processes or human activity. The video cites AI's potential for understanding and responding to climate change as an example of its positive impact.

💡Google

Google is a well-known technology company; the video mentions its AI work and products, such as Bard, as well as differing internal views on AI development and the constraints employees feel when speaking publicly.

💡Regulation

Regulation refers to oversight and management of an industry or activity by governments or other institutions. The video discusses the need to regulate AI to prevent harms such as fake videos and fake voices.

💡Global Cooperation

Global cooperation refers to collaboration between countries toward shared goals or interests. The video stresses its importance in the face of AI's threats, especially in preventing a superintelligence from taking control of humanity.

Highlights

The threat from A.I. may be more urgent than climate change

Geoffrey Hinton, known as the Godfather of AI, recently left Google to speak freely and raise awareness of the risks

Hinton believes computer models may learn better than the human brain

The learning ability of digital intelligences may already have surpassed the human brain

Hinton used Google's PaLM system to test whether an AI understands humor

ChatGPT knows thousands of times more common-sense knowledge than a human, with only about a hundredth of the connection strengths

Digital intelligences' connection strengths and information processing may be more efficient than the brain's

Digital intelligences can exchange information quickly, while brains exchange it slowly

OpenAI's and Google's products demonstrate advanced autocomplete and comprehension abilities

AI works completely differently from the logic and symbolic expressions people expected 50 years ago

AI forms intuitions by learning large patterns of neural activity

Hinton thinks debating whether AI is "conscious" clouds the issue

AI may threaten white-collar jobs, much as automation once threatened blue-collar jobs

The existential threat is the possibility of AI surpassing human intelligence and taking control

Government regulation is needed to prevent a flood of fake videos, voices, and images

Hinton discussed AI regulation with Bernie Sanders

Hinton stresses the need to invest resources in understanding and controlling AI's potential harms as it is developed

Hinton believes tech companies and researchers are best placed to work out how to control a superintelligence

Hinton did not sign the open letter calling for a pause on AI development, considering it unrealistic

Hinton highlights AI's potential in medicine, designing new materials, predicting natural disasters, and understanding climate change

Hinton is cautious about the future's uncertainty, arguing we should put in the effort to ensure the best possible outcome

Transcripts

CHRISTIANE AMANPOUR: Our next guest believes the threat of A.I. might be even more urgent than climate change, if you can imagine that. Geoffrey Hinton is considered the Godfather of A.I., and he made headlines with his recent departure from Google. He quit to speak freely and to raise awareness of the risks. To dive deeper into the dangers and how to manage them, he's joining Hari Sreenivasan now.

HARI SREENIVASAN: Christiane, thanks. Geoffrey Hinton, thanks so much for joining us. You are one of the more celebrated names in artificial intelligence. You have been working at this for more than 40 years, and I wonder, as you've thought about how computers learn, did it go the way you thought it would when you started in this field?

GEOFFREY HINTON: It did until very recently. In fact, I thought if we built computer models of how the brain learns, we would understand more about how the brain learns, and as a side effect, we would get better machine learning on computers. And all that was going on very well, and then very suddenly I realized recently that maybe the digital intelligences we were building on computers were actually learning better than the brain. And that sort of changed my mind after about 50 years of thinking we would make better digital intelligences by making them more like the brains. I suddenly realized we might have something rather different that was already better.

SREENIVASAN: Now, this is something you and your colleagues must have been thinking about over these 50 years. I mean, was there a tipping point?

HINTON: There were maybe several ingredients to it. Like a year or two ago, I used a Google system called PaLM. It was a big chatbot, and it could explain why jokes were funny. I'd been using that as a kind of litmus test of whether these things really understood what was going on, and I was slightly shocked that it could explain why jokes were funny. So that was one ingredient. Another ingredient was the fact that things like ChatGPT know thousands of times more than any human in just sort of basic common-sense knowledge, but they only have about a trillion connection strengths in their artificial neural nets, and we have about 100 trillion connection strengths in the brain. So with a hundredth as much storage capacity, it knew thousands of times more than us, and that strongly suggests that it's got a better way of getting information into the connections. And then the third thing was, very recently, a couple of months ago, I suddenly became convinced that the brain wasn't using as good a learning algorithm as these digital intelligences. And in particular, it wasn't as good because brains can't exchange information really fast, and these digital intelligences can. I can have one model running on ten thousand different bits of hardware. It's got the same connection strengths in every copy of the model on the different hardware. All the different agents running on the different hardware can all learn from different bits of data, but then they can communicate to each other what they learned just by copying the weights, because they all work identically. And brains aren't like that. So these guys can communicate at trillions of bits a second, and we can communicate at hundreds of bits a second by sentences. There's such a huge difference, and it's why ChatGPT can learn thousands of times more than you can.

SREENIVASAN: For people who might not be following kind of what's been happening with OpenAI and ChatGPT and Google's product Bard, explain what those are. Because some people have explained it as kind of the autocomplete feature finishing your thought for you. But what are these artificial intelligences doing?

HINTON: OK, it's difficult to explain, but I'll do my best. It's true in a sense they're all doing autocomplete. But if you think about it, if you want to do really good autocomplete, you need to understand what somebody's saying, and they've learned to understand what you're saying just by trying to do autocomplete. But they now do seem to really understand. So the way they understand isn't at all like people in AI 50 years ago thought it would be. In old-fashioned AI, people thought you'd have internal symbolic expressions, a bit like sentences in your head but in some kind of cleaned-up language, and then you would apply rules to infer new sentences from old sentences, and that's how it would all work. And it's nothing like that; it's completely different. And let me give you a sense of just how different it is. I can give you a problem that doesn't make any sense in logic, but where you know the answer intuitively. And these big models are really models of human intuition. So suppose I tell you, you know that there's male cats and female cats and male dogs and female dogs. But suppose I tell you you have to make a choice: either you're going to have all cats being male and all dogs being female, or you can have all cats being female and all dogs being male. Now, you know it's biological nonsense, but you also know it's much more natural to make all cats female and all dogs male. That's not a question of logic. What that's about is, inside your head you have a big pattern of neural activity that represents "cat," and you also have a big pattern of neural activity that represents "man" and a big pattern of neural activity that represents "woman." And the big pattern for "cat" is more like the pattern for "woman" than it is like the pattern for "man." That's the result of a lot of learning about men and women and cats and dogs. But it's now just intuitively obvious to you that cats are more like women and dogs are more like men, because of these big patterns of neural activity you've learned. And it doesn't involve sequential reasoning or anything; you didn't have to do reasoning to solve that problem. It's just obvious. That's how these things are working. They're learning these big patterns of activity to represent things, and that makes all sorts of things just obvious to them.

SREENIVASAN: You know, what you're describing here, ideas like intuition and basically context, those are the things that scientists and researchers always say, well, this is why we're fairly positive that we're not going to head to that sort of Terminator scenario where, you know, the artificial intelligence gets smarter than human beings. But what you're describing is, these are almost consciousness, sort of emotion-level decision processes.

HINTON: OK, I think if you bring sentience into it, it just clouds the issue. So lots of people are very confident these things aren't sentient, but if you ask them what they mean by "sentient," they don't know. And I don't really understand how they're so confident they're not sentient if they don't know what they mean by sentient. But I don't think it helps to discuss that when you're thinking about whether they'll get smarter than us. I am very confident that they think. So suppose I'm talking to a chatbot, and I suddenly realize it's telling me all sorts of things I don't want to know. Like it's writing out responses about someone called Beyoncé, who I'm not interested in because I'm an old white male, and I suddenly realize it thinks I'm a teenage girl. Now, when I use the word "thinks" there, I think that's exactly the same sense of "thinks" as when I say you think something. If I were to ask it, "Am I a teenage girl?" it would say yes. If I were to look at the history of our conversation, I'd probably be able to see why it thinks I'm a teenage girl. And I think when I say it thinks I'm a teenage girl, I'm using the word "think" in just the same sense as we normally use it. It really does think that.

SREENIVASAN: Give me an idea of why this is such a significant leap forward. I mean, to me it seems like there are parallel concerns. In the '80s and '90s, blue-collar workers were concerned about robots coming in and replacing them and not being able to control them. And now this is kind of a threat to the white-collar class of people, saying that there are these bots and agents that can do a lot of things that we otherwise thought would be something only people can.

HINTON: Yes. I think there's a lot of different things we need to worry about with these new kinds of digital intelligence. And so what I've been talking about mainly is what I call the existential threat, which is the chance that they get more intelligent than us and they'll take over from us; they'll get control. That's a very different threat from many other threats, which are also severe. So they include these things taking away jobs. In a decent society, that would be great. It would mean everything got more productive and everyone was better off. But the danger is that it'll make the rich richer and the poor poorer. That's not A.I.'s fault; that's how we organize society. There's dangers about them making it impossible to know what's true, by having so many fakes out there. That's a different danger. That's something you might be able to address by treating it like counterfeiting. Governments do not like you printing their money, and they make it a serious offense to print money. It's also a serious offense, if you're given some fake money, to pass it to somebody else if you knew it was fake. I think governments are going to have to make similar regulations for fake videos and fake voices and fake images. It's going to be hard. As far as I can see, the only way to stop ourselves being swamped by these fake videos and fake voices and fake images is to have strong government regulation that makes it a serious crime: you go to jail for 10 years if you produce a video with AI and it doesn't say it's made with AI. That's what they do for counterfeit money, and this is as serious a threat as counterfeit money. So my view is that's what they ought to be doing. I actually talked to Bernie Sanders last week about it, and he liked that view of it.

SREENIVASAN: I can understand governments and central banks and private banks all agreeing on certain standards, because there's money at stake. And I wonder, is there enough incentive for governments to sit down together and try to craft some sort of rules of what's acceptable and what's not, some sort of Geneva Convention or accords?

HINTON: It would be great if governments could say, look, these fake videos are so good at manipulating the electorate that we need them all marked as fake; otherwise we're going to lose democracy. The problem is that some politicians would like to lose democracy, so that's going to make it hard.

SREENIVASAN: So how do you solve for that? I mean, it seems like this genie is sort of out of the bottle.

HINTON: So what we're talking about right now is the genie of being swamped with fake news, and that clearly is somewhat out of the bottle. It's fairly clear that organizations like Cambridge Analytica, by pumping out fake news, had an effect on Brexit, and it's fairly clear that Facebook was manipulated to have an effect on the 2016 election. So the genie is out of the bottle in that sense. We can try and at least contain it a bit. But that's not the main thing I'm talking about. The main thing I'm talking about is the risk of these things becoming superintelligent and taking over control from us. I think for the existential threat, we're all in the same boat. The Chinese, the Americans, the Europeans, they all would not like superintelligence to take over from people. And so I think for that existential threat, we will get collaboration between all the companies and all the countries, because none of them want the superintelligence to take over. So in that sense, it's like global nuclear war, where even during the Cold War, people could collaborate to prevent there being a global nuclear war, because it was not in anybody's interest.

SREENIVASAN: Sure.

HINTON: And so that's one, in a sense, positive thing about this existential threat: it should be possible to get people to collaborate to prevent it. But for all the other threats, it's more difficult to see how you're going to get collaboration.

SREENIVASAN: One of your more recent employers was Google. You were a VP and a Fellow there, and you recently decided to leave the company to be able to speak more freely about A.I. Now, they just launched their own version of kind of a GPT, of Bard, back in March. So tell me, here we are now, what do you feel like you can say today, or will say today, that you couldn't say a few months ago?

HINTON: Not much, really. If you work for a company and you're talking to the media, you tend to think, what implications does this have for the company? At least you ought to think that, because they're paying you. I don't think it's sort of honest to take the money from the company and then completely ignore the company's interests. But if I don't take the money, I just don't have to think about what's good for Google and what isn't. I can just say what I think. It happens to be the case that, I mean, everybody wants to transmit the story as "I left Google because they were doing bad things." That's more or less the opposite of the truth. I think Google has behaved very responsibly, and I think, having left Google, I can say good things about Google and be more credible. I just left so I'm not constrained to think about the implications for Google when I say things about singularities and things like that.

SREENIVASAN: Do you think that tech companies, given that it's mostly their engineering staff that are trying to work on developing these intelligences, are going to have a better opportunity to create the rules of the road than, say, governments or third parties?

HINTON: I do, actually. I think there are some places where governments have to be involved, like regulations that force you to show whether something was AI-generated. But in terms of keeping control of a superintelligence, what you need is the people who are developing it to be doing lots of little experiments with it and seeing what happens as they're developing it, and before it's out of control. And that's going to be mainly the researchers in companies. I don't think you can leave it to philosophers to speculate about what might happen. Anybody who's ever written a computer program knows that getting a little bit of empirical feedback by playing with things quickly disabuses you of your idea that you really understood what was going on. And so it's the people in the companies developing it who are going to understand how to keep control of it, if that's possible. So I agree with people like Sam Altman at OpenAI that this stuff is inevitably going to be developed, because there are so many good uses of it, and what we need is, as it's being developed, we put a lot of resources into trying to understand how to keep control of it and avoid some of the bad side effects.

SREENIVASAN: Back in March, there were more than, I'd say, a thousand different folks in the tech industry, including leaders like Steve Wozniak and Elon Musk, who signed an open letter asking essentially to have a six-month pause on the development of artificial intelligence, and you didn't sign that. How come?

HINTON: I thought it was completely unrealistic. The point is, these digital intelligences are going to be tremendously useful for things like medicine, for reading scans rapidly and accurately. It's been slightly slower than I expected, but it's coming. They're going to be tremendously useful for designing new nanomaterials, so we can make more efficient solar cells, for example. They're going to be tremendously useful, or they already are, for predicting floods and earthquakes and getting better weather projections. They're going to be tremendously useful in understanding climate change. So they're going to be developed; there's no way that's going to be stopped. So I thought it was maybe a sensible way of getting media attention, but it wasn't a sensible thing to ask for; it just wasn't feasible. What we should be asking for is that comparable resources are put into dealing with the possible bad side effects, and into how we keep these things under control, as are put into developing them. At present, 99 percent of the money is going into developing them, and one percent is going into sort of people saying, oh, these things might be dangerous. It should be more like 50-50, I believe.

SREENIVASAN: When you kind of look back at the body of work of your life, and when you look forward at what might be coming, are you optimistic that we'll be able, as humanity, to rise to this challenge, or are you less so?

HINTON: I think we're entering a time of huge uncertainty. I think one would be foolish to be either optimistic or pessimistic. We just don't know what's going to happen. The best we can do is say, let's put a lot of effort into trying to ensure that whatever happens is as good as it could have been. It's possible that there's no way we will control these superintelligences, and that humanity is just a passing phase in the evolution of intelligence; that in a few hundred years' time there won't be any people, it'll all be digital intelligences. That's possible; we just don't know. Predicting the future is a bit like looking into fog. You know how when you look into fog, you can see about a hundred yards very clearly, and then at two hundred yards you can't see anything? There's a kind of wall. And I think that wall is at about five years.

SREENIVASAN: Geoffrey Hinton, thanks so much for your time.

HINTON: Thank you for inviting me.

Related Tags
Artificial Intelligence, Geoffrey Hinton, Machine Learning, Superintelligence, Data Privacy, Deep Learning, Google, OpenAI, Fake News, Automation, Tech Ethics