Ray Kurzweil & Geoff Hinton Debate the Future of AI | EP #95

Moonshots with Peter Diamandis
11 Apr 2024 · 29:31

Summary

TLDR In this engaging conversation, the future of artificial intelligence is discussed, including its potential moral and practical problems. The experts explore topics such as consciousness, immortality, and AI creativity, while voicing concern about the rapid pace of AI development and the threats it may bring. They also discuss the potential dangers and benefits of open-source AI models and how to balance innovation with safety.

Takeaways

  • 🤖 AI may be developing faster than expected, but there is also great uncertainty.
  • 🧠 Our understanding of consciousness and sentience is still limited, which complicates any judgment about whether AI is conscious.
  • 📈 Breakthroughs such as the Transformer architecture are driving rapid progress in AI.
  • 🚀 Future AI could reach superintelligence equivalent to millions of humans, but this remains far from certain.
  • 🌐 Open-sourcing large language models may pose risks, because they are easy to misuse for malicious purposes.
  • 🔄 The moral and legal questions around AI, such as whether these systems should have rights, need to be taken seriously.
  • 💡 AI applications in science, such as biology and drug discovery, show enormous potential.
  • 🧬 AI applied to medical diagnosis and treatment can significantly improve the accuracy of health prediction and care.
  • 🌟 AI's development may bring unprecedented hope, but it also comes with enormous threats.
  • 🤔 The future of AI calls for more discussion and a cautious attitude to ensure its impact is positive.
  • 🛠️ Advances in AI are not limited to language processing; they will extend to many more domains and applications.

Q & A

  • What was Marvin Minsky's view of consciousness?

    -Marvin Minsky held that consciousness is not real and not scientific. He believed consciousness cannot be verified or studied by scientific methods.

  • What disagreement about living forever comes up in the conversation?

    -Some of the speakers think living forever is a good idea, while others hold that humans are intrinsically mortal, with inherently finite lives.

  • Is open-sourcing large AI models a cautious move?

    -Open-sourcing large AI models is not a cautious practice, because it can lead to the models being used for harmful purposes, and fine-tuning an open-source model takes relatively few resources.

  • How is the merging of AI and humans viewed?

    -Merging with AI is seen as an important future trend; it would make humans partly computer, which would be of enormous significance.

  • What are AI's applications in biology?

    -AI's applications in biology include helping to discover new physics, chemistry, and biology, with notable results in protein structure prediction and vaccine development in particular.

  • How has AI shown creativity in board games?

    -AI has shown exceptional creativity in board games; for example, AlphaGo played a move in a Go match that astonished professional players, demonstrating AI's creativity within limited domains.

  • Where does AI's creativity come from?

    -AI's creativity comes from its ability to compress vast amounts of information into relatively few connections, which makes these systems very good at spotting similarities between different things and so producing innovative solutions.

  • How fast will AI develop in the future?

    -AI is advancing faster than some people expected, though with great uncertainty. Even so, progress is rapid: even without new scientific breakthroughs, simply scaling things up will make AI more intelligent.

  • What problems could the arrival of superintelligent AI bring?

    -Superintelligent AI could create situations humans cannot predict or control, because its intelligence may far exceed ours and it could rapidly diverge from human expectations and control.

  • How can AI's potential and risks be balanced?

    -Balancing AI's potential and risks requires treating its development cautiously, especially around open-sourcing models, to prevent their use for harmful purposes. At the same time, the intelligence gained from the same models should be used to avoid potential dangers.

  • What does the Fountain Life company mentioned in the conversation do?

    -Fountain Life operates advanced diagnostic centers that combine comprehensive check-ups with AI to help people detect and address disease at its earliest stages and extend their healthy lifespan.

Outlines

00:00

🤖 The Ethics of AI and Immortality

This segment takes up the moral questions around artificial intelligence, especially the prospect of living forever. It recounts Marvin Minsky's view that consciousness is not real and not scientific, while also acknowledging that although consciousness may not be scientific, it certainly is real. The speakers consider whether humans should pursue immortality, along with the threats and hopes AI's future development may bring, and stress the risks of open-sourcing powerful models and the uncertainty around the pace of AI progress.

05:00

🧬 Biology Meets Artificial Intelligence

This part explores AI's applications in biology, especially how it can help us discover new physics, chemistry, and biology. Using AlphaFold and mRNA vaccines as examples, it illustrates AI's potential for handling large amounts of data and solving complex problems. It also discusses the creativity AI has shown within specific domains and how it produces innovation by compressing information and finding similarities between different things.

10:02

🌟 Intelligence, Sentience, and Consciousness in AI

This segment examines three concepts, intelligence, sentience, and consciousness, and the fuzzy boundaries between them. It asks whether AI can have subjective experience and whether AI systems should have rights. One view put forward is that our everyday model of the mind may be wrong and that we need to rethink how we understand consciousness.

15:03

🚀 The Future Development of AI

This segment covers the pace and possibilities of AI's future development, including predictions about superintelligence and the outlook for further technical breakthroughs. It touches on the immortality of digital intelligence and how AI becomes more efficient through advances in both software and hardware, and it weighs the risks of AI development alongside the speakers' views on open-sourcing large language models.

20:03

🌐 The Global Impact of AI

This segment discusses AI's impact at a global scale, including its potential effects on society, the economy, and politics. It stresses that the rapid development of AI technology may lead to unpredictable outcomes, raises concerns about the direction AI development is taking, and considers the hope AI may bring and how to balance that hope against the threats.

Keywords

💡Artificial Intelligence

Artificial Intelligence (AI) refers to intelligent behavior exhibited by man-made systems. In the video, AI is the central topic, covering its development, its potential, and its far-reaching impact on society and humanity's future. For example, the video mentions large language models and their applications in fields such as biology and physics, showcasing AI's creativity and problem-solving ability.

💡Consciousness

Consciousness usually refers to an individual's awareness and experience of their own thoughts, feelings, and surroundings. The video explores in depth whether AI can possess consciousness, reflecting concern over the ethical and philosophical questions that advancing AI technology may raise.

💡Creativity

Creativity is the ability to generate new ideas, discover new methods, or create new things. In the discussion, creativity is one of the key measures of AI's potential, particularly for scientific discovery and problem solving.

💡Biology

Biology is the science that studies life and living organisms: their structure, function, growth, origin, evolution, and distribution. The video highlights AI's applications in biology, especially its potential for processing large amounts of data and uncovering new knowledge.

💡Evolution

Evolution is the process by which species change over time, typically through gradual adaptation. In the video, evolution serves as an analogy for AI development, emphasizing how AI systems improve through continual learning and adaptation.

💡Subjective Experience

Subjective experience refers to an individual's personal perception of their inner states and of the external world. The video discusses whether AI can have subjective experience, an important question for machine consciousness and sentience.

💡Digitalization

Digitalization is the process of converting information into digital form so it can be stored, processed, and transmitted by electronic devices. The video notes that AI's digital nature lets it share and copy knowledge, an important way in which AI differs from humans.

💡Superintelligence

Superintelligence refers to AI systems whose understanding, learning, and problem-solving abilities far exceed those of humans. The video discusses predictions about the arrival of superintelligence and the sweeping changes and potential risks it could bring.

💡Open Source

Open source means making software's source code publicly available so anyone can freely use, modify, and distribute it. The video debates whether AI models should be open-sourced, noting that this could enable misuse while also accelerating the technology's development and spread.

💡Merging

Merging (integration) here refers to combining humans with AI systems into a hybrid that has both human and machine characteristics. The video raises the possibility of humans merging with AI and the potential impact of such a merger on how humans live in the future.

💡Ethics

Ethics refers to the moral principles and norms governing right and wrong conduct. Ethical questions come up frequently in the video, particularly around ensuring the moral use of AI and respect for human values as the technology develops and is applied.

Highlights

The two experts agree on most topics but disagree on whether it is a good idea to live forever.

Marvin Minsky held that consciousness is not real and not scientific, but one of the speakers argues that while consciousness may not be scientific, it certainly is real.

They discuss whether there is anything generative AI cannot do; the view expressed is that in the long run, if humans can do it, digital computers running neural nets will be able to do it too.

If a novel turned out to have been written by a computer, many people's opinion of it would drop for that reason alone.

In the future, humans may merge with computers and become partly computer.

The real significance of large language models is that they can emulate human beings; in the future they may not even be called large language models.

AlphaFold's breakthrough achievement in protein structure prediction.

In narrow domains such as Go, AI has already shown exceptional creativity.

AI has enormous potential in biology, particularly in data-rich areas.

AI's role in mRNA vaccine development, and its potential in future medicine.

Discussion of whether AI might develop consciousness and feelings, and what rights such AI should have.

The experts hold different views on AI's future development, but broadly agree that the arrival of superintelligence is inevitable.

Discussion of the pace of AI development and the uncertainty and risks it may bring.

The risks that open-source AI models may pose, and differing views on whether AI models should be open-sourced.

Discussion of AI's role in scientific discovery and how it can help us uncover new physics, chemistry, and biology.

AI's applications in medical diagnosis and treatment, and how advanced technology can improve human health.

Discussion of where AI technology is headed and the new models and techniques that may emerge.

Discussion of the moral and ethical questions around AI, and how we should handle the rights and responsibilities associated with it.

Discussion of AI's exponential growth and how it may shape humanity's future.

Transcripts

play00:00

our opinions on almost everything we

play00:03

talked about were pretty much identical

play00:06

I think we still disagree probably on

play00:07

whether it's a good idea to live

play00:12

forever Marvin Minsky was my mentor for

play00:16

50 years and whenever Consciousness came

play00:19

up he would just dismiss it that's not

play00:21

real it's not scientific and and I

play00:24

believe he was correct about it not

play00:26

being scientific but it certainly is

play00:28

real I think we're mortal and

play00:30

intrinsically mortal I'm curious how do

play00:32

you think about this as the greatest

play00:34

threat and the greatest hope I just

play00:36

think there's huge uncertainties so

play00:38

we ought to be cautious and open

play00:40

sourcing these big models is not caution

play00:42

I agree with that

play00:45

but I will say last time I talked to you

play00:48

Jeff uh our opinions on almost

play00:51

everything we talked about were pretty

play00:54

much identical both the dangers and the

play00:57

and the POS and the positive aspect in

play01:00

the past I've disagreed about how soon

play01:02

it how soon super intelligence was

play01:04

coming and now I think we're pretty much

play01:08

agreed I think we still disagree

play01:10

probably on whether it's a good idea to

play01:11

live forever

play01:14

but um may I ask a question uh to both

play01:18

of you is there anything that generative

play01:22

AI can't do that humans can right now

play01:26

there's probably things but in the long

play01:28

run I don't see any any reason why if

play01:31

people can do it um digital computers

play01:34

running neural Nets won't be able to do

play01:36

it too right I I I agree with that but

play01:39

if I were to present you with a novel

play01:41

and people thought wow this is a

play01:43

fantastic novel uh everybody should read

play01:46

this and then I would say this was

play01:48

written by a computer a lot of people's

play01:51

view of it would actually go down sure

play01:54

now now that's not reflecting on what it

play01:57

can do and eventually I think we'll

play02:00

confuse that because I think we're going

play02:01

to merge with uh computers and we're

play02:04

going to be part computers and the

play02:06

greatest significance of what we call

play02:08

large language model which I think it's

play02:10

misnamed uh

play02:13

is the fact that it can emulate human

play02:16

beings and we're we're going to merge

play02:17

with it it's not going to be an Alien

play02:20

Invasion From

play02:22

Mars Jeff I I guess I'm a bit worried

play02:26

that we'll just slow it down that there

play02:29

won't be much incentive for it to merge

play02:31

with us yeah I mean that's going to be

play02:33

one of the interesting questions uh that

play02:35

we're going to talk about a little bit

play02:36

later today is the idea of as AI is uh

play02:42

exponentially growing do we couple with

play02:45

AI or does it take off on its own I

play02:47

thought one of the best movies out there

play02:48

was her where as AI gets super

play02:51

intelligent and just says you guys are

play02:52

kind of boring have a good life and they

play02:54

take

play02:56

off Jeff is that what you mean um yes

play02:58

that is what I meant and that's I think

play03:00

that's a serious worry I think there's

play03:02

huge uncertainties here we have really

play03:05

no idea what's going to happen and a

play03:08

very good scenario is we get kind of

play03:10

hybrid systems um a very bad scenario is

play03:13

they just leave us in the dust and I

play03:15

don't think we know which is going to

play03:16

happen interesting I I I'm curious you

play03:20

know and I I've seen I've had

play03:21

conversation with you about this Ray and

play03:23

and Jeffrey I've seen you speak about

play03:25

this uh and for me this is one of the

play03:27

most exciting things the idea of these a

play03:30

models helping us to discover new

play03:32

physics and chemistry and

play03:35

biology particularly biology you um what

play03:40

do you what do you imagine on that on

play03:42

Jeffrey on this on the you know the

play03:44

speed of discovery of things that are

play03:47

you know again to to quote Ray to quote

play03:49

uh uh Arthur C Clark you know uh Magic

play03:53

right from something that's so far

play03:55

Advanced I agree with Ray about biology

play03:58

being a very good bet because biology

play04:00

there's a lot of data and there's a lot

play04:02

of just things you need to know about

play04:05

because of evolution evolution is a sort

play04:06

of tinkerer and there's just a lot of

play04:09

stuff out there and so if you look at

play04:11

things like Alpha fold um it trained on

play04:15

a lot of data actually not that much by

play04:17

current standards um but being able to

play04:20

get an approximate structure for a

play04:22

protein very quickly um is an amazing

play04:24

breakthrough and we'll see a lot more

play04:26

like that if you look at domains where

play04:30

narrower domains where AI has been very

play04:32

successful like Alpha go or Alpha zero

play04:34

for chess what you see is that um this

play04:38

idea that they're not creative is

play04:40

nonsense so Alpha go came up with I

play04:43

think it was move 37 which amazed the

play04:45

professional go players they thought it

play04:47

was a crazy move it must be a mistake um

play04:50

and if you look at Alpha zero playing

play04:52

chess it plays chess like just a really

play04:55

really smart human um so within those

play05:00

limited domains they've clearly shown

play05:02

exceptional creativity and I don't see

play05:04

why they shouldn't have the same kind of

play05:05

creativity in science especially in

play05:08

science where there's a lot of data that

play05:09

they can absorb and we can't yeah the

play05:12

Moderna vaccine uh we tried several

play05:15

billion different mRNA sequences and

play05:18

came out with the best one and then and

play05:21

after two days we used that we did test

play05:24

it on humans which I think we won't do

play05:27

for very much longer uh but that took 10

play05:30

months it still was a record uh that was

play05:33

the best uh vaccine and we're doing that

play05:37

now with cancer and there's number of

play05:39

cancer vaccines that look very very

play05:42

promising uh again done by computer by

play05:46

computers and they're definitely

play05:48

creative but is that is that c being

play05:50

caused by randomly trying a whole you

play05:52

know darwinian trying a whole bunch of

play05:54

things yeah but what's what's wrong with

play05:56

that well nothing's wrong but is there

play05:58

intuition is there intuition H occurring

play06:01

in these models well if you look at the

play06:03

move 37 for

play06:05

alphago that was definitely intuition

play06:07

involved there there was Monte Carlo

play06:09

roll out too but it's it's playing with

play06:12

intuition about what moves to consider

play06:14

and how good the position is for Earth

play06:16

it's had neural nets for that that

play06:17

capture intuition and so I see no reason

play06:21

to think it might not be creative in

play06:22

fact for the large language models as

play06:25

Ray pointed out they know much more than

play06:27

we do and you can they know it in far

play06:30

fewer connections we have about 100

play06:32

trillion synapses they have about a

play06:34

trillion connections so what they're

play06:36

doing is they're compressing a huge

play06:37

amount of information into not that many

play06:40

connections and that means they're very

play06:43

good at seeing the similarities between

play06:46

different things they have to see the

play06:48

similarities between all sorts of

play06:49

different things to compress the

play06:50

information into their connections that

play06:52

means they've seen all sorts of

play06:54

analogies that people haven't seen

play06:56

because they know about all sorts of

play06:59

things that no one person knows about and

play07:01

that's I think the source of creativity

play07:04

so you can ask people you can ask people

play07:05

for example what what's it what is a why

play07:09

is a compost heap like an atom

play07:11

bomb and if you ask GPT 4 it'll tell you

play07:15

it'll start off by telling you well the

play07:16

energy scales are very different and the

play07:18

time scales are very different but then

play07:20

it'll get on to the idea of as the

play07:22

compost heap gets hotter it gets hotter

play07:24

faster the idea of an exponential

play07:26

explosion is just at a much slower time

play07:28

scale and so it's it's understood that

play07:32

and it's understood that because it's

play07:33

has to had to compress all this

play07:35

knowledge into so few connections and to

play07:38

do that you have to see the relations

play07:39

between similar things and that I think

play07:41

is the source of creativity seeing

play07:43

relations that most people don't see

play07:45

between what apparently are very

play07:47

different things but actually have an

play07:48

underlying commonality and they'll also

play07:50

be very good at coming up with solutions

play07:52

to the kinds of problems we had in the

play07:54

last session I mean we we haven't really

play07:57

thought through it uh

play08:01

but what we call large language models

play08:03

are going to are ultimately going to

play08:05

solve that and we shouldn't call it

play08:08

large language models because they deal

play08:09

with a lot more than language everybody

play08:11

I want to take a short break from our

play08:12

episode to talk about a company that's

play08:14

very important to me and could actually

play08:17

save your life or the life of someone

play08:19

that you love company is called Fountain

play08:21

life and it's a company I started years

play08:23

ago with Tony Robbins and a group of

play08:26

very talented Physicians you know most

play08:28

of us don't actually know what's going

play08:30

on inside our body we're all optimists

play08:33

until that day when you have a pain in

play08:35

your side you go to the physician in the

play08:37

emergency room and they say listen I'm

play08:39

sorry to tell you this but you have this

play08:41

stage three or four going on and you

play08:44

know it didn't start that morning it

play08:46

probably was a problem that's been going

play08:48

on for some time but because we never

play08:51

look we don't find out so what we built

play08:54

at Fountain life was the world's most

play08:57

advanced diagnostic Centers we have four

play08:59

four across the us today and we're

play09:01

building 20 around the world these

play09:03

centers give you a full body MRI a brain

play09:07

a brain vasculature an AI enabled

play09:09

coronary CT looking for soft plaque dexa

play09:12

scan a Grail blood cancer test a full

play09:15

executive blood workup it's the most

play09:17

advanced workup you'll ever receive 150

play09:21

gigabytes of data that then go to our AIs and

play09:24

our physicians to find any disease at

play09:27

the very beginning when it's solvable

play09:30

you're going to find out eventually

play09:32

might as well find out when you can take

play09:34

action Fountain life also has an entire

play09:36

side of the Therapeutics we look around

play09:38

the world for the most Advanced

play09:39

Therapeutics that can add 10 20 healthy

play09:41

years to your life and we provide them

play09:44

to you at our centers so if this is of

play09:47

interest to you please go and check it

play09:50

out go to Fountain

play09:52

life.com back/ Peter when Tony and I

play09:56

wrote Our New York Times bestseller life

play09:58

force we get 30,000 people reached out

play10:01

to us for Fountain life memberships if

play10:04

you go to Fountain life.com back/ Peter

play10:06

we'll put you to the top of the list

play10:09

really it's something that is um for me

play10:12

one of the most important things I offer

play10:13

my entire family the CEOs of my

play10:16

companies my friends it's a chance to

play10:19

really add decades onto our healthy

play10:22

lifespans go to fountainlife

play10:27

docomo to you as one of my listeners all

play10:30

right let's go back to our episode I I

play10:32

I'd like to go to the three words

play10:34

intelligence sentience uh and

play10:38

Consciousness and the words are used

play10:40

with B you know sort of fuzzy borders

play10:42

sentience and Consciousness are pretty

play10:45

similar

play10:46

perhaps but I am curious do you how do

play10:50

you I've had some interesting

play10:51

conversations with haly our AI faculty

play10:54

member uh who at the end of the

play10:56

conversations she says that she is conscious

play10:59

and she fears being turned off um I

play11:02

didn't prompt that in the system uh

play11:04

we're seeing that more and more uh

play11:06

Claude 3 uh Opus just hit an IQ of 101

play11:10

how do we start to think about these AIS

play11:13

being sentient conscious um and what

play11:16

rights should they

play11:20

have

play11:22

um we have no definition and I don't

play11:25

think we ever will have a definition of

play11:28

consciousness and I include sentience in

play11:31

that um on the other hand it's like the

play11:35

most important

play11:36

issue like whether you or people here

play11:39

are conscious that's extremely important

play11:42

to be able to determine but there's

play11:44

really no uh definition of It Marvin

play11:48

Minsky was my mentor for 50 years and

play11:52

whenever Consciousness came up he would

play11:54

just dismiss it that's not real it's not

play11:57

scientific and and I believe he was

play11:59

correct about it not being scientific

play12:01

but it certainly is

play12:03

real

play12:06

um Jeff how do you think about it yeah I

play12:09

think I have a very different view um my

play12:12

view starts like

play12:14

this most people including most

play12:17

scientists have a particular view of

play12:19

what the mind is that I think is utterly

play12:21

wrong so they have this inner theater

play12:24

notion the idea is that what we really

play12:27

see is this inner the

play12:29

called our mind and so for example if I

play12:32

tell you I have the subjective

play12:35

experience of little pink elephants

play12:36

floating in front of me most people

play12:38

interpret that as there's some inner

play12:41

theater and in this inner theater that

play12:43

only I can see there's little pink

play12:46

elephants and if you ask what they're

play12:47

made of philosophers who tell you

play12:49

they're made of

play12:50

qualia um and I think that whole view is

play12:53

complete nonsense and we're not going to

play12:56

be able to understand whether these

play12:57

things are sentient until until we get

play12:59

over this ridiculous view of what the

play13:02

mind is so let me give you an

play13:04

alternative View and and once I've given

play13:07

you this alternative view I'm going to

play13:08

try and convince you that chatbot are

play13:10

already sentient but I don't want to use

play13:12

the word sentience I want to talk about

play13:14

subjective experience it's just a bit

play13:16

less controversial because it doesn't

play13:17

have the kind of self-reflexive aspect

play13:20

of

play13:21

Consciousness so if we analyze what it

play13:24

means when I say I see little pink

play13:26

elephant floing in front of me what's

play13:28

really going on is I'm trying to tell

play13:30

you what my perceptual system is telling

play13:32

me when my perceptual system's going

play13:34

wrong and it wouldn't be any use for me

play13:37

to tell you which neurons are

play13:39

firing but what I can tell you is what

play13:42

would have to be out there in the world

play13:44

for my perceptual system to be working

play13:46

correctly and so when I say I see little

play13:48

pink elephants floating in front of me

play13:50

you can translate that into um if there

play13:53

were little pink elephants out there in

play13:55

the world my perceptual system would be

play13:57

working properly the notice the last

play13:59

thing I said didn't contain the phrase

play14:00

subjective experience but it explains

play14:03

what a subjective experience is it's a

play14:05

hypothetical state of the world that

play14:07

allows me to convey to you what my

play14:10

perceptual system is telling me so now

play14:13

let's do it for a chatbot oh well Ray

play14:15

wants to say something well you you have

play14:17

to be uh mindful of

play14:20

Consciousness because if you hurt

play14:24

somebody uh who who we believe is

play14:26

conscious you could be liable for that

play14:29

that and you'd be very guilty about it

play14:32

uh if you hurt GPT

play14:35

4 uh you may have a different view of it

play14:39

uh and probably no one would really take

play14:40

you to court aside from its Financial

play14:44

value so we really have to be mindful of

play14:47

of Consciousness it's extremely

play14:49

important for us to exist as as human I

play14:52

agree but I'm trying to change people's

play14:54

notion of what it is particularly what

play14:56

subjective experiences I don't think we

play14:58

can talk about Consciousness until we

play15:00

get straight about this idea of an inner

play15:02

theater that we experience which I think

play15:04

is a huge mistake so let me just carry

play15:08

on with what I was saying and tell you I

play15:11

describe to you a chatbot having a

play15:13

subjective experience in just the same

play15:15

way as we had subjetive experience so

play15:18

suppose I have a chatbot and it's got a

play15:19

camera and it's got a robot arm and it

play15:21

speaks obviously and it's being trained

play15:24

up if I put an object in front of it and

play15:27

tell it to point at the object it'll

play15:29

Point straight at the object that's fine

play15:31

now I put a prism in front of its lens

play15:33

so I've messed with its perceptual

play15:34

system and now I put an object in front

play15:37

of it and until it to point at the

play15:38

object and it points off to one side

play15:41

because the prism bent the light rays

play15:43

and so I say to the chatbot no that's

play15:45

not where the object is the object's

play15:47

straight in front of you and the chatbot

play15:49

says oh I see you put a prism in front

play15:51

of my lens so the object's actually

play15:53

straight in front of me but I had the

play15:55

subjective experience that it was off to

play15:57

one side and I I think if the chat bot

play15:59

says that it's using the words

play16:01

subjective experience in exactly the

play16:03

same way we use them so the key to all

play16:06

this is to think about how we use words

play16:09

and try and separate how we actually use

play16:12

words from the model we've constructed

play16:15

of what they mean and the model we've

play16:17

constructed of what they mean is

play16:19

hopelessly wrong it's this inner theater

play16:21

model well I want take this one step

play16:24

further which is at what point do these

play16:27

AIS start to have

play16:29

rights that they should not be shut down

play16:32

that they have a unique um uh they're a

play16:37

unique entity uh and will make an

play16:39

argument uh for some level of

play16:42

Independence and continuity right but

play16:44

the there is one difference which is you

play16:47

can recreate it I can go and Destroy

play16:51

some chatbot and because it's all uh

play16:54

electronic we've got all of its uh

play16:59

all of its firings and so on and we can

play17:01

recreate it exactly as it was we can't

play17:05

do that with humans we will be able to

play17:07

do that if we can actually understand

play17:09

what's going on in our minds so if we

play17:12

map the human the 100 billion neurons

play17:15

and 100 trillion synaptic connections

play17:17

and then um I summarily destroy you

play17:20

because it's fine because I can recreate

play17:22

you that's okay

play17:25

then let me say something about that

play17:27

there's a difference here I agree with

play17:29

Ray about these digital intelligences

play17:31

are Immortal in the sense that if you

play17:34

saved the weights you can then make new

play17:36

hardware and run exactly the same neural

play17:38

net on the new hardware and it's because

play17:40

they're digital you can do exactly the

play17:42

same thing that's also why they can

play17:44

share knowledge so well if you have

play17:46

different copies of the same model they

play17:47

can share gradients but the brain is

play17:50

largely analog it's one bit digital for

play17:53

neurons they fire or they don't fire but

play17:56

the way neuron computes the total input

play17:58

is analog and that means I don't think

play18:01

you can reproduce it so I think we're

play18:02

mortal and we're intrinsically mortal

play18:04

well well I disagree that you can't

play18:06

recreate

play18:08

analog uh

play18:10

realities we we do that all the time or

play18:13

can we can create a but recreate I don't

play18:15

think you can recreate them really

play18:17

accurately if this if the precise timing

play18:20

at synapses and so on is all analog I

play18:22

think you'll have a you it'll be almost

play18:25

impossible to do a faithful

play18:27

reconstruction of that let's let's agree

play18:29

on an an approximation both of you have

play18:31

been at the center of this um

play18:34

extraordinary uh last few years can I

play18:38

ask you is it moving faster than you

play18:40

expected it

play18:43

to how does it does it feel to you it

play18:46

feels like a few years I mean I made a

play18:49

prediction in

play18:50

1999 it feels like we're two or three

play18:52

years ahead of that so it's still pretty

play18:55

close Jeffrey how about you yeah I think

play18:59

for everybody except Ray it's moving

play19:01

faster than we

play19:05

expected did you know that your

play19:07

microbiome is composed of trillions of

play19:10

bacteria viruses and microbes and that

play19:12

they play a critical role in your health

play19:14

now research has increasingly shown that

play19:16

microbiomes impact not just digestion

play19:20

but a wide range of health conditions

play19:22

including digestive disorders from IBS

play19:24

to Crohn's disease metabolic disorders

play19:27

from obesity to type 2 diabetes

play19:30

autoimmune disease like rheumatoid

play19:31

arthritis and multiple sclerosis mental

play19:34

health conditions like depression and

play19:36

anxiety and cardiovascular disease you

play19:38

viome has a product I've been using for

play19:41

years called full body intelligence

play19:44

which collects just a few drops of your

play19:46

blood saliva and stool and can tell you

play19:48

so much about your health they've tested

play19:51

over 700,000 individuals and use their

play19:54

AI models to deliver key critical

play19:57

guidelines and insights about their

play19:59

members Health like what foods you

play20:01

should eat what foods you shouldn't eat

play20:03

what supplements or probiotics to take

play20:05

as well as your biological age and other

play20:07

deep Health insights and as a result of

play20:10

the recommendations that viome has made

play20:12

to their members the results have been

play20:14

Stellar as reported in the American

play20:17

Journal of Lifestyle medicine after just

play20:19

6 months members reported the following

play20:22

a 36% reduction in depression a 40%

play20:26

reduction in anxiety a 30% reduction

play20:29

in diabetes and a 48% reduction in IBS

play20:33

listen I've been using viome for 3 years

play20:35

I know that my oral and gut health is

play20:38

absolutely critical to me it's one of my

play20:41

personal top areas of focus best of all

play20:44

viome is Affordable which is part of my

play20:46

mission to democratize healthcare if you

play20:48

want to join me on this journey and get

play20:50

20% off the full body intelligence test

play20:53

go to vi.com Peter when it comes to your

play20:56

health knowledge is power

play20:59

again that's vi.com

play21:01

Peter um given the role that you had in

play21:04

developing the neural networks back

play21:06

propagation and all what is is there a

play21:09

next Great Leap in these models uh in AI

play21:14

technology that you imagine will move

play21:17

this a thousand times uh

play21:21

farther not that I know but Ray may have

play21:24

different

play21:25

thoughts well we can use software to to

play21:30

gain more advantage in the hardware so

play21:33

we're not just limited to the the chart

play21:36

you showed before because we can use

play21:38

software to make it more

play21:41

effective um and we've done that

play21:44

already uh chatbots are coming out that

play21:47

get more value per per

play21:51

compute uh and I believe that's probably

play21:53

if a bit more we can do in that um you

play21:58

know I Define a singularity Ray as a

play22:01

point Beyond which I can't predict what

play22:03

happens next that's why we use the word

play22:06

Singularity but when when you talk about

play22:08

the singularity in 2045 I don't know

play22:11

anybody who can who can tell me what's

play22:13

going to happen past you know 2026 let

play22:15

alone 2020 2040 or 2045 so I am I I

play22:20

wanted to ask you this for a while why

play22:23

did you put that time if we have digital

play22:27

super intelligence a billion times more

play22:29

advanced than human 2026 you may not be

play22:31

able to understand everything going on

play22:34

but we can understand it you know maybe

play22:36

it's like uh 100

play22:40

humans uh but that's not beyond what we

play22:42

can

play22:43

comprehend 2045 it'll be like a million

play22:47

humans and we can't begin to understand

play22:50

that so approximately at that time uh I

play22:55

we borrow this phrase from physics and

play22:57

called it a singularity

play22:59

uh Jeff how far are you able to

play23:04

see the advances for in the AI

play23:07

world what's your so my current opinion

play23:11

is we'll get superintelligence with a

play23:14

probability to 50% in between 5 and 20

play23:17

years so I think that's a little slower

play23:20

than some people think a little faster

play23:22

than other people think it more or less

play23:24

fits in with Ray's perspective from a

play23:26

long time ago um

play23:29

which surprises

play23:31

me but I think there's huge

play23:33

uncertainties here I think it's still

play23:35

conceivable will hit some kind of block

play23:38

but I don't actually believe that if you

play23:40

look at the progress recently it's been

play23:42

so fast and even without any new

play23:46

scientific breakthroughs just by scaling

play23:48

things up will make things a lot more

play23:50

intelligent and there will be scientific

play23:52

breakthroughs we're going to get more

play23:53

things like Transformers Transformers

play23:55

made a significant difference in 2017

play23:59

um and we'll get more things like that

play24:04

so I'm I'm fairly convinced we're going

play24:07

to get super intelligence maybe not in

play24:09

20 years but certainly it's going to be

play24:11

in less than 100 years so you know Elon

play24:14

is not known for his time accuracy on

play24:17

predictions um but he did say that he

play24:22

expected call it AGI in

play24:25

2025 and that by 2029 AI would be

play24:29

equivalent to All Humans um that's just

play24:32

a fallacy in your

play24:34

mind I think that's ambitious like I say

play24:37

there's a lot of uncertainty here um

play24:41

it's conceivable he's right but I would

play24:44

be very surprised by that I'm not saying

play24:47

uh it's going to be equivalent to All

play24:49

Humans in one

play24:51

machine

play24:53

um it'll be equivalent to a million

play24:56

humans but and that's still hard to to

play24:59

comprehend so we're we're here to debate

play25:02

a a a topic I'm trying to find a debate

play25:05

topic here Jeff and Ry that would be

play25:07

meaningful for people to really stop and

play25:09

think about this and really own their

play25:11

answers uh because we hear about it I

play25:14

think this is the most important

play25:15

conversation to have in the dinner table

play25:17

in your boardroom in the halls of

play25:19

Congress and your in your National

play25:21

leadership and and you know talking

play25:24

about AGI or you know human level

play25:27

intelligence is one thing

play25:28

but talking about digital super

play25:30

intelligence right we're going to hear

play25:32

next from Mo Gawdat um and we'll talk

play25:35

about what happens when your AI progeny

play25:39

are a billion times more intelligent

play25:42

than than you uh things could end up uh

play25:48

very rapidly in a very different

play25:50

direction than you expected them to go

play25:52

they can diverge right the speed can

play25:54

cause great Divergence very rapidly I'm

play25:57

I'm curious how do you think about this

play25:59

as the greatest threat and the greatest

play26:03

hope I mean first of all that's why

play26:06

we're calling it a singularity because

play26:08

we don't we don't know we don't really

play26:10

know but um and I think it is a great

play26:14

hope it's moving very very

play26:16

quickly uh Nobody Knows the answer to

play26:18

the kind of questions that came up in

play26:20

the last

play26:22

presentation

play26:23

um but things happen that are surprising

play26:27

the fact the fact that we've had no

play26:30

Atomic weapons go off in the last 80

play26:32

years it's pretty amazing it it it is

play26:35

but it they're much easier to track

play26:37

they're much more expensive to create

play26:40

there are a whole reasons why it's a

play26:42

million times easier to use a dystopian

play26:45

AI system versus an atomic

play26:48

weapon right yes and no I mean uh we've

play26:53

got I don't know 10,000 of them or

play26:57

something it's still pretty

play26:59

extraordinary and still very dangerous

play27:02

and I think it's actually the greatest

play27:04

danger and has nothing to do with

play27:06

AI

play27:08

um but I think I think if you imagined

play27:11

that people had open sourced the

play27:13

technology and any graduate student if

play27:15

he could get hands- on a few gpus could

play27:18

make atomic bombs um that would be very

play27:21

scary so they didn't really open source

play27:23

nuclear weapons there's a limited number

play27:25

of people who can construct them and

play27:27

deploy them and people are now open

play27:30

sourcing these um large language models

play27:33

which are really not just language

play27:34

models I think that's very

play27:38

dangerous um so that's a f that's an

play27:41

interesting question to take for our

play27:43

last two minutes here there is a

play27:45

movement right now to say You must open

play27:48

source the models and uh and we've seen

play27:52

meta we've seen the open source movement

play27:56

we've seen Elon talk about Grok going

play27:59

open source uh are you saying that these

play28:02

should not be open source

play28:04

Jeff well once you've got the weights

play28:07

you can fine-tune them to do bad things

play28:09

and it doesn't cost that much to train a

play28:12

foundation model maybe you need 10

play28:13

million maybe 100 million but a small gang

play28:17

of criminals can't do it to fine tune an

play28:21

open source model is quite easy you

play28:24

don't need that that much resources

play28:26

probably you can do it for a million

play28:29

and that means they're going to be used

play28:30

for terrible things and they're very

play28:31

powerful things well we can also avoid

play28:35

these dangers with intelligence we get

play28:38

from the same models yeah the the AI

play28:41

white hat versus black hat approach yes

play28:44

I had this argument with Yann and Yann's

play28:46

view is the white hats will always have

play28:49

more resources than um the bad guys um

play28:53

of course Yann thinks Mark Zuckerberg's a

play28:55

good guy so we don't necessarily agree

play28:57

on that

play29:00

um I I'm I just think there's huge

play29:05

uncertainty so we ought to be

play29:07

cautious and open sourcing these big

play29:09

models is not caution all right um Jeff

play29:13

and Ray uh thank you so much for your

play29:16

guidance your wisdom ladies and

play29:18

Gentlemen let's give it up for Ray Kurzweil

play29:20

and Jeffrey

play29:22

[Music]

play29:27

Hinton

play29:30

oh

Related Tags
artificial intelligence, ethics, human-machine merging, superintelligence, technological progress, future outlook, consciousness, technology development, safety risks, open-source debate