The Singularity Is Nearer featuring Ray Kurzweil | SXSW 2024

SXSW
18 Mar 2024 · 59:21

Summary

TLDR: In this conversation, Ray Kurzweil, a scientist who has worked in artificial intelligence for more than 61 years, shares his insights on the future of AI. Kurzweil predicts that computers will pass the Turing Test by 2029, and that we will reach the technological singularity by 2045, at which point humans will be able to back up their brains and digitize their minds. He discusses the rapid progress of large language models and their potential for creating new knowledge and solving problems. He also explores the importance of nanobots and brain-machine interface technology, and the ethical challenges these technologies may bring. Despite the risks, he believes technological progress will remain a positive force for humanity, and he encourages young people to follow their passions and contribute to a more intelligent and equal future.

Takeaways

  • 🧠 We are living through rapid progress in AI, especially in large language models, which have advanced faster than even some experts expected.
  • ⚙️ Progress in large language models and AI systems does not depend entirely on understanding how the human brain works; it depends more on computing power and the number of connections.
  • 📈 The Kurzweil Curve shows the exponential growth of technological progress, which explains why large language models have only recently become feasible.
  • 🧪 By simulating biology, AI has already helped create new medical solutions such as the Moderna vaccine, demonstrating its potential for creative tasks.
  • 🤖 Nanobots and brain-machine interfaces are the other two key technologies for reaching the singularity, though they are developing more slowly than AI.
  • 🧵 The pace of brain-machine interface and nanobot development is constrained by the ethics and safety of human trials.
  • 💡 Technological progress, including AI and nanotechnology, will continue to drive economic growth and social equality.
  • 🔮 The singularity is expected by 2045, when humans will be able to back up and restore the full contents of the brain.
  • 🧐 Although predictions about the future are uncertain, history suggests the benefits of technological progress usually outweigh the risks.
  • 👴 As lifespans extend, ideas about life planning and careers will change fundamentally, and retirement age may matter less.
  • 🌐 As technology develops, views of work will shift, with creativity and personal interest becoming key factors in choosing a career.

Q & A

  • How long has Ray Kurzweil worked in artificial intelligence?

    -Ray Kurzweil has worked in artificial intelligence for 61 years.

  • When did Kurzweil predict the Turing Test would be passed?

    -In 1999, Kurzweil predicted that computers would pass the Turing Test by 2029.

  • What is Kurzweil's view of consciousness?

    -Kurzweil considers consciousness extremely important, but not a scientific concept, because there is no way to prove whether an entity is conscious.

  • Does Kurzweil think human emotions and feelings can be captured by computation and mathematics?

    -Yes. He argues that feelings, particularly the behavior and feelings of being with someone you love, depend on connection patterns and can therefore be captured by computation and mathematics.

  • What does Kurzweil mean by longevity escape velocity?

    -Longevity escape velocity is the point at which scientific progress gives back as much healthy time each year as aging takes away, and eventually more, thereby extending lifespan.

  • When does Kurzweil predict the singularity will arrive?

    -Kurzweil predicts the singularity will arrive in 2045, when technology reaches a tipping point and human life changes profoundly.

  • How does Kurzweil view the ethical challenges of merging humans with machines?

    -He believes the merger will raise ethical challenges, such as the distribution of power, the definition of talent, and fairness among individuals, but that these problems can be solved through human creativity and cooperation.

  • What is Kurzweil's view on pursuing passion rather than just income?

    -He advises young people to pursue what they are passionate about, because the future will hold far more opportunities and possibilities than income alone suggests.

  • Was Kurzweil surprised by the rapid progress of large language models?

    -Somewhat. He expected these advances to arrive a couple of years later than they did.

  • How does Kurzweil view the effect of human-machine merger on individual uniqueness?

    -He believes that even when humans merge with machines and share memories and information, individuals will remain unique, because everyone has different interests and makes different choices.

  • What concerns does Kurzweil have about the future of AI?

    -He worries about the risks of rapid progress, such as nanotechnology being misused to catastrophic effect, but believes we will develop technologies capable of preventing those risks.

Outlines

00:00

😀 Introduction and a Long Career in AI

Introduces Ray's background: he has worked in AI longer than anyone else alive. Covers his mentorship under Marvin Minsky and their predictions about the future of AI, including language models and the Turing Test.

05:02

📈 The Accuracy of the Kurzweil Curve

Ray explains the Kurzweil Curve, which shows the exponential growth of computing power. He discusses the feasibility of large language models and how technological progress affects energy prices and the growth of renewables.

10:06

🧠 Comparing Large Language Models to the Brain

Ray discusses how large language models and the human brain differ in the number and organization of their connections. He also touches on model efficiency and the connection counts future models may reach.

15:09

🤖 The Road to Artificial General Intelligence (AGI)

Ray discusses the computing power needed to reach artificial general intelligence and the importance of software and learning algorithms. He also describes how simulated biology lets us tackle non-language problems such as treating disease.

20:11

🧐 Is Consciousness Scientific?

Ray and the audience explore whether consciousness is scientific and whether we can prove that something is conscious. He also discusses how emotions and human behavior might be expressed through computation and mathematics.

25:12

🚀 Longevity Escape Velocity and the Singularity

Ray explains longevity escape velocity, the point where scientific progress extends our lives faster than aging shortens them. He discusses the singularity and the ability to back up the contents of the brain by 2045.

30:19

💡 Creativity and the Future of AI

Ray discusses AI's potential for creativity, including simulating biology to develop vaccines and treat disease. He emphasizes AI's ability to try every possibility and create something new.

35:21

🧬 Nanobots and Brain-Machine Interfaces

Ray discusses the development of nanobots and brain-machine interfaces and their importance for reaching the singularity. He also notes the slow pace of these technologies and the ethical issues involved.

40:23

🤔 The Future Roles of Humans and Machines

Ray discusses the future relationship between human and machine intelligence and the human role in solving complex problems. He also mentions the potential for broader access to technology and greater equality.

45:26

🌟 Optimism About the Future

Ray expresses optimism about the future of technology, discussing how it improves lives and reduces poverty. He also mentions the risks technological development may bring and how we can prepare for future challenges.

50:30

📚 Preparing the Younger Generation

Ray advises the younger generation to follow their passions rather than chase income alone. He also discusses how longevity will change personal life planning.

55:32

🤖 The Trustworthiness of AI

Ray discusses trust in AI, including how to ensure AI systems are reliable as they are integrated into human lives and decision-making.

⚙️ Technology Risks and Future Challenges

Ray discusses the risks technological development may bring, such as catastrophic misuse of nanotechnology, and strategies for preventing them.


Keywords

💡Artificial Intelligence (AI)

Artificial intelligence refers to tasks performed by computer systems that are normally associated with human intelligence, such as learning, reasoning, problem solving, knowledge understanding, speech recognition, visual perception, and natural language processing. In the video, Ray Kurzweil discusses his long career in AI and how AI can emulate and surpass human abilities. For example, he notes that large language models can generate an essay in seconds, something no human can do.

💡Turing Test

The Turing Test, proposed by Alan Turing, assesses whether a machine exhibits human-level intelligence. If a judge cannot distinguish the machine's responses from those of a real human, the machine is said to pass. In the video, Ray notes that in 1999 he predicted computers would pass the Turing Test by 2029, and that the milestone may arrive a year or two ahead of schedule.

💡Singularity

The singularity is a theoretical future point at which technological growth becomes almost unboundedly fast and unpredictable, usually associated with self-improving superintelligent machines. Ray discusses the concept, predicting that by 2045 technology will reach a point where humans can back up their brains, and a person's brain and consciousness could be recreated even if the body were destroyed.

💡Nanobots

Nanobots, a concept from nanotechnology, are miniature robots able to operate at the nanoscale. In the video, Ray says nanobots are crucial to reaching the singularity because they could enter the brain, understand its activity at the particle level, and enable direct communication with machines.

💡Brain-Machine Interface (BMI)

A brain-machine interface is a direct communication pathway between the brain and an external device. In the video, Ray discusses the role of BMIs in merging human and machine intelligence: they would let us interact with computers faster and more directly, without going through external devices such as phones or keyboards.

💡Consciousness

Consciousness usually refers to an individual's awareness and experience of themselves and their surroundings. In the video, Ray and the host explore the nature of consciousness and whether it can be scientifically defined and measured. Ray argues that consciousness is not a scientific concept, because there is no way to prove whether an entity is conscious.

💡Longevity Escape Velocity

Longevity escape velocity is a concept proposed by Ray Kurzweil: the point at which medical progress outpaces human aging, making indefinitely long lifespans theoretically possible. In the video, Ray predicts that by 2029, the time people gain from scientific progress will equal or exceed the time they lose, reaching longevity escape velocity.
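The claim is simple arithmetic: each calendar year you use up twelve months of longevity, while science currently returns about four months, and the returned amount grows exponentially. A toy Python model of that arithmetic follows; the 25% annual growth rate is an assumption chosen only so the crossover lands at Ray's 2029 date, not a figure from the talk:

```python
# Toy model of longevity escape velocity (LEV).
# Assumption: the months of longevity that science "gives back" per calendar
# year grow exponentially from ~4 months in 2024 (a figure from the talk);
# the 1.25x annual growth rate below is a made-up value fitted to the 2029
# crossover Ray predicts.

def months_returned(year, base_year=2024, base_months=4.0, annual_growth=1.25):
    """Months of healthy life returned by scientific progress in a given year."""
    return base_months * annual_growth ** (year - base_year)

def lev_year(start=2024, end=2060):
    """First year in which science returns at least the 12 months aging takes."""
    for year in range(start, end):
        if months_returned(year) >= 12.0:
            return year
    return None

print(lev_year())  # with these assumed numbers, the crossover lands at 2029
```

Past the crossover year, `months_returned` exceeds 12, which is the sense in which Ray says you "go backwards in time."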

💡Computational Power

Computational power refers to a computer's capacity to perform work, usually tied to processing speed and efficiency. In the video, Ray shows the exponential growth curve of computing price-performance and discusses how that growth makes large language models and future technological advances possible.
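The endpoints Ray quotes for the curve on stage (0.0000007 calculations per second per constant dollar rising to 65 billion) can be checked against his stated doubling rate of once every 1.4 years. A quick sketch using only numbers stated in the talk:

```python
import math

# Price-performance doubles roughly every 1.4 years, per Ray's talk.
DOUBLING_YEARS = 1.4

def growth_factor(years):
    """Multiplier in calculations/sec per constant dollar after `years` years."""
    return 2 ** (years / DOUBLING_YEARS)

def years_to_multiply(factor):
    """Years needed for price-performance to grow by `factor`."""
    return DOUBLING_YEARS * math.log2(factor)

# The two endpoints Ray quotes for the slide's 80-year track record:
factor = 65e9 / 0.0000007
print(f"{factor:.1e}x improvement")                       # ~9.3e+16
print(f"{years_to_multiply(factor):.0f} years at one doubling per 1.4 years")
```

Running it shows a roughly 9.3×10^16-fold improvement, which at one doubling per 1.4 years takes about 79 years, consistent with the 80-year track record he describes.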

💡Creativity

Creativity is the ability to produce novel and valuable ideas. In the video, Ray discusses how computers can be creative by simulating and trying every possibility; in designing the Moderna vaccine, for example, computers simulated several billion different mRNA sequences to find the most effective candidate.
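The "try them all" approach Ray describes is, in outline, an exhaustive search over candidates scored by a simulator. A minimal sketch follows; the candidate space and scoring function are hypothetical stand-ins, not anything from Moderna's actual pipeline:

```python
from itertools import product

# Exhaustive search over candidate designs, in the spirit of "list every
# possibility and try them all". A real pipeline would enumerate mRNA
# sequences and score each one in a biochemical simulator; here the
# "simulator" just rewards proximity to a pretend ideal design.

TARGET = (3, 1, 4)  # hypothetical ideal design the simulator rewards

def simulate(candidate):
    """Stand-in for an (expensive) biology simulation returning a fitness score."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def best_design(candidates):
    """Score every candidate and keep the best: brute force, no heuristics."""
    return max(candidates, key=simulate)

# Every design in a small 5x5x5 space -- the real search covered billions.
print(best_design(product(range(5), repeat=3)))  # → (3, 1, 4)
```

The design choice being debated on stage is visible here: no insight guides the search; the creativity, in Ray's sense, comes from enumerating the whole space and evaluating it faster than any human could.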

💡Large Language Models (LLMs)

Large language models are an AI technique that trains deep learning models on vast amounts of data to understand and generate human language. In the video, Ray notes that LLMs can generate essays and answer questions quickly, and that their capabilities extend beyond language processing to images and other non-language data.

💡Simulated Biology

Simulated biology means using computer simulation to study biological systems and processes. In the video, Ray cites its use in vaccine development, where computer simulations tested different mRNA sequences at a speed traditional methods cannot match.

Highlights

Ray Kurzweil, a scientist with 61 years of AI research experience, predicts that computers will pass the Turing Test by 2029.

Kurzweil argues that artificial general intelligence (AGI) matters more than the Turing Test: it means a computer can emulate any human being.

The pace of progress in large language models exceeded Kurzweil's expectations, which he attributes to the exponential growth of computing power.

Kurzweil argues that technological progress follows exponential curves, as with renewable energy technology.

He predicts technology will reach the singularity by 2045, when humans will be able to back up their brains and achieve a form of immortality.

Kurzweil believes that even after the singularity, differences in personality and interests will remain, but inequality may diminish.

He argues that as technology advances, humans will create new drugs and treatments through simulated biology.

Kurzweil discusses whether consciousness is scientific, concluding it is not a scientific concept because it cannot be measured or proven.

He predicts that nanobots and brain-machine interfaces will let humans vastly expand their intelligence and solve health problems.

Kurzweil believes technology will make people more equal, because it will give more people access to knowledge and capability.

He argues that despite the risks, the history of technology suggests intelligent systems will help humanity avoid dangers and build a better future.

Kurzweil discusses the interpretability and transparency of large language models, and how to increase trust in AI.

He predicts that as computing power grows, AI will solve complex problems humans currently cannot, such as disease treatment and climate change.

Kurzweil stresses that even as AI and machines become extremely intelligent, humans will still play a key role, because our creativity and decision-making are indispensable.

He suggests the younger generation follow their passions while learning to design and architect future intelligent systems so they remain safe and beneficial.

Kurzweil discusses the future merger of human and machine intelligence, including considerations of personal privacy, ethics, and existential questions.

He predicts technology will fundamentally change how people live, including work, education, and social life.

Transcripts

play00:00

(audience applauding)

play00:06

- All right.

play00:07

I'm so excited to be here with you, Ray.

play00:09

- It's great to be here.

play00:11

Great to see everybody together.

play00:14

- Yeah. - Beautiful audience.

play00:16

- So, my favorite thing in that introduction of you

play00:18

is that you have been working in AI

play00:20

longer than any other human alive,

play00:22

which means, if you live forever,

play00:25

and we'll get to that,

play00:26

you will always have that distinction.

play00:29

- I think that's right.

play00:32

Marvin Minsky was actually my mentor.

play00:36

If he were alive today,

play00:37

he would actually be more than 61 years.

play00:41

We're gonna bring him back also.

play00:43

- So, maybe you'll, I'm not sure

play00:44

how we'll count the distinction then.

play00:47

- [Audience] Louder, louder.

play00:51

- All right, so we're gonna fix the audio,

play00:53

but this is what we're gonna do with this conversation.

play00:55

I'm gonna start out asking Ray some questions

play00:57

about where we are today.

play00:59

We'll do that for a few minutes.

play01:00

Then we'll get into what has to happen

play01:02

to reach the singularity.

play01:04

So, the next 20 years.

play01:06

Then we'll get into discussion about

play01:07

what the singularity is, what it means,

play01:09

how it would change our lives.

play01:10

And then at the end we'll talk a little bit about how,

play01:12

if we believe this vision of the future,

play01:14

what it means for us today.

play01:15

Ask your questions.

play01:16

They'll come in, I'll ask 'em

play01:17

as they go in the different sections of the conversation,

play01:20

but let's get cracking.

play01:21

- Can you hear me?

play01:23

(audience answers indistinctly)

play01:27

- You can't hear, Ray?

play01:28

(audience answers indistinctly)

play01:30

Well, this will be recorded.

play01:32

You guys are gonna all live forever.

play01:34

There'll be plenty of time.

play01:36

It will be fine.

play01:38

I'm just gonna get started.

play01:39

I assume the audio will get worked out.

play01:40

They do a fabulous job here at South by.

play01:43

- I think they should be able to hear me and you.

play01:46

(audience laughing)

play01:49

- All right, we got this over on the right?

play01:51

(audience applauding)

play01:54

Audio engineers, are we good to go?

play01:58

We're good to go, all right.

play02:01

All right, first question, Ray.

play02:03

So, you've been working in AI for 61 years?

play02:07

- Oh wait, can you hear me?

play02:09

- [Audience] No.

play02:12

- That's not.

play02:13

- So, everybody in the front can hear you,

play02:15

but nobody in the back can hear you.

play02:16

- Can you hear me now?

play02:18

- [Audience] Yes. - Okay.

play02:21

- All right. - I'll speak louder.

play02:25

- First question, so you've been living

play02:28

in the AI revolution for a long time.

play02:29

You've made lots of predictions,

play02:31

many of which have been remarkably accurate.

play02:35

We've all been living in

play02:37

a remarkable two year transformation

play02:39

with large language models a year and a half.

play02:42

What has surprised you about the innovations

play02:45

in large language models and what has happened recently?

play02:48

- Well, I did finish this book a year ago,

play02:52

and didn't really cover large language models.

play02:55

So, I delayed the book to cover that.

play03:02

But I was expecting this to happen

play03:10

like a couple of years later.

play03:12

I mean, I made a prediction in 1999

play03:16

that would happen by 2029,

play03:20

and we're not quite there yet, but we will.

play03:24

But it looks like it's maybe

play03:25

a year or two ahead of schedule.

play03:31

So, that was maybe a bit of a surprise.

play03:33

- Wait, you predicted back in 1999

play03:36

that a computer would pass the Turing Test in 2029.

play03:38

Are you revising that to something more closer to today?

play03:46

- No, I'm still saying 2029.

play03:52

The definition of the Turing Test is not precise.

play03:57

We're gonna have people claiming

play03:59

that the Turing Test has been solved

play04:02

and people are saying that

play04:03

GPT-4 actually passes it, some people.

play04:07

So, it's gonna be like maybe two or three years

play04:09

where people start claiming

play04:12

and then they continue to claim

play04:13

and finally, everybody will accept it.

play04:16

So, it's not like it happens in one day,

play04:18

- But you have a very specific definition

play04:20

of the Turing Test.

play04:21

When do you think we'll pass that definition?

play04:25

- Well, the Turing Test is actually not that significant,

play04:28

'cause that means that you can,

play04:33

a computer will pass for a human being.

play04:38

And what's much more important is AGI,

play04:41

automatic general intelligence,

play04:43

which means that it can emulate any human being.

play04:46

So, you have one computer,

play04:48

and it can do everything that any human being can do,

play04:52

and that's also 2029.

play04:54

It all happens at the same time.

play04:56

But nobody can do that.

play04:57

I mean, just take an average large language model today.

play05:02

You can ask it anything

play05:04

and it will answer you pretty convincingly.

play05:07

No human being can do all of that.

play05:10

And it does it very quickly.

play05:11

It'll write a very nice essay in 15 seconds

play05:16

and then you can ask it again and it'll write another essay

play05:19

and no human being can actually perform at that level.

play05:23

- Right, so you have to dumb it down to actually

play05:25

have a convincing Turing Test.

play05:26

- [Ray] To have a Turing Test you have to dumb it down.

play05:28

- Yeah, let me ask the first question from the audience

play05:31

since I think it's quite relevant to where we are,

play05:33

which is Brian Daniel.

play05:34

Is the Kurzweil Curve still accurate?

play05:37

- [Ray] Say again?

play05:37

- [Nick] Is the Kurzweil Curve still accurate?

play05:40

- Yes, in fact it's, can I see that?

play05:43

- [Nick] Let's pull the slides up. First slide.

play05:48

- [Ray] So, this is an 80-year track record.

play05:52

This is an exponential growth.

play05:54

A straight line on this curve means exponential curvature.

play06:01

If it was sort of exponential,

play06:02

but not quite, it would curve.

play06:05

This is actually a straight line.

play06:08

It started out with a computer

play06:11

that did 0.0000007 calculations

play06:19

per second per constant dollar.

play06:21

That's the lower left hand corner.

play06:23

At the upper right hand corner,

play06:25

it's 65 billion calculations per second

play06:28

for the same amount of money.

play06:31

So, that's why large language models

play06:33

have only been feasible for two years.

play06:35

Prior, we actually had large language models before that,

play06:38

but it didn't work very well.

play06:41

And this is an exponential curve.

play06:44

Technology moves in an exponential curve.

play06:48

We see that, for example, having renewable energy

play06:55

come from the Sun and wind,

play07:00

that's actually an exponential curve.

play07:03

It's increased, it's gone.

play07:06

We've decreased the price by 99.7%.

play07:12

We've multiplied the amount of energy

play07:14

coming from solar energy a million fold.

play07:18

So, this kind of curve really

play07:22

directs all kinds of technology.

play07:27

And this is the reason that we're making progress.

play07:31

I mean, we knew how to do large language models years ago,

play07:36

but we're dependent on this curve, and it's pretty amazing.

play07:41

It started out increasing relay speeds,

play07:44

then vacuum tubes, then integrated circuits,

play07:47

and each year it makes the same amount of progress,

play07:50

approximately regardless of where you are on this curve.

play07:56

We just added the last point.

play07:58

And it's again, we basically multiply this

play08:03

by two every 1.4 years.

play08:08

And this is the reason that computers are exciting,

play08:12

but it actually affects every type of technology.

play08:15

And we just added the last point like two weeks ago.

play08:19

- Okay. All right, so let me ask you a question.

play08:23

You know, you wrote book about how to build a mind.

play08:26

You have a lot about how the human mind is constructed.

play08:29

A lot of the progress in AI, AI systems are being built

play08:32

on what we understand about neural networks, right?

play08:34

So, clearly our understanding of this helps with AI.

play08:39

In the last two years,

play08:40

by watching these large language models,

play08:42

have we learned anything new about our brains?

play08:45

Are we learning about

play08:46

the insides of our skulls as we do this?

play08:48

- It really has to do with the amount of connections.

play08:52

The brain is actually organized fairly differently.

play08:56

The things near the eye, for example, deal with vision.

play09:02

And we have different ways of implementing

play09:04

different parts of the brain that remember different things.

play09:07

We actually don't need that.

play09:09

In a large language model, all the connections are the same.

play09:13

We have to get the connections up to a certain point.

play09:16

If it approximately matches what the brain does,

play09:19

which is about a trillion connections,

play09:23

it will perform kind of like the brain.

play09:25

We're kind of almost at that point.

play09:27

- [Nick] Wait, so you think.

play09:28

- GPT-4 is 400 billion.

play09:31

The next ones will be a trillion or more.

play09:34

- So, the construction of these models,

play09:36

they are more efficient in their construction

play09:38

than our brains are?

play09:41

- We make them to be as efficient as possible,

play09:44

but it doesn't really matter how they're organized.

play09:47

And we can actually create certain software

play09:51

that will actually expand the amount of connections

play09:54

more for the same amount of computation.

play10:00

But it really has to do with how many connections

play10:06

a particular computer is responsible for.

play10:11

- So, as we approach AGI,

play10:15

we're not looking for a new understanding

play10:17

of how to make these machines more efficient?

play10:19

The transformer architecture was clearly very important.

play10:22

We can really just get there with more compute.

play10:25

- But the software and the learning is also important.

play10:28

I mean, you could have a trillion connections,

play10:31

but if you didn't have something to learn from,

play10:34

it wouldn't be very effective.

play10:35

So, we actually have to be able to collect all this data.

play10:39

So, we do it on the web and so on.

play10:41

I mean, we've been collecting stuff on the web

play10:45

for several decades.

play10:48

That's really what we're depending on to be able

play10:52

to train these large language models.

play10:58

And we shouldn't actually call them large language models,

play11:02

because they deal with much more than language.

play11:05

I mean, it's language,

play11:06

but you can add pictures,

play11:09

you can add things that affect disease

play11:14

that have nothing to do with language.

play11:17

In fact, we're using now simulated biology

play11:23

to be able to simulate different ways to affect disease.

play11:31

And that has nothing to do with language,

play11:34

but they really should be called large event models.

play11:38

- Do you think there's anything that happens

play11:40

inside of our brains that can't be captured

play11:42

by computation and by math?

play11:45

- No. I mean, what would that be? I mean.

play11:48

(Ray and audience laughing)

play11:50

- Okay, quick poll of the audience.

play11:52

Raise your hand if you think there's something in your brain

play11:54

that cannot be captured by computation or math, like a soul.

play11:59

All right, so convince them that they're wrong, Ray.

play12:01

- I mean, consciousness is very important,

play12:05

but it's actually not scientific.

play12:08

There's no way I could slide somebody in

play12:10

and the light will go on.

play12:12

Oh, this one's conscious.

play12:13

No, this one's not.

play12:15

It's not scientific,

play12:19

but it's actually extremely important.

play12:23

And another question, why am I me?

play12:26

How come what happens to me?

play12:28

I'm conscious of, and I'm not conscious

play12:31

of what happens to you.

play12:34

These are deeply mysterious things,

play12:37

but they're really not, it's really not conscious.

play12:39

So, Marvin Minsky, who was my mentor for 50 years, he said,

play12:44

it's not scientific and therefore

play12:45

we shouldn't bother with it.

play12:47

And any discussion of consciousness,

play12:49

he would kind of dismiss, but he actually did.

play12:54

His reaction to people was totally dependent

play12:57

on whether he felt they were conscious or not.

play12:59

So, he actually did use that.

play13:03

But it's not something that we're ignoring,

play13:05

because there's no way to tell

play13:08

whether something's conscious.

play13:11

And that's not just something

play13:12

that we don't know and we'll discover.

play13:15

There's really no way to tell

play13:17

whether or not something's conscious.

play13:19

- What do you mean, like this is not conscious

play13:20

and you know, the gentleman

play13:21

sitting right there is conscious.

play13:23

I'm pretty confident.

play13:24

- How do you prove that?

play13:29

I mean we kind of agree with human

play13:32

that humans are conscious.

play13:34

Some humans are conscious, not all humans.

play13:36

(audience laughing)

play13:38

But how about animals? We have big disagreements.

play13:43

Some people say animals are not conscious

play13:47

and other people think animals are conscious.

play13:49

Maybe some animals are conscious, and others are not.

play13:52

There's no way to prove that.

play13:55

- Okay, I wanna run down this consciousness question,

play13:59

but before we do that, I wanna make sure

play14:00

I understood your previous answer correctly.

play14:03

So, the feeling I get of being in love

play14:06

or the feeling, any emotion that I get

play14:12

could eventually be represented

play14:13

in math in a large language model?

play14:16

- Yeah, I mean certainly the behavior,

play14:18

the feelings that you have,

play14:20

if you are with somebody that you love.

play14:25

It's definitely dependent on what the connections do.

play14:28

You can tell whether or not that's happening.

play14:32

- All right, and back to,

play14:37

is everybody here convinced?

play14:38

- [Audience] No.

play14:39

- Not entirely.

play14:40

All right, well close enough.

play14:41

So, you don't think that it's worth

play14:44

trying to define consciousness?

play14:45

I mean, you spend a fair amount in your book

play14:47

giving different arguments about what consciousness means,

play14:49

but it seems like your argument on stage

play14:51

that we shouldn't try to define it?

play14:56

- There's no way to actually prove it.

play14:58

I mean, we have certain agreements.

play15:01

I agree that all of you are conscious,

play15:02

you actually made it into this room.

play15:04

So, that's a pretty good indication that you're conscious.

play15:09

But that's not a proof.

play15:12

And there may be human beings

play15:14

that don't seem quite conscious at the time.

play15:18

Are they conscious or not?

play15:20

And animals, I mean I think elephants

play15:22

and whales are conscious,

play15:24

but not everybody agrees with that.

play15:26

- So, at what point can we then,

play15:28

essentially how long will it be until we can,

play15:32

essentially download the entire contents of your brain

play15:36

and express it through some kind of a machine?

play15:40

- That's actually an important question,

play15:42

'cause we're gonna talk about longevity.

play15:45

We're gonna get to a point

play15:46

where we have longevity escape velocity.

play15:49

And it's not that far away.

play15:51

I think if you're diligent,

play15:53

you'll be able to achieve that by 2029.

play15:55

That's only five or six years from now.

play16:00

And that, so right now you go through a year,

play16:03

use up a year of your longevity,

play16:05

but you get back from scientific progress

play16:08

right now about four months.

play16:10

But that scientific progress is on an exponential curve.

play16:13

It's gonna speed up every year.

play16:15

And by 2029, if you're diligent,

play16:18

you'll use up a year of your longevity with a year passing.

play16:21

But you'll get back a full year.

play16:23

And past 2029, you'll get back more than a year.

play16:27

So, you'll actually go backwards in time.

play16:30

Now, that's not a guarantee of infinite life

play16:33

because you could have a 10-year-old

play16:38

and you could compute his longevity as many, many decades

play16:41

and he could die tomorrow.

play16:45

But what's important about

play16:46

actually capturing everything in your brain,

play16:49

we can't do that today,

play16:51

and we won't be able to do that in five years.

play16:54

But you will be able to do that by the singularity,

play16:57

which is 2045.

play16:59

And so, at that point you can actually go inside the brain

play17:02

and capture everything in there.

play17:04

Now, your thinking is gonna be a combination

play17:07

of the amount you get from computation,

play17:11

which will add to your thinking.

play17:14

And that's automatically captured.

play17:18

I mean, right now, anything that's you have

play17:21

in a computer is automatically captured today.

play17:26

And the kind of additional thinking we'll have

play17:29

by adding to our brain that will be captured.

play17:33

But the connections that we have in the brain

play17:40

that we start with will still have that.

play17:44

That's not captured today,

play17:45

but that will be captured in 2045.

play17:47

We'll be able to go inside the brain

play17:49

and capture that as well.

play17:51

And therefore, we'll actually capture the entire brain,

play17:56

which will be backed up.

play17:58

So, even if you get wiped out,

play18:00

you walk into a bomb and it explodes,

play18:02

we can actually recreate everything

play18:04

that was in your brain by 2045.

play18:08

That's one of the implications of the singularity.

play18:14

Now, that doesn't absolutely guarantee,

play18:17

because I mean the world could blow up

play18:19

and all the computer,

play18:26

all the things that contained computers could blow up

play18:28

and so you wouldn't be able to to recreate that.

play18:35

So, we never actually get to a point

play18:36

where we absolutely guarantee that you live forever.

play18:40

But most of the things that right now would upset

play18:46

capturing that will be overcome by that time.

play18:50

- There's a lot there, Ray.

play18:52

Let's start with escape velocity.

play18:55

So, do you think that anybody in this audience,

play18:57

in their current biological body

play18:59

will live to be 500 years old?

play19:02

- You're asking me?

play19:03

- Yeah.

play19:05

- Absolutely, I mean, if you're gonna

play19:08

be alive in five years,

play19:10

and I imagine all of you will be alive in five years.

play19:14

- Oh okay, if they're alive for five years,

play19:16

they will likely live to be 500 years old?

play19:20

- If they're diligent.

play19:21

And I think the people in this audience will be diligent so.

play19:25

- Wow, all right.

play19:26

Well, you can drink whatever you want

play19:28

as long as you don't get run over tonight,

play19:29

'cause you don't have to worry about decline.

play19:31

(audience laughing)

play19:32

All right, so let me ask you a question.

play19:34

I wanna get, we're gonna spend a lot of time

play19:36

on what the singularity is,

play19:37

what it means, and what it'll be like.

play19:38

But I wanna ask some questions that'll lead us up there.

play19:40

So, I'm gonna take this question

play19:41

from Mark Sternberg and modify it slightly.

play19:44

In the timeframe, AI will be able to do,

play19:47

or sufficiently sophisticated computers in your argument

play19:50

can do everything that the human brain can do.

play19:53

What will they not be able to do in the next 10 years?

play20:01

- Well, one thing has to do with being creative.

play20:07

And some people go, they'll be able

play20:08

to do everything a human can do,

play20:11

but they're not gonna be able to create new knowledge.

play20:14

That's actually wrong,

play20:15

because we can simulate, for example, biology.

play20:20

And the Moderna vaccine for example,

play20:23

we didn't do it the usual way,

play20:24

which is somebody sits down and thinks,

play20:26

well, I think this might work.

play20:28

And then they try it out.

play20:29

It takes years to try it out in multiple people

play20:33

and it's one person's idea about what might work.

play20:36

They actually listed everything that might work

play20:40

and there was actually several billion

play20:41

different mRNA sequences and they said let's try them all.

play20:46

And they tried every single one by simulating biology

play20:50

and that took two days.

play20:52

So, one weekend they tried out

play20:53

several billion different possibilities

play20:56

and then they picked the one

play20:57

that turned out to be the best.

play21:00

And that actually was the Moderna vaccine up until today.

play21:10

Now, they did actually test it on humans.

play21:12

We'll be able to overcome that as well,

play21:15

'cause we'll be able to test

play21:18

using simulated biology as well.

play21:20

They actually decided to test it.

play21:23

It's a little bit hard to give up testing on humans.

play21:26

We will do that.

play21:27

So, you can actually try out every single one,

play21:30

pick the best one, and then you can try out that

play21:33

by testing on a million simulated humans

play21:37

and do that in a few days as well.

play21:39

And that's actually the future

play21:40

of how we're gonna create medications for diseases.

play21:44

And there's lots of things going on now with cancer

play21:46

and other diseases that are using that.

play21:51

So, that's a whole new method.

play21:54

This is actually starting now.

play21:56

Started right with the Moderna vaccine.

play21:58

We did another cure for a mental disease

play22:06

that's actually now in stage three trials.

play22:10

That's gonna be how we create medications from now on.

play22:14

- But what are the frontiers?

play22:15

What can we not do?

play22:17

- So, that's where computers being creative

play22:21

and it's not just actually trying something

play22:23

that occurs to it.

play22:25

It makes a list of everything that's possible

play22:27

and tries it all.

play22:28

- Is that creativity or is that just brute force

play22:31

with maximum capability?

play22:34

- It's much better than any other form of creativity.

play22:39

And yes, it's creative,

play22:40

'cause you're trying out every single possibility

play22:43

and you're doing it very quickly

play22:45

and you come up with something that we didn't have before.

play22:47

I mean, what else would creativity be?

play22:51

- All right, so we're gonna

play22:52

cross the frontier of creativity.

play22:53

What will we not cross?

play22:55

What are the challenges that will be

play22:56

outstanding the next 10 years?

play22:57

- Well, we don't know everything,

play23:00

and we haven't gone through this process.

play23:02

It does require some creativity to imagine what might work.

play23:07

And we have to also be able to simulate it

play23:10

in a biochemical simulator.

play23:15

So, we actually have to figure that out

play23:18

and we'll be using people for a while to do that.

play23:22

So, we don't know everything.

play23:24

I mean, to be able to do everything

play23:26

a human being can do is one thing,

play23:28

but there's so much we don't know that we wanna find out.

play23:33

And that requires creativity.

play23:36

That will require some kind of human creativity

play23:42

working with machines.

play23:45

- All right, let's go back to what's gonna happen

play23:46

to get us to the singularity.

play23:48

So, clearly we have the chart

play23:50

that you showed on the power of compute.

play23:51

It's been very steady, you know, moving straight up,

play23:54

you know, on a logarithmic scale on a straight line.

play23:56

There are a couple of other elements

play23:58

that you think are necessary to get to the singularity.

play24:01

One, is the rise of nanobots

play24:03

and the other is the rise of brain machine interfaces.

play24:06

And both of those have gone more slowly than AI.

play24:10

So, convince the audience that.

play24:12

- Well, it would be slow,

play24:14

because anytime you affect the human body,

play24:19

a lot of people are gonna be concerned about it.

play24:23

If we do something with computers, we have a new algorithm,

play24:27

or we increase the speed of it,

play24:36

nobody really is concerned about it.

play24:39

You can do that.

play24:40

Nobody cares about any dangers in it.

play24:46

I mean that's the reality.

play24:47

- [Nick] Well, there's some dangers

play24:48

that people care about, yes.

play24:49

- Yeah, but it goes very, very quickly.

play24:52

That's one of the reasons it goes so fast.

play24:55

But if you're affecting the body,

play24:57

we have all kinds of concerns

play25:00

that it might affect it negatively.

play25:02

And so, we wanna actually try it on people.

play25:05

- But the reason brain machine interfaces

play25:09

haven't moved in an exponential curve

play25:11

isn't just because, you know,

play25:14

lots of people are concerned about the risks to humans.

play25:17

I mean, as you explain in the book,

play25:19

they just don't work as well as they could.

play25:26

- If we could try things out without having to test it,

play25:29

it would go a lot faster.

play25:30

I mean, that's the reason it goes slowly.

play25:41

There's some thought now that we could actually

play25:45

figure out what's going on inside the brain

play25:47

and put things into the brain

play25:49

without actually going inside the brain.

play25:51

We wouldn't need something like Neuralink.

play25:54

We could just, I mean there's some tests

play25:59

where we can actually tell what's going on in the brain

play26:02

without actually putting something inside the brain.

play26:05

And that might actually be a way

play26:07

to do this much more quickly.

play26:10

- But your prediction about the singularity,

play26:12

depends, maybe I'm reading it wrong,

play26:14

not just on the continued exponential growth of compute,

play26:17

but on solving this particular problem too, right?

play26:25

- Yes, because we wanna increase

play26:28

the amount of intelligence that humans can command.

play26:32

And so, we have to be able

play26:32

to marry the best computers with our actual brain.

play26:38

- And why do we have to do that?

play26:39

Because like right now, here I go,

play26:41

I have my phone, and in some ways this augments my intelligence.

play26:44

It's wonderful.

play26:45

- Yeah, but it's very slow.

play26:46

I mean, if I ask you a question,

play26:48

you're gonna have to type it in,

play26:50

or speak it and it takes a while.

play26:52

I mean, I ask a question

play26:54

and then people fool around with their computer.

play26:57

It might take 15 seconds or 30 seconds.

play27:00

It's not like it just goes right into your brain.

play27:05

I mean, these are very useful.

play27:06

These are brain extenders.

play27:08

We didn't have these a little while ago.

play27:12

Generally, in my talks, I ask people,

play27:15

"Who here has their phone?"

play27:17

I'll bet here maybe there's one or two people who don't,

play27:20

but everybody else here has their phone.

play27:24

That wasn't true five years ago,

play27:26

definitely wasn't true 10 years ago.

play27:29

And it is a brain extender,

play27:31

but it does have some speed problems.

play27:35

So, we wanna increase that speed.

play27:37

A question could just come up where we're talking

play27:41

and the computer would instantly tell you what the answer is

play27:44

without you having to fool around with an external device,

play27:48

and that's almost feasible today.

play27:52

And something like that would be helpful to do this.

play27:57

- But could you not get a lot of the good

play28:01

that you talk about if we just kept.

play28:03

The problem with connecting our brains to the machines

play28:06

is suddenly you're in this whole world,

play28:08

these complicated privacy issues

play28:09

where stuff is being injected in my brain,

play28:11

stuff in my brain is, you know, is going elsewhere.

play28:13

Like you're opening up a whole host of ethical,

play28:16

moral, existential problems.

play28:17

Can't you just make the phones a lot better?

play28:21

- Well, that's the idea that we can do that

play28:23

without having to go inside your brain,

play28:27

but be able to tell what's going on in your brain

play28:29

externally without going inside the brain,

play28:36

you know, with some kind of device.

play28:37

- All right, well, let's keep moving into the future.

play28:39

So, we're moving into the future.

play28:40

We have exponential growth of compute.

play28:42

We solve a way of, you know, ideally figuring out

play28:45

how to communicate directly with your brain

play28:47

to speed things up.

play28:47

Explain why nanobots are essential

play28:49

to your vision of where we're going.

play28:52

- Well, if you really wanna tell

play28:53

what's going on inside the brain,

play28:56

you've gotta be able to go

play28:57

at the level of the particles in the brain

play29:01

so we can actually tell what they're doing,

play29:06

and that's feasible.

play29:10

We can't actually do it, but we can show that it's feasible.

play29:14

And that's one possibility.

play29:20

We're actually hoping that you could do this

play29:22

without actually affecting the brain at all.

play29:29

- Okay. All right, so we're pushing ahead.

play29:31

We've got nanobots that are running around inside of our brains.

play29:33

They're understanding our head,

play29:35

they're extracting thoughts, they're inputting thoughts.

play29:38

Let's go to this nice question,

play29:39

which fits in lovely from Louise Condraver.

play29:42

What are the five main ethical questions

play29:44

that we will face as that happens?

play29:52

- Is four enough?

play29:54

- Four is fine.

play29:57

There might even be six, Ray, but you can give us four.

play30:07

- I mean we're gonna have a lot more power,

play30:11

if we can actually control computers with our own brain.

play30:18

Does it give people too much power?

play30:24

Also, I mean right now we talk about

play30:28

having a certain amount of value based on your talent.

play30:39

This will give talent to people

play30:41

who otherwise don't have talent.

play30:44

And talent won't be as important,

play30:48

because you'll be able to gain talent

play30:51

just by merging with the right kind of large language model,

play30:56

or whatever we call them.

play31:00

And it also seemed kind of arbitrary

play31:03

why we would give more power

play31:05

to somebody who has more talent,

play31:08

'cause they didn't create that talent,

play31:10

they just happened to have it.

play31:15

But everybody says we should give

play31:20

somebody who has talents in an area more power.

play31:26

This way you'd be able to gain talent,

play31:30

as in the "Matrix".

play31:31

You could learn to fly a helicopter

play31:36

just by downloading the right software

play31:38

as opposed to spending a lot of time doing that.

play31:43

Is that fair or unfair?

play31:50

I mean I think that would fall

play31:52

into the ethical challenge area.

play32:03

And it's not like we get to the end of this and say,

play32:07

okay, this is finally what the singularity is all about

play32:10

and people can do certain things

play32:12

and they can't do other things, but it's over.

play32:15

We will never get to that point.

play32:17

I mean this curve is gonna continue.

play32:20

The other curve, it's gonna continue indefinitely.

play32:26

And we've actually shown, for example,

play32:28

with nanotechnology we can create a computer

play32:31

where a one-liter computer would actually

play32:35

match the amount of power that all human beings today have.

play32:41

Like 10 to the 10th persons

play32:45

would all fit into one one-liter computer.
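That claim is a quick back-of-envelope check. A minimal sketch, using a per-brain compute estimate Kurzweil has used in his books (an assumption here, not a figure from this talk):

```python
# Back-of-envelope for the one-liter computer claim. Both figures are
# assumptions: ~1e16 calculations/sec to emulate one human brain (a
# Kurzweil estimate from his books), and ~1e10 people ("10 to the 10th").
BRAIN_CPS = 1e16   # assumed calculations per second per brain
PEOPLE = 1e10      # roughly ten billion people

total_cps = BRAIN_CPS * PEOPLE
print(f"all human thought combined: {total_cps:.0e} calc/sec")  # 1e+26
```

So the claim amounts to packing on the order of 1e26 calculations per second into a liter of nanotech hardware.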

play32:51

Does that create ethical problems?

play32:56

So, I mean a lot of the implications kind of run against

play33:00

what we've been assuming about human beings.

play33:05

- Wait, on the talent question, which is super interesting.

play33:08

Do you feel like everybody,

play33:11

when we get to 2040 will have equal capacities?

play33:16

- I think we'll be more different,

play33:18

because we'll have different interests

play33:19

and you might be into some fantastic type of music

play33:25

and I might be into some kind of

play33:27

literature or something else.

play33:28

I mean we're gonna have different interests

play33:33

and so, we'll excel at certain things

play33:37

depending on what your interests are.

play33:40

So, it's not like we all have the same amount of power,

play33:43

but we all have fantastic power

play33:45

compared to what we have today.

play33:47

- And if you're in Texas where there are no regulations,

play33:49

you'll probably get it first

play33:50

instead of you in Massachusetts.

play33:51

- Exactly, yeah.

play33:53

(audience laughing)

play33:54

- Let me ask you another ethical question,

play33:55

while we're on this one.

play33:56

So, about a few minutes ago you mentioned the capacity to,

play34:00

you know, replicate someone's brain and bring 'em back.

play34:03

So, let's say I do that with my father.

play34:04

Passed away six years ago sadly.

play34:07

I bring him back and I'm able

play34:09

to create a mind and a body just like my father's, right?

play34:13

It's exact perfect replica, all of his thoughts.

play34:16

What happens to all the bills that he owed when he died?

play34:20

Because like that's a lot of money

play34:22

and a lot of bill collectors call me.

play34:23

Do we have to pay those off or are we good?

play34:27

- Well, we're doing something like that with my daughter

play34:34

and you can read about this in her book

play34:36

and it's also in my book.

play34:38

We collected everything my father had written.

play34:42

He died when I was 22.

play34:44

So, he's been dead for more than 50 years.

play34:50

And we fed that into a large language model

play34:55

and basically, asked it the question,

play34:58

of all the things he ever wrote,

play35:01

what best answers this question?

play35:04

And then you could put any question you want

play35:07

and then you could talk to him.

play35:08

You'd say something,

play35:10

you'd then go through everything he ever had written

play35:13

and find the best answer

play35:15

that he actually wrote to that question.
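What Ray describes is essentially nearest-neighbor retrieval: index everything the person wrote, then return the passage that best matches the question. A minimal bag-of-words sketch (the real project presumably used a large language model; the sample passages below are invented):

```python
import math
from collections import Counter

def tokenize(text):
    # crude tokenizer: lowercase, strip common punctuation
    return [w.strip(".,?!\"'").lower() for w in text.split()]

def cosine(a, b):
    # cosine similarity between two bag-of-words Counters
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_answer(question, writings):
    # return the passage whose word overlap with the question is strongest
    q = Counter(tokenize(question))
    return max(writings, key=lambda p: cosine(q, Counter(tokenize(p))))

writings = [
    "Brahms wrote the most profound chamber music of any composer.",
    "Conducting a choir demands patience above all.",
    "A melody should surprise the listener and then feel inevitable.",
]
print(best_answer("What did you like about Brahms?", writings))
```

A modern version would swap the word-count cosine for embedding similarity, but the retrieval structure is the same.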

play35:18

And it actually was a lot like talking to him.

play35:21

You could ask him what he liked about music.

play35:23

He was a musician.

play35:26

He actually liked Brahms the best,

play35:29

and it was very much like talking to him.

play35:34

And I reported on this in my book

play35:36

and Amy talks about this in her book.

play35:41

And Amy actually asked the question,

play35:43

could I fall in love with this person

play35:46

even though I've never met him?

play35:48

And she does a pretty good job.

play35:50

I mean you really do fall in love with this character

play35:52

that she creates even though she never met him.

play36:02

So, we can actually, with today's technology,

play36:05

do something where you can actually emulate somebody else.

play36:10

And I think as we get further on

play36:11

we can actually do that more and more responsibly

play36:16

and more and more that really would match that person

play36:21

and actually, emulate the way he would move,

play36:23

and so on, his tone of voice.

play36:25

- And well, you know, my dad, he loved Brahms too,

play36:27

particularly those piano trios.

play36:28

So, if we can solve the back taxes problem,

play36:30

we'll get my dad's and your dad's bots to hang out,

play36:34

it would be great.

play36:34

- Well, yeah, that'd be cool.

play36:37

- All right.

play36:39

(audience laughing)

play36:40

All right, we got 20 minutes left.

play36:42

I wanna get to the thing that I most wanna understand,

play36:44

'cause it's something that's,

play36:45

by the way, this book is wonderful.

play36:46

I think you guys are all gonna get

play36:47

signed copies of it when it comes out.

play36:49

It's truly remarkable, as are all of Ray's books,

play36:52

whether you agree or disagree,

play36:53

they'll definitely make you think more.

play36:55

One of the things that I don't think you do in this book

play36:57

is describe what a day will be like in 2045

play37:03

when we're all much more intelligent.

play37:05

So it's 2045, we're all a million times as intelligent.

play37:09

I wake up, do I have breakfast or do I not have breakfast?

play37:16

- Well, the answer to that question is

play37:19

kind of the same as it's now,

play37:21

but first of all, the reason it's called a singularity

play37:29

is because we don't really fully understand that question.

play37:35

Singularity is borrowed from physics.

play37:37

Singularity in physics is where you have a black hole

play37:42

and no light can escape.

play37:43

And so, you can't actually tell what's going on

play37:45

inside the black hole.

play37:47

And so, we call it a singularity, a physical singularity.

play37:51

So, this is a historical singularity,

play37:54

but we're borrowing that term from physics

play37:57

and call it a singularity,

play37:59

because we can't really answer the question.

play38:01

If we actually multiply our intelligence a million fold,

play38:04

what's that like?

play38:06

It's a little bit like asking a mouse,

play38:10

gee, what would it be like,

play38:11

if you had the amount of intelligence of this person?

play38:16

The mouse wouldn't really even understand the question.

play38:20

It does have intelligence,

play38:22

has a fair amount of intelligence,

play38:24

but it couldn't understand that question.

play38:26

It couldn't articulate an answer.

play38:30

That's a little bit what it would be like for us

play38:32

to take the next step in intelligence by adding

play38:37

all the intelligence that the singularity would provide.

play38:39

- Wait, wait, I just wanna make sure I understand.

play38:41

- But I'll give you one answer.

play38:44

I said if you're diligent, you'll achieve

play38:48

longevity escape velocity in five or six years.
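The escape-velocity idea has a simple structure: you lose one year of remaining life per calendar year, and medical progress hands some back; once the give-back exceeds one year per year, remaining life stops shrinking. A toy model with invented progress rates:

```python
# Toy model of "longevity escape velocity". The progress rates below are
# invented for illustration; Kurzweil's claim is only that the rate of
# added life expectancy eventually exceeds one year per year.
def years_remaining(start_remaining, progress_per_year, horizon):
    remaining = start_remaining
    trajectory = []
    for _ in range(horizon):
        remaining -= 1                  # one calendar year of aging
        remaining += progress_per_year  # years handed back by science
        trajectory.append(round(remaining, 1))
    return trajectory

print(years_remaining(10, 0.5, 3))  # below escape velocity: [9.5, 9.0, 8.5]
print(years_remaining(10, 1.2, 3))  # at or above it, remaining life grows
```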

play38:58

And if we wanna actually emulate everything

play39:04

that's going on inside a brain,

play39:08

let's go out a few more years.

play39:10

Let's say 2040, 2045.

play39:13

Now, there's a lot, you talk to a person,

play39:17

they've got all the connections that they had originally,

play39:20

plus all these additional connections

play39:22

that we add through having them access computers

play39:29

and that becomes part of their thinking.

play39:33

So, suppose that person, like, blows up,

play39:39

or something happens to their mind.

play39:43

You definitely can recreate everything

play39:45

that's of a computer origin.

play39:49

'Cause we do that now, anytime we create anything

play39:51

with a computer, it's backed up.

play39:53

So, if the computer goes away,

play39:56

you've got the backup and you can recreate it.

play39:59

Maybe somebody says, okay,

play40:00

but what about their thinking in their normal brain

play40:05

that's not done with computers?

play40:10

We don't yet have ways of backing that up.

play40:13

When we get to the singularity in 2045,

play40:15

we'll be able to back that up as well,

play40:18

because we'll be able to figure out,

play40:21

we'll have some ways of actually figuring out

play40:23

what's going on in that sort of mechanical brain.

play40:32

And so, we'll be able to back up both their normal brain

play40:36

as well as the computer addition.

play40:41

And I believe that's feasible by 2045.

play40:46

- In your vision of it.

play40:48

- So, you can back up their entire brain.

play40:51

Now, that doesn't guarantee,

play40:52

I mean the whole world could blow up

play40:54

and you lose all the data centers.

play40:56

And so, it's not absolute guarantee.

play40:59

- That'd be a shame,

play41:02

but what I don't understand is

play41:04

will we even be fully distinct people

play41:06

if we're sharing memories

play41:08

and we're all uploading our brains to the cloud

play41:12

and we're getting all this information coming back

play41:14

directly into our neocortex, are we still distinct?

play41:20

- Yes, but we could also

play41:24

find new ways of communicating.

play41:27

So, the computers that extend my brain

play41:32

interact with the computers that extend your brain.

play41:35

We could create something that's like a hybrid or not

play41:40

and it would be up to our own decision

play41:42

as to whether or not to do that.

play41:44

So, there'll be some new ways of communicating.

play41:47

- Let me ask another question about this.

play41:49

This is what, when I was reading the book,

play41:51

this is where I kept getting stuck.

play41:52

You are extremely optimistic, right?

play41:55

You're optimistic about where we are today.

play41:58

You're optimistic that technology

play41:59

has been a massive force for good.

play42:01

You're optimistic that it'll continue

play42:02

to be a massive force for good.

play42:04

Yet, there is a lot of uncertainty

play42:06

in the future you were describing.

play42:09

- Well, first of all, I'm not necessarily optimistic.

play42:15

There are things that can go wrong.

play42:18

We had things that can go wrong before we had computers.

play42:25

When I was a child, atomic weapons were created

play42:32

and people were very worried about an atomic war.

play42:36

And we would actually get under our desk

play42:37

and put our hands behind our head

play42:39

to protect us against an atomic war.

play42:43

And it seemed to work, actually.

play42:45

We're still here,

play42:47

but if you would ask people,

play42:50

we had actually two weapons that went off in anger

play42:54

and killed a lot of people within a week.

play42:58

And if you'd ask people, what's the chance

play43:00

that we're gonna go another 80 years

play43:01

and this will never happen again.

play43:03

Nobody would say that that was likely,

play43:09

but it has happened.

play43:11

Now, that doesn't mean it's not gonna happen next week,

play43:15

but anyway, that's a great danger.

play43:18

And I think that's a much greater danger than computers are.

play43:23

Yes, there are dangers,

play43:24

but the computers will also be more intelligent

play43:27

to avoid those kinds of dangers.

play43:32

Yes, there's some bad people in the world,

play43:35

but I mean, go back 80, 90 years,

play43:40

we had 100 million people die in Asia

play43:44

and Europe from World War II.

play43:48

We don't have wars like that anymore.

play43:50

We could, and we certainly

play43:52

have the atomic weapons to do that.

play43:57

And you could also imagine computers

play43:58

could be involved with that.

play44:03

But if you actually look,

play44:06

and this goes right through war and peace.

play44:09

First of all, if you look at my lineage of computers

play44:15

going from a tiny fraction of one calculation to 65 billion,

play44:20

that's a 20 quadrillion fold increase

play44:23

that we've achieved in 80 years.
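The doubling time implied by that figure is easy to back out:

```python
import math

# 20 quadrillion-fold growth in compute price-performance over ~80 years,
# per the chart Kurzweil describes.
growth = 2e16
years = 80

doublings = math.log2(growth)       # about 54 doublings
doubling_time = years / doublings   # about 1.5 years per doubling
print(f"{doublings:.1f} doublings, one every {doubling_time:.2f} years")
```

That steady roughly-1.5-year doubling is the straight line on the logarithmic chart mentioned earlier.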

play44:29

And look at this,

play44:30

US personal income is done in constant dollars.

play44:33

So, this has nothing to do with inflation.

play44:37

And this is the average income in the United States.

play44:45

It's multiplied by about a hundredfold

play44:53

and we live far more successfully,

play44:57

if you actually, people say,

play44:59

oh, things were great 100 years ago, they weren't.

play45:03

And you can look at this chart,

play45:05

and lots of, I've got 50 charts in the book,

play45:08

which are the kind of progress we've made.

play45:11

Number of people that live in dire poverty

play45:14

has gone down dramatically.

play45:16

And we actually did a poll where they asked people,

play45:19

people that live in poverty, has it gone up or down?

play45:23

80% said it's gone up.

play45:25

But the reality is it's actually fallen by 50%,

play45:36

in the last 20 years.

play45:39

So, what we think about the past,

play45:43

is really the opposite of what's happened.

play45:46

Things have gotten far better than people think

play45:49

and computers are gonna make things even better.

play45:52

I mean, just the kind of things you can do now

play45:54

with a large language model didn't exist two years ago.

play45:58

- Take it as a given

play46:03

that computers have made things better,

play46:04

and that personal income will keep going up.

play46:07

Do you ever worry it's just coming too quickly

play46:09

and it'll be better if maybe the slope of the Kurzweil Curve

play46:12

was a little less steep?

play46:13

- There were big disruptions in the past, too.

play46:16

I mean, talk about what effect did the railroad have?

play46:21

I mean, lots of jobs were lost

play46:23

or even the cotton gin that happened 200 years ago

play46:27

and people were quite happy

play46:29

making money with the cotton gin

play46:31

and suddenly that was gone and machines were doing that.

play46:35

And people say, well, wait till this gets going,

play46:37

all jobs will be lost.

play46:39

And that's actually what was said at that time.

play46:45

But actually, income went up

play46:48

and more and more people worked.

play46:51

And if you say, well, what are they gonna do?

play46:54

You couldn't answer that question,

play46:55

because it was in industries that nobody had a clue of.

play46:59

Like for example, all of electronics.

play47:04

So, things are getting better even if jobs are lost.

play47:10

Now, you can certainly point to jobs

play47:12

like take computer programming.

play47:19

Google has, I don't know, 60,000 people

play47:21

that program computers and lots of other companies do.

play47:27

At some point, that's not gonna be a feasible job.

play47:31

They can already code.

play47:33

Large language models can write code

play47:35

not quite the way an expert programmer can do.

play47:40

But how long is that gonna take?

play47:43

It's measured in years, not in decades.

play47:49

Nonetheless, I believe that things will get better,

play47:52

because we wipe out jobs,

play47:55

but we create other ways of having an income.

play48:00

And if you actually point to something,

play48:03

let's say this machine

play48:06

and this is being worked on, can wash dishes.

play48:10

You just have a bunch of dishes; it'll pick the ones

play48:13

that have to go in the dishwasher

play48:14

and clean everything else up,

play48:16

and that will wash dishes for you.

play48:21

Would we want that not to happen?

play48:24

Would we say, well, this is kind of upsetting things,

play48:27

let's get rid of it.

play48:28

It's not gonna happen.

play48:29

And no one would advocate that.

play48:33

So, we'll find things to do.

play48:37

We'll have other methods of distributing money

play48:42

and it'll continue these kinds of curves

play48:46

that we've seen already.

play48:48

- It's kind of remarkable that we got large language models

play48:50

before we've got robotic dishwashers.

play48:54

You have grandchildren, you know?

play48:56

What would you tell a young person?

play48:58

You know, say they buy in, they agree with that.

play49:00

You know, how would you tell them

play49:03

to best prepare themselves for what will be a,

play49:06

if you're correct, a remarkably different future?

play49:11

- I'd be less concerned about what will make money

play49:15

and much more concerned about what turns them on.

play49:21

They love video games and so they should learn about that.

play49:27

They should read literature that turns them on.

play49:30

Some of that literature in the future

play49:32

will be created by computers,

play49:36

and find out what in the world

play49:42

has a positive effect on their mental well-being.

play49:46

- And if you know that your child or your grandchild,

play49:50

this gets to one of the questions

play49:51

that is asked on the screen here.

play49:53

If you know that someone is gonna live

play49:55

for hundreds of years, as you predict,

play49:58

how does that affect the way,

play50:00

certainly it means they shouldn't retire at 65.

play50:02

But what else does it change

play50:04

about the way they should think about their lives?

play50:06

- Well, I talk to people and they say,

play50:08

"Well, I wouldn't wanna live past 100."

play50:11

Or maybe they're a little more ambitious to say,

play50:14

"I don't wanna live past 110."

play50:19

But if you actually look at

play50:22

when people decide they've had enough

play50:25

and they don't wanna live anymore, that never, ever happens

play50:30

unless these people are in some kind of dire pain.

play50:33

They're in physical pain, or emotional pain,

play50:36

or spiritual pain, or whatever,

play50:39

and they just cannot bear to be alive anymore.

play50:42

Nobody takes their lives other than that.

play50:47

And if we can actually overcome many kinds of

play50:51

physical problems and cancer's wiped out and so on,

play50:56

which I expect to happen,

play50:58

people will be even that much more happy to live

play51:03

and they'll wanna continue to experience tomorrow,

play51:08

and tomorrow's gonna be better and better.

play51:12

These kinds of progress, it's not gonna go away.

play51:16

So, people will want to live,

play51:22

you know, unless they're in dire pain.

play51:24

But that's what the whole sort

play51:26

of medical profession is about,

play51:28

which is gonna be greatly amplified by tomorrow's computers.

play51:32

- Can I ask you a great question

play51:33

that has popped up on the screen.

play51:34

This is from Colin McCabe.

play51:35

"AI is a black box, nobody knows how it was built.

play51:39

How do you show that AI is trustworthy to users

play51:41

who want to trust it, adopt it, and accept it?

play51:44

Particularly, if you're gonna upload it

play51:46

directly into your brain?"

play51:50

- Well, it's not true that nobody knows how they work.

play51:53

- Right. Most people who are using a large language model

play51:57

don't know what data sets went into it.

play51:59

There are things that happen in the transformer layer

play52:01

that even the architects don't understand.

play52:04

- Right, but we're gonna learn more and more about that.

play52:08

And in fact, how computers work will be,

play52:10

I think a very common type of talent

play52:15

that people want to gain.

play52:20

And ultimately, we'll have more trust of computers.

play52:24

I mean, large language models aren't perfect

play52:26

and you can ask it a question

play52:27

and it can give you something that's incorrect.

play52:32

I mean, we've seen that just recently.

play52:38

The reason we have these computers

play52:42

give you incorrect information is

play52:44

it doesn't have the information to begin with

play52:47

and it actually doesn't know what it doesn't know.

play52:50

And that's actually something we're working on

play52:54

so that it can say, well, I don't know that.

play52:58

That's actually very good, if it can actually say that.

play53:01

'Cause right now it'll find the best thing it knows

play53:04

and if it's never trained on that information

play53:08

and there's nothing in there that tells you,

play53:10

it'll just give you the best guess,

play53:12

which could be very incorrect.

play53:16

And we're actually learning to be able to

play53:18

figure out when it knows and when it doesn't know.

play53:21

But ultimately, we'll have a pretty good confidence

play53:29

when it knows and when it doesn't know.

play53:31

And we can actually rely on what it says.
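One standard way to make a model say "I don't know" is to abstain whenever its output confidence falls below a threshold. A toy sketch of the idea (the labels, logits, and threshold are invented; calibrating a real large language model is much harder than this):

```python
import math

def softmax(logits):
    # convert raw scores into probabilities that sum to 1
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def answer_or_abstain(logits, labels, threshold=0.7):
    # abstain when the model's top probability is below the threshold
    probs = softmax(logits)
    p, label = max(zip(probs, labels))
    return label if p >= threshold else "I don't know"

labels = ["Paris", "Lyon", "Marseille"]
print(answer_or_abstain([5.0, 1.0, 0.5], labels))  # confident -> Paris
print(answer_or_abstain([1.2, 1.0, 0.9], labels))  # uncertain -> I don't know
```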

play53:34

- So, your answer to the question is,

play53:35

A, we will understand more,

play53:37

and B, they'll be much more trustworthy,

play53:39

so it won't be as risky to not understand them?

play53:42

- Right. - Okay.

play53:43

You've spent your life making predictions,

play53:47

some of which, like the Turing Test,

play53:49

you've held onto 'em been remarkably accurate.

play53:51

As you move from an overwhelming optimist

play53:53

to now slightly more of a pessimist.

play53:55

What is your prediction?

play53:57

- Well, my books have always had a chapter

play53:58

on how these things can go wrong.

play54:03

- Tell me a prediction that you are chewing over right now,

play54:08

but you're not sure whether you wanna make it

play54:10

or whether you don't wanna make it.

play54:15

- I mean there's well known dangers in nanotechnology,

play54:21

if someone were to create a nanotechnology

play54:24

that replicates itself, the well-known scenario

play54:28

where it replicates everything into paperclips.

play54:32

Turn the entire world into paperclips.

play54:36

That would not be positive.

play54:38

- No.

play54:40

Unless you're staples, but then.

play54:43

- And that's feasible.

play54:46

It would take somebody who's a little bit mental to do that,

play54:55

but it could be done, and we actually

play55:01

will have something that actually avoids that.

play55:07

So, we'll have something that can detect

play55:10

that this is actually turning everything into paperclips

play55:13

and destroy it before it does that.

play55:18

But I mean I have a chapter in this new book

play55:21

"The Singularity is Nearer"

play55:25

that talks about the kinds of things that could happen.

play55:27

- Oh, the most remarkable part of this book

play55:29

is he does exactly the mathematical calculations

play55:31

on how long it would take nanobots

play55:33

to turn the world into gray goo

play55:34

and how long it would take the blue goo

play55:36

to stop the gray goo, that's remarkable.

play55:38

The book will be out soon.

play55:39

You definitely need to read until the end.
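The book's actual gray-goo numbers aren't reproduced here, but the style of that estimate (exponential replication measured in doublings) can be sketched with purely illustrative figures:

```python
import math

# Toy version of a gray-goo timing estimate. Every number here is an
# illustrative assumption, not a figure from Kurzweil's book: a
# replicator that doubles every 100 seconds, nanobots of 1e-15 kg each,
# and ~1e15 kg of biomass to consume.
DOUBLING_SECONDS = 100
BOT_MASS_KG = 1e-15
BIOMASS_KG = 1e15

bots_needed = BIOMASS_KG / BOT_MASS_KG   # 1e30 replicators
doublings = math.log2(bots_needed)       # about 100 doublings
hours = doublings * DOUBLING_SECONDS / 3600
print(f"{doublings:.0f} doublings, roughly {hours:.1f} hours")
```

The unnerving point is that exponential replication makes the timescale hours, not years, so any "blue goo" defense has to be faster still.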

play55:41

But this leads to,

play55:43

maybe let me revisit the question I asked before:

play55:46

what should young people think about and be working on?

play55:49

And you said their passions and what turns them on.

play55:52

Shouldn't they be thinking through

play55:57

how to design and architect these future systems

play56:00

so they're less likely to turn us

play56:02

into gray goo or paper clips?

play56:03

- Yeah, absolutely, yeah.

play56:04

I don't know if everybody wants to work on that but.

play56:06

- But folks in this room, right, technologically minded,

play56:08

you guys should all be working on

play56:10

not turning us into gray goo, right?

play56:11

- Yes, that'd be on the list, you know.

play56:14

- But then that leads to another question,

play56:16

which is, what will the role of humans be

play56:19

in thinking through that problem

play56:21

when they're only a millionth, or a billionth,

play56:23

or a trillionth as intelligent as machines?

play56:28

- Say that again.

play56:29

- So, we're gonna have these really hard problems to solve.

play56:32

- Yeah. - Right?

play56:33

Right now we are along with our machines, you know,

play56:38

we can be extremely intelligent,

play56:39

but 10 years from now, 15 years from now,

play56:42

there will be machines that will be

play56:43

so much more intelligent than us.

play56:45

What will the role of humans be

play56:48

in trying to solve these problems?

play56:50

- First of all, I see those as extensions of humans.

play56:52

And we wouldn't have them,

play56:53

if we didn't have humans to begin with.

play56:56

And humans have a brain that can think these things through.

play56:58

And we have this thumb,

play57:01

it's not really very much appreciated,

play57:04

but like whales and elephants,

play57:06

actually have a larger brain than we have

play57:08

and they can probably think deeper thoughts,

play57:10

but they don't have a thumb.

play57:11

And so, they don't create technology.

play57:15

A monkey can create; it actually has a thumb,

play57:18

but it's actually down an inch or so

play57:22

and therefore it really can't grab very well.

play57:24

So, it can create a little bit of technology,

play57:26

but the technology it creates

play57:28

cannot create other technology.

play57:30

So, the fact that we have a thumb means

play57:32

we can create integrated circuits

play57:36

that can become a large language model

play57:39

that comes from the human brain.

play57:48

And it's actually trained with everything

play57:49

that we've ever thought.

play57:51

Anything that human beings have thought

play57:53

that's been documented,

play57:54

and it can go into these large language models.

play57:59

And everybody can work on these things.

play58:02

And it's not true, well,

play58:03

only certain wealthy people will have it.

play58:06

I mean, how many people here have phones?

play58:09

If it's not 100% it's like 99.9%.

play58:14

And you don't have to be kind of from a wealthy group.

play58:20

I mean, I see people who are homeless

play58:22

who have their own phone.

play58:25

It's not that expensive.

play58:29

And so, that represents

play58:32

the distribution of these capabilities.

play58:35

It's not something you have to be

play58:36

fabulously wealthy to afford.

play58:39

- So, you think that we're heading into a future

play58:41

where we're gonna live much longer

play58:42

and we'll be much more equal?

play58:44

- Say again?

play58:45

- Well, you think we're heading into a society

play58:46

where we'll live much longer, be wealthier,

play58:48

but also much more equality?

play58:50

- Yes, absolutely.

play58:51

And we've seen that already.

play58:53

- All right. Well, we are at time,

play58:55

but Ray and I'll be back in 2124, 2224 and 2324.

play59:01

So, thank you for coming today.

play59:02

Thank you so much.

play59:03

He is an American treasure.

play59:04

Thank you, Ray Kurzweil.

play59:07

(dramatic music)
