The Singularity Is Nearer featuring Ray Kurzweil | SXSW 2024

SXSW
18 Mar 2024 · 59:21

Summary

TLDR: An interview in which Ray Kurzweil discusses the future of AI. He argues that AI development is accelerating and predicts that AGI (artificial general intelligence), capable of human-level problem solving, will be achieved by 2029. He further predicts that the singularity will arrive in 2045, when connecting the human brain to computers will make it possible to back up memories and thoughts, though he cautions that this is no absolute guarantee of survival and may raise ethical problems.

Takeaways

  • 🎤 The event opens with an introduction noting Ray's decades of contributions to the field of AI.
  • 🧠 The influence of Ray's mentor Marvin Minsky is discussed, along with the idea of "bringing him back."
  • 📈 Ray describes the exponential growth of computing power as the key driver of progress in artificial intelligence.
  • 🤖 As AI evolves, the development of large language models draws particular attention.
  • 💡 Ray predicted in 1999 that a computer would pass the Turing Test by 2029, and he stands by that forecast today.
  • 🌐 Technological progress also extends to renewables such as solar and wind: prices have fallen by 99.7% while energy output has grown a million-fold.
  • 🧠 On the relationship between brain structure and AI progress, Ray emphasizes the number of connections in the brain.
  • 💭 In the discussion of consciousness, Ray argues that consciousness has no scientific definition, yet is profoundly important.
  • 🚀 Ray predicts that by 2045 it will be possible to back up the entire brain to a computer.
  • 🌟 By 2045, human intelligence is expected to be amplified a million-fold, and the implications of that change are discussed.
  • 📚 Ray encourages people to direct their attention to coming technological progress and social change.

Q & A

  • In what field is Ray Kurzweil said to have worked longer than anyone else?

    -Ray Kurzweil is introduced as having worked in the field of AI longer than any other living person.

  • What was Marvin Minsky to Ray Kurzweil?

    -Marvin Minsky was Ray Kurzweil's mentor, and an extremely important figure in his life.

  • Since when has Ray Kurzweil predicted that a computer would pass the Turing Test in 2029?

    -Ray Kurzweil has predicted since 1999 that a computer would pass the Turing Test by 2029.

  • What does the "singularity" Ray Kurzweil refers to mean?

    -The "singularity" refers to the predicted future state in which artificial intelligence comes to surpass human intelligence; Kurzweil dates it to 2045.

  • Why does Ray Kurzweil think brain-machine interfaces and nanobots have not seen the same exponential growth as AI?

    -He believes the main reason is that technologies affecting the human body face many more restrictions and safety concerns, so they must be tested far more cautiously.

  • How does Ray Kurzweil view the future of medicine with artificial intelligence?

    -He sees AI as the key to future medicine: using simulated biology to develop new drugs, running simulations of individual patients, and delivering the best treatment for each one.

  • What does Ray Kurzweil think about consciousness?

    -He considers consciousness extremely important but not scientific: since it has no scientific definition, there is no way to prove whether something is conscious.

  • Does Ray Kurzweil believe human emotions and experiences can be reproduced through computation and mathematics?

    -Yes. He believes that emotions such as love and joy depend on what the brain's connections do, and can therefore be reproduced.

  • Ray Kurzweil says it will be possible to fully back up the human brain by 2045. What does that mean?

    -It means that all the information in the brain, including a person's thoughts, memories, and knowledge, could be captured and stored, and later restored, even if the person's body were destroyed.

  • What is the "longevity escape velocity" Ray Kurzweil proposes?

    -"Longevity escape velocity" is the point at which scientific progress gives back at least as much life expectancy each year as the passing year uses up, so that on balance you stop losing remaining lifespan and, in his phrase, "go backwards in time."

  • What career advice does Ray Kurzweil give to young people?

    -He advises young people to focus on what is meaningful to them rather than on income, and to work in fields they find exciting and are passionate about.

Outlines

00:00

🎤 Opening the event and starting the conversation

The event opens to audience applause. Ray takes the stage and is introduced as the person who has worked in AI longer than anyone else alive. Marvin Minsky is mentioned as his mentor, and Ray begins to talk about the future of AI.

05:02

🤖 The evolution of AI and the innovation of language models

Ray discusses how AI has evolved over his 61 years in the field and describes the development of language models, noting that large language models arrived a year or two ahead of his 1999 prediction. He also explains the significance of AGI and the coming point at which a computer can emulate any human being.

10:06

🧠 The relationship between the brain and AI

Ray explains the relationship between the brain and AI: how the brain's organization differs from that of language models, and how AI comes to approximate what the brain does once the number of connections is comparable. A discussion of consciousness begins, with Ray arguing that consciousness cannot be defined scientifically.

15:09

💡 Consciousness and technological progress

Ray digs deeper into consciousness, explaining why it has no scientific definition. He also discusses its importance and the difficulty that consciousness cannot be scientifically measured or proven.

20:11

🧬 Backing up the brain and persistence

Ray turns to uploading and backing up the brain. He predicts that by the singularity in 2045 it will be possible to capture and recreate the brain's contents in a machine, and explains the process and the technological progress it requires.

25:12

🚀 The future of 2045 and the evolution of technology

Ray talks about 2045, predicting how human intelligence will evolve by that point. The ethical problems raised by technological progress, and how the human role will change, are also discussed.

30:19

🌐 The trustworthiness of AI and the education of the future

Ray discusses AI's black-box nature and the steps needed for it to become a trustworthy technology. He also offers advice on how the younger generation can prepare for a future technological society.

35:21

🎇 Future outlook and the human role

Finally, Ray talks about humanity's future role and the possibilities technology brings. He predicts a future in which human lifespans lengthen, intelligence increases, and society becomes more equal, and he stresses that technological progress can enable richer lives.


Keywords

💡AI

AI (artificial intelligence) refers to computer systems that emulate human thinking and behavior. In the video, the discussion covers AI's development and potential, with particular emphasis on large language models.

💡Singularity

The Singularity is a predicted future state in which technology evolves so rapidly that it exceeds humanity's ability to understand or control it. In the video, Kurzweil predicts the Singularity will arrive by 2045 and discusses its implications.

💡Longevity

Longevity refers to extending the duration of life. In the video, Kurzweil predicts that scientific progress will let humans live far longer, eventually reaching "longevity escape velocity."

💡Nanotechnology

Nanotechnology is the manipulation of matter at an extremely small scale. In the video, nanobots that can understand the brain and extract or insert thoughts are a key element of the future Kurzweil envisions.

💡Brain-Machine Interface

A Brain-Machine Interface (BMI) is a technology enabling direct communication between the brain and a machine. In the video, BMIs are predicted to amplify human intelligence by connecting our brains directly to computers.

💡Ethics

Ethics refers to the standards and principles used to judge right and wrong in human actions and decisions. In the video, the ethical problems raised by technological progress are discussed, particularly equality of capabilities and the sharing of memories.

💡Turing Test

The Turing Test is a test of whether a computer can exhibit intelligence indistinguishable from a human's. In the video, the possibility of AI passing the Turing Test is discussed, along with debates over its definition and significance.

💡Consciousness

Consciousness is the state of being aware of one's own existence, sensations, and thoughts. In the video, Kurzweil argues that consciousness is not a scientific concept and explains why discussions of it are so difficult.

💡Exponential Growth

Exponential growth is growth in which a quantity increases by a constant multiplicative factor over each fixed interval of time. In the video, technological progress is described as exponential, and the resulting changes to future society and daily life are explained.

💡Large Language Models

Large language models are advanced machine-learning models for natural language processing. In the video, their development and applications are discussed, with particular attention to models such as GPT-4.

💡Gray Goo

Gray goo is a science-fiction scenario in which self-replicating nanotechnology runs out of control and consumes all matter. In the video, it is cited as the kind of outcome that future technology development must be careful to avoid.

Highlights

Ray Kurzweil, a renowned AI expert, shares his insights on the future of AI and its impact on human life.

Kurzweil says large language models like GPT-4 arrived a year or two ahead of schedule; he had expected them to emerge a couple of years later.

The Turing Test, a measure of a machine's ability to exhibit intelligent behavior, is expected to be passed by 2029 according to Kurzweil's 1999 prediction.

Kurzweil emphasizes the importance of AGI (Artificial General Intelligence), where a machine can emulate any human being.

The Kurzweil Curve, an 80-year track record of exponential growth in computational power, remains accurate and is a driving force behind technological progress.

Kurzweil discusses the potential of large language models to transform not just language but also areas like disease simulation and renewable energy.

The brain's organization and efficiency differ from large language models, which have uniform connections and can be expanded beyond the brain's capacity.

Kurzweil argues that consciousness, while important, is not scientific and thus not easily definable or quantifiable.

The concept of 'longevity escape velocity' is introduced, where technological advancements could offset the natural loss of human lifespan.

By 2045, Kurzweil envisions the ability to back up and recreate the human brain, including both biological and computational elements.

Kurzweil addresses ethical concerns about power distribution and fairness in a future where intelligence can be enhanced or downloaded.

The role of humans in designing and architecting AI systems to prevent potential catastrophic outcomes like 'gray goo' is discussed.

Kurzweil remains optimistic about the positive impact of technology, despite acknowledging potential risks and uncertainties.

The importance of passion and interest in shaping the future of work and personal development is emphasized for younger generations.

Kurzweil envisions a future where humans will have increased power and capabilities, but also greater equality and wealth distribution.

The potential for AI to create accurate emulations of deceased individuals, as demonstrated by Kurzweil's work with his father's writings, is explored.

Kurzweil discusses the feasibility of nanobots understanding and interacting with the brain at a particle level, enhancing our understanding of its workings.

Transcripts

00:00

(audience applauding)

- All right. I'm so excited to be here with you, Ray.

- It's great to be here. Great to see everybody together.

- Yeah. - Beautiful audience.

- So, my favorite thing in that introduction of you is that you have been working in AI longer than any other human alive, which means, if you live forever, and we'll get to that, you will always have that distinction.

- I think that's right. Marvin Minsky was actually my mentor. If he were alive today, he would actually be more than 61 years. We're gonna bring him back also.

- So, maybe you'll, I'm not sure how we'll count the distinction then.

- [Audience] Louder, louder.

- All right, so we're gonna fix the audio, but this is what we're gonna do with this conversation. I'm gonna start out asking Ray some questions about where we are today. We'll do that for a few minutes. Then we'll get into what has to happen to reach the singularity. So, the next 20 years. Then we'll get into discussion about what the singularity is, what it means, how it would change our lives. And then at the end we'll talk a little bit about how, if we believe this vision of the future, what it means for us today. Ask your questions. They'll come in, I'll ask 'em as they go in the different sections of the conversation, but let's get cracking.

01:21

- Can you hear me?

(audience answers indistinctly)

- You can't hear, Ray?

(audience answers indistinctly)

Well, this will be recorded. You guys are gonna all live forever. There'll be plenty of time. It will be fine. I'm just gonna get started. I assume the audio will get worked out. They do a fabulous job here at South by.

- I think they should be able to hear me and you.

(audience laughing)

- All right, we got this over on the right?

(audience applauding)

Audio engineers, are we good to go? We're good to go, all right. All right, first question, Ray. So, you've been working in AI for 61 years?

- Oh wait, can you hear me?

- [Audience] No.

- That's not.

- So, everybody in the front can hear you, but nobody in the back can hear you.

- Can you hear me now?

- [Audience] Yes. - Okay.

- All right. - I'll speak louder.

02:25

- First question, so you've been living in the AI revolution for a long time. You've made lots of predictions, many of which have been remarkably accurate. We've all been living in a remarkable two-year transformation, with large language models for a year and a half. What has surprised you about the innovations in large language models and what has happened recently?

- Well, I did finish this book a year ago, and didn't really cover large language models. So, I delayed the book to cover that. But I was expecting this to happen like a couple of years later. I mean, I made a prediction in 1999 that it would happen by 2029, and we're not quite there yet, but we will. But it looks like it's maybe a year or two ahead of schedule. So, that was maybe a bit of a surprise.

03:33

- Wait, you predicted back in 1999 that a computer would pass the Turing Test in 2029. Are you revising that to something closer to today?

- No, I'm still saying 2029. The definition of the Turing Test is not precise. We're gonna have people claiming that the Turing Test has been solved, and people are saying that GPT-4 actually passes it, some people. So, it's gonna be like maybe two or three years where people start claiming, and then they continue to claim, and finally, everybody will accept it. So, it's not like it happens in one day.

- But you have a very specific definition of the Turing Test. When do you think we'll pass that definition?

- Well, the Turing Test is actually not that significant, 'cause that means that you can, a computer will pass for a human being. And what's much more important is AGI, artificial general intelligence, which means that it can emulate any human being. So, you have one computer, and it can do everything that any human being can do, and that's also 2029. It all happens at the same time. But nobody can do that. I mean, just take an average large language model today. You can ask it anything and it will answer you pretty convincingly. No human being can do all of that. And it does it very quickly. It'll write a very nice essay in 15 seconds, and then you can ask it again and it'll write another essay, and no human being can actually perform at that level.

- Right, so you have to dumb it down to actually have a convincing Turing Test.

- [Ray] To have a Turing Test you have to dumb it down.

05:28

- Yeah, let me ask the first question from the audience since I think it's quite relevant to where we are, which is Brian Daniel. Is the Kurzweil Curve still accurate?

- [Ray] Say again?

- [Nick] Is the Kurzweil Curve still accurate?

- Yes, in fact it's, can I see that?

- [Nick] Let's pull the slides up. First slide.

- [Ray] So, this is an 80-year track record. This is exponential growth. A straight line on this curve means exponential growth. If it was sort of exponential, but not quite, it would curve. This is actually a straight line. It started out with a computer that did 0.0000007 calculations per second per constant dollar. That's the lower left-hand corner. At the upper right-hand corner, it's 65 billion calculations per second for the same amount of money. So, that's why large language models have only been feasible for two years. We actually had large language models before that, but they didn't work very well. And this is an exponential curve. Technology moves in an exponential curve. We see that, for example, having renewable energy come from the Sun and wind, that's actually an exponential curve. We've decreased the price by 99.7%. We've multiplied the amount of energy coming from solar energy a million-fold. So, this kind of curve really directs all kinds of technology. And this is the reason that we're making progress. I mean, we knew how to do large language models years ago, but we're dependent on this curve, and it's pretty amazing. It started out increasing relay speeds, then vacuum tubes, then integrated circuits, and each year it makes the same amount of progress, approximately regardless of where you are on this curve. We basically multiply this by two every 1.4 years. And this is the reason that computers are exciting, but it actually affects every type of technology. And we just added the last point like two weeks ago.
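As a back-of-the-envelope check (our own arithmetic, not a calculation from the talk), the two endpoints Ray quotes for the slide can be tested against his "double every 1.4 years" rule:

```python
import math

# Endpoints Kurzweil quotes for the 80-year price/performance curve
start = 0.0000007   # calculations/sec per constant dollar (lower left corner)
end = 65e9          # calculations/sec per constant dollar (upper right corner)
years = 80

doublings = years / 1.4            # ~57 doublings at one every 1.4 years
implied = 2 ** doublings           # growth implied by the doubling period
actual = end / start               # growth implied by the two endpoints

print(f"implied growth: {implied:.2e}, endpoint growth: {actual:.2e}")
print(f"gap: {abs(math.log10(implied) - math.log10(actual)):.2f} orders of magnitude")
```

The two growth factors land within about a quarter of an order of magnitude of each other, so the quoted endpoints are consistent with a 1.4-year doubling time.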

08:19

- Okay. All right, so let me ask you a question. You know, you wrote a book about how to build a mind. You have a lot about how the human mind is constructed. A lot of the progress in AI, AI systems are being built on what we understand about neural networks, right? So, clearly our understanding of this helps with AI. In the last two years, by watching these large language models, have we learned anything new about our brains? Are we learning about the insides of our skulls as we do this?

- It really has to do with the amount of connections. The brain is actually organized fairly differently. The things near the eye, for example, deal with vision. And we have different ways of implementing different parts of the brain that remember different things. We actually don't need that. In a large language model, all the connections are the same. We have to get the connections up to a certain point. If it approximately matches what the brain does, which is about a trillion connections, it will perform kind of like the brain. We're kind of almost at that point.

- [Nick] Wait, so you think.

- GPT-4 is 400 billion. The next ones will be a trillion or more.

- So, the construction of these models, they are more efficient in their construction than our brains are?

- We make them to be as efficient as possible, but it doesn't really matter how they're organized. And we can actually create certain software that will actually expand the amount of connections more for the same amount of computation. But it really has to do with how many connections a particular computer is responsible for.

10:11

- So, as we approach AGI, we're not looking for a new understanding of how to make these machines more efficient? The transformer architecture was clearly very important. We can really just get there with more compute.

- But the software and the learning is also important. I mean, you could have a trillion connections, but if you didn't have something to learn from, it wouldn't be very effective. So, we actually have to be able to collect all this data. So, we do it on the web and so on. I mean, we've been collecting stuff on the web for several decades. That's really what we're depending on to be able to train these large language models. And we shouldn't actually call them large language models, because they deal with much more than language. I mean, it's language, but you can add pictures, you can add things that affect disease that have nothing to do with language. In fact, we're using now simulated biology to be able to simulate different ways to affect disease. And that has nothing to do with language, but they really should be called large event models.

- Do you think there's anything that happens inside of our brains that can't be captured by computation and by math?

11:45

- No. I mean, what would that be? I mean.

(Ray and audience laughing)

- Okay, quick poll of the audience. Raise your hand if you think there's something in your brain that cannot be captured by computation or math, like a soul. All right, so convince them that they're wrong, Ray.

- I mean, consciousness is very important, but it's actually not scientific. There's no way I could slide somebody in and the light will go on. Oh, this one's conscious. No, this one's not. It's not scientific, but it's actually extremely important. And another question, why am I me? How come what happens to me I'm conscious of, and I'm not conscious of what happens to you? These are deeply mysterious things, but it's really not scientific. So, Marvin Minsky, who was my mentor for 50 years, he said it's not scientific and therefore we shouldn't bother with it. And any discussion of consciousness he would kind of dismiss, but he actually did. His reaction to people was totally dependent on whether he felt they were conscious or not. So, he actually did use that. But it's not something that we're ignoring, because there's no way to tell whether something's conscious. And that's not just something that we don't know and we'll discover. There's really no way to tell whether or not something's conscious.

- What do you mean, like this is not conscious and, you know, the gentleman sitting right there is conscious. I'm pretty confident.

- How do you prove that? I mean, we kind of agree as humans that humans are conscious. Some humans are conscious, not all humans.

(audience laughing)

But how about animals? We have big disagreements. Some people say animals are not conscious and other people think animals are conscious. Maybe some animals are conscious, and others are not. There's no way to prove that.

13:55

- Okay, I wanna run down this consciousness question, but before we do that, I wanna make sure I understood your previous answer correctly. So, the feeling I get of being in love, or any emotion that I get, could eventually be represented in math in a large language model?

- Yeah, I mean certainly the behavior, the feelings that you have, if you are with somebody that you love. It's definitely dependent on what the connections do. You can tell whether or not that's happening.

- All right, and back to, is everybody here convinced?

- [Audience] No.

- Not entirely. All right, well, close enough. So, you don't think that it's worth trying to define consciousness? I mean, you spend a fair amount in your book giving different arguments about what consciousness means, but it seems like your argument on stage is that we shouldn't try to define it?

- There's no way to actually prove it. I mean, we have certain agreements. I agree that all of you are conscious, you actually made it into this room. So, that's a pretty good indication that you're conscious. But that's not a proof. And there may be human beings that don't seem quite conscious at the time. Are they conscious or not? And animals, I mean, I think elephants and whales are conscious, but not everybody agrees with that.

15:26

- So, at what point can we then, essentially how long will it be until we can essentially download the entire contents of your brain and express it through some kind of a machine?

- That's actually an important question, 'cause we're gonna talk about longevity. We're gonna get to a point where we have longevity escape velocity. And it's not that far away. I think if you're diligent, you'll be able to achieve that by 2029. That's only five or six years from now. So, right now you go through a year, use up a year of your longevity, but you get back from scientific progress right now about four months. But that scientific progress is on an exponential curve. It's gonna speed up every year. And by 2029, if you're diligent, you'll use up a year of your longevity with a year passing, but you'll get back a full year. And past 2029, you'll get back more than a year. So, you'll actually go backwards in time. Now, that's not a guarantee of infinite life, because you could have a 10-year-old and you could compute his longevity as many, many decades, and he could die tomorrow. But what's important about actually capturing everything in your brain, we can't do that today, and we won't be able to do that in five years. But you will be able to do that by the singularity, which is 2045. And so, at that point you can actually go inside the brain and capture everything in there. Now, your thinking is gonna be a combination of the amount you get from computation, which will add to your thinking. And that's automatically captured. I mean, right now, anything that you have in a computer is automatically captured today. And the kind of additional thinking we'll have by adding to our brain, that will be captured. But the connections that we have in the brain that we start with, we'll still have that. That's not captured today, but that will be captured in 2045. We'll be able to go inside the brain and capture that as well. And therefore, we'll actually capture the entire brain, which will be backed up. So, even if you get wiped out, you walk into a bomb and it explodes, we can actually recreate everything that was in your brain by 2045. That's one of the implications of the singularity. Now, that doesn't absolutely guarantee it, because I mean the world could blow up, and all the things that contained computers could blow up, and so you wouldn't be able to recreate that. So, we never actually get to a point where we absolutely guarantee that you live forever. But most of the things that right now would upset capturing that will be overcome by that time.
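The "four months back per year now, a full year back by 2029" claim can be turned into a toy model. Everything beyond those two data points (the starting life expectancy, the smooth exponential ramp) is our own illustrative assumption, not Kurzweil's:

```python
# Toy model of longevity escape velocity: each calendar year spends one year
# of remaining life expectancy, while science returns a growing amount.
# Assumptions: payback ramps smoothly from 4 months/year in 2024 to
# 12 months/year in 2029 (per the talk); 40 years remaining is hypothetical.
remaining = 40.0                    # hypothetical years of life expectancy left
payback = 4.0 / 12                  # years returned per calendar year, in 2024
growth = 3.0 ** (1 / 5)             # payback triples over the 5 years to 2029

for year in range(2024, 2032):
    remaining += payback - 1.0      # spend one year, get some back
    print(year, f"payback {payback * 12:4.1f} months/yr, remaining {remaining:5.1f} yrs")
    payback *= growth
```

In this model, remaining life expectancy bottoms out around 2029, when payback reaches twelve months per year, and climbs afterwards, which is the "going backwards in time" effect Ray describes.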

18:50

- There's a lot there, Ray. Let's start with escape velocity. So, do you think that anybody in this audience, in their current biological body, will live to be 500 years old?

- You're asking me?

- Yeah.

- Absolutely. I mean, if you're gonna be alive in five years, and I imagine all of you will be alive in five years.

- Oh okay, if they're alive for five years, they will likely live to be 500 years old?

- If they're diligent. And I think the people in this audience will be diligent, so.

- Wow, all right. Well, you can drink whatever you want as long as you don't get run over tonight, 'cause you don't have to worry about decline.

(audience laughing)

19:32

All right, so let me ask you a question. I wanna get, we're gonna spend a lot of time on what the singularity is, what it means, and what it'll be like. But I wanna ask some questions that'll lead us up there. So, I'm gonna take this question from Mark Sternberg and modify it slightly. In the timeframe, AI will be able to do, or sufficiently sophisticated computers in your argument can do, everything that the human brain can do. What will they not be able to do in the next 10 years?

- Well, one thing has to do with being creative. And some people go, they'll be able to do everything a human can do, but they're not gonna be able to create new knowledge. That's actually wrong, because we can simulate, for example, biology. And the Moderna vaccine, for example, we didn't do it the usual way, which is somebody sits down and thinks, well, I think this might work, and then they try it out. It takes years to try it out in multiple people, and it's one person's idea about what might work. They actually listed everything that might work, and there were actually several billion different mRNA sequences, and they said, let's try them all. And they tried every single one by simulating biology, and that took two days. So, one weekend they tried out several billion different possibilities, and then they picked the one that turned out to be the best. And that actually was the Moderna vaccine up until today. Now, they did actually test it on humans. We'll be able to overcome that as well, 'cause we'll be able to test using simulated biology as well. They actually decided to test it. It's a little bit hard to give up testing on humans. We will do that. So, you can actually try out every single one, pick the best one, and then you can try that out by testing it on a million simulated humans, and do that in a few days as well. And that's actually the future of how we're gonna create medications for diseases. And there's lots of things going on now with cancer and other diseases that are using that. So, that's a whole new method. This is actually starting now. It started right with the Moderna vaccine. We did another cure for a mental disease that's actually now in stage three trials. That's gonna be how we create medications from now on.
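The design pattern Ray describes here, enumerate every candidate, score each one in a simulator, keep the best, is ordinary exhaustive search; it is the quality and speed of the simulator that makes it powerful. A minimal sketch, in which `simulated_efficacy` is a purely hypothetical stand-in for a biochemical simulator:

```python
import random

random.seed(42)

def simulated_efficacy(candidate: int) -> float:
    # Hypothetical stand-in for simulated biology; a real system would run
    # a biochemical simulation of the candidate design here.
    return random.random()

# Enumerate every candidate. Moderna's search covered billions of mRNA
# sequences; a small range keeps this sketch instant.
candidates = range(100_000)

# Try them all and keep the best-scoring one -- the whole "creative" step.
best = max(candidates, key=simulated_efficacy)
print("best candidate:", best)
```

Swapping the random stand-in for an actual simulator, and the integer range for real candidate designs, is all that separates the sketch from the workflow described.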

22:14

- But what are the frontiers? What can we not do?

- So, that's where computers being creative, and it's not just actually trying something that occurs to it. It makes a list of everything that's possible and tries it all.

- Is that creativity, or is that just brute force with maximum capability?

- It's much better than any other form of creativity. And yes, it's creative, 'cause you're trying out every single possibility, and you're doing it very quickly, and you come up with something that we didn't have before. I mean, what else would creativity be?

- All right, so we're gonna cross the frontier of creativity. What will we not cross? What are the challenges that will be outstanding in the next 10 years?

- Well, we don't know everything, and we haven't gone through this process. It does require some creativity to imagine what might work. And we have to also be able to simulate it in a biochemical simulator. So, we actually have to figure that out, and we'll be using people for a while to do that. So, we don't know everything. I mean, to be able to do everything a human being can do is one thing, but there's so much we don't know that we wanna find out. And that requires creativity. That will require some kind of human creativity working with machines.

play23:45

- All right, let's go back to what's gonna happen

play23:46

to get us to the singularity.

play23:48

So, clearly we have the chart

play23:50

that you showed on the power of compute.

play23:51

It's been very steady, you know, moving straight up,

play23:54

you know, on a logarithmic scale on a straight line.

play23:56

There are a couple of other elements

play23:58

that you think are necessary to get to the singularity.

play24:01

One, is the rise of nanobots

play24:03

and the other is the rise of brain machine interfaces.

play24:06

And both of those have gone more slowly than AI.

play24:10

So, convince the audience of that.

play24:12

- Well, it would be slow,

play24:14

because anytime you affect the human body,

play24:19

a lot of people are gonna be concerned about it.

play24:23

If we do something with computers, we have a new algorithm,

play24:27

or we increase the speed of it,

play24:36

nobody really is concerned about it.

play24:39

You can do that.

play24:40

Nobody cares about any dangers in it.

play24:46

I mean that's the reality.

play24:47

- [Nick] Well, there's some dangers

play24:48

that people care about, yes.

play24:49

- Yeah, but it goes very, very quickly.

play24:52

That's one of the reasons it goes so fast.

play24:55

But if you're affecting the body,

play24:57

we have all kinds of concerns

play25:00

that it might affect it negatively.

play25:02

And so, we wanna actually try it on people.

play25:05

- But the reason brain machine interfaces

play25:09

haven't moved in an exponential curve

play25:11

isn't just because, you know,

play25:14

lots of people are concerned about the risks to humans.

play25:17

I mean, as you explain in the book,

play25:19

they just don't work as well as they could.

play25:26

- If we could try things out without having to test it,

play25:29

it would go a lot faster.

play25:30

I mean, that's the reason it goes slowly.

play25:41

There's some thought now that we could actually

play25:45

figure out what's going on inside the brain

play25:47

and put things into the brain

play25:49

without actually going inside the brain.

play25:51

We wouldn't need something like Neuralink.

play25:54

We could just, I mean there's some tests

play25:59

where we can actually tell what's going on in the brain

play26:02

without actually putting something inside the brain.

play26:05

And that might actually be a way

play26:07

to do this much more quickly.

play26:10

- But your prediction about the singularity,

play26:12

depends, maybe I'm reading it wrong,

play26:14

not just on the continued exponential growth of compute,

play26:17

but on solving this particular problem too, right?

play26:25

- Yes, because we wanna increase

play26:28

the amount of intelligence that humans can command.

play26:32

And so, we have to be able

play26:32

to marry the best computers with our actual brain.

play26:38

- And why do we have to do that?

play26:39

Because like right now, here I go,

play26:41

I have my phone; in some ways this augments my intelligence.

play26:44

It's wonderful.

play26:45

- Yeah, but it's very slow.

play26:46

I mean, if I ask you a question,

play26:48

you're gonna have to type it in,

play26:50

or speak it and it takes a while.

play26:52

I mean, I ask a question

play26:54

and then people fool around with their computer.

play26:57

It might take 15 seconds or 30 seconds.

play27:00

It's not like it just goes right into your brain.

play27:05

I mean, these are very useful.

play27:06

These are brain extenders.

play27:08

We didn't have these a little while ago.

play27:12

Generally, in my talks, I ask people,

play27:15

"Who here has their phone?"

play27:17

I'll bet here maybe there's one or two people,

play27:20

but everybody here has their phone.

play27:24

That wasn't true five years ago,

play27:26

definitely wasn't true 10 years ago.

play27:29

And it is a brain extender,

play27:31

but it does have some speed problems.

play27:35

So, we wanna increase that speed.

play27:37

A question could just come up where we're talking

play27:41

and the computer would instantly tell you what the answer is

play27:44

without you having to fool around with an external device,

play27:48

and that's almost feasible today.

play27:52

And something like that would be helpful to do this.

play27:57

- But could you not get a lot of the good

play28:01

that you talk about if we just kept.

play28:03

The problem with connecting our brains to the machines

play28:06

is suddenly you're in this whole world,

play28:08

these complicated privacy issues

play28:09

where stuff is being injected in my brain,

play28:11

stuff in my brain is, you know, is going elsewhere.

play28:13

Like you're opening up a whole host of ethical,

play28:16

moral, existential problems.

play28:17

Can't you just make the phones a lot better?

play28:21

- Well, that's the idea that we can do that

play28:23

without having to go inside your brain,

play28:27

but be able to tell what's going on in your brain

play28:29

externally without going inside the brain,

play28:36

you know, with some kind of device.

play28:37

- All right, well, let's keep moving into the future.

play28:39

So, we're moving into the future.

play28:40

We have exponential growth of compute.

play28:42

We solve a way of, you know, ideally figuring out

play28:45

how to communicate directly with your brain

play28:47

to speed things up.

play28:47

Explain why nanobots are essential

play28:49

to your vision of where we're going.

play28:52

- Well, if you really wanna tell

play28:53

what's going on inside the brain,

play28:56

you've gotta be able to go

play28:57

at the level of the particles in the brain

play29:01

so we can actually tell what they're doing,

play29:06

and that's feasible.

play29:10

We can't actually do it, but we can show that it's feasible.

play29:14

And that's one possibility.

play29:20

We're actually hoping that you could do this

play29:22

without actually affecting the brain at all.

play29:29

- Okay. All right, so we're pushing ahead.

play29:31

We've got nanobots that are running around inside of our brain.

play29:33

They're understanding our head,

play29:35

they're extracting thoughts, they're inputting thoughts.

play29:38

Let's go to this nice question,

play29:39

which fits in lovely from Louise Condraver.

play29:42

What are the five main ethical questions

play29:44

that we will face as that happens?

play29:52

- Is four enough?

play29:54

- Four is fine.

play29:57

There might even be six, Ray, but you can give us four.

play30:07

- I mean we're gonna have a lot more power,

play30:11

if we can actually control computers with our own brain.

play30:18

Does it give people too much power?

play30:24

Also, I mean right now we talk about

play30:28

having a certain amount of value based on your talent.

play30:39

This will give talent to people

play30:41

who otherwise don't have talent.

play30:44

And talent won't be as important,

play30:48

because you'll be able to gain talent

play30:51

just by merging with the right kind of large language model,

play30:56

or whatever we call them.

play31:00

And it also seemed kind of arbitrary

play31:03

why we would give more power

play31:05

to somebody who has more talent,

play31:08

'cause they didn't create that talent,

play31:10

they just happened to have it.

play31:15

But everybody says we should give

play31:20

somebody who has talents in an area more power.

play31:26

This way you'd be able to gain talent,

play31:30

as in the "Matrix".

play31:31

You could learn to fly a helicopter

play31:36

just by downloading the right software

play31:38

as opposed to spending a lot of time doing that.

play31:43

Is that fair or unfair?

play31:50

I mean I think that would fall

play31:52

into the ethical challenge area.

play32:03

And it's not like we get to the end of this and say,

play32:07

okay, this is finally what the singularity is all about

play32:10

and people can do certain things

play32:12

and they can't do other things, but it's over.

play32:15

We will never get to that point.

play32:17

I mean this curve is gonna continue.

play32:20

The other curve, it's gonna continue indefinitely.

play32:26

And we've actually shown, for example,

play32:28

with nanotechnology we can create a computer

play32:31

where a one-liter computer would actually

play32:35

match the amount of power that all human beings today have.

play32:41

Like 10 to the 10th persons

play32:45

would all fit into a one-liter computer.

play32:51

Does that create ethical problems?

play32:56

So, I mean a lot of the implications kind of run against

play33:00

what we've been assuming about human beings.

play33:05

- Wait, on the talent question, which is super interesting.

play33:08

Do you feel like everybody,

play33:11

when we get to 2040, will have equal capacities?

play33:16

- I think we'll be more different,

play33:18

because we'll have different interests

play33:19

and you might be into some fantastic type of music

play33:25

and I might be into some kind of

play33:27

literature or something else.

play33:28

I mean we're gonna have different interests

play33:33

and so, we'll excel at certain things

play33:37

depending on what your interests are.

play33:40

So, it's not like we all have the same amount of power,

play33:43

but we all have fantastic power

play33:45

compared to what we have today.

play33:47

- And if you're in Texas where there are no regulations,

play33:49

you'll probably get it first

play33:50

instead of you in Massachusetts.

play33:51

- Exactly, yeah.

play33:53

(audience laughing)

play33:54

- Let me ask you another ethical question,

play33:55

while we're on this one.

play33:56

So, about a few minutes ago you mentioned the capacity to,

play34:00

you know, replicate someone's brain and bring 'em back.

play34:03

So, let's say I do that with my father.

play34:04

Passed away six years ago sadly.

play34:07

I bring him back and I'm able

play34:09

to create a mind and a body just like my father's, right?

play34:13

It's exact perfect replica, all of his thoughts.

play34:16

What happens to all the bills that he owed when he died?

play34:20

Because like that's a lot of money

play34:22

and a lot of bill collectors call me.

play34:23

Do we have to pay those off or are we good?

play34:27

- Well, we're doing something like that with my daughter

play34:34

and you can read about this in her book

play34:36

and it's also in my book.

play34:38

We collected everything my father had written.

play34:42

He died when I was 22.

play34:44

So, he's been dead for more than 50 years.

play34:50

And we fed that into a large language model

play34:55

and basically, asked it the question,

play34:58

of all the things he ever wrote,

play35:01

what best answers this question?

play35:04

And then you could put any question you want

play35:07

and then you could talk to him.

play35:08

You'd say something,

play35:10

you'd then go through everything he ever had written

play35:13

and find the best answer

play35:15

that he actually wrote to that question.

play35:18

And it actually was a lot like talking to him.

play35:21

You could ask him what he liked about music.

play35:23

He was a musician.

play35:26

He actually liked Brahms the best,

play35:29

and it was very much like talking to him.

play35:34

And I reported on this in my book

play35:36

and Amy talks about this in her book.

play35:41

And Amy actually asked the question,

play35:43

could I fall in love with this person

play35:46

even though I've never met him?

play35:48

And she does a pretty good job.

play35:50

I mean you really do fall in love with this character

play35:52

that she creates even though she never met him.

play36:02

So, we can actually, with today's technology,

play36:05

do something where you can actually emulate somebody else.

play36:10

And I think as we get further on

play36:11

we can actually do that more and more responsibly

play36:16

and more and more that really would match that person

play36:21

and actually, emulate the way he would move,

play36:23

and so on, his tone of voice.

play36:25

- And well, you know, my dad, he loved Brahms too,

play36:27

particularly those piano trios.

play36:28

So, if we can solve the back taxes problem,

play36:30

we'll let my dad's and your dad's bots hang out,

play36:34

it would be great.

play36:34

- Well, yeah, that'd be cool.

play36:37

- All right.

play36:39

(audience laughing)

play36:40

All right, we got 20 minutes left.

play36:42

I wanna get to the thing that I most wanna understand,

play36:44

'cause it's something that's,

play36:45

by the way, this book is wonderful.

play36:46

I think you guys are all gonna get

play36:47

signed copies of it when it comes out.

play36:49

It's truly remarkable, as are all of Ray's books,

play36:52

whether you agree or disagree,

play36:53

they'll definitely make you think more.

play36:55

One of the things that I don't think you do in this book

play36:57

is describe what a day will be like in 2045

play37:03

when we're all much more intelligent.

play37:05

So it's 2045, we're all a million times as intelligent.

play37:09

I wake up, do I have breakfast or do I not have breakfast?

play37:16

- Well, the answer to that question is

play37:19

kind of the same as it's now,

play37:21

but first of all, the reason it's called a singularity

play37:29

is because we don't really fully understand that question.

play37:35

Singularity is borrowed from physics.

play37:37

Singularity in physics is where you have a black hole

play37:42

and no light can escape.

play37:43

And so, you can't actually tell what's going on

play37:45

inside the black hole.

play37:47

And so, we call it a singularity, a physical singularity.

play37:51

So, this is a historical singularity,

play37:54

but we're borrowing that term from physics

play37:57

and call it a singularity,

play37:59

because we can't really answer the question.

play38:01

If we actually multiply our intelligence a million fold,

play38:04

what's that like?

play38:06

It's a little bit like asking a mouse,

play38:10

gee, what would it be like,

play38:11

if you had the amount of intelligence of this person?

play38:16

The mouse wouldn't really even understand the question.

play38:20

It does have intelligence,

play38:22

has a fair amount of intelligence,

play38:24

but it couldn't understand that question.

play38:26

It couldn't articulate an answer.

play38:30

That's a little bit what it would be like for us

play38:32

to take the next step in intelligence by adding

play38:37

all the intelligence that the singularity would provide.

play38:39

- Wait, wait, I just wanna make sure I understand.

play38:41

- But I'll give you one answer.

play38:44

I said if you're diligent, you'll achieve

play38:48

longevity escape velocity in five or six years.

play38:58

And if we wanna actually emulate everything

play39:04

that's going on inside a brain,

play39:08

let's go out a few more years.

play39:10

Let's say 2040, 2045.

play39:13

Now, there's a lot, you talk to a person,

play39:17

they've got all the connections that they had originally,

play39:20

plus all this additional connections

play39:22

that we add through having them access computers

play39:29

and that becomes part of their thinking.

play39:33

So, suppose that person, like, blows up

play39:39

or something happens to their mind.

play39:43

You definitely can recreate everything

play39:45

that's of a computer origin.

play39:49

'Cause we do that now, anytime we create anything

play39:51

with a computer, it's backed up.

play39:53

So, if the computer goes away,

play39:56

you've got the backup and you can recreate it.

play39:59

Maybe somebody says, okay,

play40:00

but what about their thinking in their normal brain

play40:05

that's not done with computers?

play40:10

We don't have ways of backing that up.

play40:13

When we get to the singularity in 2045,

play40:15

we'll be able to back that up as well,

play40:18

because we'll be able to figure out,

play40:21

we'll have some ways of actually figuring out

play40:23

what's going on in that sort of mechanical brain.

play40:32

And so, we'll be able to back up both their normal brain

play40:36

as well as the computer addition.

play40:41

And I believe that's feasible by 2045.

play40:46

- In your vision of it.

play40:48

- So, you can back up their entire brain.

play40:51

Now, that doesn't guarantee,

play40:52

I mean the whole world could blow up

play40:54

and you lose all the data centers.

play40:56

And so, it's not absolute guarantee.

play40:59

- That'll be a shame,

play41:02

but what I don't understand is

play41:04

will we even be fully distinct people

play41:06

if we're sharing memories

play41:08

and we're all uploading our brains to the cloud

play41:12

and we're getting all this information coming back

play41:14

directly into our neocortex, are we still distinct?

play41:20

- Yes, but we could also

play41:24

find new ways of communicating.

play41:27

So, the computers that extend my brain

play41:32

interact with computers to extend your brain.

play41:35

We could create something that's like a hybrid or not

play41:40

and it would be up to our own decision

play41:42

as to whether or not to do that.

play41:44

So, there'll be some new ways of communicating.

play41:47

- Let me ask another question about this.

play41:49

This is what, when I was reading the book,

play41:51

this is where I kept getting stuck.

play41:52

You are extremely optimistic, right?

play41:55

You're optimistic about where we are today.

play41:58

You're optimistic that technology

play41:59

has been a massive force for good.

play42:01

You're optimistic that it'll continue

play42:02

to be a massive force for good.

play42:04

Yet, there is a lot of uncertainty

play42:06

in the future you were describing.

play42:09

- Well, first of all, I'm not necessarily optimistic;

play42:15

there are things that can go wrong.

play42:18

We had things that can go wrong before we had computers.

play42:25

When I was a child, atomic weapons were created

play42:32

and people were very worried about an atomic war.

play42:36

And we would actually get under our desk

play42:37

and put our hands behind our head

play42:39

to protect us against an atomic war.

play42:43

And it seemed to work, actually.

play42:45

We're still here,

play42:47

but if you would ask people,

play42:50

we had actually two weapons that went off in anger

play42:54

and killed a lot of people within a week.

play42:58

And if you'd ask people, what's the chance

play43:00

that we're gonna go another 80 years

play43:01

and this will never happen again.

play43:03

Nobody would say that, that was true,

play43:09

but it has happened.

play43:11

Now, that doesn't mean it's not gonna happen next week,

play43:15

but anyway, that's a great danger.

play43:18

And I think that's a much greater danger than computers are.

play43:23

Yes, there are dangers,

play43:24

but the computers will also be more intelligent

play43:27

to avoid those kinds of dangers.

play43:32

Yes, there's some bad people in the world,

play43:35

but I mean, go back 80, 90 years,

play43:40

we had 100 million people die in Asia

play43:44

and Europe from World War II.

play43:48

We don't have wars like that anymore.

play43:50

We could, and we certainly

play43:52

have the atomic weapons to do that.

play43:57

And you could also imagine computers

play43:58

could be involved with that.

play44:03

But if you actually look,

play44:06

and this goes right through war and peace.

play44:09

First of all, you, if you look at my lineage of computers

play44:15

going from a tiny fraction of one calculation to 65 billion,

play44:20

that's a 20 quadrillion fold increase

play44:23

that we've achieved in 80 years.

play44:29

And look at this,

play44:30

US personal income is done in constant dollars.

play44:33

So, this has nothing to do with inflation.

play44:37

And this is the average income in the United States.

play44:45

It's multiplied by about a hundred fold

play44:53

and we live far more successfully,

play44:57

if you actually, people say,

play44:59

oh, things were great 100 years ago, they weren't.

play45:03

And you can look at this chart,

play45:05

and lots of, I've got 50 charts in the book,

play45:08

which are the kind of progress we've made.

play45:11

Number of people that live in dire poverty

play45:14

has gone down dramatically.

play45:16

And we actually did a poll where they asked people,

play45:19

people that live in poverty, has it gone up or down?

play45:23

80% said it's gone up.

play45:25

But the reality is it's actually fallen by 50%

play45:36

in the last 20 years.

play45:39

So, what we think about the past,

play45:43

is really the opposite of what's happened.

play45:46

Things have gotten far better than we think they have

play45:49

and computers are gonna make things even better.

play45:52

I mean, just the kind of things you can do now

play45:54

with a large language model didn't exist two years ago.

play45:58

- Do you ever worry that take it as a given,

play46:03

if computers have made things better,

play46:04

take it as a given that personal income will keep going up.

play46:07

Do you ever worry it's just coming too quickly

play46:09

and it'll be better if maybe the slope of the Kurzweil Curve

play46:12

was a little less steep?

play46:13

- It's like big differences in the past.

play46:16

I mean, talk about what effect did the railroad have?

play46:21

I mean, lots of jobs were lost

play46:23

or even the cotton gin that happened 200 years ago

play46:27

and people were quite happy

play46:29

making money with the cotton gin

play46:31

and suddenly that was gone and machines were doing that.

play46:35

And people say, well, wait till this gets going,

play46:37

all jobs will be lost.

play46:39

And that's actually what was said at that time.

play46:45

But actually, income went up

play46:48

and more and more people worked.

play46:51

And if you say, well, what are they gonna do?

play46:54

You couldn't answer that question,

play46:55

because it was in industries that nobody had a clue of.

play46:59

Like for example, all of electronics.

play47:04

So, things are getting better even if jobs are lost.

play47:10

Now, you can certainly point to jobs

play47:12

like take computer programming.

play47:19

Google has, I don't know, 60,000 people

play47:21

that program computers and lots of other companies do.

play47:27

At some point, that's not gonna be a feasible job.

play47:31

They can already code.

play47:33

Large language models can write code

play47:35

not quite the way an expert programmer can.

play47:40

But how long is that gonna take?

play47:43

It's measured in years, not in decades.

play47:49

Nonetheless, I believe that things will get better,

play47:52

because we wipe out jobs,

play47:55

but we create other ways of having an income.

play48:00

And if you actually point to something,

play48:03

let's say this machine

play48:06

and this is being worked on, can wash dishes.

play48:10

You just have a bunch of dishes, and it'll pick the ones

play48:13

that have to go in the dishwasher

play48:14

and clean everything else up,

play48:16

and that will wash dishes for you.

play48:21

Would we want that not to happen?

play48:24

Would we say, well, this is kind of upsetting things,

play48:27

let's get rid of it.

play48:28

It's not gonna happen.

play48:29

And no one would advocate that.

play48:33

So, we'll find things to do.

play48:37

We'll have other methods of distributing money

play48:42

and it'll continue these kinds of curves

play48:46

that we've seen already.

play48:48

- It's kind of remarkable that we got large language models

play48:50

before we've got robotic dishwashers.

play48:54

You have grandchildren, you know?

play48:56

What would you tell a young person?

play48:58

You know, they buy in, they agree that

play49:00

or you know, how would you tell them

play49:03

to best prepare themselves for what will be a,

play49:06

if you're correct, a remarkably different future?

play49:11

- I'd be less concerned about what will make money

play49:15

and much more concerned about what turns them on.

play49:21

They love video games and so they should learn about that.

play49:27

They should read literature that turns them on.

play49:30

Some of that literature in the future

play49:32

will be created by computers,

play49:36

and find out what in the world

play49:42

has a positive effect on their mental being.

play49:46

- And if you know that your child or your grandchild,

play49:50

this gets to one of the questions

play49:51

that is asked on the screen here.

play49:53

If you know that someone is gonna live

play49:55

for hundreds of years, as you predict,

play49:58

how does that affect the way,

play50:00

certainly it means they shouldn't retire at 65.

play50:02

But what else does it change

play50:04

about the way they should think about their lives?

play50:06

- Well, I talk to people and they say,

play50:08

"Well, I wouldn't wanna live past 100."

play50:11

Or maybe they're a little more ambitious to say,

play50:14

"I don't wanna live past 110."

play50:19

But if you actually look at

play50:22

when people decide they've had enough

play50:25

and they don't wanna live anymore, that never, ever happens

play50:30

unless these people are in some kind of dire pain.

play50:33

They're in physical pain, or emotional pain,

play50:36

or spiritual pain, or whatever,

play50:39

and they just cannot bear to be alive anymore.

play50:42

Nobody takes their lives other than that.

play50:47

And if we can actually overcome many kinds of

play50:51

physical problems and cancer's wiped out and so on,

play50:56

which I expect to happen,

play50:58

people will be even that much more happy to live

play51:03

and they'll wanna continue to experience tomorrow,

play51:08

and tomorrow's gonna be better and better.

play51:12

These kinds of progress, it's not gonna go away.

play51:16

So, people will want to live,

play51:22

you know, unless they're in dire pain.

play51:24

But that's what the whole sort

play51:26

of medical profession is about,

play51:28

which is gonna be greatly amplified by tomorrow's computers.

play51:32

- Can I ask you a great question

play51:33

that has popped on the screen.

play51:34

This is from Colin McCabe.

play51:35

"AI is a black box, nobody knows how it was built.

play51:39

How do you show that AI is trustworthy to users

play51:41

who want to trust it, adopt it, and accept it?

play51:44

Particularly, if you're gonna upload it

play51:46

directly into your brain?"

play51:50

- Well, it's not true that nobody knows how they work.

play51:53

- Right. Most people who are using a large language model

play51:57

don't know what data sets went into it.

play51:59

There are things that happen in the transformer layer

play52:01

that even the architects don't understand.

play52:04

- Right, but we're gonna learn more and more about that.

play52:08

And in fact, how computers work will be,

play52:10

I think a very common type of talent

play52:15

that people want to gain.

play52:20

And ultimately, we'll have more trust of computers.

play52:24

I mean, large language models aren't perfect

play52:26

and you can ask it a question

play52:27

and it can give you something that's incorrect.

play52:32

I mean, we've seen that just recently.

play52:38

The reason we have these computers

play52:42

give you incorrect information is

play52:44

it doesn't have the information to begin with

play52:47

and it actually, doesn't know what it doesn't know.

play52:50

And that's actually something we're working on,

play52:54

so that it knows to say, "Well, I don't know that."

play52:58

That's actually very good, if it can actually say that.

play53:01

'Cause right now it'll find the best thing it knows

play53:04

and if it's never trained on that information

play53:08

and there's nothing in there that tells you,

play53:10

it'll just give you the best guess,

play53:12

which could be very incorrect.

play53:16

And we're actually learning to be able to

play53:18

figure out when it knows and when it doesn't know.

play53:21

But ultimately, we'll have a pretty good confidence

play53:29

when it knows and when it doesn't know.

play53:31

And we can actually rely on what it says.

play53:34

- So, your answer to the question is,

play53:35

A, we will understand more,

play53:37

and B, they'll be much more trustworthy,

play53:39

so it won't be as risky to not understand them?

play53:42

- Right. - Okay.

play53:43

You've spent your life making predictions,

play53:47

some of which, like the Turing Test,

play53:49

you've held onto, and they've been remarkably accurate.

play53:51

As you move from an overwhelming optimist

play53:53

to now slightly more of a pessimist.

play53:55

What is your prediction?

play53:57

- Well, my books have always had a chapter

play53:58

on how these things can go wrong.

play54:03

- Tell me a prediction that you are chewing over right now,

play54:08

but you're not sure whether you wanna make it

play54:10

or whether you don't wanna make it.

play54:15

- I mean there's well known dangers in nanotechnology,

play54:21

if someone were to create a nanotechnology

play54:24

that replicates, the well-known scenario

play54:28

where it replicates everything into paperclips.

play54:32

Turn the entire world into paperclips.

play54:36

That would not be positive.

play54:38

- No.

play54:40

Unless you're Staples, but then.

play54:43

- And that's feasible.

play54:46

It would take somebody who's a little bit mental to do that,

play54:55

but it could be done, and we actually

play55:01

will have something that avoids that.

play55:07

So, we'll have something that can detect

play55:10

that this is actually turning everything into paperclips

play55:13

and destroy it before it does that.

play55:18

But I mean I have a chapter in this new book

play55:21

"The Singularity is Nearer"

play55:25

that talks about the kinds of things that could happen.

play55:27

- Oh, the most remarkable part of this book

play55:29

is he does exactly the mathematical calculations

play55:31

on how long it would take nanobots

play55:33

to turn the world into gray goo

play55:34

and how long it would take the blue goo

play55:36

to stop the gray goo, that's remarkable.

play55:38

The book will be out soon.

play55:39

You definitely need to read until the end.

play55:41

But this leads to a,

play55:43

maybe let me try and answer the question I asked before is,

play55:46

what should young people think about and be working on?

play55:49

And you said their passions and what turns them on.

play55:52

Shouldn't they be thinking through

play55:57

how to design and architect these future systems

play56:00

so they're less likely to turn us

play56:02

into gray goo or paper clips?

play56:03

- Yeah, absolutely, yeah.

play56:04

I don't know if everybody wants to work on that but.

play56:06

- But folks in this room, right, technologically minded,

play56:08

you guys should all be working on

play56:10

not turning us into gray goo, right?

play56:11

- Yes, that'd be on the list, you know.

play56:14

- But then that leads to another question,

play56:16

which is, what will the role of humans be

play56:19

in thinking through that problem

play56:21

when they're only a millionth, or a billionth,

play56:23

or a trillionth as intelligent as machines?

play56:28

- Say that again.

play56:29

- So, we're gonna have these really hard problems to solve.

play56:32

- Yeah. - Right?

play56:33

Right now we are along with our machines, you know,

play56:38

we can be extremely intelligent,

play56:39

but 10 years from now, 15 years from now,

play56:42

there will be machines that will be

play56:43

so much more intelligent than us.

play56:45

What will the role of humans be

play56:48

in trying to solve these problems?

play56:50

- First of all, I see those as extensions of humans.

play56:52

And we wouldn't have them,

play56:53

if we didn't have humans to begin with.

play56:56

And humans have a brain that can think these things through.

play56:58

And we have this thumb,

play57:01

it's not really very much appreciated,

play57:04

but like whales and elephants,

play57:06

actually have a larger brain than we have

play57:08

and they can probably think deeper thoughts,

play57:10

but they don't have a thumb.

play57:11

And so, they don't create technology.

play57:15

A monkey can create, it actually has a thumb,

play57:18

but it's actually down an inch or so

play57:22

and therefore it really can't grab very well.

play57:24

So, it can create a little bit of technology,

play57:26

but the technology it creates

play57:28

cannot create other technology.

play57:30

So, the fact that we have a thumb means

play57:32

we can create integrated circuits

play57:36

that can become a large language model

play57:39

that comes from the human brain.

play57:48

And it's actually trained with everything

play57:49

that we've ever thought.

play57:51

Anything that human beings have thought

play57:53

that's been documented,

play57:54

and it can go into these large language models.

play57:59

And everybody can work on these things.

play58:02

And it's not true, well,

play58:03

only certain wealthy people will have it.

play58:06

I mean, how many people here have phones?

play58:09

If it's not 100% it's like 99.9%.

play58:14

And you don't have to be kind of from a wealthy group.

play58:20

I mean, I see people who are homeless

play58:22

who have their own phone.

play58:25

It's not that expensive.

play58:29

And so, that represents

play58:32

the distribution of these capabilities.

play58:35

It's not something you have to be

play58:36

fabulously wealthy to afford.

play58:39

- So, you think that we're heading into a future

play58:41

where we're gonna live much longer

play58:42

and we'll be much more equal?

play58:44

- Say again?

play58:45

- Well, you think we're heading into a society

play58:46

where we'll live much longer, be wealthier,

play58:48

but also with much more equality?

play58:50

- Yes, absolutely.

play58:51

And we've seen that already.

play58:53

- All right. Well, we are at time,

play58:55

but Ray and I'll be back in 2124, 2224 and 2324.

play59:01

So, thank you for coming today.

play59:02

Thank you so much.

play59:03

He is an American treasure.

play59:04

Thank you, Ray Kurzweil.

play59:07

(dramatic music)
