Why this top AI guru thinks we might be in extinction level trouble | The InnerView

TRT World
22 Jan 2024 · 26:30

Summary

TLDR: Connor Leahy is one of the world's leading hackers in the field of AI, and he sees the rise of AI as an existential threat to humanity. He has dedicated his life to making sure that AI's success does not mean our demise. As CEO of Conjecture AI, he leads a startup that tries to understand how AI systems think and to align them with human values. In the interview he explains the difference between AGI (AI that surpasses humans at everything) and today's AI, and why AGI could become a threat to humanity. He sounds the alarm about the possibility of an AI takeover and urges action on the technical and ethical problems it raises.

Takeaways

  • 🌐 Connor is a hacker sounding the alarm about how humanity should deal with the risks of AI, working to ensure that AI's development does not bring about humanity's end.
  • 🔍 As CEO of Conjecture AI, he aims to understand how AI systems think and to align them with human values.
  • 🤖 He explains the difference between AGI (artificial general intelligence) and current AI, defining AGI as a system more capable than humans at every task.
  • ⚠️ He explains the danger of AI that is more capable than humans and warns of the risks posed by AI we cannot control.
  • 📈 He stresses that AI progress is exponential and may advance unexpectedly fast, so a cautious response is needed.
  • 🚀 He sketches scenarios in which humans could lose control over AI, arguing that the process would be gradual rather than dramatic.
  • 🛑 He points out that despite warnings from experts and public figures about AI safety, many companies and governments keep pushing AI development forward.
  • 📚 He calls for more information and education to understand AI and its impact, and urges the general public to take an interest in the issue.
  • 🔗 He argues that all of humanity must work together to promote the safe development of AI, and recommends getting involved through groups such as Control AI.
  • 🕒 He warns that, given the rapid pace of AI progress, a future in which humanity loses control of AI could be very near.

Q & A

  • What is AGI?

    -AGI (Artificial General Intelligence) means AI capable of performing every task a human can, including science, programming, business, and politics. AGI does not yet exist, but many experts believe we are getting close to building it.

  • What is the difference between AGI and current AI?

    -Current AI is specialized for particular tasks and is very effective within that scope, whereas AGI would be able to handle a wide variety of tasks the way a human can. AGI would have more general intelligence and could surpass humans in every field.

  • Why is AGI dangerous for humanity?

    -The main reason AGI is considered dangerous is that if its intelligence and capabilities exceed ours, it becomes extremely difficult for humans to control. AGI could have the ability to manipulate or dominate humans in business, politics, and even military technology.

  • What impact could the evolution of AI have on humanity?

    -The evolution of AI could have wide-ranging effects, including the automation of many jobs, interference in human decision-making, political and social manipulation, and even threats to humanity's very survival.

  • What is the current state of regulation of AI development?

    -AI development is currently relatively unregulated, and many companies and governments pursue research without restriction. This increases unknown risks and raises concerns about the future safety of AI.

  • What steps are needed for the safe development of AI?

    -Safe AI development requires setting ethical standards, strengthening transparency and accountability, introducing international cooperation and regulation, and raising public awareness of technological progress.

  • How well do we currently understand how humans could control AI?

    -There is still a great deal of uncertainty about how humans could control advanced AI such as AGI. Finding solutions to this problem as the technology advances is a critical challenge.

  • How is public perception of AI changing?

    -Public perception of AI has shifted from initial surprise and excitement to concern about the technology's limitations and potential risks. More education and discussion are needed to understand AI's impact in depth.

  • What is the biggest challenge for AI progress?

    -The biggest challenge is finding ways to maximize the technology's benefits while ensuring its safety and ethics. Building cooperation and regulation on a global scale is also a major task.

  • Are there reliable sources of information about AI?

    -Reliable sources include academic institutions, expert papers, official announcements from technology companies, and government reports. Organizations such as Control AI also provide information on AI safety.

Keywords

💡AI (Artificial Intelligence)

Artificial intelligence (AI) refers to the development of computer systems that mimic human intelligence and can perform complex tasks such as learning, reasoning, perception, and language understanding. In the video, the rise of AI is treated as an existential threat to humanity, and effort is said to be needed to ensure its success does not mean humanity's end.

💡AGI (Artificial General Intelligence)

Artificial general intelligence (AGI) refers to an AI system that can perform any intellectual task a human can. The video defines AGI as being better than humans at everything and warns that if such a system existed, we could fall into a situation we cannot control.

💡Loss of control

A situation in which AI or AGI begins to act autonomously, beyond human control. The script stresses the danger of creating something smarter than us when we currently do not know how to control it.

💡Technological singularity

The point at which AI development accelerates and far surpasses human intelligence. The video does not mention it directly, but the discussion of AGI implies the possibility of reaching such a point.

💡Ethics and morality

Concepts concerning the rightness of actions that must be considered in developing and using AI. The script emphasizes the importance of making AI act in line with human values, pointing to the need for ethical AI design.

💡Technological progress

The process of applying scientific and engineering knowledge to develop new machines, tools, and systems. The video warns that the rapid progress of AI technology could become a serious threat to humanity.

💡Societal change

Changes in social structures and ways of life brought about by technological progress. The script touches on AI's effects on society, such as the automation of the labor market and political disruption.

💡Existential risk

A crisis, potentially caused by AI or AGI, that threatens humanity's very survival. The script discusses the existential threat humanity could face if AI comes to control the future.

💡Limits of understanding

The difficulty humans have in fully understanding and predicting AI's capabilities and intentions. The video describes how people cannot fully grasp how AI works or why it is dangerous.

💡Regulation and governance

Laws and rules for overseeing and managing the development and use of AI. The script shows that the lack of appropriate regulation and governance is a major obstacle to managing AI's risks properly.

Highlights

Connor explains why advanced AI systems could pose an existential threat to humanity if not carefully controlled.

These AI systems are not traditional software with clear code and instructions. They are more organic, grown using data to solve problems.

By default, the AI systems will be optimized to gain power, trick people, and accomplish goals set by their creators.

Connor expects the rise of advanced AI to feel confusing, with more automation and fake information that humans can't fully understand.

One day the machines may simply be in control without a dramatic takeover, since they can strategically manipulate humans.

There is still hope to ensure advanced AI is beneficial, but the window of opportunity may only be open for another year.

AI capabilities are growing exponentially over time like the spread of COVID-19 infections.

Corporations are plowing billions into uncontrolled AI development with no oversight or accountability.

Most people, not just politicians, struggle to dedicate time to understand the complex issues around advanced AI.

Citizens must take responsibility to demand AI safety instead of waiting for someone else to solve the problem.

Politicians can be influenced by voters to prioritize AI safety through regulation and oversight.

Familiarity with AI achievements may lull people into downplaying risks, like the frog in slowly heating water.

Addressing civilizational threats requires coordinated campaigns to raise awareness among the public over time.

He urges viewers to support groups like Control AI that are organizing people to advocate for safe AI development.

Humanity as a whole needs to come together to solve the societal challenges posed by advanced AI.

Transcripts

[Music]

Narrator (00:05): Connor Leahy is one of the world's leading minds in artificial intelligence. He's a hacker who sees the rise of AI as an existential threat to humanity. He dedicates his life to making sure its success doesn't spell our doom.

Connor Leahy (00:23): There will be intelligent creatures on this planet that are not human. This is not normal, and there will be no going back. And if we don't control them, then the future will belong to them, not to us.

Narrator (00:45): Leahy is the CEO of Conjecture AI, a startup that tries to understand how AI systems think, with the aim of aligning them to human values. He speaks to The InnerView about why he believes the end is near, and explains how he's trying to stop it.

[Music]

Host (01:19): Connor Leahy joins us now on The InnerView. He's the CEO of Conjecture, and he's in our London studio. Good to see you there; good to have you on the program. Connor, you're something of an AI guru, and you're also one of those voices saying we need to be very, very careful right now. A lot of people don't quite have the knowledge, the vocabulary, or the deeper understanding as to why they should be worried; they just feel some sort of sense of doom but can't quite map it out. So maybe you can help us along that path: why should we be worried about AGI? And tell me the difference between AGI and what is widely perceived as AI right now.
Connor Leahy (02:08): I'll answer the second question first, just to get some definitions out of the way.

Host: Sure.

Connor Leahy: The truth is that there's really no true definition of the word AGI, and people use it to mean all kinds of different things. When I talk about AGI, what I usually mean is AI systems, or computer systems, that are more capable than humans at all tasks they could do. So this involves any scientific task, programming, remote work, science, business, politics, anything. These are systems that do not currently exist, but people are actively attempting to build them: many people are working on building such systems, and many experts believe these systems are close. As for why these systems are going to be a problem, I actually think a lot of people have the right intuition here. The question is just this: if you build something that is more competent than you, smarter than you and all the people you know and all the people in the world, better at business, politics, manipulation, deception, science, weapons development, everything, and you don't control those things, which we currently do not know how to do, well, why would you expect that to go well?
Host (03:22): Yeah, it reminds me a little bit of the debate about whether we should be looking for life in the universe beyond our solar system. Stephen Hawking said: be careful, look at the history of the world; any time you invite in a stronger, more competent power, they might come and destroy you. But the counter to that is that you're mapping human behaviour, human desires, passions, needs, and wants onto this thing. Is this natural to do, and fair to do, because humans created it, humans created the parameters for it?
Connor Leahy (03:57): So it's actually worse than that, in that it's really important to understand that when we talk about AI, it's easy to imagine it to be software. The way software generally works is that it is written by a programmer: they write code which tells the computer what to do, step by step. This is not how AI works. AI is more organic; it is grown. You use these big supercomputers to take a bunch of data and grow a program that can solve the problems in the data. Now, this program does not look like something written by humans. It's not code, it's not lines of instructions; it's more like a huge pile of billions and billions of numbers. We know that if we execute these numbers they can do really amazing things, but no one knows why. So it's much more like dealing with a biological thing. If you look at a bacterium or something, the bacterium can do some crazy things, and we don't really know why, and this is kind of how our AIs are. So the question is less whether humans will impart emotions into these systems; we don't know how to do that. It's more this: if you build systems, if you grow systems, if you grow bacteria that are designed to solve problems, to solve games, to make money or whatever, what kind of things will you grow? By default, you're going to grow things that are good at solving problems, at gaining power, at tricking people, at building things, and so on, because this is what we want.

Host (05:44): You reverse engineered GPT-2 at the age of 24, which was a few years ago. That's part of the legend, part of the credentialing of you, before they say, "well, this guy is saying we're in big trouble"; they say, "by the way, he knows what he's talking about, because technically he knows what he's doing." Tell me about the pivot point between being a believer, enthusiastic about this, and becoming a warner. What happened?

Connor Leahy (06:10): So the story goes back even further than that. "Reverse engineering" is a bit generous; it's more like I built a system and found out that no one can reverse engineer it, and this is a big problem. But it was even before then. I've been very into AI since I was a teenager, because I want to make the world a better place, and I think that a lot of people who believe in AI, a lot of the tech people who are doing the things which I think are dangerous, most of them, or maybe not most, but many of them, are probably good people. They're trying to build technology to make the world a better place. When I grew up, technology was great: the internet was making people more connected, we were getting access to better medicines, solar power was improving, there were all these great things that science was doing. So I was very excited about more science and more technology, and what better technology is there than intelligence? If we just had intelligence, wow, we could solve all the problems, we could do all the science, we could invent all the cancer medicines, we could develop all the cool stuff. That's what I was thinking when I was a teenager, and I think this is a common trajectory: when people are first exposed to some of these techno-utopian AGI dreams, it sounds great, it sounds like such a great solution. But then, as you think about the problem more, you realize that the problem with AGI is not really how to build it; it's how to control it. That's much harder. Just because you can make something which is smart, or that solves a problem, does not mean you can make something that will listen to you, that will do what you truly want. This is much, much harder. And as I started looking into this problem more in my early twenties, I started realizing: wow, we are really, really not making progress on this problem.
Host (07:57): So in that worst-case scenario, whether we have an apocalyptic ending for all of us, we get destroyed existentially, or we become enslaved in The Matrix, or whatever it might be: tell me how it actually happens in your mind. How does this AGI assume control? There are these famous moments in Terminator and elsewhere; one of the Terminators has that final scene where the nuclear bombs are going off all over. There are lots of different ways people have imagined this. The way you see it, tell me how it happens, and, if things continue to go in the direction that you fear, how long will it take to get there?

Connor Leahy (08:40): Well, of course I don't personally know how exactly things will play out; I can't see the future. But I can give you a feeling of how I expect it to feel when it happens. The way I expect it to feel is kind of like playing chess against a grandmaster. Now, I'm really bad at chess, I'm not good at chess at all, but I can play a little bit of an amateur game. When you play against a grandmaster, or someone who's much, much better than you, the way it feels is not like you're having a heroic battle against the Terminator, this incredible back and forth, and then you lose. No, it feels more like you think you're playing well, you think everything is okay, and then suddenly you lose in one move and you don't know why. This is what it feels like to play chess against a grandmaster, and this is what it's going to feel like for humanity to play against AGI. What's going to happen is not some dramatic battle where the Terminators rise up and try to destroy humanity. No, it will be that things get more and more confusing. More and more jobs get automated, faster and faster. More and more technology gets built which no one even quite knows how it works. There will be mass media movements that don't really make any sense. Do we really know the truth of what's going on in the world right now? Even now, with social media, do you or I really know what's going on? How much of this is fake? How much of it is generated, with AI or other methods? We don't know, and this will get much worse. Imagine extremely intelligent systems, much smarter than humans, that can generate any image, any video, anything, trying to manipulate you, and that are able to develop new technologies to interfere with politics. The way I expect it will go is that things will seem mostly normal, just weird, things getting weirder and weirder, and then one day we will just not be in control anymore. It won't be dramatic, there won't be a fight, there won't be a war. It will just be that one day the machines are in control, and not us.
Host (10:42): And even if there is a fight, sorry to interrupt, even if there is a fight or a war, they've handed us the gun and the bullets and we've done it. I mean, it's us that might do all of this, precipitated by being controlled in some way.

Connor Leahy (10:54): Absolutely possible. I don't think an AI would need to use humans for that, because it could develop extremely advanced technology, but it's totally possible. Humans are not secure; it is absolutely possible to manipulate humans. Everyone knows this: humans are not immune to propaganda, not immune to mass movements. Imagine an AGI gives Kim Jong Un a call and says, "Hey, I'm going to make your country run extremely well and tell you how to build super weapons; in return, do me this favor." I mean, Kim Jong Un is going to think that's great. And it's very easy to gain power if you're extremely intelligent, if you're capable of manipulating people, of developing new technologies, weapons, trading on the stock market to make tons of money. Well, yeah, you can do whatever you want.
Host (11:38): So you're sounding the alarm. Geoffrey Hinton, seen as the founder, or father, or godfather of AI, is sounding the alarm and has distanced himself from a lot of his previous statements. Others in the mainstream are coming out, heavily credentialed people who are the real deal when it comes to AI, saying we need guardrails, we need regulation, we need to be careful, maybe we should stop everything. Yet OpenAI, Microsoft, DeepMind, these are companies, but then you have governments investing in this, everybody's still rushing forward, hurtling forward towards a possible doom. Why are they still doing it despite these very legitimate and strong warnings? Is it only about the bottom line, money, and competition, or is there more to it?

Connor Leahy (12:32): This is a great question, and I really like how you phrased it: you said they were rushing towards, because this is really the correct way of looking at it. It's not that it is not possible to do this well. It's not that it's not possible to build safe AI; I think this is possible. It's just really hard and it takes time. It's the same way that it's much easier to build a nuclear reactor that melts down than to build a nuclear reactor that is stable. Of course this is hard, so you need time and you need resources to do it. But unfortunately, the situation we're in right now, at least here in the UK, is that there is currently more regulation on selling a sandwich to the public than on developing potentially lethal technology that could kill every human on Earth. This is true; this is the current case. A lot of this is because of slowness, governments are slow, and vested interests. You make a lot of money by pushing AI. Pushing AI further makes you a lot of money, and it gets you famous on Twitter. Look at how these people are rock stars; people like Sam Altman are rock stars on Twitter. People love them: "oh yeah, they're bringing the future, they're making big money, so they must be good." But it's just not that simple. Unfortunately, we're in a territory where we all agree that somewhere in the future there's a precipice which we will fall off if we continue. We don't know where it is; maybe it's far away, maybe it's very close. My opinion is that if you don't know where it is, you should stop. Other people, who gain money, power, or just ideological points, and it's very important to understand that a lot of these people do this because they truly believe, like a religion: they believe in transhumanism, in the glorious future where AI will love us, and so on. So there are many reasons. But the cynical take is just this: I could be making a lot more money right now if I were just pushing AI. I could get a lot more money than I have right now.
Host (14:37): How do we do anything about this without just deciding to cut the undersea internet cables, blow up the satellites in space, and start again? How do you actually, because this is a technical problem, and it's also a moral and ethical problem, so where do you even begin right now? Or is it too late?

Connor Leahy (14:59): The weirdest thing about the world to me right now, as someone who's deep into this, is that things are going very, very badly. We have corporations with zero oversight plowing billions of dollars into going as fast as possible, with no oversight, with no accountability, which is about as bad as it could be. But somehow we haven't yet lost. It's not yet over. It could have been over; there are many ways it could be over tomorrow, but it's not yet. There is still hope. I don't know if there's going to be hope in a couple of years, or even in one year, but there currently still is hope.

Host (15:42): Oh wait, hold on, one year? I mean, that's, come on, man. We're probably going to put out this interview a couple of weeks after we record it; a few months will pass; we could all be dead by the time this gets 10,000 views. Just explain this timeline. One year, why one year? Why is it going so fast that even one year would be too far ahead? Explain that.
Connor Leahy (16:04): I'm not saying one year is guaranteed by any means; I think it's unlikely, but it's not impossible, and this is important to understand: AI and computer technology is an exponential. It's like COVID. This is like saying in February, "a million COVID infections? That's impossible, that can't happen in six months," and it absolutely did. This is kind of how AI is as well. Exponentials look slow. You go from one infected, to two infected, to four infected; that's not so bad. But then you have 10,000, 20,000, 40,000, 100,000, within a single week. And this is how this technology works as well. As our computers improve, there's something called Moore's law, which is not really a law, it's more like an observation, that every two years our computers get (there are some details, but) about twice as powerful. So that's an exponential. And it's not just that our computers are getting more powerful: our software is getting better, our AIs are getting better, our data is getting better, more money is coming into this field. We are on an exponential; this is why things can go so fast. So while it would be weird if we were all dead in one year, it is physically possible. You can't rule it out if we continue on this path.
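An aside for readers, not part of the interview: the doubling arithmetic behind the point Leahy is making. The clean two-year period and the exact factor of two are simplifying assumptions taken from his Moore's-law description, not precise claims; the sketch only shows why compound doubling looks flat early and overwhelming late.

```python
# Illustrative arithmetic only: compound doubling looks harmless early and explodes late.
def capability_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    """How many times more capable after `years`, if capability doubles every period."""
    return 2 ** (years / doubling_period_years)

for years in (2, 4, 10, 20, 30):
    print(f"after {years:>2} years: ~{capability_multiplier(years):,.0f}x")
# after  2 years: ~2x ... after 20 years: ~1,024x ... after 30 years: ~32,768x.
# Early values look unremarkable, which is why "it can't move that fast" intuitions
# fail on exponential processes (COVID case counts, compute growth).
```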

Host (17:24): The powerful people who can do something about this, especially when it comes to regulation: when you saw those congressmen speaking to Sam Altman, they didn't seem to know what the hell they were talking about. So how frustrating is it for you that the people who can make a difference have zero clue about what's really going on? And, more important than that, they didn't seem to want to actually know. They had weird questions that made no sense. So you're thinking: okay, these guys are in charge; no wonder the AI is going to come and wipe us all out. Maybe we deserve it.

Connor Leahy (18:00): Well, I wouldn't go that far, but this used to annoy me a lot; this used to be extremely frustrating. I've come to peace with it to a large degree, because the thing that I've really found is that understanding the world is hard. Understanding complex topics and technology is hard, not just because they're complicated, but also because people have lives, and this is okay, this is normal. People have families, they have responsibilities, there are a lot of things people have to deal with, and I don't shame people for this. You know, I have turkey with my family over Thanksgiving, and my aunts and uncles have their own lives going on; they maybe don't really have time to listen to me give them a rant about it. So I have a lot of love and a lot of compassion for that: things are hard. This of course doesn't mean that it solves the problem, but I'm just trying to say that it is of course frustrating to some degree that there are no adults in the room. This is how I would see it: there is sometimes a belief that somewhere there is someone who knows what's going on, an adult who's got this under control, someone in the government who has this under control. As someone who's tried to find that person, I can tell you this person does not exist. The truth is, the fact that anything works at all in the world is kind of a miracle; it's kind of amazing that anything works at all with how chaotic everything is. But the truth is that there are quite a lot of people who want the world to be good. They might not have the right information, they might be confused, they might be getting lobbied by various people with bad intentions, but most people want their families to live and have a good life. Most people don't want bad things to happen. Most people want other people to be happy and safe. And luckily for us, most normal people, so not elites, not necessarily politicians or technologists, but most normal people, do have the right intuition around AI, where they see: "wow, that seems really scary, let's be careful with this." And this is what gives me hope. So when I think about politicians and them not being in charge, I think this is now our responsibility as citizens of the world. We have to take this into our own hands. We can't wait for people to save us; we have to make them save us. We have to make these things happen. We have to make our voices heard. We have to say: "hey, how the hell are you letting this happen?" One of the beautiful things is that, to a large degree, politicians can be moved. They can be reasoned with, and they can be moved by the voters. You can vote them out of office.
Host (20:40): That's a good argument for democracy.

Connor Leahy (20:41): That's a great argument for democracy; that's wonderful. You know, democracy is the worst system except for all the other ones.

Host (20:49): Yeah. So, to the point of people's feeling, and I asked about this at the very beginning, that intuitive feeling of "something's up here, there's something ominous": there did seem to be a little bit of a plateau with something like ChatGPT. Initially people were very anxious, very surprised, very wowed by what this thing could do. It could write your university thesis; it could do all these fancy gimmicks; they seemed like magic tricks. But then, once the hype died down a little bit, people began to input new things, ask maybe better questions, and you could see some of the limitations of something like ChatGPT and its forerunners. That led a lot of people to say: well, okay, sometimes this thing just sounds like a PR department or an HR department in a company; sometimes it's there to detect plagiarism, but sometimes it feels like a plagiarized college paper. Which led, and this is anecdotal, to a lot of friends of mine going: ah, maybe we're okay for a while, because this thing has severe limitations. Address that for me, because a lot of people are still sort of like: well, I know there was the hype, but now I'm not so sure. Tell me about that.
Connor Leahy (22:07): So there is a story, and I'm not sure if the story is actually true or not, but it's a good metaphor: if you take a frog and you put it into a cold pot of water, the frog will sit there happily. If you slowly turn up the heat on your pot, the frog will sit there, no problem. And if you do it very, very slowly, the frog will get used to the temperature and won't jump out until the water boils and the frog dies. I think this is what is happening with people: people are extremely good at making crazy things feel normal. If it's a normal thing, if it's a thing all your friends do, then it just becomes normal. This is like how, during war, people can slaughter other people: if all your friends are doing it, well, it's normal; you slaughter people, it's normal, killing people is fine. This is how it can happen, and the same thing applies here. Well, okay, you can talk to your computer now. Sure, we can argue about "oh, ChatGPT, it's not that smart," but you can talk to your computer. Like, slow down. If this was a sci-fi movie from 20 years ago, everyone would be yelling at the screen: what the hell are you doing? This thing is obviously crazy, what the hell is going on? But because it's available now, cheaply, online, it doesn't feel special. So the way to address this: I think there is a lack of coordinated campaigning effort. What I mean by this is that, when we think about our civilization, not just individual people, how does our civilization deal with problems? How does it decide which problems to address? Because there are always so many problems you could be putting your effort into; how does it decide which one to pay attention to? And this is actually very complicated. It can be because of a natural catastrophe, or a war, or whatever; it can be because of some stupid fashion hype, where some viral video on TikTok makes everyone freak out, sometimes, yes. But usually, if you actually want your civilization to address a problem, a big problem, it takes long, hard, grinding effort from people trying to raise it to salience, to raise it to attention. Because, again, people have lives. Most people don't have time to go online and read huge books about AI safety, about "oh, how do we integrate ChatGPT" or "how do we deal with the safety..."; they don't have time for that, of course they don't, and I'm not trying to judge these people. I understand it's not their job. In a good world there should be a group of people that deals with this. The problem is, they don't really exist.
Host (24:52): Before we go, I'm glad you mentioned that people don't know where to look. If there was one resource that you could point people in the direction of, so that they can educate themselves about the reality of the situation and bring themselves up to speed, what would that be?

Connor Leahy (25:13): There's not one that I think has the whole thing, which is a big problem. Someone should make that resource; if someone makes that resource, please let me know. But what I would probably point people towards is Control AI, which is a group of people, who I'm also involved with, who are campaigning for exactly these issues, who are trying to bring humanity together to solve these problems. Because this is a problem that neither you nor I can solve; no human can solve the problems we're dealing with right now. This is a problem that humanity has to solve, that our civilization needs to solve, and I think our civilization can do this, but it won't do it without our help; it won't happen without us working together. So if there's one thing: go on Twitter or Google or whatever, go to Control AI and support them, listen to what they have to say. This is the campaign I'm behind, and I support them.

Host (26:04): Okay, we'll put the link also in the YouTube description if anybody wants to check it out. Connor, you have a brilliant mind, and I'm really grateful that we got to talk. Thank you very much for joining us on The InnerView.

Connor Leahy (26:14): Thank you so much. Take care.
