OpenAI Is FALLING Apart. (Ilya Sutskever Leaving, Super alignment Solved? Superintelligence)

TheAIGRID
15 May 202443:17

Summary

TLDR: This video covers the news that Ilya Sutskever, one of OpenAI's key figures, is leaving the organization, which has heightened concerns about AI safety. Jakub, the new Chief Scientist, is expected to take over, but the video raises the question of how artificial superintelligence (ASI) can be controlled so that it does not become a threat to humanity. It also touches on predictions that AI will reach human level by 2029, Meta's (formerly Facebook's) aggressive push into AI research, and the rapid progress of AI together with the threats and possibilities that come with it.

Takeaways

  • 📉 Ilya Sutskever is leaving OpenAI: Ilya Sutskever announced that he is departing OpenAI to work on a personally meaningful project aimed at benefiting humanity.
  • 👨‍🔬 Jakub becomes the new Chief Scientist: Jakub has been appointed OpenAI's new Chief Scientist as Ilya's successor. He has led many of the company's most important projects and is expected to keep making rapid and safe progress.
  • 🔍 Departures from the AI safety team: Several key members of OpenAI's AI safety team have left, which is a serious problem for AI safety.
  • 🤖 The road to superintelligence (ASI): OpenAI has set a goal of solving the core technical challenges of superintelligence alignment within four years, but a great deal of research and trial and error will be needed to get there.
  • 🧵 Reasons for the departures: The departures of team members including Ilya Sutskever and Jan Leike may be connected to a toxic work environment at OpenAI or to disagreements over the safety of AGI (artificial general intelligence).
  • 🚀 Reaching AGI: Daniel's view on reaching AGI is that there is a 15% chance of AGI in 2024 and a 30% chance by 2025.
  • 🌐 Competition and winner-take-all: A company that obtains AGI or ASI could pull far ahead of every competitor, and that technology would be extraordinarily valuable.
  • 🔑 Controlling the technology: No one currently knows how to control an artificial superintelligence (ASI); if training works better than expected, an uncontrollable ASI could emerge.
  • 🔮 Outlook for the future: Predictions such as Ray Kurzweil's expect artificial intelligence to reach human level by 2029, with ASI possibly following after that.
  • ⚙️ The black-box problem: Because of the black-box nature of deep learning, it is difficult to fully understand the inner workings of AI models, a critical safety issue for future advanced AI systems.
  • 🌟 The impact of AGI and ASI: A company that controls ASI could wield almost god-like power, and its technology would look like magic to us.


Transcripts

play00:00

so I'm just going to get straight into

play00:01

this video because there's no point in

play00:03

wasting time I think certain parts of

play00:05

OpenAI are truly starting to fall

play00:08

apart and this video might be a long one

play00:10

but trust me every single slide has been

play00:12

carefully created so that you guys can

play00:14

understand all of the information and

play00:16

how you can see here that one of the

play00:18

main things that we got today was the

play00:20

news of Ilya Sutskever and OpenAI this was

play00:23

something that a lot of people were

play00:25

actually waiting on because we hadn't

play00:27

heard from Ilya pretty much since

play00:29

November and even in recent interviews

play00:32

Sam Altman consistently refused to speak

play00:34

upon what Ilya's status was at OpenAI

play00:37

but we finally have the news he says

play00:39

that Ilya and OpenAI are going to part

play00:41

ways this was very sad to me Ilya is

play00:44

easily one of the greatest minds of Our

play00:46

Generation a Guiding Light of our field

play00:49

and a dear friend his Brilliance and

play00:51

vision are well known his warmth and

play00:53

compassion are less well known but no less

play00:55

important OpenAI would not be what

play00:57

it is today without him although he has

play00:59

something personally meaningful he is going

play01:01

to work on I am forever grateful for

play01:03

what he did here and committed to

play01:05

finishing the mission we started

play01:06

together I am happy that for so long I

play01:09

got to be close to such a genuinely

play01:11

remarkable genius and someone so focused

play01:14

on getting the best future for Humanity

play01:16

Jakub is going to be our new Chief

play01:18

scientist Jakub is easily one of the

play01:20

greatest minds of Our Generation and I'm

play01:22

thrilled he's taking the Baton here he

play01:24

has run many of our most important

play01:26

projects and I'm very confident he will

play01:28

lead us to make rapid and safe progress

play01:29

towards our mission of ensuring

play01:32

that AGI benefits everyone so that

play01:34

statement there clearly shows that Ilya

play01:36

Sutskever is no longer working at OpenAI

play01:39

and is going to be working on something

play01:41

else and this is something that at least

play01:44

we can say now we have some kind of

play01:45

closure on where one of the greatest

play01:47

minds in AI where they are going to be

play01:50

now one of the things I did want to know

play01:52

is I wanted to know who is going to

play01:54

be replacing Ilya Sutskever at OpenAI and

play01:57

that is of course Jakub now essentially

play02:00

if we look at who Jakub is Jakub is now

play02:03

the chief scientist at OpenAI where he

play02:05

has led transformative research

play02:07

initiative since 2017 he has previously

play02:10

served as director of research

play02:12

spearheading the development of GPT 4

play02:14

and OpenAI Five and fundamental research in

play02:17

large scale reinforcement learning and

play02:19

deep learning optimization he has been

play02:21

instrumental in refocusing the company's

play02:23

Vision towards scaling deep Learning

play02:25

Systems and Jakub holds a PhD in

play02:27

theoretical computer science from

play02:28

Carnegie Mellon University so clearly

play02:31

this is someone with a very very

play02:34

impressive resume and clearly has all of

play02:36

the necessary skills to take on the AI

play02:39

niche now as for Ilya Sutskever something

play02:42

that I know many people have been

play02:44

wondering is what did he say and he

play02:46

posted this tweet after quite some time

play02:49

he said after almost a decade I have

play02:51

made the decision to leave open AI the

play02:54

company's trajectory has been nothing

play02:55

short of miraculous and I'm confident

play02:58

that open AI will build AGI that is both

play03:00

safe and beneficial under the leadership

play03:02

of Sam Altman Greg Brockman Mira Murati

play03:05

and now under the excellent research

play03:07

leadership of Jakub it was an honor and

play03:10

a privilege to have worked together and

play03:12

I will miss everyone dearly so long and

play03:14

thanks for everything I am excited for

play03:16

what comes next a project that is very

play03:18

personally meaningful to me about which

play03:19

I shall share details in due time so

play03:23

clearly Ilya seems to have left on good

play03:25

terms despite the entire tumultuous

play03:29

period that was the firing of Sam Altman

play03:32

but I think one of the most interesting

play03:34

things that most people are looking

play03:36

forward to now is of course what comes

play03:38

next so it says that he is excited for

play03:41

what comes next and it's a project

play03:43

that's very personally meaningful to me

play03:45

about which I will share details in due

play03:47

time so whatever Ilya Sutskever is going to

play03:49

do next I'm guessing that we will

play03:51

receive an update I guess maybe a few

play03:53

weeks could be a few months I have no

play03:54

idea but it seems that we're going to be

play03:56

getting an update sometime in the near

play03:59

future now essentially there was also

play04:01

this piece whilst things on the surface

play04:04

might look like OpenAI is completely

play04:05

fine the following slides that I'm about

play04:07

to show you do showcase an entirely

play04:09

different picture because open AI has

play04:12

been losing key members of their most

play04:14

important team in regards to AI safety

play04:17

and I'm about to break this all down for

play04:18

you because when I started to do the

play04:20

research on this I was like wow things

play04:22

are starting to look a little bit

play04:25

pessimistic in terms of AI safety now

play04:28

Ilya also did tweet this picture

play04:30

with the rest of the open AI team but I

play04:32

do remember that during December there

play04:34

were some tweets by Ilya Sutskever where

play04:36

there were some tweets that were vaguely

play04:38

worded in a way that kind of implied

play04:41

that openai was in a toxic work

play04:43

environment he said that I learned many

play04:44

lessons this past month one such lesson

play04:47

is that the phrase the beatings will

play04:48

continue until morale improves applies

play04:50

more than it has any right to and

play04:53

this of course could be just dubious

play04:55

speculation with regards to what he was

play04:56

talking about but during the time that

play04:58

this was tweeted it could be argued that

play05:00

it was only related to one key event and

play05:03

that of course was open AI at that time

play05:06

but there was not really many statements

play05:08

in addition to this because of the

play05:10

secrecy of why Sam Altman was fired now

play05:13

here's where things get really really

play05:15

crazy most people don't know about super

play05:17

alignment because it's something that is

play05:19

in the future so if you don't know what

play05:21

super alignment is this is OpenAI's

play05:24

plan to solve super intelligence and

play05:26

that is basically a system that is much

play05:29

better than AGI so it says our goal is

play05:31

to solve the core technical challenges

play05:33

of super intelligence alignment in 4

play05:35

years and basically what they did was

play05:38

they decided to build a specific team to

play05:41

solve this specific problem because they

play05:43

knew that in the future they're going to

play05:44

have AGI and after AGI comes ASI which

play05:47

is artificial super intelligence and

play05:49

super intelligence is quite hard to

play05:52

explain but just think of it like this

play05:54

if AGI can do a task better than any

play05:56

human at pretty much everything and it's

play05:58

going to be everywhere artificial super

play06:00

intelligence is going to be able to do

play06:03

things that you can't even fathom it's

play06:04

going to be able to create new knowledge

play06:06

do new research discover cures for

play06:09

certain incurable diseases at the moment

play06:11

it's pretty much going to feel like a

play06:13

magical time if we manage to get super

play06:15

intelligence right now essentially with

play06:18

super intelligence you've got the

play06:19

problem because you're Building A system

play06:20

that is that smart it could go Rogue and

play06:22

if a super intelligence goes Rogue we

play06:24

literally don't stand a chance because

play06:26

if a system is super intelligent it's

play06:28

going to be able to outsmart us and

play06:30

we're not going to be able to understand

play06:31

what its goals are or even what it's

play06:33

doing and you can see here that they

play06:35

said that this might not even work it

play06:36

says while this is an incredibly

play06:38

ambitious goal we're not guaranteed to

play06:40

succeed we are optimistic that a focused

play06:42

concentrated effort can help solve this

play06:44

problem there are many ideas that have

play06:46

shown promise in preliminary experiments

play06:49

and we have increasingly useful metrics

play06:51

for progress and we can use today's

play06:52

models to study many of these problems

play06:54

empirically so here's where things start

play06:56

to fall apart for open AI okay here's

play06:59

where things really start to you know

play07:01

like the alarm bells really start to go

play07:02

so Ilya Sutskever okay has made this core

play07:05

research his focus and will be

play07:07

co-leading this team with Jan Leike the

play07:09

head of alignment okay and it says

play07:11

joining the team are researchers and

play07:13

Engineers from previous alignment teams

play07:15

as well as other researchers across the

play07:16

company now the thing is okay this super

play07:19

alignment team that was meant to figure

play07:21

out how to solve the alignment problem

play07:23

for artificial super intelligence okay

play07:25

Ilya Sutskever is now gone and Jan Leike

play07:28

today actually quit okay so two key

play07:31

members of the super alignment team are

play07:33

now gone you can see that earlier today

play07:35

Jan Leike literally tweeted I resigned

play07:37

okay now the thing that I think was the

play07:40

most interesting about this was the fact

play07:42

that Jan Leike didn't say anything other

play07:44

than I resigned remember when Ilya

play07:46

Sutskever resigned he said this heartfelt

play07:48

message that you know shows that he kind

play07:51

of cares about OpenAI and where it

play07:52

goes but Jan Leike stating that you know

play07:54

I resigned with you know just no further

play07:57

context I mean it leaves it open to

play08:00

complete speculation as to why he

play08:02

did resign now like I said before this

play08:04

is a problem because these were the

play08:06

people that were trying to solve super

play08:08

alignment but you have to understand

play08:10

that this does actually get a lot worse

play08:12

because they're not the only people that

play08:14

resigned from Super alignment in terms

play08:16

of the team now one of the things that

play08:18

you need to know about super alignment

play08:19

and a lot of people are starting to

play08:21

speculate is that because the head of

play08:24

alignment at Super alignment resigned

play08:26

today maybe this means that they solved

play08:29

the alignment problem okay in terms of

play08:31

aligning a super intelligence so if you

play08:33

don't believe this let's take a look at

play08:35

what Jan Leike said in a recent interview

play08:37

now I've listened to this it's over two

play08:39

hours but this is the main bit that you

play08:41

need to pay attention to so he says if

play08:43

you're thinking about how you align the

play08:45

super intelligence how do you align a

play08:46

system that's vastly smarter than humans

play08:49

I don't know I don't have an answer I

play08:50

don't think anyone really has an answer

play08:52

but that's also not the problem that we

play08:54

fundamentally need to solve maybe this

play08:56

problem isn't even solvable by humans

play08:59

who live today but there is this easier

play09:01

problem how do you align the system that

play09:03

is in the Next Generation how do you

play09:05

align GPT n+1 and that is a

play09:08

substantially easier problem so it's

play09:10

basically saying how do you align

play09:11

currently GPT 5 then when you have GPT 5

play09:14

how do you align GPT 6 when you have GPT 6

play09:17

how do you align GPT 7 so he's basically

play09:20

stating that that is a much easier

play09:22

problem and he goes into more detail

play09:23

here and he says okay and this is

play09:25

basically the highlighted part and so if

play09:27

you get a virtual system to be aligned

play09:29

it can then solve the alignment problem

play09:31

for GPT n+1 and then you can iteratively

play09:33

bootstrap yourself until you're at Super

play09:35

intelligence level and you figured out

play09:37

how to align that so basically he's

play09:39

stating that even if humans can solve

play09:41

the problem of GPT n plus1 basically

play09:44

humans shouldn't be doing that but a

play09:45

virtual system as smart as human should

play09:47

be doing that and then if you get that

play09:49

system to be aligned it can then solve

play09:51

the alignment problem for GPT n plus1

play09:54

and then you can iteratively bootstrap

play09:55

yourself until you actually at Super

play09:57

intelligence if you're thinking about

play09:59

like how do you actually align a super

play10:00

intelligence how do you align the system

play10:02

that's vastly smarter than humans I

play10:04

don't know I don't have an answer I

play10:06

don't think anyone really has an answer

play10:08

but it's also not the problem that we

play10:10

fundamentally need to solve right

play10:12

because like maybe this problem isn't

play10:14

even solvable by like humans who live

play10:16

today but there's this like easier

play10:19

problem which is like how do you align

play10:21

the system that is the next Generation

play10:24

how do you align GPT n+1 and that is

play10:27

a substantially easier problem

play10:29

and then even more if humans can solve

play10:32

that problem then so should a virtual

play10:36

system that is as smart as the humans

play10:39

working on the problem and so if you get

play10:42

that virtual system to be aligned it can

play10:44

then solve you know the alignment

play10:47

problem for GPT n plus one and then you

play10:49

can iteratively bootstrap yourself until

play10:52

you you know actually you're like at

play10:55

Super intelligence level and you figured

play10:57

out how to align that and of course

play10:59

what's important when you're doing this

play11:01

is like at each step you have to make

play11:04

enough progress on the problem that

play11:05

you're confident that GPT n+1 is

play11:08

aligned enough that you can use it for

play11:10

alignment research and he says of course

play11:13

what's important when you're doing this

play11:15

is that at each step you have to make

play11:16

enough progress on the problem that

play11:18

you're confident that GPT n plus1 the

play11:20

next model whatever it is is aligned

play11:22

enough so that you can actually use it

play11:24

for alignment research so basically what

play11:26

he's saying is that every time we go an

play11:27

increase from GPT 4 to GPT or whatever

play11:30

next Frontier Model is we have to make

play11:32

sure that that model is so aligned that

play11:33

we can then use that current model for

play11:36

alignment research but this is why a lot

play11:38

of people now have thought okay that if

play11:41

two of the key members of the founding

play11:43

members of super alignment have now left

play11:45

okay remember these guys left they

play11:47

didn't get fired these guys left that

play11:49

means and remember he said I resigned

play11:51

okay with nothing no further context

play11:53

that means that maybe they actually

play11:54

managed to solve this alignment problem
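
(To make the plan Jan Leike described above a bit more concrete, here is a minimal, purely illustrative Python sketch of the iterative bootstrapping loop: humans align a roughly human-level model, that model then does the alignment research for the next generation, and so on. Every class and helper below is hypothetical — none of this is real OpenAI code.)

```python
# A toy sketch of the iterative "bootstrapping" idea Jan Leike describes.
# Everything here is hypothetical and only names the steps in the loop.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    aligned: bool = False

def do_alignment_research(researcher: Model, target: Model) -> Model:
    """Stand-in for 'use the aligned model n to align model n+1'."""
    assert researcher.aligned, "only an aligned model should be trusted with this"
    return Model(target.name, aligned=True)  # in reality this step is the hard part

def bootstrap(first: Model, later_generations: list[Model]) -> Model:
    current = first  # assume humans managed to align the first, human-level system
    for nxt in later_generations:  # GPT n+1, GPT n+2, ... up to superintelligence
        nxt = do_alignment_research(current, nxt)
        # Leike's condition: each step must leave the new model aligned *enough*
        # that it can safely be used for the next round of alignment research.
        if not nxt.aligned:
            raise RuntimeError("stop: alignment is lagging behind capabilities")
        current = nxt
    return current

aligned_asi = bootstrap(Model("GPT-n", aligned=True),
                        [Model("GPT-n+1"), Model("GPT-n+2"), Model("ASI")])
print(aligned_asi)
```

The crucial check in that loop is the one Leike insists on: each generation has to be trusted enough to do the next round of alignment research before you scale any further.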

play11:56

okay and if you think that it's not that

play11:58

bad take a look at this this person

play12:00

Leopold who used to work at Super

play12:03

alignment at OpenAI he also no longer

play12:05

works at OpenAI now he was actually

play12:08

fired okay it says OpenAI has fired two

play12:11

researchers for allegedly leaking

play12:13

information according to a person with

play12:15

knowledge of the situation okay Leopold

play12:18

and someone else okay who was also an

play12:21

ally of Ilya Sutskever who participated in

play12:23

a failed effort to force out Sam Altman last

play12:25

fall and you can also see here it says

play12:27

OpenAI staffers had actually

play12:29

disagreed on whether the company was

play12:30

developing AI safely enough now what's

play12:33

crazy about this okay is that the two

play12:35

people that were fired recently at open

play12:38

they were actually members of the super

play12:39

alignment team so when you look at the

play12:41

picture now you can see that in the

play12:44

recent paper where openai were talking

play12:45

about weak to strong generalization

play12:47

eliciting strong capabilities with weak

play12:49

supervision this is basically a paper

play12:52

where they're trying to solve super

play12:53

alignment and they showing how it can be

play12:55

done I've crossed out four names here

play12:57

because these four people no longer work

play12:59

at OpenAI we can see Pavel Leopold Jan

play13:02

Leike and Ilya Sutskever none of these people

play13:05

work at OpenAI anymore this is

play13:06

basically a paper where they're trying

play13:08

to solve super alignment and they're

play13:09

showing how it can be done I've crossed

play13:11

out four names here because these four

play13:13

people no longer work at OpenAI we can

play13:16

see Pavel Leopold Jan Leike and Ilya Sutskever

play13:19

none of these people work at OpenAI

play13:21

anymore Pavel who was a you know

play13:23

researcher okay who was on the previous

play13:25

paper on superalignment at OpenAI he

play13:28

actually now works at Elon musk's AI

play13:30

company so it will be interesting to see

play13:32

how his career also develops now what's

play13:34

also crazy is that you might be thinking

play13:36

okay just a few people from Super

play13:38

alignment left that might not be that

play13:39

crazy well that's not the truth as well

play13:42

as I dug into more things I realized

play13:44

that more people from OpenAI also left

play13:46

as well so you can see right here that

play13:48

it says OpenAI researchers Daniel and

play13:51

William recently left the company behind

play13:54

ChatGPT it says Daniel said on a

play13:56

forum he doesn't think that OpenAI will

play13:58

behave responsibly around the time of

play14:00

AGI so this is a key reason and I think

play14:04

it's important to understand that when

play14:06

people leave we have to look at why they

play14:07

leave okay if someone's leaving for you

play14:09

know family reasons or personal reasons

play14:11

that is completely different but if

play14:13

someone's stating that they're leaving

play14:14

this company because he doesn't believe

play14:16

that they'll behave responsibly around

play14:17

the time of AGI that is a key key key

play14:21

indicator that maybe just maybe

play14:23

something is wrong you can see here it

play14:25

says Daniel was on the governance team

play14:28

and Saunders worked on the super

play14:29

alignment team at OpenAI so this is five

play14:33

people from the original Super alignment

play14:35

team have now completely left OpenAI

play14:38

and that is remarkable considering the

play14:40

fact that since the super alignment team

play14:42

was formed it was meant to solve super

play14:44

alignment within 4 years but literally

play14:46

five of the founding members have gone

play14:48

and four of them that were on this

play14:50

recent paper are no longer there and you

play14:52

have to understand that super alignment

play14:53

is one of the key things that we need to

play14:55

solve if we're going to increase AI

play14:57

capabilities which is why many are

play14:59

speculating that the GPT n+1 alignment

play15:02

problem has been solved since it was a

play15:04

substantially easier problem to solve

play15:06

now one thing that did really concern me

play15:09

was Daniel's actual post about magical

play15:13

capabilities and other futuristic things

play15:16

that artificial super intelligence can

play15:18

do and this is something that I did

play15:21

Cover in another video which I'm going

play15:23

to include here now but the point is as

play15:25

I'm about to explain to you how crazy

play15:27

this is about to get guessing that if we

play15:30

take a look at what we've seen here

play15:32

which I'm about to explain and the fact

play15:34

that members of the super alignment team

play15:36

are just gone like literally five six

play15:37

people from the team are now gone some

play15:39

people are speculating whether or not

play15:41

this problem has been solved and now it's

play15:43

literally just a compute problem and

play15:44

we're just building data centers because

play15:46

we already know how to get to AGI but

play15:49

anyways if you want to take a look at

play15:51

what Daniel said because he said he

play15:53

doesn't believe that OpenAI are going to

play15:55

uh you know behave responsibly around

play15:57

the time of AGI because it's a very

play15:59

powerful tool that will be allowed to do

play16:00

pretty much anything in terms of the

play16:02

fact that it's going to grant them

play16:03

immense power and will literally shift

play16:05

Dynamics in certain countries I think

play16:07

you need to take a look at how crazy

play16:10

this document is what Daniel said before

play16:12

he left OpenAI because it was truly

play16:14

eye opening to see what he thinks the

play16:16

future is going to be like intense so

play16:18

we're going to go through this list

play16:19

because there actually is quite a lot of

play16:21

things to talk about and a lot of things

play16:23

that you do need to be aware of because

play16:25

a lot of people that saw this list

play16:26

thought about a few things but didn't

play16:28

think about how the industry is going to

play16:30

evolve as a whole so let me tell you why

play16:32

this was actually genuinely a shocking

play16:34

statement because there was one of them

play16:36

that I saw and I was like okay that is a

play16:39

super big deal and that just completely

play16:40

changes my timeline so one of the things

play16:42

that he did state is essentially

play16:45

probably there will be AGI coming soon

play16:47

and that's any year now and this is

play16:50

something that unfortunately doesn't

play16:51

surprise me if you've been paying

play16:53

attention to this space you'll know that

play16:54

we've had many different instances and

play16:57

inklings of AGI and essentially many of

play17:00

us do kind of feel like we're on the

play17:02

edge of our seat because we know that

play17:04

AGI is just around the corner now that's

play17:06

for a variety of factors like the fact

play17:08

that OpenAI has been you know in a really

play17:10

really strange position with it you know

play17:12

delaying releases on multiple products

play17:14

with it having the Sam Altman firing um

play17:17

and other companies are also having

play17:19

major breakthroughs and working on AGI

play17:21

as well so um literally any year now we

play17:24

could definitely be getting AGI not only

play17:26

just because of the way that these

play17:27

companies are but the significant

play17:29

investment that they're also getting to

play17:31

now he also uh spoke here and you can

play17:33

see that he responded to someone's

play17:35

question in terms of the percentage of

play17:37

AGI so he said why do you have a 15%

play17:40

chance for 2024 and only an additional

play17:42

15 for 2025 now I do think we get AGI by

play17:45

the end of 2025 or at least you know

play17:48

some kind of lab makes an insane

play17:50

breakthrough and we have AGI by the end

play17:52

of 2025 that's just what I kind of

play17:53

believe but he says do you really think

play17:55

that there's a 15% chance of AGI this

play17:57

year and he says yes I really do I'm

play17:59

afraid I can't talk about all of the

play18:00

reasons for this you know I work at open

play18:02

AI but mostly it should be figureoutable

play18:04

from the publicly available information

play18:06

which we've discussed several times on

play18:07

this channel now my timelines were

play18:09

already fairly short 2029 is the median

play18:11

which is essentially the most common and

play18:13

when I joined OpenAI in early 2022 and

play18:15

the things have gone mostly as I've

play18:17

expected I've learned a bunch of stuff

play18:19

some of which updated me towards and

play18:21

some of which updated me downwards as

play18:23

for the 15 15% thing I don't feel

play18:26

confident that those are the right

play18:27

numbers rather those numbers express my

play18:29

current state of uncertainty I could see

play18:31

the case for making the 2024 number higher

play18:34

than the 2025 number of course because

play18:36

exponential distribution Vibes if it

play18:38

doesn't work now then that's evidence it

play18:39

won't work next year and I could also

play18:41

see the case for making the 2025 number higher

play18:44

of course projects take twice as long as

play18:46

one expects due to the planning fallacy

play18:47

so essentially he's stating that you

play18:49

know 15% this year 30% chance next year
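
(Just to pin down the arithmetic behind those two numbers — this is my own reading of the figures quoted above, not Daniel's words: 15% for 2024 plus an additional 15% assigned to 2025 is read as a cumulative probability.)

```python
# Reading the quoted figures as cumulative probabilities:
# a 15% chance AGI arrives in 2024, plus an *additional* 15% assigned to 2025.
p_2024 = 0.15
p_2025_extra = 0.15
p_by_end_2025 = p_2024 + p_2025_extra
print(f"P(AGI by end of 2025) = {p_by_end_2025:.0%}")  # prints 30%
```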

play18:51

but of course he's saying that you know

play18:52

he could be completely wrong now with

play18:54

AGI predictions of course it's

play18:56

anyone's guess but um this prediction by

play18:58

ARK Invest essentially is a good visual

play19:01

media to kind of look at when you're

play19:03

looking at how AGI is going to progress

play19:05

in the next couple of years now

play19:06

something I do want to say about this is

play19:08

the exponential nature of things because

play19:10

they also do you know take that into

play19:11

account with the fact that essentially

play19:13

it was predicted for the end of uh you

play19:15

know the 2029 or 2030 which is where

play19:18

many people have predicted it I'm not

play19:20

going to get into the next point in a

play19:21

moment which is really what just kind of

play19:23

shocked me um but essentially you can

play19:24

see here that every time you know as as

play19:27

time is going down you can see that

play19:28

we're going down like this way as the

play19:30

forecast era continues it seems that

play19:32

by 2027 but of course we can see that

play19:34

there are these huge drop offs where

play19:35

technology kind of just keeps on

play19:37

dropping off so of course it's like

play19:39

exponential it's kind of like s-curve

play19:40

growth where you kind of go up and then

play19:42

you kind of plateau for a little bit and

play19:43

then you kind of get that next boom once

play19:45

there is that next thing we're kind of

play19:46

seeing like an inverted S curve um on

play19:48

that graph as well and I know I showed

play19:50

this in a previous video just wanted to

play19:51

show it again so you guys can visualize

play19:52

where things are going so the median the

play19:54

most common prediction is 2029 some

play19:56

people are predicting next year and

play19:57

there are a few small reasons why but I

play19:59

definitely do believe that if anyone is

play20:00

going to get to it it will be open AI

play20:02

obviously because of um the kind of

play20:04

talent that they have the kind of you

play20:05

know researchers that they have it's a

play20:06

unique team and open AI researchers are

play20:09

some of the most sought after Talent

play20:11

like you know um essentially it's so

play20:13

crazy that you know I think there was

play20:14

someone recently that was hired from uh

play20:16

Google that went to a different company

play20:18

and then Google started paying the

play20:19

person four times more just because AI

play20:20

researchers are so in demand right now

play20:22

because it's such a competitive space

play20:24

that um there is one tweet that I do I'm

play20:26

going to come back to again that I think

play20:27

you guys need to understand that if

play20:29

someone develops AGI I think you guys

play20:31

have to understand that it's going to be

play20:32

a Winner Takes all scenario because it's

play20:34

only a race until one company

play20:36

reaches AGI once that happens the

play20:38

distance to their competitors is

play20:40

potentially infinite and it will be

play20:41

noticeable for example raising a quarter

play20:43

trillion of the US GDP now of course some

play20:46

people are stating that this is where

play20:47

you know open AI has already achieved

play20:48

AGI they're just trying to raise

play20:50

compute because they realized that we're

play20:52

going to need a lot more compute for

play20:53

this kind of system but Others May

play20:55

disagree and I kind of do agree that you

play20:57

know potentially um they probably have

play20:59

internal AGI but just need more compute

play21:01

to actually bring the system to reality

play21:03

because certain things they just simply

play21:05

can't test because they're trying to run

play21:06

GPT 4 they're also trying to run some

play21:08

other systems like Sora they're also

play21:09

trying to to to give some of their

play21:11

compute to Super alignment um so that is

play21:14

a thing as well now this is the

play21:15

statement that really did Shock Me Okay

play21:18

um and this is why uh I my timelines got

play21:21

updated because it changes everything

play21:23

okay so it said he says uh probably

play21:26

whoever controls AGI will be

play21:28

able to use it to get to artificial

play21:30

super intelligence shortly thereafter

play21:32

maybe in another year give or take a

play21:34

year now you have to understand that AGI

play21:37

is essentially a robot that is

play21:38

apparently as good as all

play21:41

humans at pretty much any task okay so

play21:43

pretty much any task you can think of um

play21:45

that can be done in a non-physical realm

play21:47

and AGI is going to be able to be better

play21:49

than 99% of humans okay according to um

play21:52

DeepMind's levels of AGI paper and

play21:54

essentially right now we have GPT 4

play21:56

which is a lowlevel AGI so when we do

play21:58

get that AGI system it's going to

play22:00

accelerate everything because it means

play22:01

that you know we can just duplicate

play22:03

researchers we can duplicate

play22:04

mathematicians we can duplicate people

play22:06

um doing a whole bunch of stuff okay and

play22:08

essentially this is crazy because

play22:10

artificial super intelligence is

play22:12

completely Next Level artificial

play22:14

superintelligence is an intelligence

play22:16

that is so smart that it will be able to

play22:17

just make consistent breakthroughs and

play22:19

it's going to fundamentally change our

play22:21

understanding of everything we know

play22:22

because because it's going to be that

play22:23

smart okay and that is of course a

play22:26

problem because of course there's

play22:27

alignment problems and stuff like that

play22:28

but the problem is is that artificial

play22:29

super intelligence is something that

play22:31

people didn't really even talk about

play22:32

because it's so it's seemingly so far

play22:34

away but they're stating that whoever

play22:37

controls AGI will be able to use it to

play22:38

get to ASI shortly after so if it's a

play22:40

true AGI like a really good one um

play22:43

getting to ASI won't take that long and

play22:45

that is a true statement and something

play22:46

that I didn't think about that much but

play22:48

it's crazy because super intelligence

play22:49

OpenAI have openly stated that um super

play22:52

intelligence will be the most impactful

play22:53

technology Humanity's ever invented and

play22:55

could help us solve many of the world's

play22:57

most important problems but the vast

play22:59

power of superintelligence could also be

play23:01

very dangerous and it could lead to

play23:03

disempowerment of humanity or even human

play23:05

extinction and it states that while super

play23:07

intelligence seems far off now we

play23:09

believe it could arrive this decade and

play23:11

that's why this is kind of shocking

play23:13

because OpenAI are saying that you

play23:14

know okay some people think that AGI is

play23:16

going to be by 2029 but they're stating

play23:18

that not AGI by 2029 we state that

play23:20

super intelligence could be here by the

play23:22

end of this decade so super intelligence

play23:24

could be here which means that you know

play23:26

if we take a look and we kind of like

play23:28

look at the actual you know data and we

play23:30

think okay what's actually going on here

play23:32

we could get AGI realistically by 2026

play23:35

then we could get ASI by 2029 that's

play23:38

something that could happen due to the

play23:39

nature of exponential growth and these

play23:41

timelines and open AI stated that

play23:43

themselves so that's why they're also

play23:44

actually working on that kind of

play23:46

alignment because they know that it is

play23:48

very very soon now in addition if you

play23:50

want to talk predictions you have to

play23:51

call on Ray Kurzweil as well essentially he's a

play23:53

futurist and he has made a lot of

play23:56

predictions 147 and there's an 86% win

play23:59

ratio I guess whatever you want to call

play24:00

it now of course some people have you

play24:03

know debated whether or not this ratio

play24:05

is as high as he claims but um I would

play24:07

say that his predictions have come true

play24:09

a decent amount now essentially his

play24:11

prediction on AGI is that artificial

play24:14

intelligence will achieve human level by

play24:16

2029 which is once again still going to

play24:18

be pretty crazy even if it does happen

play24:19

at 2029 because if we take a look at it

play24:21

because I remember Elon Musk stated that

play24:23

by 2027 everyone's timelines is getting

play24:25

shorter and shorter by the day and we do

play24:27

know that if we take a look at what's

play24:29

actually going on right now um if we had

play24:31

AGI within 2 years it's something that

play24:32

genuinely wouldn't surprise anyone

play24:34

especially with what we saw with Sora and

play24:36

now another thing about um you know Ray

play24:38

Kurzweil that he actually stated that was

play24:39

actually quite shocking and this is why

play24:42

um I don't think you guys understand the

play24:43

kind of world that we could be living in

play24:45

if we actually do get AGI and then ASI

play24:47

is because um he's stating that you know

play24:49

there's a possibility that we might

play24:50

achieve immortality by the year 2030 and

play24:52

that's because of course we are like

play24:54

doing well in terms of you know

play24:56

longevity research and that kind of

play24:57

stuff but if we do have artificial super

play24:59

intelligence it's going to allow us to

play25:00

do a lot of things like a lot of

play25:02

breakthroughs that are just going to

play25:03

completely change everything and that's

play25:05

why this is so shocking because I didn't

play25:07

realize that it could only take a year I

play25:10

I don't know I mean I think that maybe

play25:12

people aren't thinking about things such

play25:13

as you know the actual compute the

play25:15

actual you know laws in place that might

play25:17

try to regulate this kind of stuff into

play25:19

the ground the kind of uh maybe there's

play25:21

going to be some kind of I guess you

play25:22

could say Financial crashes or

play25:24

essentially other things that could

play25:25

potentially stop this but provided

play25:27

everything is smooth like like there's

play25:28

no you know Black Swan event there's no

play25:30

like Bubonic plague the world doesn't

play25:31

need to go into a shutdown and AGI

play25:33

research isn't kind of delayed um ASI by

play25:36

the end of the decade is a pretty scary

play25:38

thing to think about okay and that is

play25:39

why I stated that this genuinely did

play25:41

shock me and one of the craziest things

play25:43

as well like I said I was going to come

play25:44

back to this okay how on Earth do other

play25:46

companies catch up like I think think

play25:48

about this okay so let's say you're

play25:50

OpenAI okay you are working on

play25:51

artificial general intelligence you do

play25:53

it one day you wake up your researchers

play25:55

you know your whole team is like look

play25:56

we've done it we've achieved AGI we've

play25:58

benchmarked it on all of this it's 99%

play26:00

on this on that and that and that um

play26:02

we've done it we've achieved AGI boom

play26:04

okay how on Earth do other companies

play26:06

catch up because the moment you get AGI

play26:09

you can use it to I guess you could say

play26:10

get towards ASI and you know you

play26:13

immediately get like your company your

play26:15

company just scales like 10x overnight

play26:16

or even 100x overnight because all you

play26:18

need to do is get the AGI to be able to

play26:20

do certain things and you know it's

play26:22

going to be relatively cheap to you in

play26:24

terms of hiring another person that

play26:26

you'd have to pay like a million a year

play26:27

with open AI you could essentially have

play26:29

these super powerful researchers doing

play26:31

tons and tons of alignment research you

play26:33

know ASI research and your company could

play26:35

get an additional 100 employees every

play26:36

day as long as you're scaling with

play26:37

compute how on Earth do other AI

play26:40

companies catch up to a company that's

play26:42

basically achieved escape velocity and I

play26:45

don't think they will like I genuinely

play26:46

don't think that other companies will

play26:48

catch up unless they quickly unless you

play26:50

know somehow it leaks and the AGI tech

play26:52

is you know widely distributed and then

play26:54

of course you know when I say AGI Tech

play26:56

I'm actually talking about the fact that

play26:57

the AGI tech is going to be the

play26:59

research papers and the research behind

play27:01

it not you know OpenAI giving you

play27:03

restrained access like they do with GPT

play27:05

4 because the version that we even get is a

play27:07

very very nerfed down model to what the

play27:09

raw capabilities of the models offer so

play27:11

essentially um you know this is some of

play27:14

Anthropic's pitch deck when they wanted

play27:16

to raise money in 2023 and they

play27:19

basically said that we believe that

play27:20

companies that train the best 2025 to

play27:22

2026 models will be too far ahead for

play27:25

anyone to catch up in subsequent Cycles

play27:27

so um and if you don't know who

play27:29

Anthropic are they're a big AI company

play27:31

that is kind of competing with OpenAI

play27:32

some of the open AI researchers did

play27:34

leave to create Anthropic because they

play27:36

wanted to focus on safety but the point

play27:38

here is that I don't think they catch up

play27:39

and it does make sense if you have a

play27:41

company that has AGI they have arguably

play27:43

the best technology in in the last 20

play27:45

years and with that they can grow their

play27:47

company exponentially so I don't think

play27:49

people catch up I think it's just you

play27:51

know they're going to be so far gone

play27:52

that it's going to be pretty crazy to

play27:54

see what happens and I think the reason

play27:56

that this is a thing is because this is

play27:58

why people are stating that OpenAI have

play28:00

achieved AGI and they're currently using

play28:01

it to develop things like Sora and stuff

play28:03

and if that is true it kind of does make

play28:05

sense cuz Sora definitely blew my hat

play28:07

off like it's just like whoa like even

play28:09

as someone who looks at AI all the time

play28:11

when I saw that I was like whoa okay I

play28:12

didn't think we were that close but um

play28:14

yeah it's it's definitely pretty crazy

play28:16

and um it it goes on okay so here it

play28:18

states Godlike Powers okay so it says

play28:20

probably whoever controls ASI listen to

play28:22

this this is the craziest bit that I was

play28:23

reading this and I was like is this even

play28:25

real am I even living in a reality right

play28:26

now it says probably who whoever

play28:28

controls artificial superintelligence

play28:30

will have access to spread to a spread

play28:32

of powerful skills and abilities that

play28:34

will be able to build and wield

play28:36

technologies that seem like magic to us

play28:38

just as modern tech would seem like

play28:40

magic to medievals this will probably

play28:42

give them Godlike Powers over whoever

play28:44

doesn't control ASI so that brings an

play28:46

important question do you think open AI

play28:48

let's say they have ASI they have it

play28:50

aligned do you think open AI are going

play28:52

to distribute ASI or are they just going

play28:53

to you know patent all the Technologies

play28:55

as a kind of subsidiary of OpenAI

play28:57

cuz if they have ASI and nobody

play29:00

else has it that's going to be the most

play29:01

valuable thing on the planet and if

play29:03

they're able to distribute cures if

play29:05

they're able to distribute you know new

play29:06

technology I mean that's going to make

play29:08

the company super super super valuable

play29:10

because like it states here they're

play29:12

probably going to give them Godlike

play29:13

Powers over anyone who doesn't control

play29:14

ASI because that level of smartness is

play29:17

unfathomable like it's very hard to

play29:19

conceptualize how smart it is because

play29:21

according to several reports and you

play29:22

know researchers and stuff like that

play29:23

it's basically like trying to explain

play29:26

economics to essentially a bee like you

play29:29

know a bee the thing that buzzes around

play29:30

try to explain economics to that it it

play29:33

it's it's very hard to conceptualize how

play29:35

you would even begin to explain that to

play29:37

a bee I mean first you'd have to teach it

play29:39

English then you'd have to teach it so

play29:40

many other different concepts and um

play29:42

that is going to be something that is

play29:44

pretty pretty crazy I mean I mean trying

play29:46

to even teach it abstract concepts so um

play29:49

whilst this does seem good and whilst

play29:50

you know Godlike powers and stuff like

play29:52

that and you know which is why all these

play29:53

companies are racing to achieve AGI

play29:55

because they know once that is there

play29:57

it's like you gain an instant 100x speed

play29:59

boost in this kind of race the problem

play30:01

is is that there is the blackbox problem

play30:04

and a lot of people are starting to

play30:06

forget about this problem as we Edge

play30:08

closer and closer towards the edge of

play30:09

this huge Cliff that we could be on um

play30:12

is the fact that it states in general

play30:13

there's a lot we don't understand about

play30:15

modern deep learning modern AIS are

play30:17

trained not built/programmed we can't

play30:20

theorize for example that they are

play30:22

generally robustly helpful and

play30:24

honest instead of just biding their time

play30:26

we can't check so the problem here is

play30:28

that um we don't know how these AI

play30:30

models work we actually don't know

play30:32

what's inside them we don't know how

play30:35

everything is going together it's not

play30:36

like you write a code and you understand

play30:38

exactly how the code works this is not

play30:40

how these AI models are going to be and

play30:42

in the future um it's going to be a

play30:43

bigger problem because if we're you know

play30:45

growing an AI which is what some

play30:46

researchers have claimed which is

play30:48

essentially that would be a more

play30:49

accurate description if we're doing that

play30:51

how on Earth are we then going to

play30:53

understand really um these even super

play30:55

intelligent systems if we don't really

play30:57

understand the ones we have now so um

play30:59

it's pretty crazy it's it's generally

play31:01

like like I'm trying hard to put it into

play31:04

words but it is a very very giant

play31:05

problem that people are trying to solve

play31:07

and of course um here's we have this

play31:09

okay so the alignment problem further

play31:11

currently no one knows how to control

play31:12

artificial super intelligence which is

play31:14

true and they are working on it this is

play31:15

what OpenAI is currently working on and

play31:17

it says if one of our training runs

play31:19

turns out to work way better than we

play31:21

expect we'd have a rogue artificial

play31:23

super Intelligence on our hands and

play31:25

hopefully it would have internalized

play31:27

enough human ethics that things would

play31:29

be okay and that's a crazy statement I

play31:30

don't care what you say that is insane

play31:32

because he's basically saying that look

play31:34

if our training runs work out to be

play31:36

better than we expect unfortunately

play31:37

we're going to have a rogue ASI on our

play31:39

hands because we don't know how to we

play31:41

don't know how to align it they're

play31:42

basically saying that look if we train

play31:43

the next model and it's super smart or

play31:45

artificially super intelligent which I

play31:46

don't think it will be I do think that

play31:48

you need a ton of compute just like how

play31:49

you scaled things up before it

play31:51

says hopefully we're just we're just

play31:52

basically just hoping that it's not

play31:54

crazy okay um and that is quite scary

play31:56

that you know hope is mentioned here so

play31:58

it says there are some reasons to be

play32:00

hopeful about that but there are also

play32:01

some reasons to be pessimistic and the

play32:02

literature on the topic is small and

play32:04

pre-paradigmatic which is of course true then

play32:07

of course we have um Sam Altman which is

play32:09

a great clip which you guys should take

play32:10

a look at because he actually talking

play32:12

about um the alignment problem is like

play32:13

we're going to make this incredibly

play32:15

powerful system and be really bad if it

play32:17

doesn't do what we want or or if it sort

play32:20

of has you know goals that are uh either

play32:23

in conflict with ours um many Sci-Fi

play32:25

movies about what happens there or goals

play32:27

where it just like doesn't care about us

play32:28

that much and so the alignment problem

play32:30

is how do we build AI that that does

play32:34

what is in the best interest of humanity

play32:36

how do we make sure that Humanity gets

play32:38

to determine the you know the future of

play32:40

humanity um and how do we avoid both

play32:43

like accidental misuse um like where

play32:45

something goes wrong that we didn't

play32:46

intend intentional misuse where like a

play32:48

bad person is like using an AGI for

play32:50

great harm even if it that's what the

play32:52

person wants and then the kind of like

play32:54

you know inner alignment problems where

play32:55

like what if this thing just becomes a

play32:57

creature that views this as a threat the

play32:59

the way that I think the self-improving

play33:00

systems help us is not necessarily by

play33:03

the nature of self-improving but like we

play33:05

have some ideas about how to solve the

play33:07

alignment problem at small scale um and

play33:09

we've you know been able to align open

play33:10

ai's biggest models better than we

play33:12

thought we we would at this point so

play33:13

that's good um we have some ideas about

play33:16

what to do next um but we cannot

play33:18

honestly like look anyone in the eye and

play33:19

say we see out 100 years how we're going

play33:21

to solve this problem um but once the AI

play33:23

is good enough that we can ask it to

play33:24

like hey can you help us do alignment

play33:26

research um I think think that's going

play33:28

to be a new tool in the toolbox so

play33:29

essentially in that clip Sam Altman

play33:31

Actually does talk about how they're

play33:32

going to use AI an internalized version

play33:34

of maybe an AGI or narrow AI that's

play33:37

able to really really understand how to

play33:39

align um these AI systems and of course

play33:41

he does talk about the fact that you

play33:43

know we could have an AI that just you

play33:44

know eventually evolves into some kind

play33:46

of creature that just you know does its

play33:47

own thing and that's pretty scary coming

play33:49

from someone who's the CEO of a major

play33:51

company that is building some of the

play33:52

most impactful technology that we will

play33:54

have in our lifetimes and essentially of

play33:57

course here we talk about the best plan

play33:59

and it says our current best plan

play34:01

championed by the people winning the

play34:02

race to AI is to use each generation of

play34:05

AI systems to figure out how to align

play34:07

and control the Next Generation and this

play34:09

plan might work but skepticism is

play34:11

warranted on many levels so open AI did

play34:13

actually talk about um their approach to

play34:15

this and I think it's important to

play34:16

actually look at this because their goal

play34:18

is to build a roughly human level

play34:20

automated alignment researcher and then

play34:22

basically saying that we can then use

play34:23

vast amounts of compute to scale our

play34:25

effort and iteratively align super

play34:27

intelligence super Intelligence being

play34:28

that crazy Smart Level AI system that's

play34:30

going to have goals beyond our

play34:32

understanding and essentially they're

play34:33

saying to align the first automated

play34:34

alignment researcher we're going to need

play34:36

to develop a scalable Training Method

play34:37

validate the resulting model and stress

play34:39

test the entire alignment pipeline so of

play34:41

course they're going to do adversarial

play34:42

testing where essentially they're

play34:43

going to test the entire pipeline by

play34:45

basically stating that you know they're

play34:46

going to try to just see what kind of

play34:48

goes wrong but in a kind of sandbox

play34:50

environment and of course try to like

play34:52

like detect how things would go wrong so

play34:54

um I'm guessing that this is you know

play34:56

one of their approaches and of course uh

play34:58

they've shown that this kind of does

play35:00

work so essentially there's a thing

play35:02

called weak to strong generalization

play35:03

eliciting strong capabilities with weak

play35:05

supervision so I'm going to show you

play35:06

guys that page now and essentially here

play35:08

you can see they talk about the super

play35:09

intelligent problem and of course super

play35:10

intelligence is a big problem and this is

play35:12

actually pretty recent which is uh quite

play35:14

interesting this was December the 14th

play35:15

2023 so around 2 three months ago they

play35:18

said we believe super intelligence and

play35:19

AI vastly smarter than humans could be

play35:21

developed within the next 10 years

play35:22

however we don't know how to reliably

play35:24

steer and control superhuman AI systems

play35:26

so solving this problem is essential for

play35:28

ensuring that even the most advanced AI

play35:30

systems are beneficial to humanity just

play35:32

going to zoom in here we formed the super

play35:33

alignment team earlier this year to

play35:34

solve this problem and today we're

play35:36

releasing the team's first paper which

play35:37

introduces a new research Direction um

play35:39

for officially solving superhuman models

play35:41

so basically they state that you know

play35:43

future AI systems will be capable of

play35:45

extremely complex and creative behaviors

play35:47

that will make it hard for humans to

play35:49

basically look over them and watch and

play35:50

understand for example superhuman models

play35:52

may be able to write millions of lines

play35:54

of code potentially dangerous computer

play35:55

code that will be very hard for even

play35:57

expert humans to understand so essentially

play35:59

they made this kind of setup here and

play36:01

with this setup they say um to make

play36:03

progress on this core challenge we

play36:04

propose an analogy we can empirically

play36:06

study today can we use a smaller less

play36:08

capable model to supervise a larger more

play36:10

capable model so you can see here we've

play36:12

got traditional machine learning where

play36:13

we have the supervisor looking at

play36:15

student which is not as smart as them

play36:16

but it isn't too vastly smarter then we

play36:18

have super alignment which is um of

play36:20

course you know where essentially the

play36:22

human researcher is trying to supervise

play36:24

a student that is way smarter than it

play36:25

that's where you can see the robot

play36:26

that's just above this human level you

play36:29

can see here in this diagram and it's

play36:30

like you know how on Earth is that

play36:31

supposed to work what they're trying to

play36:33

do is like look okay if we can get you

play36:35

know a smaller robot a smaller AI system

play36:37

to supervise a larger AI system that's

play36:39

beneath human level hopefully we can

play36:41

scale this progress and then when we get

play36:43

to this level of super alignment

play36:44

hopefully that thing kind of works and

play36:46

essentially what they did was they did

play36:47

this they said when we supervise GPT 4

play36:50

with a gpt2 level model using this

play36:52

method on NLP tasks the resulting model

play36:54

typically performs somewhere between

play36:55

GPT-3 and GPT-3.5 and it says we were

play36:58

able to recover much of GPT-4's

play37:00

capabilities with only much weaker

play37:02

supervision so it says this method is a

play37:04

proof of concept with important

play37:05

limitations for example it still doesn't

play37:07

work on chat GPT preference data however

play37:10

we also find Signs of Life with other

play37:11

approaches such as optimal early

play37:13

stopping and bootstrapping from small to

play37:15

intermediate to large models so

play37:16

essentially um this is just their first

play37:18

paper on kind of thinking how on Earth

play37:20

they could even you know try and solve

play37:21

this but I do think that this is

play37:23

something that is important; a toy sketch of the weak-to-strong setup follows below
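Here is a toy analogue of that weak-to-strong setup, just to show the shape of the experiment. This is not the paper's code: OpenAI runs it with GPT-2-level supervisors and GPT-4 on NLP benchmarks and adds tricks like an auxiliary confidence loss; this sketch swaps in scikit-learn models on synthetic data and a simple "performance gap recovered" number so it can actually run end to end.

```python
# Toy illustration of the weak-to-strong setup described above; not OpenAI's code.
# A small "weak" model is trained on ground truth, its (imperfect) labels are then
# used to supervise a more capable "strong" model, and we check how much of the
# strong model's ceiling performance is recovered under weak supervision.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=30, n_informative=15,
                           random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=1000,
                                                  random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest,
                                                    test_size=0.5, random_state=0)

# "Weak supervisor": a deliberately limited model (linear, sees only 5 features).
weak = LogisticRegression(max_iter=1000).fit(X_weak[:, :5], y_weak)
weak_labels = weak.predict(X_train[:, :5])
weak_acc = accuracy_score(y_test, weak.predict(X_test[:, :5]))

# "Strong student" trained only on the weak supervisor's labels.
strong_on_weak = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)
w2s_acc = accuracy_score(y_test, strong_on_weak.predict(X_test))

# Ceiling: the same strong model trained on ground-truth labels.
strong_ceiling = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
ceiling_acc = accuracy_score(y_test, strong_ceiling.predict(X_test))

# Performance gap recovered: how much of the weak-to-ceiling gap the student closes.
pgr = (w2s_acc - weak_acc) / (ceiling_acc - weak_acc)
print(f"weak={weak_acc:.3f}  weak-to-strong={w2s_acc:.3f}  "
      f"ceiling={ceiling_acc:.3f}  PGR={pgr:.2f}")
```

The bootstrapping variant they mention would just repeat this loop, using each newly trained student as the supervisor for the next, larger model.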

play37:25

now of course this is the problem okay it says

play37:27

for one thing there is an ongoing

play37:28

race to AI with multiple Mega

play37:30

corporations participating and only a

play37:33

small fraction of their compute and

play37:35

labor is going towards alignment and

play37:36

control research and one worry is that

play37:39

they aren't taking this seriously enough

play37:40

now basically you know the slide

play37:42

just before there if you saw what

play37:44

OpenAI said uh I'm not sure on that

play37:45

page somewhere OpenAI said that

play37:47

20% of their overall compute is going to

play37:49

Safety Research which does make sense

play37:51

because guys if you haven't you know

play37:52

heard of the elephant in the room the

play37:54

elephant in the room is that essentially

play37:55

if these uh super intelligent systems

play37:57

don't work out um we all die and of

play37:59

course you might be thinking how on

play38:00

Earth do we all die I could play a clip

play38:02

but essentially you just have to think

play38:04

about it like this okay um you know how

play38:06

ants right just you know walk around

play38:07

they do their thing um imagine if an ant

play38:09

created a human and then humans start

play38:11

creating highways as a result of humans

play38:12

creating highways uh we destroy ant

play38:14

colonies because we need to remove their

play38:16

environment in order to place down a

play38:17

highway we need to place down homes and

play38:19

we just see ants as a minor

play38:20

inconvenience and because of that um of

play38:22

course ants die in the process and some

play38:23

people are speculating that this is

play38:25

going to be the same with artificial

play38:27

intelligence and we have no idea if this is

play38:29

going to be true or not because the only

play38:30

way to find out is to do it and if we do

play38:32

it and we do die then I guess we're

play38:34

never going to really know because we're

play38:35

all dead so as horrible as that is the

play38:38

point I'm trying to make here as well is

play38:39

that all these companies are now placing

play38:41

their chips on AGI because they've

play38:42

realized that yo this is the next

play38:44

technology whoever holds this key is

play38:46

going to pretty much control um I think

play38:48

a lot of the world's resources because

play38:50

if you have an intelligent ASI system

play38:52

and you just ask it you know how do we

play38:53

become the most valuable company in the

play38:54

world it's going to get it right

play38:56

like I mean if it's smarter than us it's

play38:58

going to get it right so however long

play38:59

it's going to take um that's going to be

play39:00

an interesting thing so meta's going all

play39:02

in this is Mark Zuckerberg stating that

play39:04

you know his company's just going all in

play39:05

on AI hey everyone today I'm bringing

play39:08

meta's two AI research efforts closer

play39:10

together to support our long-term goals

play39:13

building general intelligence open-

play39:15

sourcing it responsibly and making it

play39:17

available and useful to everyone in all

play39:19

of our daily lives it's become clearer

play39:22

that the next generation of services

play39:24

requires building full general

play39:26

intelligence building the best AI

play39:28

assistants AIS for creators AIS for

play39:30

businesses and more that needs advances

play39:32

in every area of AI from reasoning to

play39:35

planning to coding to memory and other

play39:37

cognitive abilities this technology is

play39:38

so important and the opportunities are

play39:41

so great that we should open source and

play39:44

make it as widely available as we

play39:45

responsibly can so that way everyone can

play39:48

benefit we're building an absolutely massive

play39:50

amount of infrastructure um to support

play39:52

this by the end of this year we're going

play39:54

to have around 350,000

play39:57

Nvidia h100s or around 600,000 h100

play40:01

equivalents of compute if you include

play40:02

other GPUs we're currently training

play40:03

llama 3 and we've got an exciting road

play40:05

map of future models that we're going

play40:07

to keep training responsibly
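A quick aside on what "600,000 H100 equivalents" means: you weight each accelerator in the fleet by its rough training throughput relative to an H100 and sum. The fleet mix and throughput factors in this sketch are invented placeholders (Meta has not published that breakdown); it only shows how the arithmetic works.

```python
# Back-of-the-envelope sketch of "H100 equivalents"; the fleet mix and relative
# throughput factors below are made-up placeholders, not Meta's actual numbers.

# Rough relative training throughput versus one H100 (illustrative assumptions only).
RELATIVE_THROUGHPUT = {
    "H100": 1.0,
    "A100": 0.5,   # assumption: an A100 counted as roughly half an H100
    "other": 0.3,  # assumption: older or other accelerators
}

# Hypothetical fleet: 350k H100s plus an assumed mix of other GPUs.
fleet = {"H100": 350_000, "A100": 400_000, "other": 170_000}

h100_equivalents = sum(count * RELATIVE_THROUGHPUT[kind] for kind, count in fleet.items())
print(f"{h100_equivalents:,.0f} H100-equivalents")  # ~601,000 with these made-up numbers
```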

play40:09

so that just shows you that all of these

play40:10

companies are truly just pouring

play40:12

billions of dollars into this and the

play40:14

crazy thing is that they're making

play40:15

breakthroughs okay it's not just like

play40:16

they're doing this just for fun these

play40:18

guys are making breakthroughs you can

play40:19

see that recently they made a technical

play40:21

breakthrough this isn't meta by the way

play40:22

this is a private company called

play40:23

Magic that could enable active

play40:25

reasoning capabilities similar to OpenAI's

play40:27

Q* model which was apparently a crazy

play40:29

crazy breakthrough and this is why I

play40:31

state that timelines are getting shorter

play40:32

and shorter we have people stating you

play40:34

know crazy crazy things you know and of

play40:37

course this is once again brings us back

play40:38

to the mullock problem which is

play40:39

essentially if AGI is going to be any

play40:42

year now and if of course you know

play40:44

timelines are getting shorter because

play40:45

whoever controls AGI is going to be able

play40:47

to get to ASI shortly thereafter we have

play40:49

this problem of you know Safety Research

play40:51

being an issue and of course you know

play40:53

some people even left open AI you know

play40:55

and the people who made Anthropic

play40:57

you know Dario Amodei who left OpenAI to

play40:59

start Anthropic because he wanted to

play41:00

focus on safety they even recently you

play41:03

know did a paper on sleeper agents I

play41:05

might include a clip from the video

play41:06

where I talked about that and why that

play41:07

was really bad and why everyone missed

play41:09

the mark on that and some people were

play41:10

stating that oh you know lol this is just

play41:12

dumb um but essentially we do have a

play41:15

problem on our hands because the

play41:17

timelines every day seem to be getting

play41:19

shorter and shorter whether it be an

play41:20

OpenAI employee whether it be you know a

play41:22

company making a private breakthrough

play41:24

that enables um you know active

play41:25

reasoning I think it's not smart to

play41:28

underestimate the fact that AGI will be

play41:30

used to get to ASI shortly thereafter

play41:32

and this statement okay the fact that

play41:34

you know whoever controls ASI will have

play41:36

access to powerful skills and abilities

play41:38

that will seem like magic to us um just

play41:40

like modern tech would seem like magic

play41:41

to medieval people isn't to be underestimated

play41:44

because if we like think about it like

play41:45

this okay this is why super intelligence

play41:47

is so crazy like if we go back to for

play41:49

example you know when they just had

play41:50

castles and you know the medieval times

play41:52

or whatever if we just go back to that

play41:53

time and if you know we ask them how

play41:55

would you defeat this Army in the future

play41:57

okay let's say how would you

play41:58

defeat this Army in the future they

play41:59

would say oh we'd get our cannon balls

play42:01

we'd get our bows and arrows and we'd be

play42:02

able to defeat them but they wouldn't

play42:04

because we'd have tanks and we'd have

play42:06

planes and we'd have this advanced level

play42:07

of technology that would just simply

play42:08

destroy anything that they'd ever have

play42:10

and that's a problem with artificial

play42:12

superintelligence you're trying to

play42:13

think of something that is very hard to

play42:15

conceptualize so um I mean all of the

play42:17

current Tech that we do have like if you

play42:19

saw an iPhone and brought it back 100

play42:20

years it would seem like magic like if

play42:22

you saw a drone it would look like magic

play42:24

I mean it's pretty crazy okay um and we

play42:26

do know that that is just 100 years ago

play42:28

without artificial super intelligence so

play42:30

you can imagine how crazy things are

play42:32

going to look like I mean I genuinely

play42:33

can't even begin to imagine what the

play42:34

future is going to look like are we

play42:35

going to be you know all immortal and

play42:38

I mean how is the timeline going to be I

play42:40

think one of two things happens either it

play42:42

comes faster than we think or

play42:43

later than we think I don't think it

play42:44

comes on time because I do think there's

play42:46

always certain factors that people

play42:48

aren't thinking about and of course who

play42:50

knows maybe we'll hit a wall maybe you

play42:51

know AGI doesn't come until later down the

play42:53

line because we figure out that you know

play42:55

there's some kind of wall that we can't

play42:56

get past and it requires

play42:57

more years of breakthroughs and you know

play42:59

GPT-4 we're kind of stagnant at that for a

play43:01

bit but um it will be interesting to see

play43:03

where we do go and how these timelines

play43:05

do evolve because um things are moving

play43:08

rapidly and if you did enjoy this um

play43:09

it's important to subscribe to the

play43:10

channel because uh every day I release a

play43:12

video on the most important and most

play43:14

pressing AI news that you need to be

play43:16

aware of
